How did we define a social robot? - correct answer An autonomous or semi-autonomous robot that interacts and communicates with humans by following the behavioral norms expected by the people with whom the robot is intended to interact.
How is the traditional Trolley Problem supposed to present a moral dilemma? - correct answer The agent must decide whether or not to take action, with differing ethical consequences for each choice (usually it is preferable to kill the fewest people). It pits action against inaction, which makes the dilemma unavoidable.
What are Lin's examples of AI decisions that fall into a "moral gray space"? - correct answer Judgment calls such as: autonomous cars (is harm "premeditated" or not?), traffic-routing apps (safest vs. fastest route), drones, and medical robots weighing doctor vs. patient wishes.
What are the characteristics of a sociable robot, according to Hanheide? - correct answer
- embodiment: acting in a situated manner
- lifelike qualities: anthropomorphization, the human tendency to interpret behavior as intentional
- identifying persons, their actions, and intentions: Theory of Mind and empathy are core to human awareness
- learning social situations: shaping the robot's personal history through imitation or mimicry
- being understood: humans' ability to read the robot's activities (expressions, facial cues, etc.)
What factors characterize the 6 levels of the SAE J3016 driving automation scheme? - correct answer Levels 0-2 ("requires constant driver supervision") are "driver support" features; levels 3-5 ("features drive the vehicle in most conditions") are "automated driving" features. A feature may request driver intervention and may fail to operate outside its required conditions.
What is one of the moral dilemmas presented by the "crash" scenario for driverless cars? - correct answer There are three choices: stay on path and hit an obstacle, killing the passengers; or swerve to either side, killing or injuring bystanders. Comparing the three options across different scenarios exposes ethical dilemmas: compare the number of dead passengers vs. bystanders, use probabilities, and estimate damage (injury and cost).
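The comparison the answer describes can be sketched as a minimal expected-harm calculation. This is an illustrative sketch only, not from the source: the option names, probabilities, and the `cost_weight` trade-off between lives and property damage are hypothetical, and choosing such a weighting is itself the ethically contested step.

```python
# Illustrative sketch: ranking crash options by expected harm.
# All numbers and the utility function are hypothetical assumptions.

def expected_harm(p_fatality, n_people, damage_cost, cost_weight=1e-5):
    """Expected harm = expected fatalities plus weighted property damage."""
    return p_fatality * n_people + cost_weight * damage_cost

def choose_option(options):
    """Pick the option that minimizes expected harm."""
    return min(options, key=lambda o: expected_harm(
        o["p_fatality"], o["n_people"], o["damage_cost"]))

options = [
    {"name": "stay on path", "p_fatality": 0.8, "n_people": 2, "damage_cost": 30000},
    {"name": "swerve left",  "p_fatality": 0.3, "n_people": 3, "damage_cost": 10000},
    {"name": "swerve right", "p_fatality": 0.5, "n_people": 1, "damage_cost": 10000},
]

print(choose_option(options)["name"])  # option with lowest expected harm
```

Note how sensitive the outcome is to the chosen weights: a different `cost_weight`, or counting injuries separately from fatalities, can flip the decision, which is precisely why these scenarios expose moral dilemmas rather than resolve them.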
What is Searle's "Chinese Room" argument? How can it be adapted to machine ethics? - correct answer A human inside a room, equipped with Chinese characters and a set of instructions, can fool a Chinese speaker exchanging messages from outside. Does the human actually understand Chinese? Similarly, do computers understand what they are doing, or are they blindly following instructions with no real "intelligence"? This counters the Turing Test's claim that fooling humans equals intelligence.