June 3: Self-Driving Cars
The premise of the dilemma is that the self-driving car cannot brake and is faced with three choices, each equally harmful to the parties involved.
Discussion Question
If you were the programmer, how would you have the car "choose": a) left into the SUV, b) straight into the obstacles, or c) right into the motorcycle? Why?
I would program the car to go a) left into the SUV. As the video states, we die if we drive into the obstacles, so option b) is eliminated. Between options a) and c), the SUV is larger and sturdier than the motorcycle, so the impact from our van should cause less harm to the SUV's passengers than the same collision would cause the motorcyclist.
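To make that reasoning concrete, here is a minimal Python sketch of the eliminate-then-compare rule I describe above. Everything in it is my own illustration: the function name and the harm scores are hypothetical, not anything from the video.

```python
# Hypothetical sketch of the decision rule argued for above; the harm
# scores are my own toy estimates, not from the video.

def choose_swerve(options):
    """Pick the lowest-harm option, ruling out any certainly fatal one."""
    survivable = {name: harm for name, harm in options.items()
                  if harm < 1.0}  # harm of 1.0 = certain death, eliminated
    return min(survivable, key=survivable.get)

# Toy harm scores: the obstacles are fatal to us, and the SUV protects
# its passengers better than the motorcycle protects its rider.
options = {
    "left_into_suv": 0.4,
    "straight_into_obstacles": 1.0,
    "right_into_motorcycle": 0.8,
}
print(choose_swerve(options))  # -> left_into_suv
```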
Further Thoughts
To my knowledge, AI does not have consciousness, so self-driving cars do not have “reactions” as stated in the video. Thus, programmers must decide in advance how a self-driving car should behave in dangerous situations, such as the one detailed in the video. Real people (the programmers) must make the social and ethical calls for self-driving cars.
A self-driving car company would likely prioritize the driver (a car that protects its driver sells better), whereas a human driver would prioritize others (because of the legal liability of killing people).
No one has consented to sharing the road with self-driving cars.
Since all choices are detrimental to an involved party, the car could be programmed to make a random decision, as in the sketch below.
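A minimal sketch of that idea, with made-up option names of my own, could be as simple as:

```python
import random

# Hypothetical sketch: if every outcome is judged equally harmful,
# the car picks one uniformly at random instead of encoding a preference.
choices = ["left_into_suv", "straight_into_obstacles", "right_into_motorcycle"]
print(random.choice(choices))
```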