I saw an article about this on Facebook and thought it would make for an interesting discussion here...
The article was about the results of a consumer research survey conducted through Amazon's Mechanical Turk program (which, BTW, for people who like the Task section for earning, is another good online earning program). The study posed several different scenarios to determine whether people would be in favor of an automated vehicle (AV) making ethical decisions even if doing so might harm the vehicle's occupant in the process.
An example question asked you to imagine that you are the sole occupant of an AV. You turn a corner and suddenly find a crowd of 10 people in the middle of the street, with concrete walls / barriers lining both sides. Given the distance to the crowd and the vehicle's speed and braking ability, the AV determines that simply slamming on the brakes will still result in some casualties in the crowd. If the AV is fitted with ethical decision-making logic, it might, instead of braking and crashing into the crowd, steer the vehicle into one of the barriers; the ethical idea being that saving 10 people outweighs saving just one.
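Just to make the idea concrete, here is a toy sketch (in Python, purely hypothetical and not from the article) of what a bare-bones "minimize casualties" rule might look like; the action names and numbers are made up for illustration:

    def choose_action(options):
        # options: list of (action_name, expected_casualties) pairs
        # pick the action expected to harm the fewest people
        return min(options, key=lambda opt: opt[1])

    # Hypothetical estimates for the scenario above:
    # braking still hits the crowd, swerving sacrifices the sole occupant
    options = [("brake", 4), ("swerve_into_barrier", 1)]
    print(choose_action(options))  # picks swerving into the barrier

Of course, the hard part is where estimates like those would come from, and whether simply counting heads is even the right criterion, which is really what the survey was getting at.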
If / when AVs become a viable option for consumers, do you think AVs should include some type of ethical decision-making logic? Do you think laws should force manufacturers to include such logic? Are there certain facts you'd want the logic to consider beyond just the data from the vehicle (e.g., the number and age of occupants, whether the crowd is in the street legally, etc.)? Is "saving more people" / "the greater good" enough of a basis for such logic, or are there circumstances in which saving more people may actually not be the most ethical decision?