Stanford Business has a new article out this week, "Exploring the Ethics Behind Self-Driving Cars". It is a great read if you teach ethics in an academic setting, but the more important point is that it puts us all on notice: there are many new issues we need to address as artificial intelligence is added to our lives.
The article frames one conundrum: should the car be programmed to protect your life, or should it also consider the lives of pedestrians and people in other cars? Personally, I think we should separate the car from the software. I could buy my car from BMW while the AI software comes from Google or Apple or MIT Automotive. Yes, there may be issues similar to running Windows on so many different computers, but I could pick the moral position I want. No General Motors ethics for me. (BTW, whose software would you trust more: General Motors, Google, or Apple?)
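To make the idea concrete, here is a minimal sketch in Python of what a pluggable ethics policy might look like. All of the names here (EthicsModule, CollisionScenario, the two policies) are made up for illustration, not any real vendor's API: the point is simply that the vehicle could expose one interface while different vendors supply different decision logic.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class CollisionScenario:
    """Hypothetical, simplified description of one possible maneuver's outcome."""
    occupant_risk: float       # estimated risk to the car's occupants (0 to 1)
    pedestrian_risk: float     # estimated risk to nearby pedestrians (0 to 1)
    other_vehicle_risk: float  # estimated risk to people in other cars (0 to 1)

class EthicsModule(ABC):
    """Pluggable ethics policy: BMW ships the car, some other vendor ships this."""
    @abstractmethod
    def choose_maneuver(self, options: list[tuple[str, CollisionScenario]]) -> str:
        ...

class OccupantFirstPolicy(EthicsModule):
    """Always minimize risk to the car's own occupants."""
    def choose_maneuver(self, options):
        return min(options, key=lambda o: o[1].occupant_risk)[0]

class MinimizeTotalHarmPolicy(EthicsModule):
    """Minimize total expected harm across everyone involved."""
    def choose_maneuver(self, options):
        return min(options, key=lambda o: o[1].occupant_risk
                   + o[1].pedestrian_risk + o[1].other_vehicle_risk)[0]

# Two policies, same scenario, different answers -- which is the conundrum.
options = [
    ("swerve_left", CollisionScenario(0.1, 0.6, 0.0)),
    ("brake_straight", CollisionScenario(0.4, 0.1, 0.1)),
]
print(OccupantFirstPolicy().choose_maneuver(options))      # swerve_left
print(MinimizeTotalHarmPolicy().choose_maneuver(options))  # brake_straight
```

The two policies pick different maneuvers in the same situation, which is exactly why it would matter whose software sits in your dashboard.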
By chance, I talked to some industry people on this topic last week. They say their analysis shows that driverless cars will have fewer accidents than human drivers. Maybe once we eliminate all human drivers we will not have so many ethical questions, and the professors at Stanford can go back to thinking about Resource-based Theory.