Philosophy & Moral Decision Making in Self-Driving Cars Shouldn’t be Overlooked
A rather disappointing statement from Urmson, as it clearly indicates he is missing a huge piece of the puzzle that needs to go into those self-driving cars.
Even if he were right that “stuff just happens” when people make those decisions, he is at the very least blithely ignoring the fact that people will use moral decision-making and philosophical beliefs to judge the outcomes when a self-driving car applies its programming and algorithms and ends up killing three people instead of one.
H/t Johnny Stork for sharing this.
David Amerland this would also, I think, be relevant to the idea of trust, don’t you?
Originally shared by Johnny Stork, MSc
Self-Driving Cars – Is Philosophy Relevant?
“Earlier this week, Chris Urmson, chief of Google’s self-driving cars project, made a pretty big mistake for someone so high up at Google: he dismissed philosophers and the trolley problem as irrelevant to self-driving cars.”
#philosophy #google