The self-driving trolley problem: how will future AI systems make the most ethical choices for all of us?
The Self-driving Ethical Conundrum
As artificial intelligence (AI) continues to advance, we find ourselves confronted with complex dilemmas that challenge our traditional ethical frameworks. One such dilemma is the self-driving car conundrum - a modern variation on the classic trolley problem. This scenario forces us to ask how future AI systems, especially those controlling self-driving cars, will make ethical decisions that affect human lives.
The moral conundrum
Consider the following scenario: you are a passenger in a self-driving car moving through city traffic when the brakes suddenly fail. There is a truck full of goods directly ahead, a motorcyclist to your left, and a car carrying four passengers to your right. What decision do you think the AI will make on your behalf?
Will the AI stay on course and crash into the truck, knowing the cargo will most likely fall onto your car and endanger your life? Will it swerve left and put the motorcyclist's life at risk? Or will it swerve right, crash into the car, and injure its four passengers?
Each of these choices is extremely difficult for a human to make, but an AI makes its choice through calculation and programming. As technology replaces human error, self-driving cars herald a future of far fewer accidents and safer roads. Yet even at reduced frequency, some accidents will still occur, such as a sudden brake failure. So we should be asking: how do we get AI to make the "right" choice when they do?
Possible decision frameworks
1. Utilitarianism
This principle states that actions should aim to maximise overall happiness. For self-driving cars, this means making decisions that minimise overall harm. If swerving to avoid pedestrians saves more lives than staying on course, the utilitarian approach favours swerving. For example, faced with a choice between a vehicle carrying five passengers and one carrying a single passenger, a utilitarian system would crash into the vehicle with only one.
Utilitarianism can also lead to troubling outcomes. If one of the pedestrians who may be hit is a homeless person and the other is a politician, an AI that weighs each person's "contribution to society" would choose to protect the politician, that is, choose to hit the homeless person. A minimal sketch of this kind of harm-minimising logic appears below.
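To make the calculation concrete, here is a minimal, hypothetical sketch of a purely utilitarian decision rule, written in Python. The Option class, the harm scores, and the probabilities are all invented for illustration; no real autonomous-driving system exposes its decision logic this way.

```python
from dataclasses import dataclass

@dataclass
class Option:
    """One possible manoeuvre and its estimated consequences (all values hypothetical)."""
    name: str
    expected_harm: float  # estimated total harm, e.g. weighted injuries/fatalities
    probability: float    # estimated probability that the harm actually occurs

def utilitarian_choice(options: list[Option]) -> Option:
    """Pick the manoeuvre with the lowest expected harm (harm x probability)."""
    return min(options, key=lambda o: o.expected_harm * o.probability)

# The brake-failure scenario from above, with made-up harm estimates.
options = [
    Option("stay on course (truck)", expected_harm=1.0, probability=0.6),
    Option("swerve left (motorcyclist)", expected_harm=1.0, probability=0.9),
    Option("swerve right (car, 4 passengers)", expected_harm=4.0, probability=0.5),
]

print(utilitarian_choice(options).name)  # -> "stay on course (truck)"
```

Note that the outcome hinges entirely on the harm estimates: change the numbers and the "ethical" choice changes with them, which is exactly why the homeless-person example above is so troubling.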
2. Deontological ethics
Deontological ethics emphasises compliance with rules and duties rather than outcomes. From this perspective, a vehicle may be programmed to prioritise avoiding direct harm to individuals, even if that leads to greater overall harm. This approach emphasises the morality of the action itself, not its consequences. That is, when an accident is unavoidable, the AI will try to hit neither person rather than actively choose a victim, but it may well fail and end up colliding with both. A sketch of such a rule-based filter follows.
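Again purely as a hypothetical sketch, a deontological controller could be framed as a hard constraint that filters out forbidden actions before any harm arithmetic takes place. The rule and the fields below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Option:
    """One possible manoeuvre (hypothetical)."""
    name: str
    steers_into_person: bool  # would this manoeuvre actively direct the car at someone?

def deontological_choice(options: list[Option]) -> Option:
    """Apply the hard rule "never actively steer into a person", then pick
    any permitted manoeuvre; consequences are not weighed at all."""
    permitted = [o for o in options if not o.steers_into_person]
    # If every option violates the rule, fall back to braking in-lane:
    # the system refuses to choose a victim, even if a collision still happens.
    return permitted[0] if permitted else Option("emergency brake in lane", False)

options = [
    Option("stay on course and brake", steers_into_person=False),
    Option("swerve left (motorcyclist)", steers_into_person=True),
    Option("swerve right (car with passengers)", steers_into_person=True),
]

print(deontological_choice(options).name)  # -> "stay on course and brake"
```

The contrast with the utilitarian sketch is the point: here the rule decides, not the arithmetic, so the system may accept a worse overall outcome rather than violate the rule.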
This discussion reveals a huge gap between theoretical ethics and real-world application, especially for decisions in which AI has a profound impact on human lives.
Even with a sound ethical framework in place, effectively translating its principles into an AI's decision-making process and ensuring consistent compliance remains a daunting task. These systems' decisions rely on vast amounts of training data, and in rare, complex cases their specific outcomes are uncertain.