To start, let me guide you through a thought experiment:
Imagine you are riding a trolley. Suddenly, you see five people standing on the track ahead. You want to hit the brakes to avoid crashing into them, but you realize that the trolley’s brakes aren’t working.
You then notice a hand lever that would let you switch the trolley onto another track and thereby save these five people. In that case, you would pull the lever, right? But there is a problem: one person is standing on the other track, totally unaware of the trolley. There’s no time to warn them, so pulling the lever would mean saving five people but killing one. What do you do?
Utilitarian vs. deontological
You were just confronted with the trolley problem, an ethical dilemma dating back to 1967. The dilemma highlights the distinction between two concepts of morality: utilitarian vs. deontological:
- Utilitarian: Choose the action with the best overall consequences. The end justifies the means, so you pull the lever and save the five people.
- Deontological: Always follow moral rules, regardless of the outcome. The end does not justify the means, so you don’t pull the lever: actively choosing to kill someone is inherently wrong, even if refusing means failing to save five.
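The two decision rules can even be written down as code. The sketch below is a minimal, hypothetical illustration of the distinction (the `TrolleyDilemma` structure and function names are inventions for this example, not part of any real system):

```python
from dataclasses import dataclass


@dataclass
class TrolleyDilemma:
    deaths_if_no_action: int  # people on the current track (5)
    deaths_if_action: int     # people on the side track (1)


def utilitarian_choice(d: TrolleyDilemma) -> bool:
    """Pull the lever only if doing so reduces total deaths:
    the end justifies the means."""
    return d.deaths_if_action < d.deaths_if_no_action


def deontological_choice(d: TrolleyDilemma) -> bool:
    """Never pull the lever: actively killing someone is wrong
    regardless of the consequences."""
    return False


dilemma = TrolleyDilemma(deaths_if_no_action=5, deaths_if_action=1)
print(utilitarian_choice(dilemma))    # True: pull the lever
print(deontological_choice(dilemma))  # False: don't pull
```

Of course, real moral reasoning cannot be reduced to a comparison of two integers — and that is exactly the point: any system we program has some rule like this baked in, whether we chose it deliberately or not.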
So how does this all relate to technology?
Will AI have a conscience?
In recent years, the trolley problem has increasingly been at the centre of debates surrounding the development of Artificial Intelligence. But why?
The fact is that technologies very often reflect the intentions and blind spots of their designers. Consider, as an extreme example, facial recognition software that fails to recognise darker faces as reliably as lighter ones. To avoid such inherent biases in the future, we need to critically reflect on and define our most fundamental values.
The trolley problem makes us reflect on morals and ethics. And as AI automates ever more processes in our lives, we need to ask these critical questions. For instance, how will self-driving cars be programmed to react in situations like the trolley problem? And what about autonomous weapons? Will they be programmed to sacrifice civilians in order to catch the big fish?
Today, AI algorithms are already being applied in areas where there are no clear boundaries between good and bad, such as criminal justice and job application screening. In the future, we expect AI to care for the elderly, teach our children, and perform many other tasks that require moral human judgement. But can AI develop a sense of moral values? Should it even? In other words, will AI have a conscience?
On the lookout for disruptive technologies
While the Berlin startup scene is on the lookout for the next big disruptive technologies, we – the HighTech SeedLab – are looking for early-stage entrepreneurs with high-tech innovations and a sense of sustainability. For us, that includes social sustainability: entrepreneurs who, among other things, consider the ethical and societal implications of disruptive technologies. If that sounds like you, we would be delighted to help grow your business idea. We are currently accepting applications to join our batch next year. The deadline is November 15.
Do you want to turn your idea into a business and be part of a motivated group of entrepreneurs?
Applications for the High-Tech SeedLab Batch 2021 are open until November 15th 2020. If you have questions about the program or your application, please contact firstname.lastname@example.org.
This program is financed by the European Social Fund (ESF), as well as the State of Berlin.