"All of a sudden, the car experiences mechanical failure and is unable to stop. If the car continues, it will crash into a bunch of pedestrians crossing the street, but the car may swerve, hitting one bystander, killing them to save the pedestrians. What should the car do, and who should decide? What if instead the car could swerve into a wall, crashing and killing you, the passenger, in order to save those pedestrians?"
- Iyad Rahwan, TEDxCambridge, September 2016
Interest in artificial intelligence (AI) is growing in society, and this is bleeding into education, where there always seems to be a new tool, toy, or shiny thing that purports to bring AI into the classroom in an authentic way. I have played with AI in the online space, but at this point I don't think the technology has the depth or strength to be useful in the classroom. Please note that AI is different from AR (augmented reality), which is outside the scope of this article.
In my current professional role I spend a lot of time in the car driving to different schools around the state, which means I am constantly dropping in and out of different radio stations' coverage, so it is easier, and I prefer, to listen to podcasts. One of these is TED Talks Daily (subscribe here), and recently I listened to a talk that addressed AI and driverless cars.
Should your driverless car kill you if it means saving five pedestrians? In this primer on the social dilemmas of driverless cars, Iyad Rahwan explores how the technology will challenge our morality and explains his work collecting data from real people on the ethical trade-offs we're willing (and not willing) to make.
In this TED talk, Iyad Rahwan asks a difficult and challenging question, one that Albus Dumbledore would have loved: what is the greater good, saving yourself or allowing yourself to be killed to save five others? Iyad poses this question because the potential of driverless cars is significant, but, as this article from the ABC in March 2017 points out, there are other significant questions around safety, liability and, of course, morals.
To the main question, however. What do you think?
Two of Iyad's students, Edmond Awad and Sohan Dsouza, built a website to help gather some data about what society thinks of this question, and to help gather data for researchers to make recommendations to legislators about these issues. The website is called Moral Machine and gives you the opportunity to make judgments about who to save in hypothetical scenarios involving driverless cars.
How does this help in the classroom though? Ethical Understanding is one of the general capabilities outlined in the Australian Curriculum, yet ethics can be a challenging concept to teach. While ethics are externally driven through social norms and laws, those norms and laws are in turn often driven by morals, our internal sense of right and wrong. There is something of a circular loop here: our morals are often guided by social norms (ethics), which are themselves shaped by the collective moral position.
The Moral Machine website is an interesting provocation and can prompt some genuinely challenging conversations around morals and ethics, such as in the scenario below.
Morals and ethics are an important part of our development as well-rounded humans. Moral Machine presents some potential real-life scenarios where you need to make a judgement.
What would your choice be in the above situation? Leave a comment to let me know.