By the end of this day you should be able to: explain the key debates surrounding AI, and describe some ways in which machines and humans can work together.
Read this article, Can we teach morality to machines?, by Vyacheslav Polonski from the Oxford Internet Institute.
This article is an introduction to the debates about morality that have become important with our growing reliance on algorithms.
Watch this TED talk on how Machine intelligence makes human morals more important by techno-sociologist Zeynep Tufekci, and also this video on the human origins of machine prejudice by Dr. Joanna Bryson from the Alan Turing Institute.
Both Bryson and Tufekci raise topical questions about morality, and about how we should begin a wider conversation on our own biases.
The bias in both
Watch Kate Crawford’s talk on The Trouble with Bias from the 2017 NIPS conference with a friend or family member, or take notes so you can summarise the talk for them afterwards.
Afterwards, spend some time discussing which ‘parameters’ in our society need to be re-examined, and whether these questions also apply to your own life or work.
Read through ProPublica’s study into machine bias, exploring the ‘Documents’ and ‘Get the Data’ sections to see how they analysed the algorithms' results.
The first big story about algorithmic bias was this 2016 study by ProPublica, a finalist for the Pulitzer Prize, showing how Northpointe’s COMPAS recidivism algorithm was biased against black defendants when predicting reoffending rates in Broward County, Florida. The important question to ask here is how machine bias compares to human bias: would a human have read the data differently?
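The heart of ProPublica’s analysis was comparing error rates across groups: among defendants who did not go on to reoffend, what fraction in each group were nonetheless labelled high risk, and among those who did reoffend, what fraction were labelled low risk? Here is a minimal sketch of that kind of comparison, using entirely made-up records and hypothetical group labels (A and B), not the actual COMPAS data:

```python
# Hypothetical records: (group, predicted_high_risk, reoffended)
records = [
    ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("A", True,  False), ("A", False, True),  ("A", True,  True),
    ("B", False, False), ("B", True,  True),  ("B", False, False),
    ("B", False, True),  ("B", True,  True),  ("B", False, False),
]

def error_rates(records, group):
    """Return (false positive rate, false negative rate) for one group."""
    rows = [r for r in records if r[0] == group]
    non_reoffenders = [r for r in rows if not r[2]]
    reoffenders = [r for r in rows if r[2]]
    # False positive: labelled high risk but did not reoffend
    fpr = sum(r[1] for r in non_reoffenders) / len(non_reoffenders)
    # False negative: labelled low risk but did reoffend
    fnr = sum(not r[1] for r in reoffenders) / len(reoffenders)
    return fpr, fnr

for g in ("A", "B"):
    fpr, fnr = error_rates(records, g)
    print(f"group {g}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```

In this toy data the two groups have the same false negative rate but very different false positive rates, which is the shape of disparity ProPublica reported. Notice that a tool can be ‘accurate’ overall while distributing its mistakes unevenly, which is why looking only at total accuracy can hide bias.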
With these opinions in mind, read chapters 8 and 15 of Superintelligence.
Bostrom goes into more detail here about his vision for the distant future. How far do you think his fears are justified? Is it right to pursue developments that could have the consequences he mentions?
Read this Guardian article from June 2017 by Arwa Mahdawi and watch out for the caveat on ‘creativity’.
You may have read news articles like this one arguing that a more immediate consequence of AI is that it could take over white-collar jobs.
However, considerations of creativity open up the question of what AI isn’t good at. Given what you’ve learnt this week, what tasks do you think a computer cannot execute? Think about what you have done today that a computer couldn’t do.
Spot the difference
Watch this short film, Sunspring, directed by Oscar Sharp and written by an AI that named itself ‘Benjamin’ (it was initially called ‘Jetson’).
Can you tell the difference between human and computer poetry? Try it out by playing this Bot or Not? game.
Can computers really approach human creativity? Ask yourself whether knowing who (or what) authored the film affected the way you heard its words. While playing the game, think about what characteristics make language and poetry feel ‘human’.
Making effective time
What daily tasks do you do at work that require creativity? Write them down. Then calculate how long you spend on basic or manual tasks by comparison. If the time spent on these were cut in half, what could you focus on instead?
Most importantly, keep in mind how you can make sure you implement AI ethically, with proper consideration of data privacy and any potential algorithmic biases.
On issues of morality and ethics, you could take a closer look at self-driving cars as a starting point. Read this Wired article about the morality of such decision-making, then look at the Waymo or Tesla sites. You could even try out some decision scenarios on MIT’s Moral Machine platform.
On AI and creativity, this IBM article gives a good overview of where developments are heading, or listen to this episode of the data science podcast Partially Derivative with Chris Albon and Vidya Spandana. You could also have a look at these poems that passed the Turing Test. In some ways the output of such programmes hasn’t changed all that much over the decades: check out The Policeman’s Beard is Half-Constructed, the first book written by a computer programme, published in 1984.