AI in the UK: are we ‘ready, willing and able’?

By Laura Caccia

April’s report from the House of Lords Select Committee on Artificial Intelligence asks whether the UK is ‘ready, willing and able’ for AI. Our Government AI Readiness Index ranked the UK government as the best positioned in the OECD to take advantage of AI. On many measures, at least, we are ready. Whether we are willing and able is another question.

Based on discussions with a range of organisations and experts, the Committee makes a number of practical recommendations for AI in the UK:

  • A Government Office for AI that helps coordinate and grow AI in the UK;
  • An AI Council, to create a system to inform people when decisions are being made by AI;
  • A cross-sector AI code to ensure AI development and implementation remains ethical.

You can find the official conclusions and recommendations here. We have picked out some key areas from the report that do not appear in those conclusions and recommendations, but which we think are worth further thought.

 
[Image: Palace of Westminster, London]

1. Education: widespread education about AI is necessary, but we don’t know what to teach and to whom

As a society, our understanding of AI and the other technologies shaping our digital experience is patchy. Statements from US Senators quizzing Mark Zuckerberg in April, which revealed a lack of basic internet awareness, were a source of much public amusement. But the average internet user’s own ignorance can be just as dangerous. It is no longer safe to avoid questions of agency and intent when we use technology: Who has built this? What are they selling? How am I paying?

The Select Committee’s report does not reach a clear conclusion on how far the UK Government should push the AI education agenda. Aside from the recommendation that ‘people should be provided with relevant information’, the Committee does not take a clear stance on who should provide such information, and what counts as ‘relevant’. The Information Commissioner’s Office (ICO) suggested that it would be more helpful to focus on AI's consequences rather than internal workings, as ‘there is a need to be realistic about the public’s ability to understand in detail how the technology works.’

We agree that a focus on outcomes can make it easier for people to engage in a debate about AI in the short term. But understanding the basics of how a learning algorithm works is a vital skill for anyone engaging with a technology industry that currently capitalises on our ignorance. Without an understanding of how AI systems work, and how companies use them, how can we possibly know what we are signing up for when we click ‘agree’ on a set of incomprehensible terms and conditions?

2. Accountability: we need legal clarity on responsibility for AI, both to encourage innovation and to protect internet users

In the report, Professor Sir David Spiegelhalter, President of the Royal Statistical Society, is quoted as saying that ultimate responsibility for maintaining clarity about how AI systems work lies with the individual researchers and practitioners who build them. He asks why they are not ‘working with the media and ensuring that the right sorts of stories appear.’ Yet as a society we are shifting towards blaming the companies that buy AI systems from researchers. The US Senate grilled Facebook precisely because of public outrage that the company had not taken more responsibility for educating its users about the meaning of its privacy policy. It is still unclear where along the production chain responsibility lies.

The report’s discussion of the difficult legal issues surrounding AI technology is particularly interesting. It begins with the general point that the process for assigning accountability to an AI system is a significant gap in our current legal framework. It then offers a new take on one of the most common arguments against regulating technology: that regulation inhibits innovation. The report argues that ‘AI is different,’ since ‘without appropriately complex regulation that assigns responsibility, companies may not want to use AI tools.’ It is clear that we need to make AI safe to develop as well as safe to use.

3. Investment: media sensationalism is making sensible investment in AI harder

A lack of clarity on AI’s problems and potential is not just an issue for the everyday technology user. The report notes the impact of Hollywood ‘sensationalism’, which has polarised attitudes towards AI investment between over-enthusiasm and fearful reluctance. Sarah O’Connor, employment correspondent for the Financial Times, notes that putting ‘robots’ or ‘artificial intelligence’ in a headline ensures that at least ‘twice as many people click on it.’ She suggests that some journalists sensationalise the subject in order to drive web traffic and advertising revenues.

On the one hand, this increase in AI enthusiasm has led some research scientists to inflate the potential of AI to ‘attract prestigious research grants.’ Professor Kathleen Richardson and Nika Mahnič noted that ‘huge EU funded projects are now promoting unfounded mythologies about the capabilities of AI.’ On the other hand, some AI researchers spoke of fears that developments and investment in AI might be ‘threatened with the kind of public hostility directed towards genetically modified (GM) crops in the 1990s and 2000s’ (Raymond Williams Foundation).

Again, this is an issue of responsibility. Companies of all shapes and sizes have not yet had to explain themselves properly, either to their investors or to the public. Greater clarity on the real capabilities of AI technologies is therefore needed to drive sensible investment in the UK.


Roy Amara, President of Palo Alto’s Institute for the Future, famously coined ‘Amara’s law’: we ‘tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.’ Hopefully, with appropriate discussion and a pursuit of clarity, we can strike the right balance and be truly ready, willing and able for artificial intelligence.

Laura Caccia