The UK Government's AI review: what's missing?
This weekend the UK Government published its independent review into how to make sure the UK remains at the forefront of artificial intelligence (AI) developments. At Oxford Insights we’ve been looking forward to reading the results of this review. AI is an enabling technology with enormous potential, and the UK is one of the world’s leading countries in AI research and application. It has been at the forefront of AI development from Ada Lovelace’s early work on code and Alan Turing’s founding contributions to the field through to today’s advanced neural networks and autonomous vehicles. In the last year there have been strategic statements of intent (and substantial investment) by Russia, China, Canada and Singapore. The UK needs to act now to make sure it capitalises on its strong research position.
The report has important recommendations about:
- ensuring better access to underlying data - including through new Data Trusts;
- improving the supply of skills - including from abroad;
- investing in AI research; and
- supporting industry uptake - including adoption of AI technologies by the UK government.
These recommendations target a number of the right areas and we look forward to the details contained in the government response. The UK AI Council, for example, could be a great new development if it gets real teeth, such as being able to influence the development of policy, shape the support offered by the Department for Business, Energy and Industrial Strategy (BEIS) and the UK Department for International Trade (DIT), or help decide where the government invests in AI research or technology applications. Otherwise, there’s a risk that the energy that such a body can generate will atrophy over time.
There are three areas where the report is weaker. We are keen to know how the UK Government will address:
- the ethical challenges thrown up, for example, by the Stanford experiment to see whether AI can detect sexuality;
- regulatory oversight of AI and AI-powered organisations; and
- the impact of AI technologies on the UK workforce.
The review team had a tight scope: make recommendations on how to grow the UK economy and create jobs. Ethics was explicitly out of scope. But ethics, regulation and jobs are dominating the public discourse, both in the UK and around the world. Unless the UK Government addresses them explicitly, these concerns will continue to undermine the positive potential of artificial intelligence.
While we will explore these topics in more detail in later blogs, I hoped to see recommendations in the following areas:
An ethical framework
To be considered legitimate and, in turn, to be useful, AI needs to be trusted by society. It can bring huge benefits, but it can also be used in ways that society would not agree with (and that might achieve counter-intuitive or counter-productive results). Government should work with the UK AI Council to create a clear set of ethical principles governing the use of AI, for example to prevent the use of AI to discriminate against particular segments of society.
I believe the first draft of these principles should be published in the next six months, because creating an ethical framework quickly is critical. Already, many are concerned that AI technology is ahead of our thinking about its ethics. This has led, for example, to DeepMind creating its own ethics research group.
Regulatory oversight
AI can increase power imbalances in society - it can help those with power exercise it more effectively. It can also lead to new monopoly situations, with data or capability concentrated in a few key platforms.
There are existing bodies and frameworks for dealing with these issues, but too often their powers are framed by the business structures and techniques of the past. Government should work to make sure that existing powers and bodies have the legal framework and skills they need to cope with data and AI. For example, access to data should be one of the aspects that the Competition and Markets Authority can consider. The government should publicly report on progress every 12 months so that it can respond quickly as AI technology develops.
Jobs
AI will replace some work previously done by ‘white collar’ workers. The experience of Germany in the 1990s, as heavy industry was mechanised, shows that this transition can be positive, with people able to find new jobs in other sectors of the economy. Government should put plans in place to manage that transition as soon as possible. This should include supporting workers in picking up skills and jobs that are complementary to work done by AI, e.g. supporting people into roles that require human empathy or connection and helping people find the most meaningful parts of their work. Progress should be reported as part of BEIS’s ongoing work on industrial strategy.
The UK has a strategic opportunity to be the best place to start an AI business - not because it has the least regulation, but because it has the best regulation. The UK should have a clear concept of AI ethics, a high level of trust, and legal certainty. By creating these structures, the UK can stay at the cutting edge.
Oxford Insights looks forward to reading the details of the Government’s response and contributing to this debate in the UK.