
1 July 2018

AI – 6 of 6 Insights

UK and EU developments in AI in 2018

Countries including the US and China have traditionally led the way in developing strategies to foster and develop AI technologies, but 2018 has seen the UK and EU taking meaningful steps to position themselves at the forefront of the AI market.


For some time now, artificial intelligence has been touted by academics and business leaders as a new revolution in industry and society, one that will make our wildest science fiction fantasies a reality. We predicted back in December 2017 that 2018 would see an acceleration in AI proliferation. For the moment AI remains relatively 'narrow', with most applications appearing in areas such as internet search, image recognition, translation and algorithmic trading. We are not yet seeing AI completely replace human intelligence in areas considered inherently human, but we are seeing a shift towards applications, such as chatbots, that come closer to replicating human behaviour.

Investment in and development of AI in the UK have been substantial. Industry leaders such as DeepMind and SwiftKey are headquartered in London, and UK higher education institutions enjoy a strong reputation as world leaders in AI research. To date, the sector has flourished largely without government intervention and, as the Lords Select Committee on AI identified, has resulted in a "flexible and innovative grassroots start-up culture". Questions remain, however, about the scaling up of AI, its applicability to wider business and daily life, the regulatory framework around AI, and public distrust of AI and data sharing.

The UK government and AI

In November 2017, the UK government published its Industrial Strategy, setting out its long-term strategy for British industry. The strategy identifies putting the UK at the forefront of the AI and data revolution as one of its four Grand Challenges, the others being maximising the advantages for UK industry from the global shift to clean growth, harnessing innovation to support an ageing society, and becoming a world leader in the way people, goods and services move. In April this year, as part of the Industrial Strategy, the AI Sector Deal was announced, setting out government policy objectives for AI. The deal covers: developing an AI-friendly regulatory and business environment; investment objectives; skills and talent acquisition; and developing the UK's digital infrastructure.

While many of the policies set out in the strategy, such as changes to visa rules, are not specific to AI, there remains a strong emphasis on using these initiatives to encourage the growth and development of AI technologies in Britain.

The AI Sector Deal promises substantial funds for firms looking to invest in AI R&D and implementation. A £2.5bn investment fund at the British Business Bank was announced in the Autumn Budget 2017, to be invested in "innovative and high potential businesses". The Lords Select Committee on AI recommended that part of this fund be set aside for SMEs with a substantive AI component, to enable scaling-up. The UK government has also promised to establish a £20m GovTech fund to encourage public sector adoption of AI technology.

As well as committing its own financial resources to AI, the UK government is seeking to almost double R&D investment in the UK by the late 2020s. It has also increased the R&D expenditure credit from 11% to 12% with effect from January this year, meaning that businesses can now recoup more of the costs of their R&D spending.
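By way of a simple illustration (the figure is hypothetical, and the calculation ignores the fact that the credit is itself subject to corporation tax): a company with £1m of qualifying R&D expenditure could previously claim a credit of £1m × 11% = £110,000, whereas at the new rate the credit is £1m × 12% = £120,000, an additional £10,000.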

The AI Sector Deal also outlines changes to visa rules, aiming to attract more global talent to the UK. These changes include doubling the number of Tier 1 visas (to 2,000) and making it easier for highly skilled students to work in the UK after graduation. The visa rule changes are not AI-specific although, given the highly specialised nature of AI, they are likely to favour the immigration of AI specialists over other, broader specialisms.

If a self-driving car struck and killed someone, who would be to blame? Questions of this kind about the liability and legal position of AI have hung over the AI industry, as well as governments and law firms, for some time. In the AI Sector Deal we see a clear move by the government, spearheaded by the Department for Digital, Culture, Media and Sport (DCMS) and the Department for Business, Energy and Industrial Strategy (BEIS), to set up an Office for Artificial Intelligence to oversee AI policy, as well as an AI Council and a Centre for Data Ethics and Innovation (a consultation on the latter has just been published) to advise on regulation, ethics and AI as a whole.

EU AI strategy

The UK has not been alone in taking bold new steps towards AI development. On 10 April this year, 25 European states, including the UK, France and Germany, signed the Declaration of Cooperation on Artificial Intelligence. The agreement emphasises establishing strategies and policies to ensure Europe's long-term competitiveness in the research and deployment of AI, as well as dealing with the social, economic, ethical and legal challenges AI raises.

The European Commission has pledged to increase its own investment in AI to €1.5bn over the period 2018 to 2020, to support the development of AI in key sectors such as health and transport. The European Fund for Strategic Investments also plans to make €500m available by 2020 to support companies and start-ups investing in AI.

Addressing concerns similar to those raised in the UK regarding AI regulation, the European Commission will present ethical guidelines on the development of AI by the end of 2018. There have already been calls from the European Parliament to grant "electronic personality" to AI devices such as self-learning robots, allowing such electronic persons to hold rights and duties. This would mean that they could be insured individually and treated in much the same way as corporations, in the sense that they would be able to take part in legal proceedings as both claimant and defendant. However, when the Commission outlined its strategy for AI in April 2018, the idea went unmentioned, which may suggest that the Commission is not currently considering assigning legal personhood to AI, although individual Member States may still do so.

As businesses increasingly recognise the potential for AI to be used in decision-making scenarios, such as profiling individuals, it is important to note the impact of provisions such as Article 22 of the General Data Protection Regulation. Article 22 grants individuals the right not to be subject to a decision based solely on automated processing, but only where that decision produces legal effects concerning them or similarly significantly affects them. This indicates a growing awareness at the legislative level of the balance to be struck between encouraging businesses to innovate using AI and preventing them from abandoning all human responsibility for key decisions, such as employment decisions.

The future

Both the AI Sector Deal and the Industrial Strategy White Paper emphasise the applications of AI in healthcare, especially in the care of the elderly, and we expect to see greater public sector investment in health-related AI technology and, consequently, greater adoption of AI by public health services.

Given the government's expressed objective of fostering AI development in the UK, it is likely that UK regulation will be relatively innovation-friendly. However, there have also been repeated recommendations to build public trust around AI and ethics in data usage. Given recent public scandals surrounding Facebook and Google over personal data collection, gaining access to sensitive data, particularly medical data (see our article on AI in the healthcare sector), may prove problematic in the near future for firms seeking to use that data to develop AI technologies (see our article on facial recognition data and the GDPR). What we are seeing, in both the UK and the EU, is a move towards greater clarity in data sharing rules, as well as clarification of the risks and legal liability issues in artificial intelligence.

The AI strategies of both the UK and the EU take a long-term approach, building key workforce skills from an early age and setting long-term goals for investment and development. We expect to see a far greater role for government in the AI market in the years to come. 2018 has already proven to be a year of rapid development in AI regulation and strategy, and it remains to be seen how effective the steps being taken by the UK government and the European Commission will be in encouraging AI adoption, development and roll-out to the wider public.

If you have any questions on this article, please contact us.
