In the midst of a rapidly evolving regulatory landscape, Britain is hesitating. Although it is one of the world's most influential countries in artificial intelligence (AI), it has not yet chosen to legislate for the sector, and it remains watchful despite election promises and recent activity on the subject.
The UK's stated aim is to keep developing strategic AI plans that set guidelines for future direction and facilitate coordinated regulatory efforts. Because the country's AI landscape has unique characteristics, regulation - at least so far - is intended to be sector-specific.
Legislation, however, is only one tool for ensuring safety and fairness in AI development without hindering innovation. For now, the country has opted to formulate general guidelines instead.
First regulatory steps
The UK has the world's third largest AI sector, surpassed only by the United States and China. The market is currently worth over $21 billion and is expected to grow to $1 trillion by 2035. The country ranks fourth in the Global AI Index, which assesses levels of AI investment, innovation and implementation; the top two places go, predictably, to the United States and China, with Singapore in third.
In 2018, the House of Lords Select Committee on Artificial Intelligence recommended boosting AI funding, attracting talent and bringing AI closer to citizens. Significant growth in AI investment and broad sectoral development followed shortly afterwards, though concerns were also raised about the challenges the technology poses for education and ethical development.
In 2023, a white paper on AI regulation codified this approach, which aims for innovation anchored in a set of principles. The document states that AI must be safe and robust (in terms of algorithmic accuracy) as well as transparent and explainable. The other principles are fairness, accountability and governance, and clear routes for contesting AI-generated results or decisions.
Existing laws, such as equality and data protection legislation, also apply to the AI sector. Specific measures are set out by the Department for Science, Innovation and Technology, the Digital Regulation Cooperation Forum and the Information Commissioner's Office, but these are cross-sector bodies. The UK approach holds that each sector should have its own policies, set by its own sectoral regulator.
The aim is to draw on the sectoral bodies' expertise to further define the framework within which AI operates. This produces more practical measures, but it also demands careful coordination; otherwise, regulatory approaches risk diverging.
The white paper also outlines future risk-mitigation measures, such as government guidance for regulators, centralized oversight, ongoing assessment of the legal framework and the principles, and sandbox environments for testing AI in various sectors.
By comparison, the European Union takes a horizontal, cross-sector approach to regulation, which contrasts with the UK's vertical model. The EU's AI Act imposes legal obligations at every stage of an AI system's lifecycle, including algorithm training, testing and evaluation, risk management and post-market surveillance.
There is another important difference: the European Union has set penalties for non-compliance that can reach €35 million or 7% of a company's global annual turnover. The UK framework provides for no such penalties.
Direction of the new British government
Progress on AI regulation was expected after the July 4th general election. The Labour Party, which won the election, had signalled in its manifesto an intention to look more closely at the issue.
It is noteworthy that the previous government, led by Rishi Sunak, hosted the AI Safety Summit in November 2023. The conference took place at Bletchley Park, the site where German codes were broken during the Second World War, and it was full of good intentions: representatives from OpenAI, Google and Anthropic promised to give a UK government working group early access to their models for evaluation purposes.
However, the summit produced no regulatory conclusions. Rishi Sunak's government feared that excessive regulation could stifle AI's development, while also recognizing that a lack of rules risked delaying action when it was needed. It was widely assumed that Keir Starmer's new cabinet would resolve this dilemma by speeding up legislation, but that does not appear to be the case. The situation remains uncertain, except for one clear preference: to wait and see.