New UK AI regulations ‘light touch’ amid ChatGPT hype


Last week, the government published new UK AI regulations. The white paper outlines the government’s strategy, falling short of delivering new legislation that the government fears could stifle growth. Rather than imposing “heavy-handed legislation”, the government calls for “responsible use” while urging tech companies to create AI rules. Meanwhile, Elon Musk and other tech billionaires published an open letter urging a pause in the AI rollout to protect society from potential risks.

On Wednesday, the Department for Science, Innovation and Technology (DSIT) published a white paper on regulating general-purpose AI. Setting out guidelines for “responsible use,” the paper puts forward five principles: explainability, robustness, safety, security, and transparency. Rather than tasking a single new regulator with AI governance, the government wants existing bodies such as the Competition and Markets Authority, the Equality and Human Rights Commission, and the Health and Safety Executive to formulate “tailored, context-specific approaches that suit the way AI is actually being used in their sectors.” Introducing strict legislation, the government fears, could “stifle innovation.”

“Over the next twelve months, regulators will issue practical guidance to organisations, as well as other tools and resources like risk assessment templates, to set out how to implement these principles in their sectors,” the government said.

“When parliamentary time allows, legislation could be introduced to ensure regulators consider the principles consistently.”

Government figures reveal that in 2022, AI contributed £3.7 billion to the UK economy, with 50,000 people working in the sector. In last month’s budget, Chancellor Jeremy Hunt launched a new £1 million AI award for the company completing the “most groundbreaking British AI research.”

The government has already made an AI sandbox available to innovators to help them develop cutting-edge technology. It now wants businesses using AI technology to provide transparency on algorithm development and use. The DSIT said companies and organisations “should be able to communicate when and how it is used and explain a system’s decision-making process in an appropriate level of detail that matches the risks posed by the use of AI.”

The government’s ‘light touch’ approach has raised eyebrows among AI ethicists and critics. Giulia Gentile, an AI and digital society researcher and fellow at the LSE Law School, tweeted:

“Differently from other technologies, AI exacerbates vulnerabilities. At the same time, it is in [the] hands of a few companies shaping the ways in which this technology works.

“As a result, the absence of an AI framework has the likely potential to create a more unequal and unjust society, enhancing the market and power asymmetries between those who dominate digital tools and those who are impacted by AI technology.

“This regulatory approach departs from the models emerging in the US and in the EU, and the UK may have a hard time selling its AI products to these international partners.

“This approach is underwhelming considering the latest developments of AI, such as #ChatGPT and the #metaverse. The impression is the UK Government is allowing innovation to triumph as a value in itself without considering the broader disruptive implications for the society.”

The new UK AI regulations come as countries across the globe create regulatory frameworks for AI use. The European Union has proposed AI regulations, while China requires companies to reveal algorithm details.

Meanwhile, tech giants are locked in a race, with Microsoft-backed ChatGPT making waves and Google hurrying the launch of its chatbot Bard.

Musk et al. urge pause on training powerful AI

In an open letter, more than 2,800 signatories, including Elon Musk, Apple’s Steve Wozniak, and Skype co-founder Jaan Tallinn, call on all AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.

“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”

The open letter adds thousands of voices to a chorus of legal and ethical concerns over sophisticated AI tools, sounding alarm bells over disinformation, cybercrime, political manipulation, and phishing attempts.

The new UK AI regulations represent only the starting point in what will likely be a long and arduous, if not impossible, legislative process.
