UK to avoid fixed rules for AI – in favor of ‘context-specific guidance’

The UK isn’t going to be setting hard rules for AI any time soon.

Today, the Department for Science, Innovation and Technology (DSIT) published a white paper setting out the government’s preference for a light-touch approach to regulating artificial intelligence. It’s kicking off a public consultation process — seeking feedback on its plans up to June 21 — but appears set on paving a smooth road of ‘flexible principles’ that AI can speed through.

Worries about the risks of increasingly powerful AI technologies are very much treated as a secondary consideration, relegated far behind a political agenda to talk up the vast potential of high-tech growth — and thus, if problems arise, the government is suggesting the UK’s existing (overstretched) regulators will have to deal with them, on a case-by-case basis, armed only with existing powers (and resources). So, er, lol!

The 91-page white paper, which is entitled “A pro-innovation approach to AI regulation”, talks about taking “a common-sense, outcomes-oriented approach” to regulating automation — by applying what the government frames as a “proportionate and pro-innovation regulatory framework”.

In a press release accompanying the white paper’s publication — with a clear eye on generating newspaper headlines that frame a narrative of ministers seeking to “turbocharge growth” — the government confirms there will be no dedicated watchdog for artificial intelligence, merely a set of “principles” for existing regulators to work with; so no new legislation, rather a claim of “adaptable” (but not legally binding) regulation.

DSIT says legislation “could” be introduced — at some unspecified future point, and when parliamentary time allows — “to ensure regulators consider the principles consistently”. So, yep, that’s the sound of a can being kicked down the road. But expect to see guidance emerging from a number of existing UK regulators over the next 12 months — along with some tools and “risk assessment templates” which AI makers may be encouraged to play around with (if they like).

There will also be the inevitable sandbox (funded with £2M from the public purse) — or at least a “sandbox trial to help businesses test AI rules before getting to market”, per DSIT. But evidently there won’t be a hard legal requirement to actually use it.

The government says its approach to AI will focus on “regulating the use, not the technology” — ergo, there won’t be any rules or risk levels assigned to entire sectors or technologies. Which is quite the contrast with the European Union’s direction of travel with its risk-based framework, which includes some up-front prohibitions on certain uses of AI, with defined regimes for use-cases specified as high risk and self-regulation for lower-risk uses.

“Instead, we will regulate based on the outcomes AI is likely to generate in particular applications,” the government stipulates, arguing — for example, and somewhat boldly in its choice of example here — that classifying all applications of AI in critical infrastructure as high risk “would not be proportionate or effective” because there might be some uses of AI in critical infrastructure that can be “relatively low risk”.

Because ministers have opted for what the white paper calls “context-specificity”, they have decided against setting up a dedicated regulator for AI — hence responsibility falls to existing bodies with expertise across various sectors.

“To best achieve this context-specificity we will empower existing UK regulators to apply the cross-cutting principles,” it writes on this. “Regulators are best placed to conduct detailed risk analysis and enforcement activities within their areas of expertise. Creating a new AI-specific, cross-sector regulator would introduce complexity and confusion, undermining and likely conflicting with the work of our existing expert regulators.”

Under the plan, existing regulators will be expected to apply a set of five principles — setting out “key elements of responsible AI design, development and use” — that the government hopes will guide businesses as they develop artificial intelligence.

“Regulators will lead the implementation of the framework, for example by issuing guidance on best practice for adherence to these principles,” it suggests, adding that they will be expected to apply the principles “proportionately” to address the risks posed by AI “within their remits, in accordance with existing laws and regulations” — arguing this will enable the principles to “complement existing regulation, increase clarity, and reduce friction for businesses operating across regulatory remits”.

It says it expects relevant regulators to need to issue “practical guidance” on the principles or update existing guidance — in order to “provide clarity to business” in what may otherwise be a vacuum of ongoing legal uncertainty. It also suggests regulators may need to publish joint guidance focused on AI use cases that cross multiple regulatory remits. So more work and more joint working is coming down the pipe for UK oversight bodies.

“Regulators may also use alternative measures and introduce other tools or resources, in addition to issuing guidance, within their existing remits and powers to implement the principles,” it goes on, adding that it will “monitor the overall effectiveness of the principles and the wider impact of the framework” — stipulating that: “This will include working with regulators to understand how the principles are being applied and whether the framework is adequately supporting innovation.”

So it’s seemingly leaving the door open to rowing back on certain principles if they’re considered too arduous by business.

‘Flexible principles’

“We recognise that particular AI technologies, foundation models for example, can be applied in many different ways and this means the risks can vary hugely. For example, using a chatbot to produce a summary of a long article presents very different risks to using the same technology to provide medical advice. We understand the need to monitor these developments in partnership with innovators while also avoiding placing unnecessary regulatory burdens on those deploying AI,” writes Michelle Donelan, the secretary of state for science, innovation and technology, in the white paper’s executive summary, where the government sets out its “pro-innovation” stall.

“To ensure our regulatory framework is effective, we will leverage the expertise of our world class regulators. They understand the risks in their sectors and are best placed to take a proportionate approach to regulating AI. This will mean supporting innovation and working closely with business, but also stepping in to address risks when necessary. By underpinning the framework with a set of principles, we will drive consistency across regulators while also providing them with the flexibility needed.”

The existing regulatory bodies the government is intending to saddle with more tasks — drafting “tailored, context-specific approaches” which AI model makers can treat as purely advisory (i.e. ignore) — include the Health and Safety Executive; the Equality and Human Rights Commission; and the Competition and Markets Authority (CMA), per DSIT.

The PR doesn’t mention the Information Commissioner’s Office (ICO), aka the data protection regulator, but it gets several references in the white paper and looks set to be another body press-ganged into producing AI guidance (usefully enough, the ICO has already offered some thoughts on AI snake oil).

One quick aside here: The CMA is still waiting for the government to empower a dedicated Digital Markets Unit (DMU) that was supposed to be reining in the market power of Big Tech, i.e. by passing the necessary legislation. But, last year, ministers opted to kick that can into the long grass — so the DMU has still not been put on a statutory footing almost two years after it soft-launched in expectation of parliamentary time being found to empower it… So it’s becoming abundantly clear this government is a lot more fond of drafting press releases than smart digital regulation.

The upshot is the UK has been left trailing the whole of the EU on the salient area of digital competition (the bloc has the Digital Markets Act coming into application in a few months) — while Germany updated its national competition regime with an ex ante digital regime at the start of 2021 and already has a number of pro-competition enforcement actions under its belt.

Now — by design — UK ministers intend the country to trail peers on AI regulation, too; framing this as a choice to “avoid heavy-handed legislation which could stifle innovation”, as DSIT puts it, in favor of a mass of sectoral regulatory guidance that businesses can choose whether to follow — literally in the same breath as penning the line that: “Currently, organisations can be held back from using AI to its full potential because a patchwork of legal regimes causes confusion and financial and administrative burdens for businesses trying to comply with rules.” So, um… legal certainty good or bad — which is it?!

In short this looks like a very British (post-Brexit) mess.

Across the English Channel, meanwhile, EU lawmakers are in the latter stages of negotiations over setting a risk-based framework for regulating AI — a draft law the European Commission presented way back in 2021; now with MEPs pushing for amendments to ensure the final text covers general purpose AIs like OpenAI’s ChatGPT. The EU also has a proposal for updating the bloc’s liability rules for software and AI on the table too.

In the face of the EU’s carefully structured risk-based framework, UK lawmakers are left trumpeting voluntary risk assessment templates and a toy sandbox — and calling this ‘DIY’ approach to generating trustworthy AI a ‘Brexit bonus’. Ouch.

The five principles the government wants to guide the use of AI — or, specifically, that existing regulators “should consider to best facilitate the safe and innovative use of AI in the industries they monitor” — are:

  • safety, security and robustness: “Applications of AI should function in a secure, safe and robust way where risks are carefully managed”
  • transparency and explainability: “Organisations developing and deploying AI should be able to communicate when and how it is used and explain a system’s decision-making process in an appropriate level of detail that matches the risks posed by the use of AI”
  • fairness: “AI should be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes”
  • accountability and governance: “Measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes”
  • contestability and redress: “People need to have clear routes to dispute harmful outcomes or decisions generated by AI”

All of which sound like fine words indeed. But without a legal framework to turn “principles” into hard rules — and ensure consistent application and enforcement against entities that choose not to bother with any of that expensive safety stuff — it looks about as useful as whistling the Lord’s Prayer and hoping for the best if it’s trustworthy AI you’re looking for…

(Oh yes — and don’t forget the UK government is also in the process of watering down the aforementioned UK GDPR — after it recently invited businesses to “co-design” a new data protection framework. Which led to a revised reform emerging that aims to make it easier for commercial entities to process people’s data for use-cases like research, and which risks eroding the independence of the privacy watchdog by adding a politically appointed board, in order to (and I quote Donelan here) ensure “we are the most innovative economy in the world and that we cement ourselves as a Science and Technology Superpower”.)

The clear trend in the UK is of existing protections being rowed back as the government seeks to roll out the red carpet for AI-fuelled “innovation”, without a thought for what that might mean for rather essential stuff like safety or fairness — and therefore trustworthiness, assuming you want people to have a sliver of trust in the AIs being pumped out. Ministers are essentially saying: ‘Don’t worry, just lie back and think of GB’s GDP!’

Of course any developers building AI models in the UK and wanting to scale beyond those shores will have to consider regulations that apply outside the UK. So the freedom to be so lightly regulated may, ultimately, come with a hard requirement to comply with foreign frameworks anyway — or else be tightly limited in geographical scope. (And, well, tech innovators do love to scale.)

Still, DSIT’s PR has a canned quote from Lila Ibrahim, COO (and UK AI Council Member) at Google-owned DeepMind — an AI giant that has been lagging behind rivals like OpenAI on the buzzy artificial intelligence tech of the moment (generative AI) — who lauds the government’s proposed “context-driven approach”, rubberstamping the direction of travel with the claim that it will “help regulation keep pace with the development of AI, support innovation and mitigate future risks”.

“AI has the potential to advance science and benefit humanity in numerous ways, from combating climate change to better understanding and treating diseases. This transformative technology can only reach its full potential if it is trusted, which requires public and private partnership in the spirit of pioneering responsibly,” Ibrahim also suggests.

The government is clearly hoping its offer of ‘no rules except the ones you choose’ will encourage AI startups to pick the UK over other locations — where automation is being more tightly regulated. (Or as Donelan pens it: “Our pro-innovation approach will also act as a strong incentive when it comes to AI businesses based overseas establishing a presence in the UK.”)

It’s quite the gamble by the Conservative government — given the highly scalable potential for too lightly regulated AI to go horribly wrong. And headlines about ‘AI-powered critical infrastructure that failed’ won’t wait for government press releases; they’ll write themselves. But Rishi Sunak’s Tories are apparently going all in on this one.

In a press statement, Donelan goes on to offer this interesting construction — to explain what the government is doing:

AI has the potential to make Britain a smarter, healthier and happier place to live and work. Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely.

Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow.

It’s quite the collection of sentences — combining an assertion of the need to have “rules to make sure [AI] is developed safely”, with a sleight-of-hand segue to the offer of (just) “strong principles so that people can trust businesses to unleash this technology of tomorrow”. So not rules then. (And, I mean, ‘trusting businesses to unleash technology’ is not at all the same thing as trusting businesses to unleash trusted technology. In fact you could say they’re poles apart. So the choice of phrase there is quite the tell.)

But back to rules vs principles. We asked ChatGPT what the difference is between these two terms. And here’s what it told us:

Rules and principles are both guidelines that govern behavior, but they differ in their nature and application.

Rules are specific instructions or directives that dictate what one should or should not do. They are usually created by authorities or institutions and enforced through consequences such as punishments or rewards. Rules are often rigid and inflexible and do not take into account situational differences or individual circumstances. For example, “Do not exceed the speed limit on the highway” is a rule that applies to all drivers regardless of the context.

Principles, on the other hand, are broad concepts or values that guide behavior and decision-making. They are more flexible and adaptable than rules and provide a framework for making ethical or moral judgments. Principles are usually not enforced through punishments or rewards but are instead internalized and followed voluntarily. For example, the principle of honesty is a value that guides behavior in a variety of situations, such as being truthful in communication, respecting others’ property, and fulfilling obligations.

In summary, rules are specific and inflexible instructions while principles are broad and flexible values that guide behavior. Rules are enforced through external means while principles are internalized and followed voluntarily.

So, assuming this large language model is not simply hallucinating again, and the nuance it’s identifying is correct, Donelan is simultaneously recognizing that fixed rules are required for AI to be safe and confirming the government has decided against setting any right now. The verbal downgrade is to purely voluntary principles. Or, basically, it’s going to let businesses make up their own minds and do what they must in order to grow as fast as possible for the foreseeable future (or at least until after the next election). What could possibly go wrong!?

It’s clear the government’s growth-at-all-costs agenda has eaten a full-course meal of AI hype. Pity the poor Brits set to become guinea pigs in the name of unleashing mindless automation atop a rudderless bark christened “innovation”.

Citizens of the UK will want to strap themselves in for this ride. Because if something does go wrong, they’ll be forced to wait for the government to make parliamentary time available to actually pass some safety rules. Which may be a lot of breath to hold.
