The Artificial Intelligence (Regulation) Bill that I introduced was making pleasing progress. I had managed to get the Bill through all House of Lords stages and it was teed up for its passage through the Commons, when, somewhat unexpectedly, on a rainy, otherwise unremarkable Wednesday last month, a General Election was called.
A quick refresher on what I am trying to achieve with the Bill:
Setting up an AI Authority
Clause 1 sets up an artificial intelligence (AI) authority – an agile, right-sized regulator, horizontally focused to look across all existing regulators, not least the economic regulators: to assess their competency to address the opportunities and challenges presented by AI, and to highlight current regulatory gaps. And there are gaps – a substantial number are identified in the excellent Ada Lovelace Institute report on the matter.
Putting the existing principles on a statutory basis
The AI authority must have at its heart the principles set out in Clause 2 of the Bill – it must be not only the custodian of those principles but also a lighthouse for them, and it must have an educational function and a pro-innovation purpose. Many of those principles will be very recognisable; they are taken from the government’s white paper but put on a statutory footing.
AI sandboxes
Clause 3 concerns sandboxes, so brilliantly developed in the UK in 2016 with the fintech regulatory sandbox. If you want a measure of its success, it has been replicated in well over 50 jurisdictions around the world. It enables innovation in a safe, regulated, supported environment – real customers, real markets, real innovations, all within the protection of the sandbox.
Last month, the Hong Kong Monetary Authority announced that it is setting up an AI sandbox to explore generative AI applications that could be developed for financial institutions. It’s a great move. We pioneered the concept; we should be getting on with it.
AI responsible officer
Clause 4 sets up the AI responsible officer, conceived of not as a person but as a role, to ensure the safe, ethical, and unbiased deployment of AI in her or his organisation. It does not have to be burdensome, nor a full-time post in a startup; but that function needs to be performed, with reporting requirements under the Companies Act that are well understood by any business. Again, crucially, it must be subject to that proportionality principle.
Urgent action needed to address copyright and IP abuses
Clause 5 concerns labelling and intellectual property (IP), which is such a critical part of how we will get this right with AI. Labelling means that if anybody is subject to a service or a good where AI is in the mix, it will be clearly labelled – and AI can be part of the solution to delivering that labelling. Where IP or third-party data is used, that has to be reported to the AI authority. Again, this can be done efficiently and effectively using the technology itself.
What’s in it for me? We need public buy-in
Clause 6 concerns public engagement. For me, this is probably the most important clause in the Bill, because without public engagement, how can we have trustworthiness? People need to be able to ask, “What is in this for me? Why should I care? How is this impacting my life? How can I get involved?” We need to look at innovative ways to consult and engage. A good example, in Taiwan, is the Alignment Assemblies, but there are hundreds of novel approaches.
Definitions, enforcement and jurisdiction
Clause 7 concerns interpretation, where I deliberately drew the definitions broadly so as to assist the debate. Clause 8 sets out the potential for regulations on offences and fines, to ensure that the measures in the Bill have teeth. Clause 9, the final clause, ensures that the provisions of the Bill are UK-wide.
The Bill has received positive support from both Labour and the Liberal Democrats at every stage. I believe the one thing we have learned about regulating tech so far is that we need to lead rather than follow.
At the second reading stage, the Liberal Democrat spokesman Lord Clement-Jones said: “There is clearly a fair wind around the House for the Bill, and I very much hope it progresses and we see the government adopt it.”
At the Bill’s third reading, for Labour, Lord Leong also offered support saying: “I think it is clear that there is consensus about the need for some kind of AI regulation. The Bill sends an important message about the government’s responsibility to acknowledge and address how AI affects people’s jobs, lives, data and privacy, in the rapidly changing technological environment in which we live.
“We support and welcome the principles behind the Bill, and we wish it well as it goes to the other place.”
And so it was set on its way to the Commons, in good shape, with no amendments.
The announcement of the general election has, however, stopped it completely in its legislative tracks. To take the positives, over the period since I introduced the Bill on 22 November last year, we have seen significant movement, more parliamentary interest in AI regulation, an increasingly clear picture on Labour’s approach and, dare I say, movement within Number 10.
As I have argued repeatedly, when it comes to AI, it is time to legislate, it is time to lead.
The State Opening of Parliament is set for 17 July. On the day of the King’s Speech, I will bring the AI (Regulation) Bill back. Two days later, there will be a ballot to determine the order in which private members’ bills will be taken. Fingers crossed my AI Bill is lucky again and makes it into the top 25 and therefore onto the legislative programme.
We need to positively regulate AI in the UK – pro-innovation, pro-citizen-rights, pro-consumer-protection AI regulation, for all our AI futures.