While many eyes watch the final developments of the EU Artificial Intelligence Act in Brussels, other corners of the world are also considering how best to approach the regulation of AI.
The U.K. made waves in 2023 when it announced a high-level AI Safety Summit, led by Prime Minister Rishi Sunak. The summit produced the Bletchley Declaration, and, in recent months, the U.K. government has also released a parliamentary report on generative AI, a government consultation response on a pro-innovation approach to AI regulation, and guidance related to AI assurance.
Though the U.K. has so far pursued a pro-innovation approach to AI regulation, in contrast to the EU AI Act, regulatory proposals are not off the table.
In November 2023, Lord Holmes of Richmond, a member of the influential House of Lords Select Committee on Science and Technology, introduced a private members’ bill called the Artificial Intelligence (Regulation) Bill. On 22 March, Lord Holmes’ bill received a second reading in the House of Lords, together with more than two hours of reaction from fellow peers in the upper chamber.
“History tells us, right-size regulation is pro-citizen, pro-consumer and pro-innovation; it drives innovation and inward investment,” Lord Holmes said during last Friday’s second reading, citing an Ada Lovelace Institute report on government approaches to AI regulation. “It really is the case that the Government really have given themselves all the eyes and not the hands to act,” he said. “What is required is for these technologies to be human led, in our human hands, and human in the loop throughout.”
At the center of Lord Holmes’ proposed regulation are principles around “trust, transparency, inclusion, innovation, interoperability, public engagement, and accountability,” he said during a recent interview for The Privacy Advisor Podcast.
Elements of the proposed regulation
The proposed Artificial Intelligence (Regulation) Bill includes an AI authority, though Holmes said, “In no sense do I see this as the creation of an outsized, do-it-all regulator.” Instead, he sees the authority as having a coordinating role ensuring existing regulators meet their obligations and identifying any gaps in the AI regulatory landscape.
Holmes envisions the regulator as being horizontal, “ensuring a consistent approach across industries and applications rather than the, potentially, piecemeal approach likely if regulation is left only to individual regulators.” He sees the AI Authority as needing to be agile and adaptable, indicating it must “conduct horizon-scanning, including by consulting the AI industry, to inform a coherent response to emerging AI technology trends.”
Transparency, sandboxes and AI officers
A guiding principle in the proposed bill revolves around transparency, whereby organizations developing, deploying or using AI must be transparent about it, while testing it thoroughly and in conjunction with existing consumer and data protection as well as intellectual property laws.
To help with testing, Lord Holmes’ bill calls for the use of sandboxes, something he sees as being successfully employed in the financial technology industry. “We have seen the success of the fintech regulatory sandbox,” Holmes said, “replicated in well over 50 jurisdictions around the world. I believe a similar approach can be deployed in relation to AI developments and, if we get it right, it could become an export of itself.”
Notably, the bill also calls for responsible AI officers. "The AI officer will be required to ensure the safe, ethical, unbiased and non-discriminatory use of AI by the business and to ensure, so far as reasonably practicable, that data used by the business in any AI technology is unbiased," he said.
Individuals involved in training large language models and other AI systems must supply the AI Authority with documentation of all third-party information and intellectual property used in that training, Holmes said. They must also assure the authority that all of that data was used with informed consent.
Engaging with the public
In the second reading of the bill last week, Lord Holmes said public engagement is perhaps the most significant part of his proposed regulation as it helps engender trust in the AI ecosystem. “No matter how good the algorithm, the product, the solution, if no one is ‘buying it,’ then, again, none of it is anything or gets us anywhere,” he said.
Public trust is tied closely to free and fair democratic elections, and 2024 is a big year for national elections around the world. The loss of public trust in institutions, and in democracy itself, is a major concern as AI development accelerates.
The bill would require the AI Authority to conduct public engagement on AI’s benefits and risks and consult the public “as to the most effective frameworks for this engagement,” he said.
House of Lords responds
Overall, during last week’s second reading, Lord Holmes’ bill received wide-ranging support across party lines from more than 20 members of the House of Lords.
Lord Thomas of Cwmgiedd backed the bill “because it has the right balance of radicalism to fit the revolution in which we are living.” Viscount Chandos said “there can have been few Private Members’ Bills that have set out to address such towering issues as this bill,” adding that the bill is “well-judged and balanced.”
And though there was wide-ranging support, there are some concerns the government will not pick up the bill. Lord Clement-Jones said there is a "fair wind behind this bill" but added that he is "somewhat pessimistic" the government would take it up. Lord Young of Cookham said the bill was a "heroic first shot" at AI regulation, and Baroness Stowell said she could not support legislation that creates another regulator.
Though the future of the bill as it currently stands is uncertain, there was clearly support for several facets of the proposal, including, according to Lord Young, Clause 1(2)(c), which would require the AI Authority to conduct a gap analysis of the AI regulatory landscape, something he sees missing in the education field, for example.
In his IAPP podcast interview, Holmes was optimistic but cautious about what the AI-powered future holds. "I hope that we will see many positive deployments of AI and other technologies in health, in education, in mobility, in financial inclusion" with "more empowered citizens," he said.
“It is in no sense an inevitability, and it’s beholden on every single one of us to do everything we can in our power with our groups, not just with our colleagues at work, but with our friends, our families, everybody to try and ensure that that vision does become the reality for all of us.”