The UK government is backing the launch of a new advisory service to help businesses bring AI and digital innovations to market in compliance with regulatory standards. The initiative aims to speed up the route to market while ensuring these innovations meet safety and compliance requirements.
Scheduled to commence next year, the pilot scheme will involve multiple regulatory bodies offering customized support to businesses, enabling them to meet requirements across various sectors, especially for innovative technologies like AI. With over £2 million in government funding, the streamlined service is intended to simplify compliance for businesses by bringing together the various regulators that oversee cross-cutting AI and digital technologies.
The primary goal is to facilitate responsible and swift innovation, ultimately contributing to the growth of the UK’s economy. The Secretary of State for Science, Innovation and Technology, Michelle Donelan, emphasized the importance of keeping pace with the rapid evolution of digital technology and AI while ensuring safety and regulatory compliance. The service will give businesses guidance to navigate the compliance process, promoting secure and responsible innovation.
As digital technologies, particularly AI, must increasingly demonstrate compliance with various regulatory regimes, there’s a growing demand for coordinated support across the regulatory landscape. This pilot scheme addresses this need, helping innovators navigate regulations efficiently, allowing them to dedicate more time to developing cutting-edge products.
The service, known as the DRCF AI and Digital Hub, will be managed by the Digital Regulation Cooperation Forum (DRCF), which comprises the Information Commissioner’s Office, Ofcom, the Competition and Markets Authority, and the Financial Conduct Authority. Established in 2019 and formally launched in 2020, the DRCF is a voluntary collaboration that addresses emerging regulatory issues spanning the domains of the four regulators, with the aim of simplifying compliance with multiple regulatory regimes.
The pilot is expected to last approximately a year and will assess industry uptake, the feasibility of the service, and how innovators interact with it. Innovators and businesses seeking guidance will be invited to apply, with the DRCF planning a competition to identify which innovators need regulatory support to comply with cross-cutting regulatory regimes. The criteria for successful applications will be agreed jointly by the regulators and the relevant government department.
This announcement aligns with the government’s AI Regulation white paper commitments, including the establishment of a central AI risk function within government. This function, housed in the Department for Science, Innovation and Technology, will identify, measure, and monitor existing and emerging AI risks, drawing on expertise from government, industry, and academia. It will focus on regulatory risks related to foundation models and frontier AI.
Furthermore, the government is collaborating with UK regulators to develop appropriate regulations for AI, considering its cross-cutting nature and impact on various sectors. Several regulators have already begun working on this, including the Medicines and Healthcare products Regulatory Agency and the Office for Nuclear Regulation. Additionally, the Competition and Markets Authority recently published an initial review of AI Foundation Models, outlining opportunities and risks related to competition and consumer protection.
Earlier this year, the UK government committed to establishing a multi-regulator sandbox to help organizations understand how their products interact with different regulatory regimes. This announcement reflects the fact that AI innovations, such as generative AI models, span multiple sectors, and the service’s coverage could expand to additional industry sectors over time.
In November, the UK will host the first major global AI Safety Summit at Bletchley Park. The summit aims to build consensus on rapid, international action to advance AI safety, focusing on the risks posed by powerful AI systems and exploring how safe AI can benefit public welfare, from medical technology to safer transport. Initiatives such as this new advisory service will be used to showcase the UK’s commitment to safe and responsible AI innovation.
(Source: Gov.uk)