PRESS RELEASE: New advisory service to help businesses launch AI and digital innovations [September 2023]
The press release issued by the Department for Science, Innovation and Technology on 19 September 2023.
Businesses across the UK will have the opportunity to showcase that their new AI and digital innovations comply with regulatory standards, so they can quickly bring them to market.
- businesses to receive tailored advice on how to meet regulatory requirements for digital technology and artificial intelligence
- new advisory service to launch next year, helping new products and innovations reach the market quickly, safely and responsibly
- announcement comes as government sets up a new function to identify, measure and monitor existing and emerging AI risks
Organisations across the country will be able to demonstrate that their new artificial intelligence and digital innovations meet regulatory requirements so they can quickly bring them to market.
A new pilot scheme set to launch next year will see a number of regulators develop a multi-agency advice service, providing tailored support so that businesses can meet requirements across various sectors while innovating safely – including with technologies such as AI.
Backed by over £2 million in UK government funding, the streamlined service is intended to make it easier for businesses to get the help they need, by bringing together the different regulators involved in the oversight of cross-cutting AI and digital technologies.
In turn, businesses will be able to take their new innovations to market responsibly and more quickly, helping to grow the UK’s economy.
Technology Secretary Michelle Donelan said:
Digital technology and artificial intelligence are rapidly evolving, and regulation must keep pace – but we don’t want it to be at the expense of stifling the launch of new innovations that can improve our everyday lives.
While safety is at the heart of our approach to regulation here in the UK, this new service will help businesses navigate the process of making sure they are compliant – supporting safe and responsible innovation.
We are a nation that backs businesses both big and small, and we want to make sure they can quickly get to grips with the rules and regulations around emerging technology.
With digital technologies such as artificial intelligence increasingly needing to demonstrate compliance with a range of regulatory regimes, there is a growing need for joined-up advice across the regulatory landscape. This pilot scheme will meet business demand for coordinated support and help innovators navigate regulations, so they can spend more time developing cutting-edge new products.
The service, known as the DRCF AI and Digital Hub, will be run by members of the Digital Regulation Cooperation Forum (DRCF): the Information Commissioner’s Office, Ofcom, the Competition and Markets Authority and the Financial Conduct Authority.
The Digital Regulation Cooperation Forum came together as a voluntary collaboration in 2019 and launched formally in 2020. It works to explore emerging regulatory issues which cut across the remits of the four regulators, with the goal of making it easier for industry to comply with multiple regulatory regimes.
The trial is expected to last around a year and will assess industry take-up, the feasibility of the service and how innovators interact with it. Innovators and businesses requiring advice will be invited to apply in due course, with the DRCF expected to run a competition for innovators to outline where they need support from regulators to ensure innovative new technologies comply with cross-cutting regulatory regimes. Successful applications will be selected against criteria agreed jointly by the regulators and the department.
Today’s announcement delivers on other commitments made as part of the government’s AI Regulation white paper, including the establishment of a central AI risk function within government. Over the last few months, the government has moved quickly to set up the central risk function within the Department for Science, Innovation and Technology (DSIT). It will identify, measure and monitor existing and emerging AI risks using expertise from across government, industry, and academia – with a specific focus on exploring the regulatory risks of foundation models and frontier AI.
In addition, the government is working with UK regulators on how they might need to regulate the technology given its cross-cutting nature and impact on various sectors – many have already started work on this, from the Medicines and Healthcare products Regulatory Agency to the Office for Nuclear Regulation. Only yesterday, the Competition and Markets Authority published its initial review of AI Foundation Models, which set out the opportunities and risks that foundation models could bring for competition and consumer protection.
Earlier this year the UK government committed to a multi-regulator sandbox, which helps organisations work with regulators to understand how their products interact with different regulatory regimes. Today’s announcement delivers on that commitment, recognising the importance of AI innovations with implications across multiple sectors, such as generative AI models, and the service has the potential to expand over time to cover more industry sectors.
On 1 and 2 November the UK will host the first major global AI Safety Summit at Bletchley Park, building consensus on rapid, international action to advance safety at the cutting edge of AI technology. It will bring together key countries, as well as leading technology organisations, academia and civil society to inform rapid national and international action at the frontier of artificial intelligence development.
The summit will focus on risks created or significantly exacerbated by the most powerful AI systems, particularly those associated with the potentially dangerous capabilities of these systems. For example, this would include the proliferation of access to information which could undermine biosecurity. The summit will also focus on how safe AI can be used for public good and to improve people’s lives – from lifesaving medical technology to safer transport.