December 4, 2025

AI Growth Lab Consultation: IFOW Response

The UK Government has called for views on the AI Growth Lab, a cross-economy sandbox that will oversee the deployment of AI-enabled products and services that current regulation would otherwise hinder. The AI Growth Lab is intended to support growth and responsible AI innovation by making targeted regulatory modifications under robust safeguards and with careful monitoring.

The IFOW Responsible AI Sandbox Project is the first globally to treat the workplace as a domain of interest, prototyping and evaluating future regulatory interventions before their integration into formal rule regimes, and surfacing evidence on interactions, decisions and approaches in practice.

Grounded in our Sandbox project, this is IFOW's response to help inform government policy development around the AI Growth Lab.

What advantages do you see in establishing a cross-economy AI Growth Lab, particularly in comparison with single-regulator sandboxes?

The UK's longstanding leadership in AI has been paralleled by its role as a pioneer of what is now a popular approach to experimental AI governance: sandboxing. Now is an appropriate time to continue this global leadership by redefining how these architectures can operate to steer, shape and drive the transition to a new economy.

To remain competitive and relevant while harnessing the power of AI, the UK must take a broader approach than deregulation, just as it must take a more ambitious approach to new value creation through AI adoption than the pursuit of efficiencies. To steer the capabilities of new technology in the service of effective and sustainable growth, approaches which surface the interaction of technical and human capabilities – as these relate to and are shaped by compliance with the law, soft and informal rules, and professional norms and standards – are critical. A Lab model has the potential to achieve this, particularly if it uses a framework and methodology which allows for recognition of the interactions between these factors and a process which looks to promote innovation and good work together. Our research suggests that good work is a mediator of, and precondition for, meaningful productivity.

The domain in which all AI tools are adopted, and have the potential to afford meaningful productivity, is the workplace. This is a multifaceted legal, regulatory and cultural domain. When Sandbox environments are restricted to the application of a single regime, the ability to comprehend relationships between decisions, behaviour, interpretation and interoperability is weakened. To effectively redesign our legal system to support innovation, we must understand how competing demands come into play in the everyday practices – and regionally varied contexts – of business decision-making. To understand which laws can practically be suspended or deleted, surfacing their intersections and interdependencies is also key. This requires a cross-regulator approach, as we have tried and tested in the IFOW Sandbox.

What disadvantages do you see in establishing a cross-economy AI Growth Lab, particularly in comparison with single-regulator sandboxes?

Sandboxing can refer to many things. Increasingly, the regulator-led and compliance-focused approach is being recognised as too limited.

For us, Sandboxing is a methodology to identify and resolve areas of legal, regulatory, or governance uncertainty, weak interoperability, or gaps. The approach can be used before, during, or after legislation is enacted by focusing on sharp research questions which allow for feedback into regulatory design and processes. This can be through prototyping and evaluating proposed legal, regulatory, or socio-technical governance interventions, and by using these activities to surface new evidence on interactions, interdependencies, inefficiencies and impacts of different governance approaches. We believe the design of Sandbox processes should consider the interactions between technology, business models, and law and regulation, while understanding how these interactions are shaped by cultural, political and economic factors. These principles could be applied to Growth Labs.

This remit may also serve the fundamental objective of rebuilding public trust in our institutions. This is increasingly critical at a time of wide-reaching scepticism about the efficacy of institutions. Forthcoming work from IFOW highlights not only that the ‘dividend’ of AI is unevenly distributed, but that workers perceive highly varied rights and protections in relation to these impacts. As Acemoglu and others have suggested, it is the institutions of democratic nation states that may determine whether they succeed or fail through the course of technological revolutions. We therefore invite consideration of the dual functions of sandboxing: enabling more effective governance, and making that effectiveness visible to the public.

Further, a focus on growth, defined narrowly, could overlook wider, necessary objectives and outcomes of innovation. IFOW work delivering action-research interventions with 8 large UK firms, funded by Innovate UK, suggests that for AI adoption to succeed, a focus on good work design, which includes the more objective characteristics of job quality, is a route to understand and then potentially to reconfigure what is required for successful AI implementation. Good work design mediates not only productivity, but also wider outcomes for firms and employees. Our case studies also highlight the role of good governance as a catalyst for new or different types of innovation, in contrast to the artificial lock-in to narrow metrics that drive towards substitution.

What, if any, specific regulatory barriers (particularly provisions of law) are there that should be addressed through the AI Growth Lab? If there are, why are these barriers to innovation? Please provide evidence where possible.

The government’s intention to uphold employment law as a domain of 'red lines' is welcome. That said, there are areas of employment law which could be seen as impediments to good innovation and which, under suitable conditions, could be reviewed within Growth Labs. For instance, our case studies suggest that laws relating to redundancy and associated consultation requirements may discourage early approaches to work redesign, which is a barrier to securing good work and innovation together.

Further, some areas of absent law could be considered as domains for testing prototype or prospective future legislation, rather than its suspension. For instance, our work in the Pissarides Review into the Future of Work and Wellbeing; in our Sandbox activity in year 1; and in a more recent action-research project with 8 firms funded by Innovate UK, demonstrates the inefficiencies arising from various kinds of information friction within firms. Good governance processes can go some way to addressing this, but there are contexts within which a legal basis could form a helpful ‘signifier’ of the utility of greater information sharing.

We also note that the consequences for employment outcomes of legal changes made to other regimes can only be understood through an approach that looks at the workplace as a socio-technical system. This is demonstrated, for instance, by (a) the need for specific consideration of the workplace as a distinct category within which ‘consent’ can be given under GDPR; and (b) the reliance on parts of GDPR to enable conditions which allow for the meaningful realisation of protections under Equality Law through disclosure of information. Changes to one regime can have secondary consequences for regimes that are exempt from consideration. In turn, methodologies that allow these relationships to be surfaced and explored during experimental processes are key.

Which sectors or AI applications should the AI Growth Lab prioritise?

We welcome the government’s recognition of sovereignty as a key topic of concern and policy priority. However, we believe the current exclusive focus on public sector data and compute, as suggested in the AI Opportunities Plan, is too narrow. Sovereign AI as an initiative should also preserve, promote and protect the long-term competitiveness and sustainability of the UK (allowing for restrictions within international trade).

Our work in the IFOW Sandbox points to the merit of viewing these questions through the lens of the workplace. It is the workplace, rather than the web, from which the next generation of training data is being collected and powers of inference are being developed. This is what the likes of Microsoft refer to as ‘post-deployment training’. It could yield far more advanced domain-specific utility from AI, with the potential to significantly improve productivity.

This insight into work methods and processes – a conventional domain of bargaining between worker and firm, and the source of companies' competitive edge over their rivals – is the central preserve of value within our economy. The knowledge, skills and know-how of the UK workforce are a critical, strategic national epistemic asset.

In turn, the question of where the value of newly generated inferences about these work methods is captured – between the employers adopting these systems, the SaaS providers creating these tools, and the PaaS providers who may collate inferences from the wide range of applications involved – invites interrogation, as it presents a range of significant risks.

Our research suggests that neither businesses adopting LLMs directly, nor those integrating LLMs into their software capabilities to provide as a service to others, trust the technical, legal or governance-based measures they have in place to preserve and protect access to this workplace data. These risks are further exacerbated where there is little attention to work design through the adoption process.

Together, this presents risks to the sustainability and resilience of UK firms as they adopt AI, and to UK plc more widely, in ways which merit further attention. Our work suggests that identifying practical measures to promote and protect sovereignty requires an approach which examines the relationships between individual, corporate and national data governance, ownership and trade, as well as how these are balanced in choices around business development.

What lessons from past sandboxes should inform the design of the AI Growth Lab?

An effective AI Sandbox or Growth Lab would have the following characteristics:

Transparent: Committing to the publication of findings is critical to reflexive learning across the regulatory innovation ecosystem. It also builds trust in the purpose of these activities and allows scrutiny by the wider community. Sharing insight further reduces risks associated with regulatory capture.

Full-stack: ‘Command and control’ or ‘conformity assessment’ approaches to Sandboxing, which look to assess the compliance of single corporate entities against specific legal or regulatory regimes, can miss the chance to examine how interactions – shaped by various laws, business dynamics, contextual conditions, and so on – determine the relationships between different actors within the value chain. Given that choices in design, development and deployment shape end outcomes, including but not limited to productivity performance, a ‘full stack’ methodology is essential.

Socio-technical: The move towards a single point of entry and review across multiple domains of law is a strength. However, understanding the relevance and intersection of these different domains requires an approach drawing on the insights of various disciplines – including law and computer science, but also psychology, human resource management and innovation studies, industrial relations, and economics.

What types of regulation (particularly legislative provisions), if any, should be eligible for temporary modification or disapplication within the Lab? Could you give specific examples and why these should be eligible?

Viewed from the point of maximising learning for the Growth Lab, participation should be conditional on the suspension of intellectual property protections which would otherwise prevent the Growth Lab from accessing the information necessary to evaluate the lawfulness, function and performance of systems. While temporary suspensions are usually framed as benefiting participating organisations, shielding them from repercussions in order to encourage participation and learning, this suspension would serve the objectives of the Lab itself.

The suitability of extending the Lab approach to other technologies should be assessed on the basis of the design choices and outcomes of a Phase 1 implementation, process evaluation, and review.

Which institutional model for operating the Lab is preferable? What is your reason for selecting this institutional model?

Sandboxing is commonly described as a set of activities within a secure or ‘controlled’ environment. This ‘ex-ante’ approach can involve running a system on real or dummy data within a secure technical environment. Risks (rather than impacts) are modelled and evaluated without the system interacting with the real world. This approach allows a regulator to evaluate how a current rule regime may or may not apply to a high-risk context, without being complicit in any harm.

However, this approach presents challenges when applying a full socio-technical lens to understand how technology is shaped by social, cultural, political and economic factors within the workplace. Deployment is a critical context for evaluating how systems interact with wider fundamental rights.

The vast majority of firms using AI purchase it from third-party providers. Nearly 80% of businesses report accessing, buying, licensing or using third-party AI tools, while more than half (53%) rely exclusively on third-party AI tools. This means that focusing only on the third-party provider loses sight of workplace impacts and interdependencies. Further, understanding the legal and business relationships between these actors is critical to devising effective governance.

In turn, for a Sandbox to examine the downstream impacts of algorithmic systems – which, as noted above, is essential in the context of the workplace – an independent entity is arguably necessary, as a view which extends from ex-ante to ex-post is key.

Further, independence offers the potential for more comfortable recruitment of industry partners and for greater aggregate regulatory insight and learning. Independence from regulators can reduce anxiety among participants about later penalties or culpability for what is surfaced within the defined period of rule exemption (as Sandbox participants are not given a longer waiver for non-compliance and may reveal non-compliant practices during their participation).

What supervision, monitoring and controls should there be on companies taking part in the Lab?

We anticipate that the Growth Labs will require a process and framework which can be adapted to the specifics of each use case, while also upholding some key principles of fairness. Given the focus on upholding employment law, and the persistent relevance of the workplace as the domain within which adoption happens and innovation is realised as growth potential, there are merits in applying a framework that looks to advance good work and innovation together.

The Good Work Algorithmic Impact Assessment, funded originally under the grant programme of the Information Commissioner’s Office, could be the foundation for such a process.

This provides a ‘full stack’ methodology which also draws attention to employment-related impacts. As this process looks to the relationship between human and technical systems, it is also designed to promote more effective and productivity-yielding outcomes from innovation than are being seen at present. The process has been endorsed by industry and the OECD, and aligns with the Trades Union Congress's calls for workplace impact assessments.

This framework focuses on reflection and documentation of key decisions, from the point of problem definition through to ongoing monitoring of the system in practice, and can serve to ensure robust monitoring throughout the course of an experimental evaluation.

We welcome the chance to further discuss, support and advise on approaches to maximise our potential to capitalise on the benefits of AI while advancing better work and a sustainable, diverse and thriving UK economy.
