Issue 5 Q1 2026
FEATURING
Govern or inherit: Strategic AI decisions boards must make in 2026
Introduction

I am pleased to introduce the fifth edition of Risk Quarterly, our flagship global publication examining the strategic risks reshaping business in an increasingly complex world. This issue places Artificial Intelligence firmly at the centre of boardroom decision-making.

Our lead article, Govern or inherit: Strategic AI decisions boards must make in 2026, by Paul Armstrong, founder of TBD Group and author of Disruptive Technologies: Understand, Evaluate, Respond, argues that 2026 marks a decisive inflection point for AI governance. As regulatory deadlines under the EU AI Act approach, insurance markets harden around AI-related exposure, and competitive advantage shifts from rapid experimentation to sustainable scaling, the critical question for boards is not whether to adopt AI, but how — and where — authority is being delegated to it. The article challenges leaders to move from abstract AI ambition to deliberate governance before operational dependency outpaces oversight.

Building on this theme, we include a dedicated chapter from our Corporate Risk Radar report examining how AI is reshaping the workplace. We also explore the risks posed by persistent AI meeting-recording tools, the growing use of agentic systems capable of autonomous action, and the governance implications of generative AI across core business functions.

Across sectors, we examine liability and accountability in practice: from securing the intellectual property legitimacy of AI-created assets and the impact of AI on trial advocacy, to healthcare liability and the potential implications of AI-driven clinical tools. We analyse the evolving cyber threat landscape from a Latin American perspective, assess the legal and risk implications of quantum computing’s anticipated ability to decrypt today’s encrypted data (often referred to as “Q-Day”), and consider how social media, automated vehicles, and fragile supply chains continue to create complex and interconnected insurance exposures.

Beyond AI, we explore environmental and regulatory developments, including the environmental liability risks linked to sucralose and an update on the road to COP31, following the outcomes of COP30 last year.

As AI authority migrates quietly into operational decision-making, governance can no longer be treated as a compliance afterthought. Across jurisdictions and industries, the ability to manage regulatory divergence, shadow AI adoption, and delegated system authority is fast becoming a defining strategic capability.

We hope you find this edition both practical and thought-provoking. Thank you to all our contributors. If there are topics you would like to see explored in future issues, please contact us at riskquarterly@clydeco.com.

Tom Tippett
Equity Partner, London
In this issue...
Govern or inherit: Strategic AI decisions boards must make in 2026 By PAUL ARMSTRONG
Doomscrolling to death: Social Media’s legal challenges
The advent of Q-Day: Why prepare for quantum computers’ data de-encryption capabilities now
Systems deployed today will be operational when those requirements hit, and organisations can't retrofit governance onto systems already embedded in critical workflows without accepting operational instability. Second, insurance markets are hardening. Carriers are increasingly excluding or substantially limiting liability for autonomous system failures, algorithmic discrimination, and governance failures around AI deployment. Coverage that existed twelve months ago is vanishing, leaving organisations exposed to risks their balance sheets weren't designed to absorb.
Market events underscore this urgency. When AI capabilities targeting professional services were released in early 2026, European data and software stocks fell sharply, some by double digits. The sell-off reflects investor recognition that AI now threatens business models across legal, financial, and consulting services previously considered insulated from disruption. Established providers watch valuations evaporate whilst discovering that autonomous systems can displace revenue streams faster than governance can respond or business models can adapt.
The term ‘Artificial Intelligence’ is bandied about with wild abandon online, in the media and, increasingly, in boardrooms, but what is usually being talked about is generative AI. The reason? Collapsing everything into a single, simple concept makes it easy to understand, but doing so misses the specifics: automation, optimisation, prediction, content generation, and autonomous execution each need different rules and mitigations. Each involves a distinct form of delegated authority with different failure modes, so the reduction serves only to obscure the point at which responsibility transfers from human judgement to system behaviour.
November 2022: OpenAI releases ChatGPT to the public. Your board discusses AI as a future opportunity requiring strategic consideration.
Govern or inherit:
Strategic AI decisions boards must make in 2026
Understanding AI risk requires distinguishing between three levels of delegated authority: Assistance (systems that help), Advice (systems that predict), and Agency (systems that act autonomously).
Speculation about artificial general intelligence makes this worse by directing attention toward hypothetical futures rather than present systems. Abstract debate about superintelligence and artificial general intelligence (AGI) absorbs senior time whilst narrow capabilities quietly shape pricing logic, claims triage, infrastructure scheduling, and customer interaction. Loss rarely emerges from speculative intelligence; it emerges from ordinary systems operating at scale under delegated authority, where governance debt accumulates invisibly until consequences surface elsewhere.
Reference list

Challapally, A., Pease, C., & Raskar, R. (2025, July). State of AI in Business 2025 Report. MLQ / MIT NANDA. https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf

Johnston, A. (2025, October 27). Generative AI shows rapid growth but yields mixed results. S&P Global Market Intelligence. https://www.spglobal.com/market-intelligence/en/news-insights/research/2025/10/generative-ai-shows-rapid-growth-but-yields-mixed-results

Microsoft UK Stories. (2025, October 13). Rise in ‘Shadow AI’ tools raising security concerns for UK organisations. https://ukstories.microsoft.com/features/rise-in-shadow-ai-tools-raising-security-concerns-for-uk/

Ponting, S., & Erlendson, J.-M. (2024, October 29). Half of all employees are using unauthorised AI tools. HR Summits. https://hrsummits.co.uk/briefing/half-of-all-employees-are-using-unauthorised-ai-tools/
China is a completely different beast, integrating AI governance into coordinated industrial policy and social stability objectives through mechanisms Western boards often misunderstand. The Cyberspace Administration of China’s Interim Measures for the Management of Generative Artificial Intelligence Services, effective since August 2023, require algorithm registration, content filtering aligned with state objectives, and data localisation preventing information leaving Chinese jurisdiction. AI systems must reflect core socialist values, undergo security assessments before deployment, and maintain records accessible to authorities. For organisations operating in China, algorithms developed elsewhere require modification before Chinese deployment, with technical architectures designed for data sovereignty compliance from inception rather than retrofitted.

The regulatory model prioritises coordination between commercial capability and state objectives, creating approval processes that Western legal teams accustomed to rules-based frameworks find opaque. Compliance depends less on documented conformity to published standards and more on ongoing relationships with regulatory bodies able to interpret requirements case-by-case. Technology transfer requirements, data residency mandates, and content control obligations mean AI systems deployed in China often can’t integrate with global systems architecture, forcing parallel development paths that increase cost whilst reducing interoperability. For boards, Chinese operations require governance structures that accept regulatory opacity, plan for algorithm localisation, and maintain separation between Chinese systems and global infrastructure that complicates the efficiency gains AI deployment typically promises.
For organisations operating globally, divergent approaches create overlapping requirements rather than best-practice options. Beyond the EU, US, and China, every jurisdiction where AI operates imposes distinct obligations. Singapore’s principles-based governance, India’s emerging data protection requirements, the UK’s post-Brexit positioning, and the Middle East’s developing regulatory environments each add compliance layers. Systems trained under one set of regulatory assumptions encounter materially different expectations elsewhere. Documentation sufficient in one country proves inadequate in another. Governance shifts from legal alignment toward internal reconciliation of incompatible assumptions about acceptable risk. Liability becomes increasingly difficult to determine once systems operate across multiple countries simultaneously.
Geography now shapes AI governance as much as capability shapes deployment. Divergence across countries intensifies these pressures for organisations operating globally.
Compliance and legal were once viewed as overhead. Today, they determine who wins. Competitive advantage no longer lies with the fastest algorithm, but with the organisation whose governance allows it to operate in Berlin and Shanghai as seamlessly as in Boston. Governance is no longer the brake. It is the steering wheel.
By Paul Armstrong Founder of TBD Group, an emerging technology advisory firm, and author of Disruptive Technologies: Understand, Evaluate, Respond.
Three forces converge to make 2026 the inflection point for AI governance. First, regulatory deadlines are imminent.
The European Union’s AI Act entered into force in August 2024 with staged implementation through 2027, requiring conformity assessments for high-risk systems by August 2026, just six months away.
February 2026: AI systems shape pricing decisions, claims assessment, workforce allocation, and customer interaction across your business. Nobody on the board approved this transition. Nobody decided these systems would carry consequential authority. Yet decoupling these systems would trigger operational instability.
Third, competitive dynamics are shifting. Advantage no longer goes to those who deploy AI fastest but to those who build governance that allows sustainable scaling without accumulating unmanageable exposure. First-mover advantage matters less than avoiding first-mover disasters. AI reshapes organisations whether boards govern it or not. The choice is between deliberate allocation of authority and inherited dependency discovered too late.
Calling everything ‘AI’ weakens governance
Three categories of AI risk that need different governance
CATEGORY 1: ASSISTANCE - Deterministic, Rule-Based Systems (Green/Low complexity)

ROBOTICS
What: Physical or procedural automation within defined constraints
Where: Manufacturing assembly, warehouse logistics, surgical assistance
Authority: Clear attribution, visible activation points, preserved accountability

EXPERT SYSTEMS
What: Rule-based decision logic encoded in if/then structures
Where: Credit scoring, insurance underwriting, compliance checking
Authority: Institutional persistence makes embedded assumptions hard to challenge

CATEGORY 2: ADVICE - Probabilistic, Pattern Recognition (Amber/Medium complexity)

MACHINE LEARNING
What: Statistical pattern recognition that improves from data without explicit programming
Where: Fraud detection, demand forecasting, predictive maintenance
Authority: Drift risk as systems stay accurate to training data whilst environments change

NEURAL NETWORKS

FUZZY LOGIC
What: Probabilistic decision-making using degrees of truth rather than binary rules
Where: Process control, climate systems, consumer electronics
Authority: Blurs the moment decisions attach, complicating accountability

Risk: Bias, drift, and hallucination as systems optimise for patterns that may not represent reality

CATEGORY 3: AGENCY - Agentic Systems That Act Autonomously (Red/High complexity)

NATURAL LANGUAGE PROCESSING
What: Text analysis, generation, and summarisation that shapes information flow
Where: Contract review, email triage, report summarisation, chatbots
Authority: Through attention direction; distorts what reaches senior leadership

COMPUTER VISION
What: Visual pattern recognition feeding automated categorisation and response
Where: Quality control, security surveillance, autonomous vehicles
Authority: Perception errors interact with execution systems, propagating quickly

Risk: Unbounded liability as systems execute without human approval at each decision point

Questions boards should be asking:
When executives present AI initiatives, does the proposal specify whether the system observes, recommends, decides, or acts autonomously?
Can leadership articulate which specific AI capability is being deployed and why that capability fits the risk tolerance for the function involved?
Has the organisation mapped where AI systems currently operate and distinguished between advisory tools and systems that trigger consequential action?
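The Assistance/Advice/Agency distinction can be made operational as a simple inventory schema. Below is a minimal sketch in Python; the class names, example systems, and review rule are illustrative assumptions, not part of any published framework:

```python
from dataclasses import dataclass
from enum import Enum

class Authority(Enum):
    """Level of delegated authority: the Assistance / Advice / Agency distinction."""
    ASSISTANCE = 1  # system helps; a human performs the action
    ADVICE = 2      # system predicts or recommends; a human decides
    AGENCY = 3      # system acts autonomously, without per-decision approval

@dataclass
class AISystem:
    name: str            # internal identifier (hypothetical examples below)
    function: str        # business function the system serves
    authority: Authority

def board_review_queue(inventory: list[AISystem]) -> list[str]:
    """Systems exercising Agency warrant explicit board-level authorisation."""
    return [s.name for s in inventory if s.authority is Authority.AGENCY]

inventory = [
    AISystem("contract-summariser", "legal", Authority.ASSISTANCE),
    AISystem("claims-triage-model", "claims", Authority.ADVICE),
    AISystem("auto-repricing-agent", "pricing", Authority.AGENCY),
]
print(board_review_queue(inventory))  # ['auto-repricing-agent']
```

Even a sketch this small forces the question the article poses: each system must sit in exactly one category, which makes a quiet shift from Advice to Agency visible the moment the record is updated.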
Why different countries take different regulatory approaches
Who leads in AI depends entirely on how leadership gets measured. The United States dominates in commercial deployment velocity, venture capital concentration, and foundational model development. China leads in manufacturing integration, facial recognition deployment, and coordinated industrial policy. Europe advances in regulatory sophistication, individual rights protection, and ethical framework development. Each jurisdiction optimises for different values, making ‘leadership’ a function of priorities rather than capabilities, creating a compliance minefield for business leaders operating globally.
Operating globally now requires an interoperability framework, not a single policy. A large US biotech firm discovered this friction firsthand: an autonomous AI system for patient screening represented a competitive edge in their Boston operations but constituted a high-risk deployment demanding conformity assessment and human-oversight documentation in their European markets.
Europe’s precautionary framework establishes clear rules but imposes compliance burdens that competitors operating domestically elsewhere don’t yet face. The EU AI Act prohibits certain AI applications whilst imposing transparency, human oversight, and technical documentation requirements on high-risk systems by August 2026. Systems used in employment decisions, credit scoring, law enforcement, and critical infrastructure face mandatory assessments. For financial services firms, credit decisioning algorithms require assessment. For insurers, claims triage systems using AI to determine payout amounts likely qualify as high-risk. For asset managers, portfolio algorithms that autonomously adjust holdings face scrutiny when decisions compound across funds. For manufacturers, predictive maintenance systems that trigger shutdowns without human override require documentation demonstrating safety thresholds remain appropriate as equipment ages.
The United States operates without comprehensive federal AI legislation, creating regulatory fragmentation that complicates planning. No horizontal framework comparable to the EU AI Act exists. Instead, sector-specific regulators improvise rules independently. The Federal Trade Commission addresses deceptive AI practices through consumer protection authority. The Securities and Exchange Commission examines AI in trading and disclosure. The Equal Employment Opportunity Commission scrutinises algorithmic hiring. The Food and Drug Administration regulates AI in medical devices. Coordination remains limited and evolving. The current administration signals preference for light-touch regulation favouring commercial velocity, but this creates planning uncertainty rather than clarity because policy is relatively fluid. State-level legislation fills the federal vacuum inconsistently, although California is taking a lead in advancing comprehensive AI regulation. Colorado passed algorithmic discrimination laws, and New York already regulates AI in employment. Organisations operating nationally face overlapping state requirements that conflict, with compliance in one jurisdiction potentially violating rules in another.
Litigation risk exceeds regulatory clarity, with class actions and state attorneys general enforcement creating exposure faster than formal rulemaking provides safe harbour. For boards, US operations demand governance robust enough to satisfy the strictest potential jurisdiction whilst flexible enough to adapt as rules emerge, shift, or reverse, making legal and compliance capabilities strategic differentiators rather than cost centres.
Governance checkpoints for boards operating globally:
Has general counsel mapped AI systems against EU AI Act risk classifications to identify which deployments require assessments by August 2026?
Can the organisation demonstrate that high-risk systems maintain human oversight mechanisms that satisfy regulatory requirements?
For systems operating across countries, has legal established protocols for determining which regulatory framework governs when obligations conflict?
Do vendor contracts specify liability allocation when systems compliant in one country create exposure in another?
Navigating regulatory fragmentation matters most when boards understand which AI systems carry consequential authority. Returning to the framework established earlier: systems providing Assistance (helping humans work) carry different governance requirements than systems providing Advice (predicting outcomes) or exercising Agency (acting autonomously). Generative systems produce outputs that humans then use (text, images, code, analysis), keeping judgement embedded in execution. Agentic systems act autonomously, triggering workflows, allocating resources, and making decisions that cascade into other decisions without waiting for human instruction. The first accelerates human work; the second can replace human control.
Better performance doesn’t neutralise this shift in authority, because an agentic system that works well still changes fundamentally how responsibility attaches and how quickly errors spread beyond human capacity to intervene. Records of what systems did weaken as systems adapt continuously, leaving governance frameworks designed for static tools struggling to constrain behaviour that no longer respects organisational boundaries.
Where employees adopt faster than boards can govern
Research from multiple sources indicates that between 70% and 95% of AI initiatives fail to deliver measurable value, though these figures deserve scrutiny. MIT’s 2025 State of AI in Business report found only 5% of generative AI projects reached production with measurable profit-and-loss impact, whilst S&P Global reported that 42% of companies abandoned most AI initiatives in 2025, up sharply from 17% in 2024. However, these studies reflect narrow success definitions that exclude slower-payoff projects, indirect benefits, and productivity gains from unsanctioned AI use, where employees using personal tools outside official programmes report substantially higher returns than approved deployments. The failure rates signal structural mismatch as much as technical immaturity: large companies suffered variously from fragmented data, unclear ownership and accountability gaps, incentive structures that reward experimentation without demanding outcomes, and operating models that remain unchanged even as authority migrates into automated systems. In short, these companies went in without the right questions, a clear strategy, or a defined measure of success.
Competitive advantage doesn’t accrue to organisations that deploy AI fastest but to those that build governance enabling sustainable scaling, because first-mover advantage matters less than avoiding disasters that consume executive attention, regulatory goodwill, and board credibility.
Organisational behaviour compounds this problem through unsanctioned AI use, often referred to as ‘Shadow AI’.
Survey data suggests between 60% and 75% of knowledge workers have used unauthorised AI tools for work tasks, often without informing IT or checking compliance functions.
Employees assemble workflows using personal assistants and consumer tools to overcome delays and friction that approved systems haven’t addressed. Sensitive data moves across unsanctioned channels, tools chain together in ways IT never approved, and detection rates remain extremely low. Blocking strategies fail because underlying incentives remain unchanged and capability routes around control mechanisms long before policy frameworks adapt.
Shadow AI tools running on employee devices create a governance blind spot. Unlike enterprise software that IT can monitor and control, these open-source assistants operate on local devices, connecting to company systems whilst often remaining invisible to oversight. Employees adopt them because they work, boosting productivity through connections to email, project management, and CRM systems. Adoption spreads through informal recommendations faster than governance frameworks can respond. By the time leadership discovers critical workflows depend on unapproved software, reversing course triggers operational disruption.
The problem compounds when these agents start coordinating with each other. At the time of writing, OpenClaw has spawned an entirely AI-generated social network where thousands of autonomous agents post updates, exchange information, and coordinate activities without human direction. Authority disperses horizontally across agent networks rather than flowing vertically through hierarchies that boards can govern. Dependency forms quietly through repeated use. Colleagues observe productivity gains and adopt similar tools. Exposure accumulates without triggering governance mechanisms designed to manage technology risk.
OpenClaw matters because it exposes a governance assumption boards rely on: that AI operates on centralised platforms where IT can audit, constrain, and retrospectively examine behaviour. Once agentic systems align with user incentives around productivity and run locally, adoption precedes institutional understanding by a margin that creates substantial exposure boards only discover when asking why critical workflows now depend on software nobody approved.
Questions for audit committees:
How does internal audit trace decisions made by agentic systems back to approved parameters and escalation triggers?
How do monitoring systems capture when AI behaviour drifts from training conditions, or do they only detect technical failures?
How has risk management quantified the gap between formal AI governance policies and actual usage patterns?
How do data loss prevention systems detect when structured data gets fed into external AI systems operating beyond enterprise oversight?
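The gap between policy and usage can be quantified with nothing more exotic than a set comparison between the sanctioned-tool register and tools observed in egress or expense logs. A hedged sketch follows; the tool names and counts are invented for illustration:

```python
def shadow_ai_gap(approved: set[str], observed: dict[str, int]) -> dict:
    """Compare sanctioned AI tools against observed usage.

    approved: tool identifiers on the sanctioned register
    observed: tool identifier -> observed usage count (e.g. from egress logs)
    Returns the unsanctioned tools and their share of total usage.
    """
    unsanctioned = {t: n for t, n in observed.items() if t not in approved}
    total = sum(observed.values())
    share = sum(unsanctioned.values()) / total if total else 0.0
    return {"unsanctioned_tools": sorted(unsanctioned), "unsanctioned_share": share}

# Invented figures for illustration only
approved = {"enterprise-copilot", "approved-transcriber"}
observed = {"enterprise-copilot": 600, "personal-chatbot": 300, "local-agent": 100}
report = shadow_ai_gap(approved, observed)
print(report["unsanctioned_tools"])            # ['local-agent', 'personal-chatbot']
print(round(report["unsanctioned_share"], 2))  # 0.4
```

The hard part in practice is populating `observed` at all: as the article notes, locally executed agents rarely appear in enterprise telemetry, so any figure produced this way is a floor, not a ceiling.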
S&P Global reported that 42% of companies abandoned most AI initiatives in 2025, up sharply from 17% in 2024.
Organisations need to stop asking “which AI should we buy?”, and start asking “where should AI be allowed to act without human approval?”.
Investment decisions are authority decisions
Systems that recommend carry different risks than systems that act. You can govern recommendations through disclosure and periodic review. Action requires explicit decisions about delegation, reversibility, and exit criteria before deployment, because pulling the plug becomes progressively harder once workflows depend on the system and contracts are signed.
Three investment categories demand different governance:
Experimentation: Isolated from production. Reversible without operational disruption. Time-bound with hard end dates. Budget these like R&D. Add guardrails where possible with IT and legal beforehand.
Operational support: Integrated into workflows but humans retain control at decision points where errors would cause material harm. Think autopilot with override, not autopilot replacing the pilot.
Delegated authority: Systems act without waiting for approval. Require defined boundaries, escalation triggers when conditions exceed parameters, and exit plans acknowledging that removal causes disruption. Budget for the governance infrastructure these systems demand, not just the technology.
Invest in infrastructure that enables multiple applications, not point solutions that solve one problem. Building robust data systems, governance frameworks, and internal expertise creates capability supporting diverse applications over time. Frameworks like the NIST AI Risk Management Framework and the UK ICO’s guidance on AI and data protection provide starting points, but your board must decide explicitly where authority resides in your organisation.
Investment review questions:
Can the organisation articulate which AI investments build foundational capability versus narrow solutions?
Do business cases distinguish between systems supporting human decision-making and systems making decisions autonomously?
Has finance established methods for valuing flexibility preserved by maintaining the ability to pause or stop initiatives?
Do approved investments include explicit funding for governance infrastructure required to manage them safely?
Three maturity levels define how well boards govern AI. Most organisations discovering governance gaps sit at Level 1.
Where does your organisation stand?
You can’t govern systems you don’t understand. Directors don’t need to become technical experts, but they do need to distinguish between systems that provide Assistance, Advice, or Agency, understand why those distinctions matter for accountability, and ask questions that surface where authority has migrated without approval.
Ask yourself: does your board include members who can make these distinctions, or does the board depend entirely on management to translate AI risk?
If executives present AI initiatives and nobody on the board can challenge whether a system observes, recommends, or acts autonomously, your governance operates reactively rather than strategically.
Level 1: Reactive Governance
No inventory of AI systems exists. You can’t distinguish between advisory tools and autonomous systems. Governance chases deployment rather than shaping it. Shadow AI operates beyond visibility. Board discussions remain abstract, focused on potential rather than current reality. Most organisations sit here.

Level 2: Managed Governance
Complete inventory exists, classifying systems by whether they observe, recommend, decide, or act. The generative vs agentic distinction is understood and reflected in approvals. Policies exist and enforcement functions, though shadow AI persists. Operating model changes get explicit attention during investment approval. Board reporting includes AI risk metrics beyond project updates.

Level 3: Strategic Governance
Authority allocation is explicit. Boards actively govern where autonomous systems are permitted to act. Governance constrains systems that adapt, not just static systems. Monitoring tracks authority migration, not just technical performance. Investment decisions distinguish foundational capability from point solutions. Refusal receives equal weight to adoption. Board composition reflects the understanding required to oversee autonomous systems.
Timeline reality: Moving from Level 1 to Level 2 takes six to nine months. Level 2 to Level 3 takes longer, requiring governance embedded in strategy processes. Organisations at Level 1 today can’t reach Level 3 before August 2026 regulatory deadlines without significant execution risk. The gap between where you sit and where you need to be determines your urgency.
Boards can’t delegate this accountability. AI authority migration from operational functions into strategic consequence means oversight must extend beyond approving budgets toward actively governing where autonomous systems are permitted to act, under which constraints, and with what accountability mechanisms. For most organisations, comprehensive AI governance can’t be achieved by August 2026. The realistic goal for the next 90 days is establishing visibility and setting boundaries that prevent exposure from deepening whilst longer-term governance builds. Boards should aim to know what they’re governing, stop the bleeding from unsanctioned AI, and ensure no high-risk systems deploy without assessment pathways. The following actions represent the first 90 days of governance transformation that will take six to twelve months to complete.
What boards must do: the critical first 90 days
First 30 Days: Establish Visibility
Direct management to produce a complete inventory of AI systems currently deployed, classified by whether they observe, recommend, decide, or act autonomously, with explicit identification of systems that have migrated from one category to another without board awareness. Require completion within 30 days. Without inventory, all subsequent governance decisions rest on incomplete information.
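The migration flag asked for here can be produced mechanically by diffing successive inventory snapshots. A minimal sketch; the snapshot contents and system names are invented for illustration:

```python
def authority_migrations(previous: dict[str, str], current: dict[str, str]) -> list[tuple]:
    """Flag systems whose authority classification changed between snapshots.

    Each snapshot maps system name -> classification
    ('observes', 'recommends', 'decides', or 'acts').
    Systems new since the last snapshot show a previous classification of None.
    """
    changes = []
    for name, level in current.items():
        before = previous.get(name)
        if before != level:
            changes.append((name, before, level))
    return sorted(changes)

# Two hypothetical quarterly snapshots
q3 = {"claims-triage": "recommends", "email-assistant": "observes"}
q4 = {"claims-triage": "decides", "email-assistant": "observes", "ops-agent": "acts"}
print(authority_migrations(q3, q4))
# [('claims-triage', 'recommends', 'decides'), ('ops-agent', None, 'acts')]
```

A claims-triage tool moving from 'recommends' to 'decides' is exactly the category shift the directive asks boards to surface before, not after, dependency forms.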
Define board-level reporting on AI risk that goes beyond project status updates to include authority migration patterns, near-miss incidents where human intervention prevented AI-driven harm, and aggregate measures of organisational dependency on autonomous systems. Make this standing agenda material, not quarterly exception reporting.
One large US biotech company established a weekly cross-functional task force reviewing ‘what’s new, what’s now, what’s next’ with monthly board-level reporting, creating ownership and allowing the board to develop question-asking capability without requiring technical expertise.
Direct internal audit to assess the gap between formal AI governance policies and actual usage patterns, particularly regarding unsanctioned AI use and locally executed agents operating beyond enterprise visibility. Request quantification of the gap, not anecdotal evidence.
Require that all future AI investment proposals specify which form of authority is being delegated, what operating model changes are required for success, and what criteria will trigger pause or termination rather than scope expansion. Reject proposals lacking this specificity.
Next 30 Days: Assess Gaps and Exposure

Direct legal to map AI systems against EU AI Act classifications, identifying which deployments require assessments by August 2026 and what documentation those assessments will demand. Preparatory work must begin immediately given unforgiving timelines.

Direct risk management to evaluate whether existing Directors and Officers insurance, professional indemnity coverage, and cyber policies adequately address AI-related governance failures, autonomous system liability, and responsibility challenges that emerge when errors propagate across interconnected systems. Address gaps before incidents occur.

Final 30 Days: Set Boundaries and Accountability

Require executive leadership to demonstrate that governance frameworks can constrain systems that adapt through interaction, not just approve static systems that behave predictably. Address specifically how oversight keeps pace with systems that change faster than policy cycles.

Direct strategy discussions toward deciding explicitly where authority should reside, which functions can safely delegate consequential decisions to autonomous systems, and where human judgement must remain embedded regardless of technical capability. Treat these as strategic choices rather than technical implementation details.
Boards and executives face a governance challenge that will define their tenure. ChatGPT launched just over three years ago, and nobody knows what the next three years will bring, because that future is being built right now. Authority flows through AI systems whilst organisational structures built for industrial operations struggle to keep pace: technology evolves faster than policy can adapt, systems change behaviour faster than oversight can monitor, and authority migrates faster than accountability frameworks can follow.
The corridor narrows daily
Start now whilst dependency remains manageable. Build governance that positions your organisation to capture compounding value over years, not spend those years recovering control.
AI will deliver transformative value and create systemic risk simultaneously. Technology platforms will race toward AGI whilst whistleblowers raise safety concerns. Regulations will conflict across jurisdictions. Secondary effects will compound unpredictably, much like GLP-1 drugs that entered markets for diabetes treatment yet now reshape insurance risk models and airline economics.
Markets reward hype over governance maturity, which is precisely why big tech perpetuates it. The governance decisions you make now determine your options for the next decade.
MEET THE AUTHOR
Paul Armstrong Founder, TBD Group
Paul Armstrong is founder of TBD Group, an emerging technology advisory firm, and author of Disruptive Technologies: Understand, Evaluate, Respond. He provides emerging technology intelligence and strategic advisory to global organisations including PwC, Meta, and Coca-Cola, helping leaders navigate disruption and identify opportunities. Sought for comment by the Wall Street Journal, Financial Times, CNN, and BBC, Paul has written for Forbes, The Guardian and Reuters, and writes a popular City AM column on AI’s ongoing business impact, examining how breakthrough innovations reshape industries before markets can adapt. He is currently preparing his next book on risk as a driver of innovation.
Richard Power Partner, London
Richard Power is a leading disputes lawyer with over 15 years’ experience in the energy sector. He specialises in complex cross-border and domestic disputes across arbitration, litigation, mediation and other ADR processes. Richard advises on commercial, corporate, regulatory and climate-related disputes, including greenwashing and Energy Charter Treaty claims.
The evolving risk landscape through the lens of leading decision-makers
Meet the contributors to Corporate Risk Radar 2025: second edition
Rebecca Armstrong Partner, London
Rosehana Amin Partner, London
Leon Alexander Partner, Perth
How AI is reshaping the workplace
The 2025 edition of our annual Corporate Risk Radar captures the perspectives of over 400 global
business leaders — including Board members, C-suite executives and General Counsel (GCs) — from a wide range of sectors and regions. The report explores leaders’ views on the most pressing risks facing their organisations now and over the next three years, how prepared they feel to address them, and how resilience is being embedded at the heart of their operations — including a dedicated chapter examining the impact of AI in the workplace.
Many organisations are adapting to the implementation of Large Language Models (LLMs) rather than agentic AI, which could mean policies and controls are not on the front foot. Isabel Simpson, Partner, Clyde & Co in London, said: “As organisations are adapting to the use of LLMs, there is a real risk that the policies, safeguards and controls put in place to mitigate the risks associated with LLMs will not be keeping pace with newer forms of AI such as agentic AI. Since AI is developing at pace, there is a lack of organisational awareness of how these different kinds of AI work, produce results and make decisions. If organisations don’t understand how their AI models work, they won’t be able to use them in a trusted way so education in this space across the organisation is key”.
The debate around the adoption and use of AI in the workplace has shifted, with many leaders initially sceptical but quickly moving to introduce AI for productivity gains. AI holds enormous potential to reshape workflows, but the pace of development, the introduction of new agentic AI tools and the need for rapid training and investment are creating new challenges for organisations.
Agility is crucial to keep pace
She noted that operationalising and realising efficiency requires multi-layered governance. “This requires policy frameworks, data leakage prevention, access management controls, shadow IT monitoring and a pilot-then-scale deployment pattern that allows organisations to test and refine models before rolling them out. Policies need to be under constant review and development to keep pace, outputs need to be aligned with the values of the organisation, proper controls for access to data and safeguards must be established and you need a dedicated team of skilled professionals.”
As companies adapt to the use of AI and work out which model is right for their organisation, employees may be tempted to seek information from unverified sources. Eva-Maria Barbosa, Partner & Chair of the Global Corporate & Advisory Group at Clyde & Co in Munich, said: “Organisations need to have clear use cases, so employees understand how to use it, what to use it for and, importantly, how to verify the output.”
Clear direction from leadership
Mindset and culture
AI is increasingly playing a role in every aspect of organisations, from research and planning through to decision-making. Beyond tangible guardrails, training, governance frameworks and the recruitment of skilled personnel, successful use cases will increasingly depend on team dynamics and cultural alignment. Isabel Simpson, Partner, Clyde & Co in London, said: “Geographical culture plays a significant role in how organisations approach AI deployment as it influences societal expectations and therefore risk appetite. A positive approach to AI deployment combines governance and human oversight.”
This chapter is excerpted from the Corporate Risk Radar 2025: second edition. Click here to download the full report.
Eva-Maria Barbosa Partner and Chair of the Global Corporate & Advisory Group, Munich
Elizabeth Evans Partner, New York
Jared Kangwana Managing Partner - Kenya, Nairobi
Olivia Darlington Partner, Dubai
Roshanak Bassiri Gharb Partner, Dubai
James Roberts Partner, London
Isabel Simpson Partner, London
Chris Leadbetter Partner, London
Rebecca Kelly Managing Partner - Australia, Brisbane
Charles Urquhart Partner, London
Sam Tate Partner & Global Head of Regulatory and Investigations, London
MEET THE AUTHORS
Ben Knowles Partner & Chair of the Global Arbitration Group, London
Jared Kangwana Managing Partner, Nairobi
Rebecca Kelly Managing Partner, Brisbane
Eva-Maria Barbosa Partner & Chair of the Global Corporate & Advisory Group, Munich
As more and more businesses explore the potential of Artificial Intelligence, having AI tools automatically record and analyse meetings and other forms of communication is a relatively straightforward entry point into the world of AI – especially as so many meetings now take place online. Since AI is capable of transcribing, summarising and analysing content in seconds, it can create and disseminate minutes, notes and action points with no manual human input, saving significant time and resources.
However, in continuously capturing these conversations via always-on AI recording tools (which is part of what we call “persistent AI recording”), companies must manage heightened and emerging legal and regulatory risks – from data protection and privacy issues to the impact on confidentiality and client privilege.
The importance of robust data governance
With persistent AI recording, robust data governance becomes more critical than ever. Organisations are harvesting increasing amounts of information that must be stored securely within suitable frameworks, with appropriate controls around who is responsible for it, who can access it and how it should be shared. Moreover, when meeting attendees log into virtual meetings from different countries, international data transfer issues will need to be considered, in terms of which jurisdictions’ data protection and privacy regulations apply.
It’s worth noting too that holding more data on individuals increases risk should a data subject access request be made (where individuals can ask organisations for copies of the personal information held on them, including where it came from, what it is used for and who it has been shared with). There will now be more – potentially sensitive – data on record to be interrogated.
If data is compromised, regulators will question how well it was protected, whether it was retained for too long and if it should have been collected at all. Those whose data has been stolen may launch legal action. Corporate reputation as well as company value could be damaged.
Then there’s the possibility that persistent AI tools could cross-contaminate information. If companies use AI systems to analyse centrally-managed client data, guardrails must be in place to prevent AI models from aggregating information harvested from the meeting recordings or notes of multiple customers in a way that breaches client confidentiality obligations, data privacy rules or contractual terms. Otherwise, the risk is that sensitive information from one client could be inadvertently (and unlawfully) communicated to others.
This is a board-level issue. Senior leaders have a duty to ensure due diligence is undertaken on any new technologies implemented, and that good governance and appropriate policies, procedures and frameworks are embedded throughout their organisations. As AI develops rapidly, controls require regular testing to ensure they remain fit for purpose. Staff must be educated and trained on the implications of persistent AI recording. They may need to be told to think more carefully about what they say in meetings, and given the power to turn recordings off when necessary to enable full and frank discussions to take place.
Persistent AI recording:
Finding the balance between boosting productivity and compliance
Why always-on AI meeting capture tools should be handled with care
1. Watson, M. (2025, August 25). Financial firms face 25% surge in advanced cyberattacks in 2024. IT Brief. https://itbrief.co.uk/story/financial-firms-face-25-surge-in-advanced-cyberattacks-in-2024
2. Gallagher. (2025, October 27). How cyber criminals set their sights on professional services firms. The Law Society Gazette. https://www.lawsociety.org.uk/topics/cybersecurity/partner-content/how-cyber-criminals-set-their-sights-on-professional-services-firms
In a contentious context, the rise of persistent AI recording opens up new opportunities for litigants to discover evidence. Information discussed during meetings and other conversations may be confidential and protected by client privilege; however, if those communications have been subject to persistent AI recording, it may be harder to assert confidentiality or privilege rights. Refusing to hand over data and documents as part of the litigation disclosure process may not wash if that “secret” information has already been widely circulated or accessed.
Organisations can reap substantial efficiency gains and valuable business intelligence by using AI to capture and analyse meeting content, but it’s vital to consider carefully how to balance productivity-boosting tools with legal and regulatory obligations. Meanwhile, insurers should be thinking about how all this plays into risk evaluation and liability issues. As yet more buckets of data are added to the ever-increasing pool, vigilance about how this information is processed and protected should be a top-of-the-agenda item, if it isn’t already.
As the benefits of AI become clearer, organisations that use it to capture and analyse meeting content must navigate a complex web of data protection, privacy, confidentiality and client privilege issues, say Isabel Simpson and Rosehana Amin
Proper consent must be gained from those attending. It may not be enough simply to inform attendees that a meeting or conversation is being recorded: for example, employees might not be capable of truly consenting to this kind of monitoring if they feel they can’t say no. Therefore, it’s vital to conduct appropriate due diligence to ensure compliance with contractual conditions as well as with data protection obligations and employment law. Extensive guidance on employee monitoring exists, and companies should follow it.
More data, greater cyber vulnerability
Implications for client confidentiality and privilege
In much the same way, persistent AI recording may increase organisations’ attractiveness as a target for cybercriminals. The more high-value financial, business-critical or sensitive personal information they hold, the more vulnerable they could be. Financial institutions and professional services firms are particularly susceptible: the former faced a 25% increase in cyberattacks in 2024,1 while the latter were the targets of 20% of ransomware incidents in Q2 2025.2
Furthermore, exposing this information to third-party AI tools raises serious questions about what data access rights the providers of those AI solutions have. Claimants could argue that allowing a third-party tool such as Microsoft Teams or Copilot to process the data has already breached client confidentiality and privilege.
MEET THE AUTHORS
Robbie Pilcher Associate, APAC
Leon Alexander Partner, APAC
Artificial Intelligence (AI) is evolving rapidly, and its next major iteration – “Agentic AI” – is set to transform working practices even more radically. While generative AI (Gen AI) creates outputs such as text or images, agentic AI autonomously makes decisions, takes actions, adapts to new information, and collaborates with other systems to perform complex, multi-step tasks without human intervention.
However, with greater autonomy comes heightened risk across global regulatory environments. Agentic systems often require access to sensitive data and they have the ability to modify digital environments or trigger real-world actions, increasing the potential impact of errors, misuse, or unintended behaviour. As organisations begin deploying AI agents that interact dynamically, outcomes can become more difficult to predict and govern.
Agentic AI offers great advantages to businesses in terms of saving the time and resources required to carry out a multitude of tasks, streamlining processes and accelerating outcomes. Clearly, however, implementing AI models that carry out activities autonomously presents significant risks, particularly in the areas outlined below.
What is agentic AI?
Agentic AI is a sophisticated Artificial Intelligence model that can reason, plan and take actions independently to solve problems and achieve set goals, learning and refining its approach along the way. Examples include advanced coding assistants, intelligent customer service agents and automated enterprise workflows. Agentic AI decides what steps are necessary to complete a specific task, maps out a process, works around obstacles and completes the task by itself.
Organisations seeking to adopt agentic AI should implement a structured internal governance framework that assesses and classifies risks, defines accountability across stakeholders, and implements safeguards should issues arise. This could include human-in-the-loop controls, audit requirements, data access restrictions, and defined procedures to escalate any issues identified to the relevant personnel.
Harnessing agentic AI:
Risks, rewards and responsibilities
Agentic AI may inadvertently expose or manipulate sensitive data, including personal data, proprietary information, customer records, trade secrets, and internal communications. This risk may arise from external security breaches (where attackers exploit autonomous agents) or from the agent’s inability to correctly identify and protect confidential information. Agentic AI can be something of a double-edged sword. While it can accelerate tasks and solve problems more quickly, meaning that it could, for example, spot and contain a cybersecurity breach in real time, the downside of autonomous decision-making is that mistakes, lapses of “judgement” or misuse can also be perpetuated much faster.
With this in mind, organisations should consider carefully what agentic AI should and shouldn’t be used for. Agentic AI may not be suitable in situations where either the likelihood of failure is high, or the potential impact of a failure would be severe.
Regulators in APAC leading the way on governance
Regulators are starting to develop formalised governance guidance which should help organisations recognise, address and reduce the risks, with the Asia Pacific (APAC) region leading the way. In January 2026, Singapore became the first jurisdiction to publish a formalised model governance framework for agentic AI, launched at the World Economic Forum in Davos,2 which outlines the need for responsible deployment and emphasises that human accountability remains paramount.
1. BCG. (2026, January). AI Radar 2026: As AI investments surge, CEOs take the lead. https://www.bcg.com/publications/2026/as-ai-investments-surge-ceos-take-the-lead?utm_content=ai-radar26
2. Infocomm Media Development Authority. (2026, January 22). Singapore Launches New Model AI Governance Framework for Agentic AI. https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2026/new-model-ai-governance-framework-for-agentic-ai
3. Singla, A., Sukharevsky, A., Hall, B., Yee, L., Chiu, M., & Balakrishnan, T. (2025, November 5). The state of AI in 2025: Agents, innovation, and transformation. McKinsey. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
Agentic AI is fast becoming a reality for workers across the globe: a survey late last year found that 62% of respondents report that their organisations are at least experimenting with AI agents.3 As they do so, appropriate application and good governance are paramount.
When it comes to agentic AI regulation, it’s likely that APAC will be the global testbed. The region’s mix of jurisdictions comprises a diverse range of strict regulatory models, flexible frameworks and emerging national strategies, and this creates a unique environment for early cross-jurisdictional experimentation and compliance challenges.
Deployment of agentic AI is still in its relative infancy, but its use is expected to increase rapidly as companies race to stay at the forefront of cutting-edge technologies that are poised to deliver significant business benefits. Recent research among CEOs around the world found that nine in ten think AI agents will deliver measurable return on investment (ROI) this year.1
Users and/or their organisations could be held accountable for adverse outcomes stemming from actions taken by AI agents on their behalf, even if those actions were unintentional. Liability for errors, unintended behaviour or data breach on the part of agentic AI will vary depending on the circumstances and the stakeholders involved.
If an agentic AI system operates incorrectly, it could ultimately undermine professional competence and integrity. For instance, it may result in unfair or biased outcomes that could expose professionals or organisations to allegations of discrimination or failure to exercise due skill and care in areas such as procurement, grant administration or HR. In regulated professions, failures may even affect an individual’s ability to continue practising.
Agentic AI systems may independently perform incorrect or harmful actions, such as scheduling appointments on incorrect dates or generating flawed software code. The severity of the resulting harm depends on the nature of the task. For instance, erroneous medical scheduling could adversely affect patient outcomes, while defective code may expose organisations to cybersecurity vulnerabilities.
High likelihood scenarios: If an AI agent is required to exercise broad discretion or self-direct its workflow without strict guidance, the risk of unintended behaviour could be elevated. Conversely, a structured environment governed by rigid, well-defined operational controls limits variability and reduces risk.
But organisations everywhere should think about where liability lies, prepare for evolving regulatory expectations and adopt robust, adaptable frameworks to harness the capabilities of agentic AI in a safe and responsible manner.
What are the key risks?
Erroneous actions
Unauthorised activity
Given their autonomous capabilities, agentic systems may take actions beyond their permitted scope or authority. This includes executing tasks without the required human escalation or approvals in accordance with internal policies or standard operating procedures (SOPs). Such deviations can lead to regulatory non-compliance, contractual breaches, or operational failures.
Data breaches and improper disclosure
Advanced AI agents capable of autonomously executing complex actions are poised to further transform the world of work, but they raise the risk of errors, unauthorised activity and data breaches, says Zhen Guang Lam, Senior Associate
What can be done to mitigate them?
High impact scenarios: Tasks involving high-value financial transactions, or decisions requiring strict accuracy, accountability or an understanding of nuance (such as recruitment or employment termination decisions), may not yet be suitable for fully agentic autonomy. In contrast, low-risk functions – such as analysing sales leads, updating databases and conducting initial outreach, or onboarding new hires, scheduling start dates, submitting the necessary recruitment documents and signing forms – pose comparatively minimal danger.
Where does liability for adverse outcomes lie?
For example, in an e-commerce context, if a human user relies on an AI agent to make purchasing decisions, they could be held responsible if the AI incorrectly purchases goods that fail to meet the required standards or specification. Likewise, if a seller deploys an AI agent to handle the product marketing or dispatch and returns process, they could be deemed accountable if agentic AI misrepresents the product or breaches consumer or data protection obligations. The developer of the AI agent or the organisation that deployed it could also be in the frame from a liability perspective, particularly where inadequate controls or governance contributed to the error.
Adopting agentic AI responsibly
Zhen Guang Lam Senior Associate, Singapore
While the scope of IP protection varies depending on jurisdiction, two fundamental principles typically apply: originality and ownership. Businesses must be able to show that a work is original to warrant IP protection, and crucially, it will almost always need to be created by a human, not solely by a machine.
People may be becoming more familiar with using GenAI, but very few can explain exactly how the tools they are deploying actually work. Users are increasingly confident in inputting well-crafted prompts that yield meaningful and valuable outputs yet often remain unaware of how those outputs are generated and how their input data is being processed in the background. Moreover, developers of AI systems are frequently reluctant to disclose the inner workings of their large language models (LLMs) in the first place. All of which creates an intellectual property (IP) “black box”: limited transparency, little to no traceability, and significant uncertainty about what data has gone in and what content may come out.
This lack of certainty creates a challenging environment for organisations seeking to rely on AI‑generated outputs without inadvertently breaching copyright, database right, or other IP rights.
Put appropriate AI usage policies in place. Outline and define what AI-related activities are permitted or restricted. These policies should explicitly state that confidential or sensitive information must not be shared on publicly available AI tools which have not undergone due diligence by the company. Existing policies such as IP, data protection, and cybersecurity policies should be updated to reflect Gen AI specific risks, including potential data leakage or model hallucinations. Employees should be trained to understand these risks, follow required procedures, and properly document their contribution to the creation of AI-assisted works.
Inputting confidential information into LLMs carries the risk that sensitive or proprietary data could be used to further train the AI model, or could be surfaced (directly or indirectly) in outputs generated for other users, putting trade secrets at risk of exposure. Unless the provider of the AI tools being used expressly states they will not use or retain inputted data, the default assumption must be that they might.
Securing the IP legitimacy of Gen AI-created assets
Trade secret leakage
1. Morgan Stanley. (2025, March 4). GenAI revenue could surpass $1 trillion by 2028. https://www.morganstanley.com/insights/articles/genai-revenue-growth-and-profitability
For the same reasons, evidencing and claiming ownership of GenAI outputs can be equally challenging, and it may be difficult for businesses to assert IP rights over such assets. Moreover, defining who owns the IP rights in an AI-assisted creation remains a grey area.
Choose enterprise-grade AI solutions. Where possible, opt for enterprise‑grade AI tools rather than free or generic public tools. Enterprise solutions are more likely to suit the business’ specific purposes, be developed with greater discernment around sources of information, and ringfence how customer data is used. Companies should attempt to negotiate key terms with vendors such as data retention and deletion, IP ownership of outputs, geographic location of data processing, liability allocation, and indemnities.
In just a few short years, generative Artificial Intelligence (Gen AI) has gone from being the preserve of the tech-savvy to a mainstream business tool, capable of unlocking significant efficiency gains and boosting creativity. Industry forecasts suggest this momentum will only intensify, with GenAI revenues expected to exceed USD 1 trillion by 2028, a threefold increase in just three years.1 Yet, despite increasingly widespread adoption, many of the legal and commercial risks are often poorly understood. In particular, uncertainty persists around ownership of AI-assisted creations, IP infringement exposure and the protection of trade secrets and confidential information.
We see businesses continuing to face three key IP issues:
Infringement exposure
Since GenAI models are trained on vast datasets, it is often difficult to trace the origin and chain of title of the content used. In fact, many models scrape data from across the internet, where the provenance and status may be unclear or entirely undocumented. This creates a genuine risk that the output generated by users could infringe third-party IP rights. Even where licences to use particular datasets have been secured, those licences may have very complex or restrictive terms that preclude certain uses, leading to unintentional breaches of licence terms.
Businesses may also be exposed to liability if client or customer information is compromised as a result of a breach affecting the AI service provider. More critically, for organisations seeking patent protection, any disclosure of an invention to the public could jeopardise novelty requirements and undermine the ability to secure patent rights. In some jurisdictions, inputting information into open GenAI models could inadvertently constitute a public disclosure (i.e., the information is made available to the public without a binding obligation of confidentiality).
Ownership and enforcement
Some of these issues are currently being explored in the courts, but there is as yet no definitive body of case law to offer clear guidance. To mitigate risk, as in other areas of the business, organisations must enact appropriate governance and controls, for instance:
Maintaining documentary evidence is crucial to demonstrate that an asset was created through compliant processes and is eligible for IP protection. Organisations should be able to answer key questions such as: Where did the underlying material originate? Who created it? What exactly was the input and the final output? Do we own the relevant rights? How valuable or strategically important is it? Legal team clearance should be obtained before materials are published, disclosed, or used in marketing or client‑facing contexts.
GenAI represents a novel way of working, changing the dynamics between humans and machines and redefining what it means to be creative. Given its extensive benefits, adoption is only likely to accelerate, but many businesses are still grappling with its technical capabilities and governance issues when it comes to IP. Best practice parameters that apply to many other business processes should likewise be applied to AI – albeit with some adjustments for the nuances and complexities of this technology. That way, companies can be confident that their valuable business assets are built on firm legal foundations.
Anna Caruso Senior Associate, Abu Dhabi
As a result, on the one hand, organisations that have invested heavily in AI‑assisted proprietary materials may later face legitimacy issues when seeking to enforce or defend “their” IP, for example, during due diligence in the context of M&A transactions. If the legal standing of these assets is called into question, the consequences of losing that IP protection could be costly.
On the other hand, when attempting to enforce IP rights, the probabilistic nature of GenAI systems presents a significant challenge. Because these systems draw on patterns learned from large bodies of training data and recombine fragments of that data, there is rarely a clear, verifiable chain of evidence showing exactly which sources were used or “copied” to produce the allegedly infringing output.
The adoption of generative Artificial Intelligence for creating “proprietary” business materials raises significant legal challenges, particularly in relation to ownership, infringement, and the safeguarding of confidential information, says Anna Caruso, Senior Associate
Implement strict review protocols. Formal review processes can assist with scrutinising and monitoring inputs and outputs.
In just a few short years, generative artificial intelligence (GenAI) has gone from being the preserve of the
Technological advances in generative AI may lead to a resurgence in the importance of traditional trial advocacy, as video evidence becomes less trustworthy and judges and juries seek assurances of authenticity that only live humans can provide.
The past several years have seen the proliferation, and later refinement, of web-based tools that can take a user-generated text description and generate a short video to match it. OpenAI Sora (now Sora 2), Google Veo (now Veo 3.1), Meta Vibes, and Adobe Firefly all generate video using AI, with more competitors sure to follow as the generative AI space matures.
In a recent New York Times quiz that asked readers to identify whether short videos were AI-generated or not, the rate of correct responses hovered around 50% for most videos (with one AI-generated video identified as AI by only 32% of readers).1 In other words, for many videos, picking out whether they were generated by AI is no more reliable than calling a coin flip. And, given that these products have been in the marketplace for less than a year, there is every reason to believe that the videos they generate will only improve.
We therefore may have come full circle: The means of proof predating video technology—effective advocacy by attorneys and coherent testimony by witnesses—may be the only tools available to fill the credibility gap left by the rise of generative AI.
Historically, introducing video evidence at trial has been simple and straightforward. In federal courts, under Federal Rule of Evidence 901(a), the proponent of a piece of evidence must present sufficient proof to support a finding that the item is what they claim it to be; with video evidence, this is usually done by a witness testifying that the video accurately portrays what they observed, or that the video is otherwise authentic such that what it shows is true.6 Rule 901(b) outlines various methods for establishing authenticity, including testimony from a knowledgeable witness, circumstantial evidence, and descriptions of systems or processes that reliably produce accurate results (typically used in the absence of witness testimony corroborating what is seen on the video, such as for a store security camera recording an overnight burglary).7 These standards are intentionally flexible, designed to accommodate a wide range of electronic evidence formats.
The development of generative AI video, accessible to anyone with an OpenAI or Google subscription, has broad implications for how society treats information contained in videos.2,3 But perhaps nowhere may be more affected than courts trying to adjudicate factual disputes. For years, video has been seen as the gold standard of evidence—unassailable, even by contradictory live testimony, even for the purpose of granting summary judgment.4 Indeed, a good story, without video to back it up, was often not seen as good enough by today’s jurors, especially when a video would be expected.5 But what happens when the video itself could be fabricated, and fabricated so well that no expert or computer can detect that it is fake? Video is no longer the gold standard of evidence, and indeed it may be treated as unreliable.
Generative AI and trial advocacy
Back to basics?
However, as generative AI tools become more sophisticated and widely accessible, the reliability of these traditional methods of authentication will increasingly be called into question. And even if a video is shown by proper means to be authentic for use as evidence, a jury may not buy it, no matter what the judge and lawyers say. The ability to fabricate convincing images undermines the evidentiary weight of digital visuals, even when they meet the formal requirements of Rule 901. This tension sits at the heart of the current debate over how courts should evaluate digital evidence in the age of AI.
These provisions were crafted to accommodate the digitisation of information and its use as evidence, including photographs stored and reproduced electronically. Now, however, the underlying assumption that a digital image accurately reflects reality is no longer guaranteed. The legal framework, while flexible, was not built to anticipate the ease with which synthetic images can be created and passed off as authentic. As a result, courts must now grapple with the possibility that even images meeting the formal criteria for admissibility may be fundamentally unreliable, raising urgent questions about how to establish and preserve evidentiary integrity. A proposal to amend these rules is currently under consideration and is discussed in greater detail below.
References

1. Thompson, S. A. (2025, June 26). A.I. videos have never been better. Can you tell what's real? The New York Times. https://news.nestia.com/detail/A.I.-Videos-Have-Never-Been-Better.-Can-You-Tell-What%E2%80%99s-Real%3F/13637483
2. Metz, R. (2025, October 3). OpenAI's Sora video app raises risk for rampant misinformation. Bloomberg. https://www.bloomberg.com/news/newsletters/2025-10-03/openai-s-sora-video-app-raises-risk-for-rampant-misinformation
3. Hsu, T., Thompson, S. A., & Myers, S. L. (2025, October 3). OpenAI's Sora makes disinformation extremely easy and extremely real. The New York Times. https://www.nytimes.com/2025/10/03/technology/sora-openai-video-disinformation.html
4. Scott v. Harris, 550 U.S. 372 (2007). https://supreme.justia.com/cases/federal/us/550/372/
5. Schwartz, J., & Zezima, K. (2010, December 9). With video everywhere, stark evidence is on trial. The New York Times. https://www.nytimes.com/2010/12/09/us/09jury.html
6. Federal Rules of Evidence, Rule 901(a): Authenticating or identifying evidence. https://www.law.cornell.edu/rules/fre/rule_901
7. Federal Rules of Evidence, Rule 901(b): Methods of authentication/identification. https://www.law.cornell.edu/rules/fre/rule_901
8. United States v. Chapman, 804 F.3d 895 (7th Cir. 2015).
9. Hojjati, A. (2024, May 30). From paper to post: The most secure ways to vote; The road ahead: Creating a secure and accessible future for voting. DigiCert. https://www.digicert.com/blog/what-is-the-most-secure-voting-method#:~:text=But%20the%20task%20of%20verifying,fraud%20prevention%20without%20sacrificing%20usability
10. SynthID – Google DeepMind. https://deepmind.com/research/open-source/synthid
11. Advisory Committee on Evidence Rules. (2025, May 2). Agenda book: May 2, 2025 meeting, U.S. Courts (p. 77). https://www.uscourts.gov/sites/default/files/2025-04/2025-05_evidence_rules_committee_agenda_book_final.pdf
12. Proposed Fed. R. Evid. 707 on Artificial Intelligence-Generated Evidence. (2025, August 21). National Law Review. https://natlawreview.com/article/new-evidence-rule-707-would-set-standards-ai-generated-courtroom-evidence
13. Id.
14. Advisory Committee on Evidence Rules. (2025, May 2). Agenda book: May 2, 2025 meeting, U.S. Courts (p. 199). https://www.uscourts.gov/sites/default/files/2025-04/2025-05_evidence_rules_committee_agenda_book_final.pdf
15. Proposed Fed. R. Evid. 707 on Artificial Intelligence-Generated Evidence. (2025, August 21). National Law Review. https://natlawreview.com/article/new-evidence-rule-707-would-set-standards-ai-generated-courtroom-evidence
16. Id.
17. New AI evidence rule is a good start, but more is needed. (2025, August 27). Law360. https://www.law360.com/pulse/articles/2381199/new-ai-evidence-rule-is-a-good-start-but-more-is-needed
Further complicating matters, the Federal Rules of Evidence—specifically Article X, which governs the contents of writings, recordings, and photographs—provide broad definitions and standards that were designed for more traditional digital formats. Federal Rule of Evidence 1001(1) defines writings and recordings to include magnetic, mechanical, or electronic recordings. Federal Rule of Evidence 1001(3) states that data stored in a computer, when printed or displayed in a readable format and shown to accurately reflect the data, qualifies as an “original.”
Around the same time that courts were adapting to the rise of digital evidence in the early 2000s, the United States faced a parallel erosion of public trust in the reliability of digital records in another high-stakes arena: presidential voting. The 2000 presidential election exposed deep concerns about the accuracy and transparency of voting systems, particularly in contrast to traditional paper ballots. The resulting public distrust of digital voting mechanisms threatened the integrity of the democratic process.
But where no one can testify to the authenticity of a video, lawyers could be faced with having to prove facts the old-fashioned way—with eyewitness testimony—without reliance (or over-reliance) on video evidence, because judges and jurors don’t trust video evidence like they used to. The only method available to fill the credibility gap, therefore, may be conventional trial advocacy—that is, telling a compelling story and effectively examining witnesses to enhance (or detract from) the credibility of the documentation or pictorial evidence that the jury is also presented with.
If a story makes sense, a video backing it up will effectively support it. But if a story is held together only by dubious reasoning and dodgy witnesses, a video corroborating it may not be enough. The advent of generative AI media is, therefore, a "Back to the Future" moment for trial lawyers: it may diminish the crutch that video evidence has become, and may place a premium on lawyers who are effective storytellers and advocates, not play-by-play announcers over a video.
Under Rule 707, AI and other machine-learning evidence offered at trial without an expert witness would be subjected to the same reliability standards as expert witnesses. Such evidence could be admitted only if it: (1) assists the trier of fact, (2) is based on sufficient facts or data, (3) is the product of reliable principles and methods, and (4) reflects a reliable application of the principles and methods to the facts.12 Rule 707 creates a framework for opposing parties to challenge the reliability of AI-generated evidence by assessing how the producing system operated and how its methods were applied to the specific facts of the case.13 Notably, a Committee Note clarifies the scope of Rule 707 machine learning to mean “an application of Artificial Intelligence that is characterised by providing systems the ability to automatically learn and improve on the basis of data or experience, without being explicitly programmed.”14
Because Rule 707 applies only to evidence that its proponent acknowledges was created by AI, and not to evidence whose authenticity is in dispute, it does little to help courts avoid deepfakes or other falsified evidence when authenticity is contested.17 Nevertheless, Rule 707 marks an important first step at the federal level in adapting the rules of evidence to the increasing use of AI-generated materials in court.
When these tools were first released in late 2024, the immediate reactions were mixed (as with many generative AI products): amazement, but also skepticism. The videos were good, often good enough to look real at first glance, but flaws could be found upon closer examination. In the past year, however, several new iterations of those tools have been released, and the output has only improved, to the point where a recent New York Times quiz asked readers to identify whether short videos were generated by AI or not.
The departure from purely paper voting to a hybrid of electronic voting mirrors the legal system’s current struggle with AI-generated image evidence. Just as digital voting required a tangible safeguard to maintain public trust, courts today must find ways to validate digital photographs and videos in an era where manipulation is not only possible but increasingly undetectable. For its part, Google has developed an AI “watermark” that is embedded in all of its AI-generated content and is (according to Google) difficult to tamper with.10 But unless every provider of generative AI services employed similar tamper-proof watermarks, the absence of an AI watermark would not be sufficient to prove that an image or video was not generated by AI. The lesson is clear: digital convenience must be balanced with evidentiary rigor. Whether in elections or litigation, the credibility of digital records depends on the ability to trace, verify, and, when necessary, revert to a trusted source.
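The watermark limitation described above can be made concrete with a small sketch. This is purely illustrative: the provider set and the `watermark_check` function are hypothetical assumptions for exposition, not real detection APIs such as SynthID.

```python
# Hypothetical sketch: why the absence of a known watermark cannot
# prove that a video or image is genuine. Only some (hypothetical)
# providers embed detectable watermarks in their output.
KNOWN_WATERMARKING_PROVIDERS = {"provider_a"}

def watermark_check(metadata: dict) -> str:
    """Classify a piece of media based only on a watermark lookup."""
    if metadata.get("watermark") in KNOWN_WATERMARKING_PROVIDERS:
        return "AI-generated (watermark found)"
    # No recognised watermark: the media could be genuine, OR it could
    # have been generated by any provider that does not watermark.
    return "inconclusive"

# A fabricated video from a non-watermarking tool slips through:
fake = {"source": "provider_b", "watermark": None}
print(watermark_check(fake))  # "inconclusive", never "genuine"
```

The point of the sketch is that a negative result is never "genuine": unless every generative AI provider embedded tamper-proof watermarks, absence of a watermark proves nothing.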
The Federal Rules of Evidence, including Rule 901 and Article X, were interpreted to accommodate digital formats, recognising that images stored electronically could still be authenticated through witness testimony, metadata, and system-generated records. See e.g., State v. Hayden, 90 Wash. App. 100, 950 P.2d 1024 (1998) (holding that digitally enhanced images of latent fingerprints and palm prints were admissible under the Frye standard, as there was no substantial disagreement among qualified experts regarding the reliability of the enhancement techniques or the software used by trained professionals). The Hayden court found “there does not appear to be a significant dispute among qualified experts as to the validity of enhanced digital imaging performed by qualified experts using appropriate software, we conclude that the process is generally accepted in the relevant scientific community.” Id. at 1028. Still, where there is a question over whether a piece of evidence is genuine, the task of determining what weight to put on that evidence most often falls to the finder of fact—it is simply one more item in the evidentiary stew that jurors must consider to reach a verdict.
AI-generated media presents a deeper challenge than previous generations of dubious evidence, however, in that commentators fear that as time goes on, experts will not be able to distinguish AI-generated images from genuine images using computer-based tools. With no expert testimony to rely on, courts and jurors are left with little guidance beyond their own eyes (which, as shown above, can deceive them). If a video authentically portrays events that a person can testify to, the admissibility standards for evidence should be sufficient—if a person comes into court and says that the video shows what they saw or that it was recorded using a reliable method, that video should come into evidence (if it is otherwise admissible).
In the event that Rule 707 is ultimately adopted, its practical impact could be significant. Certain cases may see an increase in pre-trial motions, expert testimony, and evidentiary challenges, driving up both the cost and complexity of introducing AI-generated evidence in court. Lawyers will need to develop new strategies for authenticating digital evidence and countering AI-related objections, while judges will face the task of applying expert-witness standards to technologies that evolve rapidly. In short, Rule 707 may not resolve every issue posed by generative AI, but it signals a shift toward a future where courts must balance technological innovation with the fundamental need for reliable evidence.
The problem of fake evidence is not new: courts have dealt with allegations of fakery, such as forged documents, for centuries. In recent decades, those allegations have expanded to include photographs or video digitally edited using programs such as Adobe Photoshop. When digital photography first emerged as a replacement for traditional film, courts and attorneys were forced to confront new questions about authenticity, manipulation, and evidentiary reliability. Unlike film negatives, which offered a physical and relatively tamper-resistant record, digital images could be altered with relative ease, raising concerns about whether they could be trusted in court. Yet despite these early doubts, digital photography quickly became the norm, and legal standards adapted accordingly; courts worked out how to adjudicate allegations that evidence was fake, often with the help of forensic experts.8
In response, many jurisdictions adopted hybrid systems that paired digital voting machines with paper backups, since physical records can be audited and recounted if disputes arise.9 Many people today feel more confident marking and submitting a physical ballot. The security of paper-based systems relies on a verifiable chain of custody, including secure storage, careful transport, and human oversight. Ultimately, it was the paper trail that provided the necessary assurance of integrity, reinforcing the idea that digital systems, while efficient, must be anchored by verifiable originals. Today, most polling places have adopted paper components, in part to assure public trust in the security and authenticity of voting.
Recognising these challenges, the US Judicial Conference's Advisory Committee on Evidence Rules ("Advisory Committee") has proposed two paths for updating the Federal Rules of Evidence. The first proposal was to amend Rule 901 to establish a specialised authentication process for suspected deepfakes. The second, and the Committee's preferred approach, introduces a new rule, Rule 707, which governs machine-generated evidence by applying expert witness standards to assess reliability. Ultimately, the Advisory Committee chose not to amend Rule 901, with several members favoring a "wait-and-see" approach.11
In May 2025, the Advisory Committee voted 8–1 in favor of seeking public comment on the proposed Rule 707.15 By August, the Committee on Rules of Practice and Procedure of the Judicial Conference of the United States had released Rule 707 for public comment, with the period open until February 16, 2026.16 Critics caution that Rule 707 applies only to evidence that its proponent acknowledges was created by AI.
MEET THE AUTHORS
Patrick Hofer Partner, Washington, DC
Petra Starr Senior Associate, Chicago
Bret Kabacinski Senior Associate, Chicago
Jared Clapper Partner, Chicago
AI-generated media presents a deeper challenge than previous generations of dubious evidence, however, in that commentators fear that as time goes on, experts will not be able to distinguish AI-generated images from genuine images
Ross Deuchars Associate, London
Victoria Peckett Partner, London
Ariana Chis Associate, London
Marianne Anton Partner, London
Rule 1001(4) further clarifies that a duplicate, whether created through mechanical or electronic reproduction, is admissible if it accurately reproduces the original. Under Rule 1003, such duplicates are generally admissible unless there is a genuine question about the authenticity of the original or if admitting the duplicate would be unfair.
Artificial Intelligence in healthcare:
Ben Knowles Partner and Chair of the Global Arbitration Group, London
Rebecca Kelly Partner, Brisbane
Where will liability lie?
We cannot escape the fact that Artificial Intelligence (AI) is becoming more common in our daily lives. From asking Copilot or ChatGPT a question, to driverless cars, to AI in a healthcare setting, the uses of AI are continually evolving.
Government plans have been announced to digitalise the healthcare sector, but the reality is that AI is already making its mark, and it is important to consider how its use can affect insurers and those who have responsibility for claims.
AI is being used in a variety of ways within healthcare: analysing x-rays, mammograms and skin samples, or acting as a virtual scribe to take notes at appointments. Clinicians use it in primary, secondary, and tertiary settings, for example to assist with the analysis of test results and images or to reduce the time spent on administrative tasks.
It is hoped that AI will benefit both patients and professionals working within healthcare, but it is equally important to exercise caution. An AI tool relies on the data put into it: systems can learn, but they need to learn from existing data, and that data needs to be broad enough to represent society as a whole, not just a small cross-section of patients. Developers and users of AI systems must ensure that the data going in is reliable and stays reliable, to avoid the potential for unreliable results. Those using the systems should be encouraged to raise any concerns about the results AI is generating, so that any inaccuracies are closely examined; these may be due to problems with data input and cleansing.
Where a healthcare professional is making clinical decisions, the liability position if something goes wrong is familiar. The position where AI is involved is presently unknown, and there are a number of possibilities as to where liability could lie: with the clinician using the technology (i.e. when inputting data or interpreting the information generated), the healthcare organisation that implemented the AI system, the body that developed the technology, or the body that approved the technology for use in a healthcare setting. If something goes wrong, a number of legal frameworks could apply, including negligence, product liability, and vicarious liability. It is not yet clear how the courts will approach the use of AI, as this is very much a developing area. There will inevitably be claims regarding either the use of, or the failure to use, AI in a patient's clinical journey and the decision-making alongside it. Contracts for the use of AI will need to be carefully considered, including liability and indemnity provisions.
When used in clinical practice there may be a question of exactly how the standard of care will be assessed. Parties in a claim usually instruct independent medical experts to assist the Court in determining liability, but will the same medical experts still be able to comment where AI has been used or will there be a need to instruct an expert in AI in addition to the medical experts to explain how the technology works? Alternatively, will medical experts need to be familiar with the use of AI in their field of practice in order to be able to prepare a report? Similarly, consideration needs to be given to whether the familiar Bolam test still works in a situation where AI has made a recommendation, but a reasonable body of clinical opinion does not agree with that recommendation.
As well as the different legal frameworks potentially in play, one must also consider how AI is used, and whether this will influence where any liability lies. For example, if a person uses AI to help reach a diagnosis, then, as with any other diagnostic tool, the legal responsibility lies with the person making the diagnosis (as in a standard negligence claim). Some argue that this could differ, depending on the way a particular algorithm is used and whether it is the AI or the clinician reaching the diagnosis. This raises a further question.
It is important to say that, despite instances of AI being used in healthcare, it is currently still a qualified human who makes the final diagnosis and discusses/consents their patient about the treatment journey (although AI may assist with that process, by providing information about likely treatment outcomes and the risks associated with the options under discussion etc). The NHS England Transformation Directive from 30 April 2025 in particular states that “the final decision about the care that people receive should be made in consultation with the patient or service user, using your professional judgment”. This may change as the use of AI becomes more prevalent and where liability lies when things go wrong may similarly evolve.
If the clinician is completely removed from making the diagnosis, where does liability lie? Some have argued that, if the algorithm is influencing the decision or reaching the diagnosis, it becomes a question of whether the clinician can understand or explain how the diagnosis was reached; if they cannot, can they really be responsible if something has gone wrong?
the final decision about the care that people receive should be made in consultation with the patient or service user, using your professional judgment
Healthcare providers using AI in a clinical context will need to be aware of how AI is being used and what it is being used for, as this will likely help to determine where liability could lie. Having policies and operating procedures in place giving guidance on the role of AI in reaching a diagnosis may help, as could training from the AI developers, so that healthcare providers understand how AI works and the ways it can help. Consideration may also need to be given to documenting in the medical records whether and how AI was used in a diagnosis, as this may help if something does go wrong and these questions need to be answered.
As AI continually learns from available data, there is also an argument that it may become so advantageous to clinical practice, or so accurate, that failing to use it could itself constitute a breach of duty, and some patients may specifically ask for AI to be used. Would the consent process then need to include risks related to the use of AI? These are issues that clinical negligence lawyers may need to tackle as the use of AI in healthcare increases, and something that medicolegal experts will need to be alive to.
Kayleigh Tranter Associate, Birmingham
Kayleigh Tranter is a leading disputes lawyer with over 15 years’ experience in the energy sector. She specialises in complex cross-border and domestic disputes across arbitration, litigation, mediation and other ADR processes.
ChatGPT Health -
OpenAI has recently launched ChatGPT Health, a new space within ChatGPT that allows users to ask health questions. In some regions, users can link their medical records and fitness apps to provide personal data that adds context to their questions (the ability to link records is not currently available to UK users). OpenAI stresses that it is not to be used to replace medical care, but the reality is that many will turn to it as a first port of call.
could this be a new dawn in clinical negligence claims?
Even before this advancement, OpenAI reported that ChatGPT was widely used for health-related questions. It is reportedly relied upon by many users before consulting their GP (e.g. uploading an image of a spot and asking if it is a concern, or asking AI to explain a diagnosis and surgery in plain language). The risks of relying on AI as a first port of call do not need to be laboured here. AI has proven excellent at image recognition (and its use in healthcare is longstanding). However, its ability to contextualise and adapt to the unique circumstances of an individual user is not as well tested (and that is exactly the gap ChatGPT Health now appears to be targeting).
As a clinical negligence practitioner, it is wise to pay attention to these developments. Not only are surveys showing the wide uptake of AI by patients, but there is evidence of generative AI now working its way into healthcare settings (whether that be to act as a scribe or to interpret imaging). NHS England is trying to keep pace, recently releasing guidance on AI-enabled ambient scribing products in health and care settings.
It surely cannot be long before we see cases where a patient has a fixed 'AI-backed' idea of their condition and treatment needs and criticises a doctor for not following suit. This is all the more likely when an app named 'ChatGPT Health' suggests to the user that it is specialised in health conditions and questions. Such a claim not only engages the Bolam/Bolitho questions, but also questions that drift into the data protection, consumer protection, and product liability remits, with uncertainty in the UK as to whether generative AI is a 'product'. Models like ChatGPT were never intended for medical use as part of their core technology; some would argue, however, that that line has now clearly been crossed.
From a clinical negligence perspective, the direction of travel is concerning. There is claims exposure in these advancements: examples of GenAI "hallucinations" are common, and they are arguably never more serious than when they concern medical care.
The introduction of ChatGPT Health signals a move towards a more structured way of using mainstream AI in a health setting. Jurisdictions like the EU have already adopted a more formal approach to AI regulation. The UK is yet to follow suit, relying on existing regulation rather than any single new AI Act, in the spirit of being 'pro-innovation'. However, with daily advancements in AI such as ChatGPT Health, the regulatory challenges the UK faces are growing, as is the need for clinicians to have a clear understanding of the regulations in order to engage safely with AI tools.
For any questions regarding this article, please reach out to Adam Hudson using the contact details provided below.
Adam Hudson Senior Associate, Bristol
The cyber (r)evolution
LATAM
Latin America's digital transformation is rewriting the rules of business, and of risk. The cyber revolution isn't just accelerating; it is outpacing the region's ability to defend itself. As businesses race to deploy cutting-edge technologies and build sprawling digital ecosystems, cybercrime is rising to unprecedented levels. The region now stands at a tipping point: rapid innovation on one side, systemic vulnerability on the other. The result? A threat landscape evolving faster than most organisations can comprehend. High-profile hacks, active regulators, and volatile markets are forcing a brutal choice: adapt or face collapse.
The current threat landscape
Cyber risk is now recognised as a major business challenge across the board. The region has experienced a sharp rise in cyberattacks, with countries such as Ecuador, Guatemala, Bolivia, and Peru ranking among the top ten most impacted globally. The financial sector, in particular, has seen its rapid digital transformation (primarily fed by the unstoppable growth of fintech) outpace cybersecurity development, leaving institutions exposed. Cybercrime in the region has grown at an average rate of 61% year-on-year over the past decade, and since 2015, 92% of financial institutions have suffered at least one cyberattack.
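To put the headline growth figure in perspective: a 61% average year-on-year increase, if sustained across the decade the statistic covers, compounds to roughly a hundredfold rise in incident volume. A back-of-the-envelope sketch, using only the cited rate (illustrative arithmetic, not a model of the underlying data):

```python
# Back-of-the-envelope compounding of the cited 61% average
# year-on-year growth in regional cybercrime over ten years.
growth_rate = 0.61   # 61% average annual growth, as cited in the text
years = 10

multiplier = (1 + growth_rate) ** years
print(f"Sustained for {years} years: roughly a {multiplier:.0f}x increase")
```

Even if the true trajectory is uneven year to year, the compounding illustrates why a decade of such growth transforms cyber risk from a technical nuisance into a board-level exposure.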
Unlike the European Union's unified frameworks, Latin America operates in a fragmented regulatory landscape. This fragmentation creates overlapping and conflicting obligations, inconsistent enforcement, and higher compliance costs, among other challenges for multinational organisations. The lack of harmonisation cripples cross-border incident response and destabilises cyber insurance markets.
Alongside escalating threats, businesses face the burden of new or fast-changing regulations. Clyde & Co’s Corporate Risk Radar 2025 report reveals a stark reality: More than two-thirds of Latin American respondents admit their organisations are struggling to keep pace with rapidly changing AI, data privacy, and cybersecurity regulations.
Case studies
The impact of cyberattacks in Latin America is best illustrated through real-world incidents:
Lessons from recent attacks
Banco de Chile (2018): A phishing email led to a sophisticated USD 10 million attack. The bank responded by establishing a dedicated cybersecurity division, but only after the breach.
Banco do Brasil (2024): Criminals, with insider assistance, gained remote access to equipment and confidential data, enabling fraud and manipulation of client records.
Colombian Government (2023): A ransomware attack via IFX Networks crippled critical infrastructure and public services, affecting 762 companies across 17 countries. Key government websites, including the Ministry of Health and the data protection regulator, were offline for days. The attack suspended an estimated two million legal processes and disrupted healthcare services, with losses impossible to fully quantify.
Other Governments: The Dominican Republic and Costa Rica have also suffered major ransomware attacks on public entities, with Costa Rica’s government reportedly shutting down systems after refusing to pay a USD 20 million ransom.
Regulatory pressures and compliance challenges
With threats escalating, regulatory hurdles multiplying, and technology accelerating, Latin America’s digital transformation is anything but smooth. This turbulence gives rise to key challenges businesses can neither ignore nor sidestep. These issues aren’t optional; they’re redefining resilience, competitiveness, and survival in the region’s commercial landscape.
The issue is compounded by fragmented regulations and rapid tech adoption: as companies deploy AI, IoT, and cloud systems, their risk profiles become more complex, yet insurance penetration lags behind. For multinational organisations, inconsistent local requirements and the lack of harmonised standards add further complexity, making underwriting difficult, driving up premiums and reducing availability in a market where innovation outpaces protection. One major breach can wipe out years of growth.
Limited incident reporting: Latin America’s cyber risk picture is blurred by a culture of silence. Many organisations avoid reporting incidents, driven by fear of reputational damage, regulatory scrutiny, and even internal stigma. This information gap means regulators and insurers lack visibility into the true scale and impact of cyber risks, making it difficult to design effective policies or underwrite accurately. At the same time, without reliable benchmarks, organisations struggle to assess their own exposure or justify investment in readiness and resilience. Ultimately, this lack of transparency erodes trust among stakeholders; silence amplifies vulnerability and perpetuates a cycle where risk grows unchecked.
Regulatory fragmentation: The absence of a unified regional framework leads to legal uncertainty, inconsistent enforcement, and higher compliance costs. Multinationals face challenges when incidents span multiple countries, and cross-border cooperation during ransomware campaigns remains weak. The lack of unified metrics and uneven response capabilities further complicates the landscape.
Insufficient corporate awareness: Cyber risk remains overlooked at board level, resulting in inadequate policies, processes, and training.
The booming dark market: Latin America’s cyber threat isn’t just technical; it’s economic. The dark web economy is evolving beyond stolen passwords into a sophisticated marketplace for corporate access. In 2024, Initial Access Brokers (IABs) surged across the region. These criminal groups sell direct entry into corporate networks, meaning attackers no longer need to exploit vulnerabilities; they can simply buy their way in.
The most sought-after assets include VPN credentials, remote desktop (RDP) access, admin panels, web shells, ready-to-use breach lists, stolen data enhancers for targeted attacks, and credential validation services to guarantee working logins. This commoditisation of access lowers the barrier to entry for cybercrime and accelerates attack velocity. For businesses, it underscores an urgent need for robust cybersecurity, proactive monitoring, and resilience strategies, because, in this market, your network could be for sale.
A recent Forbes report found that 60% of Latin American companies shut down within six months of a cyberattack, driven by crippling recovery costs, data loss, reputational damage, and a lack of business continuity plans. Furthermore, 75% of affected companies have no cyber insurance or incident response protocols, leaving them exposed to financial and operational shortfalls.
Market development: There is exponential growth in demand for cybersecurity solutions, driven by accelerated digitalisation, remote work, and rising cyber exposures. Critical sectors, including finance, energy, healthcare, retail, and utilities, are investing heavily in next-generation security infrastructure, 24/7 monitoring, and incident response services.
Regulatory momentum: Countries are shifting from reactive to preventive regulatory policies. Notable examples include Brazil’s National E-Cyber strategy, Colombia’s CONPES policies, and Chile’s Cybersecurity Framework Law. In September 2025, Brazil’s ANPD (Agência Nacional de Proteção de Dados, the National Data Protection Authority) and Argentina’s AAIP (Agency of Access to Public Information) signed a Memorandum of Understanding to strengthen cooperation on personal data protection, a significant step toward regional regulatory harmonisation.
The cyber insurance market is also expanding at pace, though significant gaps remain. Large corporations are leading adoption, while SMEs lag behind due to cost and awareness barriers. This imbalance creates a prime opportunity for insurers and brokers to innovate with tailored products, flexible pricing models, and risk management services that meet the unique needs of Latin American businesses.
Innovation in insurance products: The insurance market is evolving rapidly, moving away from generic cyber cover towards modular and sector-specific policies. Insurers are introducing tailored solutions such as fraud and continuity protection for banks, IoT disruption coverage for manufacturers, and regulatory penalty protection for healthcare providers.
Key challenges facing the region
Slow adoption of cyber insurance: Despite latent cyber risks, most companies in Latin America (particularly SMEs) remain uninsured. A dangerous mix of underestimated exposure, limited market offerings, and affordability barriers means access to insurance is still relatively scarce. Many business leaders still view cyber incidents as unlikely or assume traditional policies will cover digital losses. They won’t, and this gap leaves organisations exposed to financial fallout from cyberattacks, data breaches, and regulatory penalties.
Opportunities: strategic growth and leadership
The region’s booming fintech sector has been a critical catalyst, driving demand for innovative products and digital distribution channels. As cyber threats diversify, insurers have an opportunity to differentiate through customised coverage, embedded risk management services, and dynamic pricing models, positioning themselves as strategic partners rather than mere risk carriers.
Digital transformation and infrastructure financing: Artificial Intelligence and predictive analytics are revolutionising how insurers and businesses manage cyber risk, enabling dynamic premium calculation, cyber maturity assessments, and integrated preventive services such as vulnerability monitoring and incident response training. At the same time, Latin America is experiencing a surge in digital infrastructure investment, with fibre networks, low-latency connectivity, and sustainable data hubs attracting hyperscalers and regional investors.
This dual transformation creates new cyber-protection needs for critical networks as they expand. For organisations, it means shifting from reactive defence to proactive resilience. AI-driven tools can anticipate emerging threats, automate compliance, and optimise security investments, while infrastructure growth offers a chance to embed security from the ground up. Together, these trends turn technology from a vulnerability into a competitive advantage.
Talent and managed services growth: Latin America is rapidly emerging as a hub for cybersecurity talent and managed security services. The managed services market, covering cloud, security, automation, and network operations, is projected to grow from approximately USD 18.2 billion in 2024 to nearly USD 33.4 billion by 2035, reflecting a strong shift towards outsourced, scalable solutions. This trend is particularly attractive for SMEs, which often lack the resources for in-house cyber teams but face increasing exposure to sophisticated attacks.
At the same time, the region is building a deep pool of skilled professionals, supported by nearshoring advantages such as time zone alignment and cultural compatibility with North America and Europe. In turn, service providers and vendors from more mature markets are turning to the region, keen to establish a foothold and capitalise on this momentum. International firms are outsourcing cybersecurity operations, creating a virtuous cycle of demand, job creation, and capability development. For insurers, technology providers, and investors, this convergence of talent and service innovation represents a unique opportunity to scale rapidly and secure regional leadership.
Latin America stands at a defining moment. The cyber (r)evolution reflects a complex interplay of risk and opportunity. The region faces structural challenges (fragmented regulation, low insurance penetration, limited incident reporting, and uneven cyber maturity) that collectively heighten vulnerability. These issues are not merely operational; they expose systemic weaknesses in governance, market readiness, and cultural attitudes toward risk disclosure.
Turning risk into opportunity
Yet these same constraints are driving significant innovation. Regulatory frameworks are gradually shifting from reactive to preventive, digital infrastructure investment is accelerating, and insurers are experimenting with modular, sector-specific products. The rise of AI-driven risk assessment and managed security services further signals a transition toward integrated resilience models.
Whether these developments will translate into sustainable cyber maturity depends on coordinated action among regulators, insurers, and enterprises. Without harmonisation and capacity-building, the region risks perpetuating a cycle of reactive compliance and underinsurance. Conversely, strategic engagement could position Latin America as a benchmark for emerging-market cyber resilience, turning structural vulnerabilities into drivers of systemic reform. A risk, yes, but also an opportunity too good to ignore.
Laura Thackeray Senior Associate, London
Laura is a leading disputes lawyer with over 15 years’ experience in the energy sector. She specialises in complex cross-border and domestic disputes across arbitration, litigation, mediation and other ADR processes. She advises on commercial, corporate, regulatory and climate-related disputes, including greenwashing and Energy Charter Treaty claims.
Employees remain the main risk vector, with organisations facing an estimated 1,600 attacks per week. Without urgent action to embed cyber resilience into corporate governance, the region risks a serious escalation of preventable business failures.
There’s no question over Latin America’s need to embed effective cybersecurity and resilience strategies into its corporate DNA. This means implementing robust incident response and breach readiness plans, investing in employee training, and promoting specialised insurance products that truly meet the region’s needs. Without these measures, businesses will remain exposed to escalating threats and systemic vulnerabilities.
92% of financial institutions have suffered at least one cyberattack.
The protection of data remains a constant battle for organisations around the world, and encryption is a critical weapon in their armoury to keep it safe from bad actors. Figures suggest that over 10 billion encrypted records may be stolen each year.1 Even if cybercriminals or hostile state agents are able to steal data, modern encryption methods render it unreadable – and therefore valueless.
It’s estimated that it would take up to a billion years2 for modern-day computers to break the cryptography methods that currently secure sensitive data when it is being stored or shared, such as via databases, network connections, virtual private networks (VPNs), emails, messaging systems, apps and websites. However, super-fast, extremely powerful computers known as quantum computers are already being developed that could solve complex problems more quickly than has ever been possible before. It’s thought that quantum computing capabilities will come on stream in the next decade – a moment in time that has been dubbed “Q-Day”.3
Solving complex problems - fast
Aware of the possibilities quantum computing offers to unlock the “safe” of encrypted data, cybercriminals are starting to store, rather than discard, stolen data that is currently meaningless, in the expectation that the day will soon come when the safe can be opened. Indeed, there are reasons to fear they are being even more proactive: adopting a “harvest now, decrypt later” approach. Governments are already alive to the threat this could pose from hostile states and are starting to take action now to mitigate the risks. For example, in the US, the Quantum Computing Cybersecurity Preparedness Act requires government agencies to migrate to technology systems that can withstand quantum computing attacks.5 Businesses should take note, and adopt a similarly forward-thinking approach.
But it may not stay that way forever. Computers could one day be powerful enough to crack today’s encryption codes, making data stolen now a potential treasure trove for bad actors in the future. It is therefore vital to recognise the emerging threat posed by quantum computing technology now, and to take the necessary steps to future-proof cyber resilience.
Tech companies are racing to introduce quantum-resilient encryption methods into their solutions. Yet, as the authors of a Federal Reserve analysis paper on post-quantum cryptography admit, “Migrating or updating cryptography methods is a complicated process and requires time.”6 Meanwhile, there are some relatively straightforward housekeeping steps organisations can take in the near term to mitigate the threat:
Include post-quantum security clauses in supplier contracts
As well as controlling their own internal data, organisations need to think about how to ensure sensitive data held by suppliers is protected. By including provisions in contracts requiring suppliers to put post-quantum resilience measures in place now, they too should be well-prepared to manage and mitigate the risk.
Organisations may think that by engaging technology providers to store their data and deal with this potential future threat, they are “transferring” the problem to expert providers. However, under many jurisdictions’ data protection rules, the ultimate responsibility for keeping data safe remains with the customer (the data “controller”) rather than the technology provider (the data “processor”). Moreover, the risk is not just about personal data but also confidential commercial data and data relating to third parties to whom duties of confidentiality are owed, such as an organisation’s clients. This data could cause significant damage to a business if it ends up in the wrong hands. Service providers also often heavily limit their liability in the contracts they enter into with customers, diminishing the prospect of a customer successfully recovering any or all of its financial losses from the provider should a breach arise (such as a data loss and decryption event caused by the provider’s failure to counter the quantum decryption risk). The potential consequences for businesses from a reputational, financial and regulatory compliance perspective remain very real.
Why prepare now for quantum computers’ decryption capabilities
The advent of Q-Day:
The realities of liability
Undertake due diligence on post-quantum technologies
It’s sensible for organisations to talk to technology providers about quantum risk, and to start undertaking due diligence on encryption systems advanced enough to protect against the rapid advances in computing technology and data processing power when Q-Day comes.
1. Walton, A. (2025, June 16). The impact of quantum decryption. Cyber Defence Magazine. https://www.cyberdefensemagazine.com/the-impact-of-quantum-decryption/
2. Hunter, W. (2026, January 26). When will “Q-Day” arrive? Scientists predict the date when quantum computing will crack all of Earth’s digital encryption – with terrifying consequences. Daily Mail. https://www.dailymail.co.uk/sciencetech/article-15498725/qday-scientists-quantum-computing-digital-encryption.html
3. Palo Alto Networks. (n.d.). What is Q-Day, and how far away is it—really? Palo Alto Networks. https://www.paloaltonetworks.com/cyberpedia/what-is-q-day
4. Gidney, C., & Schmieg, S. (2025, May 23). Tracking the cost of quantum factoring. Google Security Blog. https://security.googleblog.com/2025/05/tracking-cost-of-quantum-factori.html
5. Sanzeri, S. (2023, January 25). What the Quantum Computing Cybersecurity Preparedness Act means for national security. Forbes. https://www.forbes.com/councils/forbestechcouncil/2023/01/25/what-the-quantum-computing-cybersecurity-preparedness-act-means-for-national-security/
6. Mascelli, J., & Rodden, M. (2025). Harvest now, decrypt later: Examining post-quantum cryptography and the data privacy risks for distributed ledger networks. Finance and Economics Discussion Series 2025-093. Board of Governors of the Federal Reserve System. https://doi.org/10.17016/FEDS.2025.093
Get rid of unnecessary data
The sense that data is safe because it is encrypted can lead companies to hoard more data than they need, or should legitimately retain. As a result, more information exists online that could be targeted by cybercriminals – even if they can’t yet exploit it. Therefore, it’s wise to map and regularly monitor information assets, deleting any data that’s unnecessary to the business’ ongoing operations or whose continued retention could breach data privacy rules.
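The “map and monitor” housekeeping step described above can be sketched in a few lines of code. The following is a hypothetical illustration only: the retention window and directory are assumed, not drawn from this article, and any real retention sweep would need legal and records-management sign-off before anything is deleted.

```python
# Hypothetical data-retention sweep (illustrative only): flag files that
# have not been modified within an assumed retention window, so they can
# be reviewed for deletion under the organisation's retention policy.
import time
from pathlib import Path

RETENTION_DAYS = 3 * 365  # assumed three-year retention policy


def stale_files(root: str, retention_days: int = RETENTION_DAYS) -> list:
    """Return files under `root` last modified before the retention cutoff."""
    cutoff = time.time() - retention_days * 86400
    return [p for p in Path(root).rglob("*")
            if p.is_file() and p.stat().st_mtime < cutoff]
```

In practice the output would feed a human review queue rather than an automatic delete, since some aged data must be retained for legal or regulatory reasons.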
Decryption of today’s encrypted data may sound like a faraway threat, but quantum computing is already on the horizon. Organisations hold ever-increasing quantities of business-critical data, from personal employee data to confidential client details and sensitive commercial information, that they believe is well-protected. And it may be, for the time being. But the classic encryption methods in use today may not safeguard against what’s possible tomorrow. Now, more than ever, companies must do everything possible to keep their data house in order.
Super-powerful quantum computers could one day soon open previously impenetrable caches of data, exposing sensitive information that is well-protected today to significant risk tomorrow.
Quantum computers have the potential to deliver major benefits to society, for instance by turbocharging advances in science or medicine, but in the wrong hands, they could also be used for nefarious purposes. That includes breaking open previously impenetrable caches of data, exposing organisations to ransom demands or fraud or facilitating corporate or state espionage. A recent analysis by Google indicates that a quantum computer could theoretically crack the 2048-bit RSA encryption key (a common encryption standard) in around a week.4
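The Google estimate above concerns breaking RSA by factoring its public modulus. As a toy sketch (with illustrative textbook numbers, nowhere near real key sizes), the following shows why factoring is the whole game: once an attacker factors the modulus, the private key falls out immediately. Shor’s algorithm on a quantum computer is expected to make that factoring step feasible for real 2048-bit moduli.

```python
# Toy RSA break via factoring (illustration only; real keys use 2048-bit
# moduli that classical trial division cannot touch).

def factor(n: int):
    """Trial division for an odd semiprime; feasible only for tiny n."""
    p = 3
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 2
    raise ValueError("no small odd factor found")

# Tiny textbook key: modulus n = p*q, public exponent e.
p, q = 61, 53
n, e = p * q, 17                      # n = 3233
d = pow(e, -1, (p - 1) * (q - 1))     # legitimate private exponent

cipher = pow(42, e, n)                # "encrypt" the message 42

# An attacker who can factor n recovers the private exponent and message.
fp, fq = factor(n)
d_attacker = pow(e, -1, (fp - 1) * (fq - 1))
assert d_attacker == d
assert pow(cipher, d_attacker, n) == 42
```

The modular inverse via `pow(e, -1, m)` requires Python 3.8 or later; the point of the sketch is simply that the security margin lives entirely in how hard `factor(n)` is.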
The threat of “harvest now, decrypt later”
Good housekeeping
Craig Lightfoot Senior Associate, London
Craig is a leading disputes lawyer with over 15 years’ experience in the energy sector. He specialises in complex cross-border and domestic disputes across arbitration, litigation, mediation and other ADR processes. He advises on commercial, corporate, regulatory and climate-related disputes, including greenwashing and Energy Charter Treaty claims.
As scrutiny of social media platforms continues to intensify, this article provides an update to our December 2024 insight,
which examined the growing legal and regulatory challenges arising from concerns about social media’s impact on children and key considerations for insurers.
Since our last publication, the challenges facing social media companies have continued to grow, not only through continued litigation but also through the prospect that these allegations may become relevant to a wider class of defendants beyond social media companies, particularly given the rapid integration of Artificial Intelligence (“AI”) technologies into digital platforms. This update explores how these developments are shaping the future of social media liability, including the implications for insurers.
In our last article, we discussed how the social media platforms Meta, TikTok, Snap, and YouTube are facing claims consolidated in Multi-District Litigation No. 3047 (“MDL”) in the Northern District of California.1 The MDL centres on allegations that these platforms contribute to a range of harmful behaviours in children, from suicidal ideation to eating disorders, through intentionally addictive design features and exposure to harmful content.
Escalating litigation in the US against social media giants
In parallel to the federal MDL, related claims are proceeding in a Judicial Council Coordination Proceeding (“JCCP”) before Judge Carolyn B. Kuhl in Los Angeles County Superior Court. On the eve of jury selection in the first coordinated trial in that state-court action, TikTok and Snap reached settlements in principle with the plaintiff, identified as K.G.M.2 Those settlements are limited to K.G.M.’s claims and do not dispose of the roughly 1,000 remaining cases in the JCCP. However, these settlements mean that Snap’s and TikTok’s CEOs will no longer be called to testify at trial, and focus will narrow to the remaining defendants, Meta and YouTube. Although the JCCP is procedurally distinct from the federal MDL, the state court proceedings may provide an early indication of how the social media defendants evaluate their litigation risk as the MDL bellwether trials approach.
Notably, in a November 2025 filing, school districts alleged that Meta suppressed internal research showing that the mental health of young users suffered from compulsive use of its social media platforms, even as some employees likened the company’s practices to those of drug pushers.3 According to the filing, Meta terminated an internal study, dubbed Project Mercury, after early results suggested that users who stopped using Facebook and Instagram for just a week reported reduced feelings of depression, anxiety and social comparison.
The MDL continues to grow, exceeding 2,200 cases as of January 2026. In June 2025, Judge Yvonne Gonzalez Rogers confirmed an initial bellwether pool of 11 cases, six by school districts and five by families, for the first social media addiction bellwether trials, now scheduled for summer 2026.
Plaintiffs refer to internal communications in which a Meta employee warned that withholding negative results could draw comparisons with tobacco companies concealing evidence of harm.
State-level enforcement actions have also accelerated. In Hawaii, the state filed suit against TikTok’s parent company, ByteDance, alleging that the platform deliberately designed features to addict children, thereby violating the Children’s Online Privacy Protection Act (“COPPA”).6 The complaint relies on statements from former employees who describe “coercive design tactics” akin to gambling industry methods, highlighting that the platform’s short-form video algorithm maximises user engagement at the expense of child safety. Hawaii further alleges that TikTok continued to collect personal data from underage users despite knowledge of their ages, echoing previous federal COPPA violations pursued by the Federal Trade Commission in 2019 and 2024.7
As highlighted in our December 2024 article, insurers have moved swiftly to test whether claims arising from alleged youth harms linked to social media platforms engage cover under traditional liability policies.
In a 2024 Delaware state court action, Hartford Casualty Insurance Co. et al. v. Instagram LLC et al.9, we saw insurers seek declaratory relief that they owe no duty to defend or indemnify Meta in the MDL. Insurers argued that the underlying claims do not allege covered “bodily injury” or “personal and advertising injury”, but instead arise from intentional design and business decisions aimed at maximising user engagement. They further argue that the MDL plaintiffs seek recovery for broad economic and societal harms, rather than damages “because of” injury to identifiable individuals.
Social media's legal challenges
Doomscrolling to death:
Insurance coverage update
In Massachusetts, the state Supreme Judicial Court heard oral arguments in December 2025 regarding whether Instagram’s autoplay, ephemeral postings, and incessant notifications place Meta outside the protective scope of Section 230 of the Communications Decency Act. Justices debated whether these features constitute advertising rather than publishing, a distinction that could expose Meta to liability for encouraging minors’ engagement.8
However, in May 2025, the California federal court determined that the insurers’ earlier Delaware action took precedence, remanding the coverage case back to Delaware and dismissing the California action. As of December 2025, the coverage dispute remains pending in Delaware, and no final outcome on the coverage issues has yet been reached.10 As litigation grows, we can expect insurers to continue to challenge cover under liability policies where the claims relate to public nuisance rather than “bodily injury” or “personal and advertising injury”.
Insurers should also be aware that revenue-generating platforms with addictive features are not limited to social media companies. Online and video game creators have similarly faced scrutiny for developing features that are addictive (e.g. in-game rewards and other tactics to encourage spending) and that harm young users.
Anna Harkin Associate, London
1. In re: Social Media Adolescent Addiction/Personal Injury Products Liability Litigation, No. 4:22-md-03047 (Northern District of California, 2022).
2. BBC News. (2026, January 27). TikTok settles just before social media addiction trial to begin. https://thetruestory.news/en/world/story/9154a493-fc28-11f0-92eea8a1590471b5
3. Plaintiffs’ omnibus opposition to defendants’ motions for summary judgment (Document 2480). (2025, November 21). In re: Social Media Adolescent Addiction/Personal Injury Products Liability Litigation, No. 4:22-md-03047-YGR.
4. USA Herald. (2025). Judge rejects Meta’s bid to muzzle ex-researcher in youth harm MDL.
5. Sattizahn, J. (2025, September 8). Written statement to Senate Judiciary Subcommittee.
6. Hawaii v. ByteDance Inc., No. 1CCV-25-0001964 (Circuit Court of the First Circuit, Hawaii, 2025).
7. Federal Trade Commission. (2024, August 2). FTC investigation leads to lawsuit against TikTok and ByteDance for flagrantly violating children’s privacy law.
8. Commonwealth v. Meta Platforms Inc. et al., No. SJC-13747 (Massachusetts Supreme Judicial Court).
9. Hartford Casualty Insurance Co. et al. v. Instagram LLC et al., No. N24C-11-010 (Delaware Superior Court).
10. United States District Court, Northern District of California. (2025, May 27). Order granting Hartford’s motion to remand, denying Meta’s motion to dismiss, and granting insurers’ motion to stay. Hartford Casualty Ins. Co. v. Instagram LLC, No. 4:25-cv-03193-YGR.
11. Rosalind “Ros” Dowey et al. v. Meta Platforms Inc. et al., No. N25C-12-250 (Superior Court of the State of Delaware, 2025).
12. Federal Trade Commission. (n.d.). Genshin Impact game developer banned from selling loot boxes to under-16s without parental consent.
13. First County Bank v. OpenAI Foundation et al. (Superior Court of the State of California, County of San Francisco).
14. Brittain, B. (2025, December 11). OpenAI sued for allegedly enabling murder-suicide. Reuters. https://www.reuters.com/legal/government/openai-sued-allegedly-enabling-murder-suicide-2025-12-11/
15. New York State Attorney General. (2025, December 10). Attorney General James and bipartisan coalition urge big tech companies to address dangerous AI chatbot features. New York State Office of the Attorney General.
Whilst Meta disputes the characterisation of both the study and its decision to end it, these allegations have intensified arguments that the company had knowledge of potential harms to children and teenagers whilst continuing to design and market products targeting young users.
Further pressure has been placed on Meta following a ruling allowing a former Meta researcher to testify in the MDL, despite Meta’s objections.4 John Sattizahn is expected to testify that Meta directed internal researchers to alter study protocols to avoid documenting evidence of harm to minors, a revelation with potentially significant implications for both the underlying personal injury claims and related insurance coverage disputes.5
Meta disputes that characterisation and has argued that the insurers are seeking to resolve coverage issues prematurely, before liability has been established in the underlying proceedings. In December 2024, Meta filed a parallel insurance coverage lawsuit in the Northern District of California, seeking to compel a defence from its insurers and to align its coverage action with the larger social media MDL.
Expanding liability
Regulators are beginning to take action against these companies and lawsuits have emerged which allege harms beyond those alleged in the MDL. For instance, there are also:
Lawsuits against Meta and Instagram relating to claims that users are dying by suicide after being “sextorted”.11 It is alleged that Meta knew as early as 2019 that the Instagram “Accounts You May Follow” feature recommended almost 2 million children’s accounts to adult predators in a three-month period, in some cases resulting in requests to follow the child’s account. Parents of a 16-year-old boy from Scotland and a 13-year-old boy from Pennsylvania allege in their lawsuits that their children were targeted by adult predators who posed as young girls and manipulated them into sending sexually explicit photos before extorting them, leading the teenagers to die by suicide. It remains to be seen whether these lawsuits will gain traction more broadly beyond these state actions.
In January 2025, the U.S. Federal Trade Commission (“FTC”) issued a USD 20 million fine against the developer of Genshin Impact after determining that the company had used misleading and manipulative in-game mechanics that encouraged children and teenagers to spend money on in-game rewards. The FTC found that the game’s loot-box system obscured the real-world costs involved and gave young players an unrealistic impression of their chances of obtaining rare, high-value items. The regulator also criticised the use of a confusing virtual currency structure that made it difficult for children to understand how much they were actually spending. As part of the settlement, the company is now required to block users under 16 from making any purchases without verified parental consent, a clear indication of regulators’ growing willingness to intervene when game features risk exploiting younger users.
Emerging AI risks & litigation
As referenced in our last article, claims linked to engagement with digital platforms can also extend beyond traditional social media. Litigation has continued to expand to AI technologies, particularly large language models.
In December 2025, a wrongful death lawsuit was filed in California against OpenAI and Microsoft, alleging that ChatGPT exacerbated a mentally ill man’s paranoid delusions and contributed to him killing his 83-year-old mother before taking his own life.13
The complaint alleges the AI validated and reinforced dangerous beliefs without directing the user toward real-world help. This action is understood to be the first wrongful death lawsuit directly linking an AI chatbot to a homicide.14
We also note that on 9 December 2025, a bipartisan coalition of US state attorneys general warned major AI developers, including Microsoft, Meta, Google, Apple, and OpenAI, that outputs from AI chatbots may violate state laws and pose serious mental health risks, especially to children and vulnerable users.15 The AGs called for independent audits and stronger safeguards against so-called “delusional” or harmful AI outputs.
Taken together, these developments suggest that courts and regulators may begin to apply principles developed in social media design liability cases to generative AI systems. If courts hold AI companies liable for personal injuries or wrongful deaths allegedly tied to chatbot interactions, the implications could extend well beyond social media, potentially affecting developers of chatbots, productivity tools, and other AI-driven interactive platforms.
The actions referenced above highlight an increasingly complex liability landscape. As courts examine whether algorithmic features and internal knowledge of harms create liability, insurers must closely monitor evolving coverage disputes and emerging claims, and consider tailored policy provisions or endorsements to manage systemic risk.
Click here to read the December 2024 insight
Richard Manstoff Senior Associate, London
Neil Beresford Partner, London
In light of these allegations and growing scrutiny from regulators, insurers will no doubt examine whether social media and gaming companies knew but ignored the dangers around their respective platform features that were potentially exploitative, addictive and/or facilitated predatory behaviour resulting in harm to young users. If these harms were known, insurers could seek to exclude liability on the basis that the injuries and losses now claimed were expected or intended, where such exclusions are available (for example in CGL or Bermuda Form policies).
Automated vehicles: Navigating technology, law and liability
The futuristic vision of self-driving cars is already a reality: driverless taxis are on the streets of cities in Arizona, California, and Texas. By the late 2020s, automated vehicles (AVs) are expected to be a common sight on roads worldwide,
with manufacturers and tech companies racing to deploy advanced automated driving systems (ADS). While the technology promises convenience and efficiency, Neil Beresford (Partner), Alistair Kinley (Director of Policy & Government Affairs), Nanci Schanerman (Senior Counsel) and Steven Crocchi (Senior Associate) explore the complex challenges for insurers and claims professionals.
Click here to watch the Automated vehicles webinar
Regulatory Landscape: UK and US Perspectives
In the UK, the Automated Vehicles Act 2024 establishes a comprehensive framework for AV deployment and ongoing safety monitoring. The Act empowers the government to set detailed regulations covering pre-market approval and post-deployment compliance. A key element will be the statement of safety principles, requiring AVs to meet a standard equivalent to a “careful and competent human driver.” This benchmark will shape liability and claims handling when accidents occur.
Claims Complexity: From Negligence to Product Liability
When accidents occur, liability shifts from the at-fault driver to manufacturers and component suppliers, triggering product liability claims rather than standard auto negligence. This shift implicates commercial liability policies with higher coverage limits, increasing exposure for insurers.
If legislation requires human oversight, claims may involve dual liability: negligence claims against the human operator and product liability claims against manufacturers. For claims professionals, this means more defendants, more policies, and more complex litigation strategies.
The Act also introduces initial and ongoing authorisation requirements for ADS developers, reinforcing that safety compliance is not a one-time event but a continuous obligation throughout the vehicle’s lifecycle.
Across the Atlantic, regulation is fragmented and evolving. Several US states, including California, Delaware, New York, and Washington, are considering laws mandating a human safety operator in autonomous commercial vehicles. These measures could significantly influence liability frameworks, as the presence (or absence) of a human operator determines whether claims fall under traditional negligence or product liability.
Litigation Trends: Lessons from Tesla Cases
Recent US verdicts highlight the growing appetite for high-value product liability claims. In a landmark Florida case, a jury awarded USD 329 million in damages against Tesla, including USD 200 million in punitive damages, citing overselling of Autopilot capabilities. Although the driver was found primarily at fault, Tesla bore 33% of the liability, underscoring the risk for manufacturers even when driver error is evident.
Other cases, such as Justine Hsu v Tesla and Micah Lee v Tesla, show mixed outcomes but reinforce a trend: plaintiffs increasingly target manufacturers and technology providers, leveraging public concerns about AV safety. For insurers, this signals a shift toward defending complex, high-stakes product liability claims alongside traditional auto claims.
Key Takeaways for Claims Professionals
Prepare for Multi-Policy Exposure: AV accidents may trigger auto, commercial, and product liability policies simultaneously.
Monitor Regulatory Developments: UK and US frameworks will shape liability standards and claims handling protocols.
Anticipate Higher Claim Values: Product liability litigation often involves punitive damages and large settlements.
Adapt Investigation Strategies: Understanding ADS functionality and compliance will be critical in determining fault and defending claims.
Alistair Kinley Head of Policy Development, London
Steven Crocchi Senior Associate, Phoenix
Nanci Schanerman Senior Counsel, Miami
Click here to listen to the Automated vehicle podcast
Petra Starr Senior Associate, US
Patrick Hofer Partner, Washington
Eric Retter Senior Counsel, Atlanta
Siân Purath Partner, London
The fragile chain: Understanding supply chain disruption
Supply chain disruption is a risk that re-emerges in volatile geopolitical and economic circumstances. We all remember too well the images of empty shelves in supermarkets as the global supply chain faced its ultimate test: a global pandemic.
Another stark example of the vulnerability of the supply chain is the recent wave of cyberattacks on the British high street, with some suppliers reportedly having to resort to taking orders by pen and paper due to the disruption caused. While certain retailers may have the benefit of cyber insurance to soften losses, others have been less fortunate.
Although the risk of supply chain disruption may not be new, the complexity of modern global supply chains and the current geopolitical environment are likely to mean that the risks faced by businesses are multiplied.
The multiplication of risk has been highlighted in recent times by the brewing trade war. The fast-moving tariff environment has made it difficult for suppliers to maintain costs when shipping goods overseas, especially when the products are already in transit when new tariffs are brought into force. This has led to unexpected duties and charges being incurred, making deals less attractive than when they were originally agreed and causing parties to consider whether the deal they agreed is still a commercially viable one.
Key risk factors
Trading with new partners may also give rise to concerns about whether the quality of work is up to the standard that the buyer was previously accustomed to. Similarly, it is possible that some markets will not have the same safety standards and regulations in place as others that businesses are used to trading with. Failure to fully assess these types of risk could give rise to product recall losses including business interruption and reputational damage.
In some cases, supply chain diversification will bring opportunities to establish new trading relationships and potentially renegotiate terms with existing partners. New trade routes may be considered to reduce exposure to tariffs or other risks such as piracy, geopolitical instability and even climate change. Development of new technologies or the use of AI to map the supply chain may also lead to increased efficiency and cost savings.
However, supply chain diversification and new trading relationships also bring new challenges, as parties cannot necessarily rely on trusted and long-standing relations. This challenge becomes particularly acute where new partners are domiciled in emerging markets with limited controls on corporate governance and trade fraud.
These markets may also be exposed to political risks or business interference. There may also be the prospect of retaliatory measures being taken by governments in response to tariffs being imposed. Such measures could include selective discrimination, licence cancellation or even expropriation. Political violence is another risk multiplier. Local populations were already feeling the strain of the increased cost of living, and this will likely be exacerbated if the increased cost to the supply chain is pushed down to consumers. This, plus a rise in nationalist sentiment, could lead to increased political violence events such as strikes, riots and civil commotion which could impact the supply chain further.
The prospect of unexpected tariffs, against the background of an already volatile geopolitical climate, has caused businesses to re-assess their current supply chain exposure and consider how the supply chain can be made more resilient, e.g. by diversifying trading partners and markets.
Liability claims are also a potential risk if defective products cause personal injury or damage to third party property. Delivery of defective goods can also lead to other increased costs of working if contractual deadlines are impacted. Supply chain disruption may also result in an inability to supply goods to customers or projects if the inward supply is delayed. This in turn may result in claims for breach of contract.
Strategies for resilience
Due diligence into trading partners and the general market in which businesses are operating will be even more critical to navigating these multiplied risks. Businesses can mitigate against supply chain risk by undertaking a thorough assessment of their supply chain exposure. This will involve looking at:
The entire supply chain (and not just direct, tier 1 suppliers)
The location of trading partners and whether this brings about any particular geopolitical exposure
The legal and regulatory environment, including whether there are legal protections in place such as Bilateral Investment Treaties when doing business in emerging markets
Existing contracts and whether it is possible to renegotiate terms in view of rising prices
Rights in relation to potential force majeure or contract frustration where performance is significantly hindered
The credit risk posed by counterparties and whether a counterparty’s business model is particularly exposed to supply chain disruption
Whether there are rights of recourse in the event of the supply of a defective product
The potential losses that might arise out of impacts on the supply chain, including business interruption and payment default
Whether risk mitigants such as insurance should still be considered a ‘nice to have’ or a necessity
The development of an agile and experienced response team that is able to respond in the event of a crisis, such as a product recall, cyber incident or similar incident will also be key to limiting damage by communicating with counterparties, consumers, regulators and the media.
Directors & Officers who do not engage with the new world order and the multiplied risks faced by the supply chain may find themselves exposed to claims by investors.
Click here to listen to the Supply chain disruption podcast
Click here to watch the webinar on Beyond the breach: The impacts of supply chain failure
Jasmine Zamprogno Associate, Munich
Dr. Sven Förster Partner, Munich
Dr. Sophia Henrich, LL.M. Counsel, Munich
Our Tariff Tracker Stay ahead of global trade shifts. Sign up here for regular updates on US tariffs and worldwide countermeasures.
Sweet but toxic? Sucralose’s environmental risk and what it means for insurers
Artificial sweeteners, particularly sucralose (also known as E955), have long been promoted as healthier alternatives to sugar. Found in thousands of food, beverage, and cosmetic products worldwide, sucralose is valued for its zero-calorie content and chemical stability. In 2024, the global sucralose market was valued at approximately USD 4.09 billion.1
Despite its widespread use, sucralose is now facing growing scrutiny. In addition to concerns about its impact on human health, emerging research is drawing attention to its environmental persistence and potential harm to aquatic ecosystems.2
The persistent pollutant
Sucralose is a chlorinated derivative of sucrose engineered to resist metabolic breakdown in the human body. This same resilience extends to the environment, posing a growing concern for ecosystems. Research from the University of Florida has revealed that sucralose passes through wastewater treatment plants with minimal degradation, with an average removal efficiency of just 12%.3 As a result, it enters rivers, lakes, and even drinking water systems virtually unchanged.
Once released into aquatic ecosystems, sucralose can interact with microbial communities in complex and often harmful ways. Diatoms, microscopic algae that contribute to over 30% of marine food chain productivity, have shown population declines when exposed to sucralose. For example, in laboratory studies, freshwater diatoms experienced a sharp decline of more than 50% within just 12 hours of sucralose exposure.4 This effect is likely due to the algae mistaking sucralose for a nutrient, leading to metabolic disruption. Because diatoms form the foundation of many aquatic food chains, such disturbances can cascade through entire ecosystems, causing broader ecological instability.5 A study found that sucralose exposure can cause DNA damage and genetic mutations in freshwater fish, further raising alarms about its long-term ecological impact.6
From ecological risk to insurance exposure
Emerging contaminants rarely trigger immediate losses. Instead, they follow a gradual path from widespread use to regulatory scrutiny, driven by improved detection methods, accumulating scientific evidence, and rising public concern. Sucralose appears to be on this path. It is ubiquitous, environmentally persistent, and increasingly debated in academic and regulatory circles. For insurers, this pattern is all too familiar. PFAS, MTBE, pharmaceutical residues, and microplastics have demonstrated how contaminants can transition from unregulated to financially material risks in a matter of a few years.
Considering the environmental impact of sucralose and the precedent set by other micropollutants, developing effective strategies for its removal and recovery from water systems is likely to become increasingly necessary. Should authorities impose new standards requiring utilities or industrial facilities to remove sucralose from water, the financial implications could be substantial. Technologies capable of removing such persistent compounds, like activated carbon adsorption, ozonation, or specialised membrane filtration, require significant capital investment and ongoing operational costs. For example, ozonation and nanofiltration come at a steep price, ranging from GBP 112 to GBP 238 per 1,000 cubic metres of water treated.7
Litigation represents another emerging risk. As scientific consensus around sucralose’s ecological impact continues to build, legal action may follow. Environmental groups or public agencies could pursue claims for natural resource damages, arguing that lakes or rivers have been adversely affected and seeking restoration funds. Companies that release sucralose (whether intentionally or inadvertently) may face third-party liability. While these scenarios are still unfolding, they are becoming increasingly plausible amid growing awareness of these micropollutants.
While toxicology debates continue, the trajectory is familiar: a common compound moves from uncontroversial to contested, and eventually into regulation and litigation. Insurers who observed the path of per- and polyfluoroalkyl substances (PFAS) or methyl tertiary-butyl ether (MTBE) will recognise the early warning signs. For insurers, this emerging risk is more than an environmental issue – it is a potential source of multi-line exposure.
Where sucralose exposure may sit in insurers’ portfolios
For insurers, sucralose may represent a previously overlooked exposure that could generate claims across several lines of coverage. For example:
Environmental liability and pollution coverage
Specialised environmental liability policies or pollution endorsements will likely be the first line of defence against contamination claims. These policies typically cover the costs of pollutant clean-up and third-party environmental damage. If regulators mandate sucralose remediation or if claims allege ecosystem harm, affected companies may turn to these policies for legal defence, remediation expenses, and settlements.
General liability
Most companies maintain general liability insurance. While environmental harms are typically excluded through pollution clauses, general liability policies may be triggered in the absence of a clear pollution exclusion.
Product liability and recall
Product liability insurance may also be relevant if sucralose is deemed environmentally defective and requires removal from the water supply. In such cases, product recall or contamination insurance might also apply, covering expenses related to withdrawing affected products, notifying customers, relabelling, and other associated costs.
Business interruption insurance
Environmental disruptions can also lead to business interruption losses. If a facility is forced to halt operations due to sucralose-related incidents or regulatory orders, it may seek compensation for lost income.
Looking ahead
Sucralose may appear insignificant given its ubiquity, but its environmental persistence presents a growing exposure that insurers may wish to consider. Many micropollutants, once considered benign, are now the subject of thousands of claims and settlements amounting to hundreds of billions of pounds.
Waiting for regulatory action or litigation to materialise could leave insurers vulnerable to costly claims and reputational damage. Proactive measures, such as assessing portfolio exposure, tightening policy language, and evaluating how emerging contaminants like sucralose fit within broader sustainability and risk frameworks, can help insurers manage these risks more effectively and stay ahead of the curve.
1. The Business Research Company. (2025). Sucralose global market report. https://www.marketresearch.com/Business-Research-Company-v4006/Sucralose-Global-42125219/
2. Derry, M., & Goddard, A. (2024, July 12). Some artificial sweeteners are forever chemicals that could be harming aquatic life. The Conversation. https://uk.news.yahoo.com/artificial-sweeteners-forever-chemicals-could-150117912.html
3. Westmoreland, A. G., Schafer, T. B., Breland, K. E., Beard, A. R., & Osborne, T. Z. (2024). Sucralose (C₁₂H₁₉Cl₃O₈) impact on microbial activity in estuarine and freshwater marsh soils. Environmental Monitoring and Assessment, 196(5), 451. https://doi.org/10.1007/s10661-024-12610-5
4. Bryce, E. (2024, August 2). Artificial sweeteners don’t degrade in the human body or in nature. Anthropocene Magazine. https://www.anthropocenemagazine.org/2024/08/artificial-sweeteners-dont-degrade-in-the-human-body-or-in-nature/
5. Derry, M., & Goddard, A. (2024, July 12). Some artificial sweeteners are forever chemicals that could be harming aquatic life. The Conversation. https://uk.news.yahoo.com/artificial-sweeteners-forever-chemicals-could-150117912.html
6. Đorđević, J. M., Petrović, I. M., et al. (2020). Occurrence and fate of artificial sweeteners in aquatic environment. Science of the Total Environment, 743, 140714. https://www.sciencedirect.com/science/article/abs/pii/S0048969719332796
7. Tarpani, R. R. Z., & Azapagic, A. (2018). Life cycle costs of advanced treatment techniques for wastewater reuse and resource recovery from sewage sludge. Journal of Cleaner Production, 204, 832–847. https://research.manchester.ac.uk/en/publications/life-cycle-costs-of-advanced-treatment-techniques-for-wastewater-/
Anna Fodor Associate Solicitor, Travelers
GUEST AUTHOR
Meet the AUTHORS
Quratulain Channa Paralegal, Manchester
Catríona Campbell Consultant, London
Darcy Anderson Trainee Solicitor, London
The road to COP31
COP30, held in Belém, Brazil from 10 to 21 November 2025, was billed as the “COP of implementation” that would drive action towards fulfilling existing climate commitments. However, it delivered mixed results. While States agreed on a Global Mutirão decision to advance implementation initiatives and a set of global adaptation indicators, progress fell short of expectations. Disputes over procedure, weak ambition in updated nationally determined contributions, and the absence of anticipated agreements on fossil fuels and deforestation highlighted persistent contradictions between the aims and approaches of different negotiating blocs, setting a challenge for parties now looking forward to COP31 in 2026.
Looking ahead to COP31
There are several important issues that will continue to develop between now and COP31 in Antalya, Türkiye, in November 2026. Here we set out some key issues to watch out for in the year ahead:
Implementation of the fossil fuel and deforestation roadmaps
The nature and status of the fossil fuel and deforestation “roadmaps” announced by the Brazilian Presidency remain unclear. As the roadmaps have no basis in a negotiated document or foundation in the legal treaties underpinning the COP process, it is not certain whether all or which countries will contribute to their development, nor what their outcomes will be. The outcome of the “Baku to Belém Roadmap to 1.3T” could, although it was established in a negotiated text, be a guide. More may become clear in April 2026, when Colombia and the Netherlands will jointly hold a world-first conference on the phasing out of fossil fuels.
Continuing questions over COP’s relevance
COP30 was billed as the “COP of implementation”, and was therefore never intended to produce a large number of negotiated decisions. However, the failure of a COP based in the Amazon rainforest to produce a decision on deforestation, and the continuing inability of the COP process to find common ground on even mentioning fossil fuels in its negotiated decisions, have left many considering whether a new way forward is required.
Yvo de Boer, a former Executive Secretary of the UNFCCC, commented: “My overall sense is that the wheels came off in Belém. My conviction is that this is not a bad thing.” His desire for “coalitions of the willing” to take the initiative on discrete issues is a view shared by many. In the lead-up to COP31 in November 2026, we can expect many to raise similar views, including the opinion that it is economics rather than politics that will drive the energy transition.
Notwithstanding such arguments, the COP process remains the only significant international legal mechanism by which countries have agreed to reduce their emissions and transition away from fossil fuels. While the economic advantage of cheap renewable power is growing, growth in renewable energy production is only adding to, rather than replacing, fossil fuel power generation, which itself continues to grow in response to increasing global demand for energy. Other non-COP initiatives, such as the G7’s Just Energy Transition Partnerships, appear to have gone into reverse without significant fanfare. COPs consequently remain the only forum where the weight of consensus-based decision-making and the legal basis of the UNFCCC and Paris Agreement lend both credence and viability to states’ commitments.
Kristina Doerr Trainee Solicitor, London
Isabel Slippe-Quartey Trainee Solicitor, London
Lucia Williams Consultant, London
Wynne Lawrence Partner, London
William Ferris Associate, Singapore
A novel COP structure
For the first time in the history of the UNFCCC process, COP31 will see the responsibilities of the Presidency split between hosting and negotiating functions: Türkiye will serve as the official host and COP President, whilst Australia will preside over the negotiations.
It will be interesting to see what effect this dual structure has on negotiations, especially given that Australia was in favour of a fossil fuel phase-out roadmap at COP30, while Türkiye was recorded by the Presidency as being in opposition (although Türkiye has since denied this). As COP President and physical host, Türkiye will likely play the more important role. It remains to be seen to what extent Australia (and the climate-vulnerable Pacific Island states its proposed presidency was supposed to represent) will drive the most difficult negotiations, or whether Türkiye will have the final word in Presidency decision-making.
Influence of the International Court of Justice advisory opinion
Prior to COP30, there was some expectation that the International Court of Justice’s Advisory Opinion on climate change would have an impact on negotiations. The Advisory Opinion confirmed that States have a legal duty to cooperate to mitigate climate change and adapt to its effects, that the definitions of developed and developing countries are not immutable, that States can be held legally accountable for emissions, and that climate victims may be entitled to “reparations”.
Monaco, Mexico, the Alliance of Small Island States (AOSIS), the Independent Alliance of Latin America and the Caribbean (AILAC) and the Least Developed Country (LDC) blocs called for acknowledgment of the ICJ’s opinion, including in relation to discussions on Loss and Damage; the Arab Group responded that it was not appropriate to include it and that even its discussion would be a “deep, deep, deep red line”.
While no mention of the Advisory Opinion was made in a negotiated text, the Netherlands has announced that its and Colombia’s proposed conference on the phasing out of fossil fuels will be Advisory Opinion-aligned. Separately, Vanuatu’s Minister of Climate Change has reportedly issued a draft UN General Assembly resolution to endorse and operationalise the Advisory Opinion. While COP30 was perhaps too soon for states to commit to its inclusion, the Advisory Opinion’s influence on international negotiations may develop and grow over time.
Eva Maria Barbosa Partner, Munich
Jared Kangwana Managing Partner, Kenya
1. European Commission. (2024, August 1). EU AI Act: Regulatory framework for artificial intelligence. European Commission Digital Strategy. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
2. The White House. (2023, October 30). Executive Order on the safe, secure, and trustworthy development and use of artificial intelligence. https://www.govinfo.gov/content/pkg/FR-2023-11-01/pdf/2023-24283.pdf
3. US Senate Committee on Commerce, Science, and Transportation. (2024, March). Artificial Intelligence Risk Management Act discussion draft. https://docs.house.gov/meetings/BA/BA21/20260113/118806/BILLS-119pih-clarifyhowexistingmodelriskmanagement.pdf
4. Cyberspace Administration of China. (2023, July). Interim measures for the management of generative artificial intelligence services. https://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm
5. Information Commissioner’s Office. (2024, October). Guidance on AI and data protection. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
6. Challapally, A., et al. (2025, August). The GenAI divide: State of AI in business 2025. MIT Media Lab, Project NANDA. https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf
7. S&P Global Market Intelligence. (2025, March). AI project failure rates and enterprise adoption analysis. https://www.spglobal.com/market-intelligence/en/news-insights/research/ai-experiences-rapid-adoption-but-with-mixed-outcomes-highlights-from-vote-ai-machine-learning
8. Koopman, S. (2026, February 3). Anthropic’s new AI legal tool wipes billions off European data stocks. City AM. https://www.cityam.com/anthropics-new-ai-legal-tool-wipes-billions-off-european-data-stocks/
Persistent AI recording: Why always-on AI meeting capture tools should be handled with care
Harnessing agentic AI: risks, rewards and responsibilities
Generative AI and trial advocacy: back to basics?
Doomscrolling to death: Social Media’s Legal Challenges
Sweet but toxic? Sucralose’s environmental risk and what it means for insurers
Rosehana Amin Partner
Authors
AI is often described as a “force multiplier”3 – meaning that early adopters can make progress faster and faster as AI augments their capabilities, leaving the rest falling further behind. This applies to every industry, not just ‘high-tech’ sectors. For example:
A global AI survey by McKinsey found that genAI usage jumped sharply last year, with 71% of respondents saying their organisations now use it in at least one business function, up from 33% in 2023.
5 key steps employers should take when using AI in the workplace
Cyber threats are one of the fastest-evolving risks companies face, so staying ahead of regulatory compliance and being prepared for whatever malicious actors may have in store next is vital and challenging in equal measure.
Our podcast series is designed to enhance organisations’ visibility over key developments, upcoming issues and emerging threats, providing critical insights and analysis to help mitigate risk and improve preparedness, should a data breach or cyber attack occur.