‘AI can turbocharge deceptive and unfair practices that harm consumers’, regulators warn


Watchdogs from the UK, US and EU have jointly warned that the rise of generative artificial intelligence creates risks for competition and consumer protection, which will require extra regulatory vigilance.

The UK’s competition watchdog has teamed up with counterparts in the US and Europe to jointly warn that the rise of artificial intelligence has created a range of new regulatory risks.

In guidance issued alongside the European Commission and the US Department of Justice and Federal Trade Commission, the UK Competition and Markets Authority states that “generative AI has rapidly evolved in recent years… [which] requires being vigilant and safeguarding against tactics that could undermine fair competition”.

The quartet of watchdogs identify three core risks to a competitive marketplace, the first of which concerns “concentrated control of key inputs” – such as “specialised chips, substantial compute, data at scale, and specialist technical expertise”. The need for these elements in delivering generative AI “could potentially put a small number of companies in a position to exploit existing or emerging bottlenecks across the AI stack and to have outsized influence over the future development of these tools”.

There is also a risk of “entrenching or extending market power in AI-related markets”, the joint statement warns, as AI “foundation models are arriving at a time when large incumbent digital firms already enjoy strong accumulated advantage [and] platforms may have substantial market power at multiple levels related to the AI stack”.


The final major danger to competition relates to “arrangements involving key players”. This could encompass “partnerships, financial investments, and other connections between firms related to the development of generative AI, [which] have been widespread to date… [and] could be used by major firms to undermine or coopt competitive threats and steer market outcomes in their favour at the expense of the public”.

Alongside these market-wide risks, technological development also creates dangers for individuals, as the regulators note that “AI can turbocharge deceptive and unfair practices that harm consumers”.

“Firms that deceptively or unfairly use consumer data to train their models can undermine people’s privacy, security, and autonomy,” the statement says. “Firms that use business customers’ data to train their models could also expose competitively sensitive information. Furthermore, it is important that consumers are informed, where relevant, about when and how an AI application is employed in the products and services they purchase or use.”

In tackling such risks in the coming years, the watchdogs pick out three principles that should inform their work in overseeing AI markets and consumer issues: fair dealing; interoperability; and choice.

The statement adds: “Given the speed and dynamism of AI developments, and learning from our experience with digital markets, we are committed to using our available powers to address any such risks before they become entrenched or irreversible harms.”

Sam Trendall

