Practical Tips for Businesses
Pricing algorithms can offer significant benefits to consumers, including increased competition, lower costs, and more responsive price adjustments that better align supply and demand. In recent years, however, enforcers—including the UK Competition and Markets Authority (the “CMA”) and the European Commission (the “Commission”)—have increasingly scrutinized pricing algorithms over concerns that they may facilitate anticompetitive coordination between competitors. Companies using algorithmic pricing systems should consider the following steps to reduce the risk of regulatory scrutiny:
- Clean data practices. Exercise care when seeding data sets with market-sensitive data such as company strategy, pricing, capacity, and stock levels. If an algorithm is used by competitors, there is less risk of antitrust scrutiny if only publicly available data is used.
- Scrutinise data and algorithms where needed. Audit the input data and statistical approaches used, whether developed in-house or supplied by a third party. Consider stress-testing prompts for problematic language, and include explicit anti-collusion constraints.
- Exercise care when sharing information with your competitors. While sharing information of a historical, aggregated and non-specific nature is less likely to invite regulatory scrutiny, increasing attention is being paid to means of sharing the most sensitive data. Sharing confidential, competitively sensitive information with competitors (for example, through a pricing consultant or pricing software) may raise concerns from enforcers. Information is considered to be “competitively sensitive” if it could influence the competitive strategy of other businesses.
- Consider risk from pricing management software vendors. Audit the risk associated with third-party vendors of applications that support pricing and revenue management decisions. There is less regulatory enforcement risk if third-party vendors keep their data separate from the data of other companies. There is more risk if pricing authority is delegated to, and/or confidential firm information is being shared with or received from, the vendors of the system.
- Provide training. Key stakeholders from any team that is involved in deploying algorithm-based pricing or revenue management applications should be included in routine antitrust training so that they can spot issues and escalate them appropriately.
- Cross-portfolio risks. Be alert to situations where multiple portfolio companies in the same sector might be using the same pricing tools or data providers, as this could create regulatory enforcement risks even without direct communication.
- Obtain advice. Expect more frequent requests for information, deeper engagement with expert evidence, and closer agency scrutiny. Legal teams should be informed of major pricing tools or product developments and given an opportunity to comment on risks before they are adopted.
Introduction
The CMA and Commission are the latest agencies to pursue algorithmic pricing enforcement, reflecting growing concern across global competition authorities about the risks that these technologies pose to competitive markets. While the US has been paving the way in enforcement in this area, with the Department of Justice (“DOJ”) playing an active role, the CMA has now launched its first investigation and published its thinking on AI and collusion and on agentic AI and consumers. The Commission has not yet made any investigations public, but officials have confirmed that multiple confidential investigations into algorithmic pricing are underway.
Recent Enforcement and Guidance
The CMA has explicitly identified algorithmic pricing as an area of focus and concern, saying that “businesses must take proactive steps to mitigate the risk of breaking consumer or competition law, making sure they understand the technology on which they rely to inform or shape commercial and operational decisions”. Algorithmic pricing has been on the CMA’s radar for some time, including early economic research on pricing algorithms in 2018 and subsequent papers on algorithmic systems and AI foundation models. The CMA has invested heavily in its technical capabilities, including the ability to use AI and agentic systems to detect breaches of competition law at “unprecedented pace and scale”.
On 2 March 2026, the CMA launched its first substantial enforcement action into alleged algorithm‑enabled information sharing in the hotel sector, mirroring similar property sector enforcement in the US. While recognising that algorithms can bring benefits including more intense competition, lower costs, and faster changes in prices to better match demand and supply in markets, the CMA warned that algorithms may also reduce uncertainty as to competitors’ market behaviour and therefore affect how strongly companies compete.
Shortly after, on 4 March 2026, the CMA published a detailed blog post titled “AI and Collusion: Frontiers, Opportunities and Challenges”, discussing how algorithms and AI systems may facilitate collusive outcomes, and providing practical compliance guidance for businesses that use algorithmic pricing tools. This enforcement‑led stance sits alongside the CMA’s broader policy work on advanced AI, including its 9 March 2026 paper “Agentic AI and Consumers”, which flags heightened consumer and competition risks as AI systems evolve from decision‑support tools into more autonomous, goal‑driven agents. While this technology is at an early stage, the CMA makes clear that “UK consumer law applies whether decisions are made by people or by AI”.
The CMA is pairing guidance and thought leadership focused on consumer AI applications with active enforcement. This enforcement-backed approach to thought leadership is a noteworthy shift from the CMA’s previous stance, and it reflects a clear message: businesses are fully accountable for conduct carried out through algorithms and cannot evade liability by delegating decisions to automated systems. Businesses are advised to ensure they understand the law and to mitigate the competition and consumer law risks that may be posed by algorithmic pricing (including the newer and more subtle risks posed by genAI-driven systems and agentic AI).
Algorithmic pricing is also attracting considerable attention from regulators across Europe. Commission officials have confirmed that multiple confidential investigations into algorithmic pricing are underway. Further, the Commission is preparing a study to analyse modern pricing and the risks of collusion. Similarly, at Member State level, the Dutch and Italian competition authorities have launched investigations into airlines employing dynamic and algorithmic pricing techniques, while Germany recently took enforcement action prohibiting a major online platform from using algorithmic price control mechanisms to influence third‑party sellers’ prices, alleging such practices constitute an abuse of dominance.
Categories of Algorithmic Collusion Risk
In its recent blog post, the CMA identified four potential scenarios where algorithmic collusion could arise, based on academic work in the area:
- Traditional collusion implemented through a tool. Rival businesses may explicitly agree to collude and then use algorithms to put into practice, monitor, and enforce the agreement. This happened in the 2016 case on online sales of posters and frames, where two online sellers of posters colluded not to undercut each other’s prices on an online marketplace, using a pricing algorithm to ensure price matching. As well as civil enforcement, the CMA also pursued a successful criminal investigation.
- Hub-and-spoke collusion. As with classic collusion, pricing algorithms may be used to coordinate behaviour and raise prices above a competitive level, by exchanging competitively sensitive information through, and in some cases delegating pricing decisions to, a central data hub or third-party intermediary.
- Predictable-agent collusion. Algorithms that are deployed by firms and that react predictably to market events may independently learn to follow price-leadership strategies and punish deviations, in ways that lead to coordinated pricing outcomes, reducing the need for human communication or an explicit agreement. Here, coordination stems from the algorithm’s human-designed reaction rules. While use of antitrust enforcement tools may be more difficult in this scenario – illegal collusion requires a common understanding between businesses – the CMA could use its market investigation powers, which do not require an agreement or consensus.
- Autonomous collusion. The most contentious category arises where an advanced algorithm is directed to collude or learns to do so independently through machine learning or AI capabilities. Here, coordination stems from the self-learning behaviour of more sophisticated, agentic systems rather than from human-coded reaction rules. The CMA notes that AI agents could potentially develop exclusionary strategies autonomously or even communicate through steganographic techniques (concealing information within seemingly innocuous messages). The CMA is actively monitoring these developments through its horizon scanning.
Consumer Risks and Agentic AI
In addition to the competition and consumer risks we have previously discussed, the CMA’s new paper on agentic AI examines how the latter might change the way consumers interact with businesses. Agentic AI could impact consumers’ decision-making positively, from identifying relevant offers and good deals, through switching providers and understanding tariffs, to resolving complaints. The CMA has, however, also identified the danger of hyper-personalisation and “dark patterns” that steer consumers towards products which are more profitable for businesses. Consumers will need to be able to trust that AI agents will act in accordance with their interests and that they are not being steered or manipulated in ways that lead to worse personal outcomes.
Looking Ahead to Future Enforcement
To address these risks, the CMA has significantly strengthened its technical capabilities, including machine learning and agentic AI tools to detect algorithmic collusion and screen for bid‑rigging. In addition, the Digital Markets, Competition and Consumers Act 2024 grants the CMA enhanced investigative powers, including the ability to test algorithms directly. The Commission has similarly highlighted its capabilities and emphasised that companies remain fully accountable for conduct carried out through algorithms.
In light of increased enforcement in this area, companies using algorithmic pricing systems and agentic AI should consider the tips outlined above to mitigate competition and consumer law compliance risks and reduce the likelihood of regulatory scrutiny.
* * *