Italy Ends Antitrust Probe After DeepSeek Agrees to Hallucination Warnings
Italy’s competition regulator closed a probe into the Chinese AI system DeepSeek after the company agreed to binding commitments to improve user disclosures about the technology’s tendency to generate false or fabricated information. The move resolves a consumer‑protection investigation but leaves separate data‑protection questions open, highlighting growing regulatory pressure on generative AI services in Europe.

Italy’s antitrust authority has closed its investigation into the Chinese artificial intelligence system DeepSeek after the companies that own and operate the system agreed to a package of binding commitments to warn users about the risk of so‑called hallucinations. The Autorità Garante della Concorrenza e del Mercato announced the case closure in its weekly bulletin on Jan. 5, concluding a probe that began in June 2025 into whether DeepSeek sufficiently informed users that its outputs could be inaccurate, misleading or fabricated.
The investigation was launched under the authority’s consumer‑protection mandate and focused on transparency: whether consumers were given clear, comprehensible information about the limits and risks of the AI model. Regulators described the agreed measures as improvements to disclosures designed to make the risk that the system may produce false material more conspicuous and better explained to users. The commitments are binding, and the agency closed the file on the basis of the companies’ acceptance.
Publicly available accounts of the decision do not include the full text of the commitments or a detailed timetable for their implementation, and the regulator did not publish specific operational deadlines in its bulletin. The two corporate owners named in the decision are Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, which jointly own and operate the DeepSeek system.
Separately, Italy’s Data Protection Authority, the Garante, is reported to be pursuing a distinct inquiry into DeepSeek’s handling of personal data. According to that reporting, the Garante has moved to prohibit the DeepSeek chatbot in Italy and has given the company 20 days to explain more fully how it collects, stores and processes the personal data of Italian users. The same reporting characterizes the company’s earlier assertion that it is not subject to EU privacy rules because it does not operate within the bloc as a serious compliance concern, and warns that a failure to cooperate could lead to a permanent ban. Those data‑protection proceedings remain unresolved and are separate from the antitrust closure.

The dual regulatory attention reflects a broader trend in which European authorities treat generative AI both as a consumer issue and as a privacy and data‑security matter. The antitrust outcome underscores regulators’ expectation that AI developers use clear disclosures to manage user expectations about model limitations. At the same time, the Garante’s reported actions signal that transparency alone will not satisfy regulators if data‑handling practices remain opaque or inconsistent with EU law.
For AI vendors, the Italian actions illustrate the layered obligations they face: consumer‑facing warnings and user safeguards on accuracy, alongside rigorous compliance with data‑protection rules. What remains unclear is how the commitments will be implemented in practice, how compliance will be monitored, and whether the Garante will pursue restrictions that affect DeepSeek’s availability in Italy. The AGCM closure resolved one regulatory front, but other legal and operational risks for the company remain active.