Technology

Italy closes AI probes after firms promise clearer hallucination warnings

Italy forced DeepSeek, Mistral AI and NOVA AI to spell out that chatbots can hallucinate, a move that may shape Europe’s next AI rules.

Lisa Park · 2 min read
Source: yahoo.com

Italy’s antitrust authority closed probes into DeepSeek, Mistral AI SAS and Turkey’s Scaleup Yazilim Hizmetleri Anonim Şirketi after the companies agreed to tougher warnings about chatbot hallucinations, a sign that consumer law is becoming a frontline tool for AI oversight.

The Autorità Garante della Concorrenza e del Mercato said the three investigations ended without a finding of infringement under Article 27(7) of the Italian Consumer Code. Instead of fines, the firms accepted binding commitments to make clearer, on the websites and apps people actually use, that generated answers can be inaccurate, misleading or fabricated.

AGCM said permanent disclaimers were added below the chat windows in the user interfaces, with warnings in Italian and links to more information. The authority also said pre-contractual information was expanded so users are told that generated content may not always be reliable and should be verified before being used in purchases, research or other decisions.

AI-generated illustration

The DeepSeek case carries the clearest regulatory message. AGCM opened that investigation in June 2025, saying the company’s earlier warning was too generic, appeared only in English and was not prominent enough for Italian users. The regulator’s concern was not abstract technical quality; it was whether weak disclosure about hallucinations could distort a consumer’s transactional decision. DeepSeek also agreed to invest in technology aimed at reducing hallucination risk, while acknowledging that current tools cannot eliminate the problem entirely.

The same logic applied to NOVA AI, the cross-platform chatbot service from Scaleup Yazilim Hizmetleri. AGCM said the company committed to making clear that NOVA AI is only a single interface for accessing several chatbots, not a system that aggregates or processes their responses. That distinction matters because a user who believes one system is verifying or synthesizing answers may place more trust in it than the service deserves.

Photo by Matheus Bertelli

The decision reaches beyond Italy because it shows how regulators are treating AI disclosures as a marketplace issue, not just a technical one. In practice, AGCM is asking whether users can make informed choices when a chatbot can confidently produce falsehoods. That approach could become a template for other European consumer authorities as AI products evolve faster than the legal standards built to govern them.

For AI firms, the message is blunt: warnings buried in fine print are no longer enough. In Italy, the disclaimers now have to sit right below the chat box, where consumers are most likely to see them before they act on what a chatbot says.
