Microsoft's Copilot Terms Call It Entertainment Only, Warn Against Important Advice
Microsoft's Copilot terms warn it's "for entertainment purposes only" and not for important advice, directly contradicting the company's marketing of it as an essential workplace tool.

Buried in Microsoft's Copilot Terms of Use, under a section labeled in bold capital letters "IMPORTANT DISCLOSURES & WARNINGS," sits a phrase that contradicts years of aggressive marketing: "Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don't rely on Copilot for important advice. Use Copilot at your own risk."
The disclaimer, part of terms updated October 24, 2025, surfaced widely in early April 2026 and immediately drew scrutiny for the gap between its language and the product's commercial positioning. A Microsoft spokesperson told PCMag the phrase is "legacy language" that is "no longer reflective of how Copilot is used today" and "will be altered with our next update," offering no timeline for the change.
The same terms state that Microsoft makes "no warranty or representation of any kind about Copilot," that users are "solely responsible" if they publish or share the AI's responses, and that users must indemnify Microsoft against claims arising from their use of Copilot's outputs. A separate clause warns against relying on Copilot for "medical, legal, financial or other professional advice" and covers the tool across Microsoft's full product suite, including within Excel and PowerPoint.
That language sits uneasily alongside the company's commercial ambitions. Microsoft launched Copilot on September 21, 2023, embedded it across Windows 11 and the Microsoft 365 suite, and priced the enterprise tier at $30 per user per month. CEO Satya Nadella called the tool "a true daily habit" and told investors daily active users had grown nearly threefold year over year. The company spent approximately $80 billion on AI-related capital expenditure in fiscal year 2025, including a reported $13 billion investment in OpenAI, whose models power Copilot's core capabilities.
Adoption has been substantially slower than the marketing implies. Microsoft reported 15 million paid Microsoft 365 Copilot seats as of its FY2026 Q2 earnings call, representing just 3.3% of its 450 million paid commercial seats. U.S. paid subscriber market share contracted 39% in six months, from 18.8% in July 2025 to 11.5% in January 2026. When given a free choice between Copilot, ChatGPT, and Gemini, only 8% of workers select the Microsoft product. Recon Analytics tracked the tool's accuracy Net Promoter Score deteriorating from -3.5 in July 2025 to -24.1 in September 2025, and found that 44.2% of lapsed users cite distrust of answers as their primary reason for abandoning it.
The reliability concerns carry real-world consequences. In August 2024, Copilot falsely identified German court reporter Martin Bernklau as a convicted child abuser and fraudster, and provided his home address. Microsoft blocked queries about Bernklau following a data protection complaint. In January 2026, Copilot generated false claims about football-related violence, prompting renewed coverage of its accuracy problems.
Microsoft is not alone in hedging through fine print. OpenAI warns users not to rely on its outputs as "a sole source of truth or factual information" and caps aggregate liability at $100 or the amount paid in the preceding 12 months. Google's Gemini terms state: "Don't rely on the Services for medical, mental health, legal, financial, or other professional advice." xAI warns that its AI "is probabilistic in nature" and may produce incorrect output. Neither Google nor OpenAI, however, has applied the phrase "entertainment purposes only" to its products.
Courts are beginning to treat these disclaimers as consequential. In May 2025, a Georgia state court ruled for OpenAI in Walters v. OpenAI, a defamation case brought by radio host Mark Walters after ChatGPT falsely claimed he had embezzled from a gun-rights organization. The court cited OpenAI's terms of service warnings as a key factor limiting liability, a precedent that underscores why the fine print matters far beyond its font size.