
Campbell Brown warns AI could repeat social media’s accuracy failures

Campbell Brown said AI assistants risk repeating social media’s accuracy failures, with hidden choices shaping what millions see, trust and believe.

Lisa Park · 3 min read
Source: techcrunch.com

“The conversation is sort of happening in Silicon Valley around one thing, and a totally different conversation is happening among consumers,” Campbell Brown said, putting her warning in stark terms: the next fight over information may be inside AI systems that users increasingly trust without seeing how they work.

Brown, who spent years chasing accuracy first as a television journalist and then as Facebook’s first and only dedicated news chief, said foundation models could reproduce the same pattern that damaged trust on social platforms. Her company, Forum AI, evaluates models on what she calls “high-stakes topics” such as geopolitics, mental health, finance and hiring, areas where answers are rarely clean and the consequences can be real. Brown said Forum AI aims to get AI judges to about 90 percent consensus with human experts, a benchmark meant to expose where models drift from careful judgment.

She said she founded Forum AI in New York 17 months ago, after ChatGPT’s public release, which she watched from inside Meta, convinced her that AI would become a major funnel for information. Brown said she worried about her children growing up dependent on systems that were not accurate enough, while the companies building them remained laser-focused on coding and math rather than news and information. For its geopolitics benchmarks, she said, the company recruited historian Niall Ferguson, journalist Fareed Zakaria, former Secretary of State Antony Blinken, former House Speaker Kevin McCarthy and former deputy national security adviser Anne Neuberger.


Brown said Forum AI’s testing has turned up problems that go beyond obvious factual errors. She pointed to Gemini drawing from Chinese Communist Party websites for stories unrelated to China, and said nearly all models showed a left-leaning political bias. More troubling, she said, were subtler failures: missing context, missing perspectives and arguments that were flattened or straw-manned without acknowledgment. Those are the kinds of editorial decisions that shape public understanding even when no obvious falsehood appears on screen.

Her warning lands as Meta continues to unwind its own earlier approach to information integrity. The company announced on January 7, 2025, that it would end its third-party fact-checking program in the United States, and the shutdown took effect on April 7, 2025. Meta said it wanted to reduce mistakes and move toward user-driven context, but fact-checking partners and news-accuracy advocates argued the change reduced transparency and undervalued journalism. In April 2025, Meta’s Joel Kaplan publicly apologized after the company’s AI chatbot generated false and defamatory claims about conservative activist Robby Starbuck.


The broader evidence suggests Brown’s fears are already moving beyond theory. A European Broadcasting Union study released in 2025 found AI assistants misrepresented news content in 45 percent of responses, with 31 percent showing serious sourcing problems and 20 percent containing major accuracy issues. The Reuters Institute’s Digital News Report 2025 found 7 percent of online news consumers used AI assistants for news each week, rising to 15 percent among under-25s. The power struggle now is not just over what AI can answer, but over who gets to decide what it surfaces, suppresses and summarizes for everyone else.
