AI Giants Face Growing Skepticism Over Safety, Profit, and Public Costs

AI firms promise safety and public benefit, but their structures still reward growth, secrecy, and scale over accountability.

Lisa Park · 5 min read

The promise collides with the business model

AI companies now sit inside search, office software, customer service, media, and coding, but their public ethics language has not kept pace with the scale of their power. OpenAI is the clearest example: it began as a nonprofit in 2015, later moved to a capped-profit structure, and in 2025 reached a deal with Microsoft that would let it reorganize as a public benefit corporation, with Reuters reporting a valuation of $500 billion in that arrangement. That is the core accountability gap in this industry: a company can say it exists to benefit humanity while still being financed, governed, and measured like a growth machine.

That tension has become harder to ignore because the companies themselves have admitted, at least indirectly, that the pressure is real. A 2024 open letter from current and former workers at OpenAI, Google DeepMind, and Anthropic warned that AI firms have strong financial incentives to avoid effective oversight and called for whistleblower protections. Journalists and researchers have also highlighted internal conflicts between safety and commercialization, especially at OpenAI, where critics have questioned whether a safety mission can survive rapid expansion.

What a “good” AI company would actually have to give up

A serious test of corporate virtue is not whether a company says it values safety, but whether it is willing to slow down when money says accelerate. A genuinely responsible AI company would likely have to sacrifice some combination of growth, data access, labor flexibility, and product rollout speed. It would need to accept stronger outside scrutiny, clearer internal reporting channels, and limits on the kinds of data it can use or the kinds of models it can deploy.

That is where most public promises become fragile. If investors expect steep growth, executives are pushed toward bigger models, more users, and faster monetization. If the company depends on collecting huge amounts of data and shipping products into search, media, and workplaces, then restraint can look like lost opportunity rather than responsible governance. The result is an industry where the most socially useful behavior (slowing down) can conflict with the most rewarded behavior (scaling up).

The public is paying for the data centers

The safety debate is not only about abstract risk. It is also about water, electricity, land use, and the local costs of building the infrastructure that AI requires. The AP reported in 2025 that larger data centers can consume up to 5 million gallons of water a day, roughly the daily water use of a town of as many as 50,000 people. That is not a minor operational detail; it is a public resource issue that can shape whether a community has enough water, power, and infrastructure for its own needs.

This is why data-center expansion has become a political fight in so many places. Communities are being asked to absorb higher energy demand, water stress, and strain on roads and grids in exchange for promises of jobs and tax revenue. Those tradeoffs matter even more as AI workloads grow, because the environmental footprint is tied to the scale of both training and inference. The newest facilities are often being built in places that are already under pressure from water or power constraints, which turns the promise of innovation into a local equity question.

Copyright is becoming a legal test of legitimacy

Another fault line is the material used to build these systems in the first place. By early 2025, federal litigation over AI training data had become a central test of whether companies can lawfully build products on books, articles, music, and other copyrighted works. In February 2025, a federal judge in Delaware rejected an AI startup’s fair-use defense in Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc., a ruling widely treated as a major setback for the argument that training on copyrighted material is automatically protected.

By 2025, copyright claims against AI companies had multiplied into dozens of lawsuits filed by publishers, writers, musicians, and other rights holders. That legal backlash is more than a courtroom dispute. It is a public accounting of who bears the costs when AI products are trained on disputed data, and it raises a basic credibility problem: if a company’s business model depends on material it cannot clearly justify using, its claims to social good become much harder to trust.

Trust is now part of the product

The Reuters Institute for the Study of Journalism found in 2025 that people across six countries were increasingly using generative AI while also worrying about its impact on journalism and society. That combination matters because public acceptance is no longer just about whether the tools work. It is about whether people believe the companies behind them are honest about what the systems can do, what they take from others, and who carries the risks.

Researchers at the Reuters Institute, including Richard Fletcher, Nic Newman, and Rasmus Kleis Nielsen, have helped show that generative AI sits in a contradictory space: useful enough to spread quickly, but contested enough to raise alarms about accuracy, labor, and the quality of public information. Sasha Luccioni has also become a prominent voice on the environmental burden of large AI systems, a reminder that the debate cannot stay confined to model benchmarks and product demos. Trust, in other words, is not a branding problem. It is a governance problem.

What accountability would look like in practice

A “good” AI company would have to be more than ambitious. It would need to prove that safety claims are backed by structures that can survive investor pressure, internal competition, and market hype. That means real whistleblower protections, clearer oversight, transparent limits on data use, and a willingness to delay deployment when harms are not understood.

It would also have to accept that public benefit may require less profit, not just better messaging. If a company wants to claim it is protecting humanity, then its incentives should not reward secrecy over accountability or scale over restraint. Until AI firms are willing to trade some growth for verifiable safeguards, the distance between their promises and their practices will keep widening.

The industry can keep calling itself responsible. The public will keep judging it by what it gives up.
