Culture

KPMG, UT Austin Study Analyzes 1.4 Million AI Workplace Interactions

Only 5% of KPMG's 2,597 AI users showed true sophistication across 1.4 million interactions, per a new Harvard Business Review study.

Lauren Xu

Approximately 5% of users consistently demonstrated highly sophisticated AI behaviors across months of usage data, according to a joint study by KPMG LLP and the McCombs School of Business at The University of Texas at Austin published March 19 in Harvard Business Review. The researchers spent eight months studying KPMG LLP's back-office operations, analyzing how people use AI at work. The dataset covered 2,597 unique users and 1.4 million real workplace interactions.

The users who were most successful with AI were not those who simply used it most frequently, nor those with the best technical skills; rather, they excelled at patterns of engagement with AI: framing problems, directing the model's approach to tasks, and applying AI across their work. For KPMG professionals navigating an environment where the firm has already deployed AI broadly across audit, advisory, and back-office functions, the implication is direct: adoption alone does not separate the people who get the most out of these tools.

To move beyond assumptions about what "good" AI use looks like, KPMG LLP collaborated with Zach Kowaleski, Nick Hallman, and Jaime Schmidt, faculty members in McCombs' Shulkin Department of Accounting, to analyze behavioral signals embedded in real-world AI interactions, evaluating more than 30 characteristics of prompt behavior across months of usage data, including task complexity, prompting techniques, and iteration patterns. Anu Puvvada, KPMG Studio Leader, led the research effort on the firm side.

Sophisticated users treated AI as a reasoning partner, shaping how it approached problems by asking the model to assume a certain role or perspective; providing concrete direction and examples; showing the AI how to reason through a task; requiring the model to explain how it got to a response; and offering ongoing feedback. Kowaleski, an assistant professor in the Shulkin Department of Accounting at UT Austin, described the mindset in concrete terms: "You want to be very clear about what you're asking for and what it's going to look like when you get it." He added that the highest-impact users also pushed back on early outputs the same way a careful reviewer would: "When they hear something back, like any good listener, they kind of push on the things that don't sound right to them and make improvements from there."

Jaime Schmidt, McCombs professor of accounting and director of the C. Aubrey Smith Center for Auditing Education and Research, put it this way: "We weren't looking for power users in the abstract. We were looking for people who had figured out how to think with the model, not just ask it questions."

"The gap between routine and sophisticated AI use is not hidden in prompts themselves, but in patterns of engagement. And once those patterns are visible, they become possible to recognize, discuss, and scale," said Puvvada. That visibility is precisely what the research was designed to create: a repeatable signal that organizations can train toward, not just observe.

For KPMG, these insights have been translated into a set of AI-First behaviors, supported by practical playbooks, training, and peer-led champion networks. By embedding these research-backed behaviors into the firmwide learning ecosystem (the firm's aIQ Learning Academy, role-based skills development, and hands-on practice), more of KPMG's workforce can move from routine prompting to higher-impact human-AI collaboration.

These same insights now inform how KPMG employees work with clients: helping them define what effective AI use looks like within their own organizations, build role-aligned capabilities, and enable leaders to scale sophisticated human-AI collaboration as part of everyday work. For staff on client engagements, that means the behaviors documented in the study are increasingly likely to show up in how the firm structures its own delivery work and what it recommends to the organizations it advises.

Top users were ambitious in how they approached AI: they delegated complex tasks with clear objectives and treated AI as a general cognitive tool rather than a mere productivity tool. The distinction between those two framings, productivity tool versus cognitive partner, may matter most in the kinds of high-judgment work that define KPMG's core services: audit conclusions, deal diligence, regulatory interpretation, and client strategy. The study's core argument is that the behaviors separating the top 5% are not innate. They are teachable, observable, and scalable across a firm that already has the infrastructure to act on them.
