KPMG, UT Austin study finds best AI users guide reasoning, not just prompts
The real AI edge at KPMG is not speed or volume. The study points to better reasoning, tighter prompts and stronger judgment, applied to real client work.

The best AI users are not just better at prompting. They are better at thinking.
That is the core message from KPMG and the University of Texas at Austin McCombs School of Business after studying 1.4 million real workplace AI interactions over eight months. The strongest performers were not the heaviest users or the most technical ones. They were the people who framed the problem well, pushed the model to reason, and kept refining until the output was usable in real work.

For KPMG employees, that distinction matters. In audit, tax, consulting and internal operations, the payoff from AI does not come from opening the tool more often. It comes from using it in a disciplined way that improves planning, analysis, drafting and decision-making while reducing rework and errors.
What the study actually measured
The research did not rely on surveys or toy examples. KPMG and UT Austin looked at real workplace interactions from 2,500 employees over eight months, and the team evaluated more than 30 characteristics of prompt behavior. That gives the findings unusual weight for professional services, where managers care less about AI hype than about whether work comes back cleaner, faster and with fewer fixes.
KPMG said nearly 90% of employees were already using AI regularly when the project began, which means the study was not about teaching a company to start using AI. It was about showing the difference between basic usage and sophisticated usage in a mature environment. The analysis was led by Anu Puvvada, KPMG Studio Leader, with UT Austin faculty Nick Hallman, Zach Kowaleski and Jaime Schmidt.
The strongest signals of sophisticated use were practical and measurable: how often people returned to AI, how persistently they refined outputs, how ambitious their initial requests were and how intentionally they selected tools or models. That matters because it gives managers a more concrete way to judge AI skill than simply counting logins or messages sent.
The habits that separate stronger users from everyone else
The study’s best users treated AI as a reasoning partner rather than a search box. That is the key shift KPMG teams should internalize. Instead of asking for a quick answer and moving on, stronger users shaped the model’s role, asked it to explain its thinking and used multiple exchanges to get to a better result.
The behaviors that stood out are easy to translate into day-to-day work:
- Set the model’s role or perspective before asking for an answer.
- Give examples of the style, structure or level of detail you want.
- Ask the model to explain its reasoning, not just present a conclusion.
- Refine the output through several rounds instead of accepting the first draft.
- Choose the right tool or model for the task instead of using one default option for everything.
- Aim higher in the first request, especially on complex work that benefits from brainstorming, analysis or problem solving.
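The habits above can be sketched as a small, reusable prompt template. This is purely an illustration of the pattern, not anything from the study or a KPMG tool; every name and field here is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class PromptSpec:
    """Hypothetical container for the prompting habits described above."""
    role: str                        # set the model's role or perspective first
    task: str                        # the actual, ambitious request
    examples: list = field(default_factory=list)  # style/structure samples
    ask_for_reasoning: bool = True   # explanation, not just a conclusion

    def render(self) -> str:
        parts = [f"You are {self.role}.", self.task]
        for ex in self.examples:
            parts.append(f"Example of the style I want:\n{ex}")
        if self.ask_for_reasoning:
            parts.append("Walk through your reasoning step by step "
                         "before stating your conclusion.")
        return "\n\n".join(parts)

# Hypothetical usage: an ambitious first request with role, examples and
# an explicit ask for reasoning, ready for several rounds of refinement.
spec = PromptSpec(
    role="a senior auditor reviewing a revenue variance",
    task=("Draft a one-page memo explaining why the Q3 variance matters "
          "and what follow-up testing it suggests."),
    examples=["Short declarative sentences. Findings first, detail second."],
)
print(spec.render())
```

The point of the sketch is that each field maps to one habit from the list, which makes the habits checkable: a prompt missing a role or a reasoning request is visibly incomplete before it is ever sent.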
That combination matters because it reduces noisy outputs and forces the model to work through the issue instead of guessing at it. It also changes the user’s role from content consumer to editor, reviewer and problem-framer, which is much closer to the judgment KPMG expects from professionals on the audit, tax and advisory track.
What this means in audit
Audit teams can use these findings in ways that directly affect quality. The study suggests AI is most useful when it is pointed at cognitively demanding work, not simple copy-and-paste tasks. That means auditors should lean on it for planning memos, risk framing, test design, exception analysis and drafting issue summaries, then use their judgment to challenge the output before it reaches a manager or partner.
A better audit workflow often starts with a better prompt. If the task is to outline a control walkthrough or help explain why a variance matters, the prompt should define the role, the client context and the desired output. Then the user should ask the model to show its reasoning and look for gaps, because the real value is not in accepting a polished paragraph. It is in catching flawed logic before it becomes a review note.
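One way to picture that workflow is as a refinement loop rather than a single request. A minimal sketch, with a stub standing in for whatever AI tool the team actually uses; the function names and the critiques are hypothetical:

```python
def ask_model(prompt: str) -> str:
    """Stub standing in for a real AI tool; returns a canned draft here."""
    return f"DRAFT based on: {prompt[:60]}..."

def refine(first_prompt: str, critiques: list) -> str:
    """Run several rounds instead of accepting the first draft."""
    draft = ask_model(first_prompt)
    for critique in critiques:
        # Each round challenges the output instead of accepting it.
        draft = ask_model(f"Revise this draft. {critique}\n\n{draft}")
    return draft

memo = refine(
    first_prompt=(
        "You are an audit senior. Client context: mid-size retailer, "
        "new POS system this year. Outline a walkthrough of the "
        "order-to-cash controls and show your reasoning for each step."
    ),
    critiques=[
        "Where is the logic weakest? Flag gaps before I review it.",
        "Tighten the output to what a manager needs to see first.",
    ],
)
print(memo)
```

The loop structure, not any particular wording, is the takeaway: the human supplies the role, context and critiques, and the model does the first pass on each round of hard thinking.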
That is where error reduction happens. Sophisticated users do not just speed up drafting. They catch inconsistencies earlier, ask follow-up questions and make AI do the first pass on hard thinking so the human can focus on judgment.
What this means in tax and consulting
Tax work is especially well suited to the behavior pattern the study highlights. Professionals often need iterative research, structured issue spotting and repeated narrowing of a complicated question. The researchers found that persistent refinement was one of the strongest signs of sophisticated use, which is exactly how many tax workflows already function when they are done well.
For tax teams, the lesson is to make AI do the heavy lifting on structure first, then pressure-test the result. Ask it to organize authorities, compare fact patterns, identify open questions and restate the issue in plain English. The goal is not to replace technical research. It is to get to a cleaner starting point faster and spend more time on the judgment call that determines whether a position holds.
Consulting teams can apply the same logic to research, analysis and proposal work. A well-shaped prompt can turn a vague business problem into a sharper work plan, but only if the user keeps refining the answer. Stronger users are not the ones who ask for a quick slide outline and stop there. They are the ones who iterate until the logic is tight enough to show a client or a partner.
That is especially relevant in proposal work, where the first draft often determines whether a team sounds generic or credible. A model that is asked to take a perspective, compare alternatives and explain tradeoffs will usually produce a more useful base than one given a one-line prompt. The human still has to edit for tone, strategy and client nuance, but the starting point is much stronger.
Why this matters for performance and career growth
The study also lines up with a broader shift in professional services: AI skill is becoming less about tool familiarity and more about demonstrated judgment. That is important inside KPMG because career progression depends on whether people can handle more complex work with less supervision. The employees who can guide AI well are better positioned to save time on routine parts of the job and spend more energy on the parts that matter for promotion, client trust and leadership readiness.
KPMG says it is applying the findings internally and in client work, which suggests these behaviors are not meant to stay in a lab. They are becoming part of how the firm thinks about productivity and performance. That aligns with broader industry reporting that some firms are beginning to include AI performance expectations in employee reviews, a sign that usage alone is no longer enough.
For employees, the practical takeaway is straightforward: do not measure yourself by how often you use AI. Measure yourself by whether it helps you think more clearly, spot mistakes sooner and deliver better work on the first or second pass. That is the skill set the study points to, and it is the one most likely to matter as AI becomes more embedded in audit rooms, tax files, proposal decks and back-office workflows.
The firms that win with AI will not be the ones that talk most about adoption. They will be the ones whose people know how to push the tool, challenge it and turn it into sharper professional judgment.