Analysis

KPMG expands AI audits while experts demand independent model verification

KPMG is automating more audit work with AI, but the real pressure is on staff and partners to prove the models themselves are reliable.

Derek Washington · 6 min read

Why the model, not just the output, is the issue

KPMG is pushing AI deeper into audit work at the same moment experts are warning that polished output is not the same as reliable assurance. For auditors, that distinction matters because a signed opinion still carries a human name, a human judgment, and human regulatory exposure, even when the work was shaped by AI. The question is no longer whether AI can speed up an audit file. It is whether anyone can independently verify the system that helped produce it.

That is where the phrase “AI cannot audit itself” lands with force. If an AI tool helps assemble a conclusion, the profession still needs a way to test the model behind it, including the controls around it, the data that fed it, and the trail showing how exceptions were handled. In practice, that means the comfort level cannot come from a clean-looking output alone. It has to come from evidence that the machine was governed like a system, not trusted like a colleague.

What KPMG is automating inside KPMG Clara

KPMG has said it is accelerating AI integration into KPMG Clara, its global smart audit platform, and that the rollout is intended to support more than 95,000 auditors worldwide. The firm says the AI agents are built to handle routine work such as expense vouching and searches for unrecorded liabilities and accrued expenses. It has also said the upgrades include a Financial Report Analyzer engine for disclosure checklists.

That is a meaningful shift in how audit teams work day to day. These are not flashy consumer tools bolted onto the side of a workflow. They are being embedded into substantive procedures, risk assessment, and document review, which means they can influence what gets tested, what gets flagged, and what gets escalated. For staff on busy-season schedules, the promise is less grunt work. The reality is that every automation layer can also create another layer of review.

KPMG describes the rollout as “human-in-the-loop,” and that wording matters. It signals that auditors are still expected to make the call, not just rubber-stamp what the software suggests. But human-in-the-loop also means human burden-in-the-loop. If AI drafts the first pass on a liability search or disclosure checklist, someone still has to test whether the logic was sound, whether the exceptions were complete, and whether the output matches the evidence in the file.

Where professional skepticism has to stay manual

The profession’s hardest judgment calls remain stubbornly human. An auditor still has to ask how the model was trained, what data were used, whether the input set was complete, where the exceptions landed, and who owns the decision when the machine is wrong. Those are not cosmetic questions. They are the core of professional skepticism.

That is especially true in public-company work, where the sign-off is tied to regulatory scrutiny and potential restatements. If AI is used to support substantive procedures, the staff member or manager reviewing the work cannot treat the system as a black box. They have to understand enough about the model and its controls to explain why the result is trustworthy, or why it is not. In audit terms, that is not an IT issue sitting off to the side. It is part of the evidence chain.

The same tension reaches advisory and risk teams, which are increasingly asked to help clients deploy AI in finance, compliance, and operations. Once the firm is advising on AI-enabled controls and also using AI inside its own audit platform, the standard for skepticism only gets tighter. The message to employees is straightforward: AI can draft, sort, search, and surface. It cannot own the judgment.

Why regulators are watching the control environment

In its July 2024 outreach, the Public Company Accounting Oversight Board said GenAI use in audits and financial reporting was still limited but evolving quickly. The board said the firms it spoke with were mostly using GenAI for administrative and research activities, and that those firms stressed the need for strong supervision because of data privacy and security risks. The board is still considering whether guidance, standards changes, or other regulatory action is needed for technology-based tools in audits.

That uncertainty is part of the pressure now building around KPMG and its peers. Regulators do not have to ban AI to raise the bar. They can simply expect firms to prove that the tools are controlled, that access is restricted, and that the audit trail can withstand inspection. If the AI changes the way work is performed, the review burden does not disappear. It shifts upward to the seniors, managers, and partners who have to defend the process.

KPMG’s own AI controls guide makes that point bluntly. It warns that poorly governed AI systems can create compliance violations, data and intellectual property loss, and reputational damage. That is not just a technology caution. It is a description of what happens when speed outruns control. In a firm built on checklists, sign-offs, and risk review, AI has to fit inside the control culture, not replace it.

What this means for staff, managers, and partner track pressure

For auditors trying to build a career at KPMG, the AI shift changes what competence looks like. The easy version of the story says routine work gets automated and people move up faster. The harder version says junior teams may do less mechanical work but face more scrutiny on the output they review, document, and explain. If AI handles first-pass vouching or disclosure checks, then the human reviewer is expected to catch the edge cases, prove the challenge, and leave a clean record.

That matters for promotion cycles too. Senior associates and managers are often judged on judgment, efficiency, and file quality. AI can help with efficiency, but it also raises the standard for file quality because reviewers will expect the model output to be tested, not merely accepted. For partners, the stakes are even sharper. The partner track is built on accountability, and no amount of automation changes who signs the opinion.

KPMG’s FY25 Audit Quality Report shows how quickly the firm is trying to move. It says auditors are using GenAI and AI agents in KPMG Clara AI to refine risk assessments, automate substantive procedures, and surface audit insights, and it says the firm is rolling out AI-specific assurance services. That puts KPMG in the middle of a wider industry test: proving that AI can improve audit quality without weakening the independence of the reviewer.

The real standard is still evidentiary confidence

The profession’s temptation is to treat AI as a throughput story. Faster searches, cleaner summaries, fewer repetitive tasks. But audit opinions are not judged by speed. They are judged by whether the evidence supports the conclusion. That is why independent model verification matters so much now.

KPMG can expand AI audits, and likely will. But every added layer of automation also adds a new question for the people whose names remain on the work: can you verify the model, defend the controls, and explain the judgment without hiding behind the tool? In audit, that answer still has to be human.
