
KPMG staff turn to NIST framework for practical AI risk controls

KPMG’s AI controls are moving from policy slides to working papers, model reviews, and client diligence. NIST’s framework gives teams a common language for proving AI is governed, tested, and defensible.

Derek Washington · Written with AI · 6 min read

Why the NIST framework is showing up in KPMG workflows

The NIST AI Risk Management Framework is useful at KPMG because it turns AI risk from a vague warning into a set of work steps that can be documented, challenged, and reviewed. NIST says the framework is voluntary, rights-preserving, non-sector-specific, and use-case agnostic, which makes it easier to apply across audit, advisory, and internal tools without pretending every AI use looks the same.

That matters in a firm where AI risk is already scattered across different teams. An advisory group may be worried about hallucinations, confidentiality, data lineage, prompt injection, model drift, bias, or client misuse. An audit team may be more focused on evidence quality, explainability, and whether a tool is being used consistently with firm policy. The framework gives both sides a shared control language: Govern, Map, Measure, and Manage.

The three places where it will surface first

The first use case is client-facing advisory work. When a KPMG team helps a client adopt generative AI or automate a workflow, the NIST framework gives the engagement team a way to ask basic but consequential questions: Who owns the risk? What is the model being asked to do? What data is going into it? What happens when the output is wrong? Those are not abstract questions. They are the difference between a promising prototype and a system that can survive scrutiny from legal, compliance, and internal risk teams.

The second use case is audit work, where AI increasingly touches planning, testing, documentation, and quality review. Auditors need to know whether an AI-enabled procedure is consistent, whether its outputs are explainable enough to support a conclusion, and whether the evidence trail is strong enough to stand up later. If a team cannot show what was measured, what limits were set, and what escalation occurred when risk moved beyond tolerance, the tool becomes harder to defend.

The third use case is internal tool review. KPMG, like every large professional services firm, is under pressure to move quickly on productivity tools, summarization systems, and knowledge assistants. The NIST framework gives internal reviewers a way to test whether a tool fits the firm’s policies before it lands in day-to-day use. That includes who can access it, how prompts and outputs are stored, what review happens before use in client work, and what restrictions apply when sensitive information is involved.
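
To make that review concrete, here is a minimal sketch of what a pre-deployment policy gate could look like. The checks mirror the questions above; the function name, field names, and pass/fail rules are illustrative assumptions, not KPMG policy or part of the NIST framework.

```python
# Illustrative pre-deployment checklist for an internal AI tool review.
# The checks mirror the questions in the paragraph above; all names and
# rules are assumptions, not KPMG policy.

def review_internal_tool(tool: dict) -> list[str]:
    """Return the policy gaps that would block day-to-day use."""
    gaps = []
    if not tool.get("access_restricted_to_approved_roles"):
        gaps.append("access is not limited to approved roles")
    if not tool.get("prompts_and_outputs_logged"):
        gaps.append("prompts and outputs are not retained for review")
    if not tool.get("pre_use_review_for_client_work"):
        gaps.append("no review step before use in client work")
    if tool.get("handles_sensitive_data") and not tool.get("sensitive_data_controls"):
        gaps.append("sensitive information lacks additional restrictions")
    return gaps

gaps = review_internal_tool({
    "access_restricted_to_approved_roles": True,
    "prompts_and_outputs_logged": True,
    "pre_use_review_for_client_work": False,
    "handles_sensitive_data": True,
    "sensitive_data_controls": False,
})
print(gaps)  # two gaps would block deployment here
```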

What teams will actually have to produce

The value of the framework is not in its vocabulary. It is in the evidence it forces teams to assemble. A KPMG team that wants AI-enabled work to hold up under scrutiny should expect to produce documentation such as the following (a structured sketch of this record appears after the list):

  • a defined use case and business purpose
  • the roles responsible for oversight, approval, and escalation
  • a risk assessment tied to the specific workflow, not just the model
  • data lineage and source documentation
  • testing results for accuracy, bias, and reliability
  • controls for confidentiality, access, and output review
  • rules for when a human must intervene
  • evidence of monitoring after launch
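
One way to keep that evidence consistent across engagements is a structured record. The sketch below mirrors the checklist as a single Python dataclass; every field name and example value is hypothetical, offered as one possible shape for the record rather than a NIST or KPMG schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCaseRecord:
    """Illustrative control record mirroring the documentation list above.
    Field names are hypothetical, not a NIST or KPMG schema."""
    use_case: str                   # defined use case and business purpose
    risk_owner: str                 # role responsible for oversight
    approver: str                   # role responsible for approval
    escalation_contact: str         # role responsible for escalation
    workflow_risk_assessment: str   # risk tied to the workflow, not just the model
    data_sources: list[str] = field(default_factory=list)        # data lineage
    test_results: dict[str, float] = field(default_factory=dict) # accuracy, bias, reliability
    output_review_control: str = "human review before client use"
    human_intervention_rule: str = "escalate when confidence is low"
    monitoring_evidence: list[str] = field(default_factory=list) # post-launch monitoring

record = AIUseCaseRecord(
    use_case="Draft summaries of client-provided contracts for internal review",
    risk_owner="Engagement risk lead",
    approver="Responsible partner",
    escalation_contact="Firm AI governance office",
    workflow_risk_assessment="Confidentiality and hallucination risk in summaries",
    data_sources=["Client contract repository (access-controlled)"],
    test_results={"accuracy": 0.94},
)
```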

That documentation becomes especially important in client-facing work, where many organizations want the speed of AI but have not yet built the governance around it. The framework helps KPMG professionals separate excitement from readiness. A client may want a faster report draft or a smarter audit triage tool, but the team still has to show what is being governed, what context the system is operating in, what is being measured, and what happens when risk exceeds tolerance.

How Govern, Map, Measure, and Manage work in practice

NIST’s four functions are the part of the framework that matters most in a live engagement. Govern is the cross-cutting layer. NIST says governance is infused throughout the other three functions, which means AI oversight cannot be left to a technology team alone. Business leaders, risk owners, legal, compliance, and end users all need a role in setting expectations and enforcing them.

Map is where the team defines the setting. That means identifying the workflow, the data, the users, the possible harms, and the boundaries of acceptable use. In a KPMG context, this is where a project team decides whether an AI tool is being used for internal drafting, client analysis, or a more sensitive judgment-support function.

Measure is where evidence comes in. Here, the team looks at whether the model performs as expected, where it fails, and what kind of drift or inconsistency is showing up over time. For auditors and advisors, this is the point where technical testing meets professional judgment. If the output cannot be measured against a meaningful standard, the control story is weak.

Manage is where the response kicks in. If risk exceeds the agreed tolerance, the team needs a plan: restrict the use case, add human review, retrain users, change the data source, or stop the tool from being used in that workflow. Without that step, a framework is just a filing cabinet.
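
As a rough illustration of the Measure-to-Manage handoff, the sketch below compares measured results against agreed tolerances and returns the responses that are triggered. The metrics, thresholds, and actions are assumptions chosen for illustration, not NIST prescriptions.

```python
# Illustrative Measure -> Manage handoff: compare measured results to
# pre-agreed tolerances and decide on a response. All thresholds and
# actions are assumptions, not NIST guidance.

TOLERANCES = {
    "accuracy": 0.90,     # minimum acceptable accuracy on the evaluation set
    "drift_score": 0.10,  # maximum acceptable distribution drift
}

def manage(measured: dict[str, float]) -> list[str]:
    """Return the management actions triggered by out-of-tolerance results."""
    actions = []
    if measured.get("accuracy", 0.0) < TOLERANCES["accuracy"]:
        actions.append("add mandatory human review of every output")
    if measured.get("drift_score", 0.0) > TOLERANCES["drift_score"]:
        actions.append("re-test against current data and restrict the use case")
    if not actions:
        actions.append("continue monitoring; no escalation required")
    return actions

print(manage({"accuracy": 0.87, "drift_score": 0.04}))
# ['add mandatory human review of every output']
```

The point of the sketch is the order of operations: measurement happens against an explicit, pre-agreed tolerance, and every out-of-tolerance result maps to a named response rather than an ad hoc fix.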

Why profiles matter for KPMG’s real-world use cases

NIST also says AI RMF profiles tailor the framework to specific settings based on requirements, risk tolerance, and resources. That makes the framework more practical for a global firm with very different service lines and risk appetites. A tax workflow, an audit support tool, and a client strategy assistant do not need the same controls, but they do need controls that match the risk.

The NIST Generative AI Profile, published on July 26, 2024, sharpens that point. NIST says it was developed as a companion to the AI RMF and pursuant to Executive Order 14110 on safe, secure, and trustworthy AI. In plain terms, it helps organizations identify the distinct risks created by generative AI and align their actions with their priorities. That is the exact pressure point KPMG teams face when clients want to deploy generative tools before they have the governance to support them.

How KPMG Trusted AI fits around NIST

KPMG’s own Trusted AI approach sits naturally beside NIST’s framework. KPMG describes Trusted AI as its strategic approach to designing, building, deploying, and using AI responsibly and ethically, and says the framework rests on ten ethical pillars. The firm also published an illustrative AI Risk and Controls Guide aligned with Trusted AI to help organizations identify AI risks and design proportionate controls.

That alignment matters because it gives KPMG staff two layers of language. NIST offers a neutral public-sector baseline that clients can recognize and discuss. KPMG Trusted AI translates that into the firm’s internal governance and control model. For staff trying to get through busy season, a proposal review, or a high-stakes advisory build, that combination is practical. It tells you not only what good looks like, but what proof you need when a partner, client, or regulator asks how the AI was controlled.

NIST’s AI Resource Center treats the framework as a living document and says the Playbook will be revised after AI RMF 1.1 is published. That is another reminder for KPMG teams: this is not a one-time policy exercise. AI controls will keep shifting as tools change, client expectations tighten, and accountability questions become more specific. The firms that can show their work will have the easiest time defending AI in the room where it actually matters.
