AI Governance and Compliance

CONTRIBUTORS

Jackson Johnson, President & Founder, Johnson Global Advisory

Stephanie Mickens, Director, Johnson Global Advisory

THE ACCOUNTING AI PLAYBOOK

accountingaiplaybook.com

FIRST EDITION

2025

Building Trust in AI Accounting

As accounting firms adopt AI tools in audits, they face new questions about reliability, transparency, and compliance. Regulators like the PCAOB have made clear that if AI outputs can’t be explained or reproduced, their use could violate existing standards. Yet formal guidance on AI use in audits remains limited, leaving firms unsure about how to move forward.

Some firms have responded by limiting AI to non-public clients, but this caution also presents a chance to lead. Firms that build strong AI governance practices now can stay ahead of future regulation and establish trust in their use of AI. This chapter covers key compliance barriers, governance best practices, and steps to create a trusted control environment.

Key Compliance Barriers

Accountants face several key compliance barriers when using AI, particularly as regulators such as the PCAOB, AICPA, and SEC increase their scrutiny.

Explainability

One major challenge is explainability. Many AI models, especially machine learning and generative AI, don’t clearly show how they reach conclusions. This is a problem for auditors who need to support their findings.

This lack of clarity makes it harder to meet audit evidence requirements under PCAOB standard AS 1105, which calls for evidence that is sufficient, appropriate, and understandable.

Poor Documentation

Poor documentation is another major issue. This includes inadequate records of data inputs and outputs, training data, model logic, and controls over changes.

Such deficiencies may violate documentation and risk assessment requirements, as seen when audit teams use AI for journal entry testing without documenting the rationale for flagged entries or threshold settings.
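For illustration only, here is a minimal sketch of what that missing documentation could capture for a single testing run, assuming a hypothetical journal entry testing tool; the tool name, fields, and file path are placeholders rather than any prescribed format:

    import json
    from datetime import datetime, timezone

    # Illustrative record of one AI-assisted journal entry testing run.
    # Capturing threshold settings, the model version, and the rationale for
    # each flagged entry gives the audit file something a reviewer or
    # regulator can actually inspect.
    run_record = {
        "run_date": datetime.now(timezone.utc).isoformat(),
        "tool": "ExampleJE-Analyzer",   # hypothetical tool name
        "model_version": "2024.3",      # placeholder version
        "thresholds": {"amount_usd": 250000, "posting_window": "weekends and after hours"},
        "flagged_entries": [
            {
                "entry_id": "JE-10482",
                "reason_flagged": "Amount above threshold; posted Sunday 02:14",
                "auditor_disposition": "Investigated; supported by approved accrual memo",
            },
        ],
    }

    with open("je_testing_run_2025Q1.json", "w") as f:
        json.dump(run_record, f, indent=2)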

Data Privacy

Data privacy becomes a concern as firms use AI to handle large amounts of sensitive financial and personal information. This can lead to violations of laws like GDPR and CCPA, especially when client data is processed in cloud or third-party systems.

Firms often struggle to maintain consistent policies for data classification, encryption, and access. Auditor independence may also be at risk if AI tools are built by a firm’s advisory arm or are deeply integrated with a client’s systems. For instance, if both the firm and client use the same predictive AI tool for forecasting, it could lead to a self-review threat.

AI Skills Gap

A skills gap and overreliance on AI further complicate compliance.  Many auditors lack the training needed to critically evaluate AI outputs or to recognize when human judgment should override algorithmic conclusions.

This can lead to audit failures, such as misinterpreting a false negative from an AI-driven risk assessment as a clean result.  

Validation and Testing

Testing and validating AI tools is another challenge, especially for tools that keep learning over time. Firms need to test tools when they’re first used and then on a regular basis, just like they do when relying on third-party service providers.

But this is hard to do if the AI vendor doesn’t offer enough detail about how the tool works or the controls in place.

Change Management

Managing updates and changes to AI models is another concern. If a tool is updated or retrained without documentation, it can lead to inconsistent results.

For example, a model may flag different transactions in different quarters without any clear reason why. Many firms also lack a formal AI governance plan tied to their quality management systems, which causes inconsistent control practices and unclear responsibilities.
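One illustrative way to make such changes traceable is to record the model version and settings alongside every run, so that quarter-over-quarter differences can be tied back to a documented change. The sketch below is an assumption-based example, not an established firm practice; the function and field names are invented for illustration:

    import csv
    import os
    from datetime import datetime, timezone

    def log_model_run(model_name, model_version, config, log_path="ai_model_change_log.csv"):
        """Append one row per run so differing results can be traced to a model change."""
        row = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_name": model_name,
            "model_version": model_version,
            "config": repr(config),   # e.g., thresholds or feature settings in effect
        }
        write_header = not os.path.exists(log_path) or os.path.getsize(log_path) == 0
        with open(log_path, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=row.keys())
            if write_header:
                writer.writeheader()
            writer.writerow(row)

    # Example: the Q1 and Q2 runs used different versions and thresholds,
    # which would explain why different transactions were flagged.
    log_model_run("je_risk_scorer", "1.4.0", {"amount_threshold": 250000})
    log_model_run("je_risk_scorer", "1.5.0", {"amount_threshold": 100000})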

Lack of Guidance

Regulators have been slow to issue formal guidance on how AI should be integrated into the audit process, leaving many firms in a state of uncertainty.

The good news is that momentum is building.  PCAOB Board Member Christina Ho has publicly emphasized the transformative potential of AI in auditing, particularly in automating routine tasks such as cross-referencing data, extracting key contract terms, and documenting interviews.  She has advocated for the PCAOB to evolve its standards to promote responsible AI use, calling for transparency, bias mitigation, and auditability in AI tools.

Similarly, the International Auditing and Assurance Standards Board (IAASB) has demonstrated its commitment to supporting firms by releasing its Technology Position, which is a strategic framework that outlines how the board will adapt auditing standards to align with emerging technologies, including AI.

Until these guardrails are firmly in place, firms should proactively develop internal AI frameworks modeled on established control standards. COBIT can support firms in assessing and governing AI systems, including data and system integrity. COSO can be applied to evaluate AI governance, model risk, and internal control implications, particularly when AI impacts financial reporting or ICFR. NIST provides guidance to help firms build trustworthy AI systems and establish appropriate cyber security and governance protocols.

Best Practices for Governance

To use AI confidently and compliantly in accounting, especially in regulated environments like audit and assurance, firms should implement strong governance practices that align with both regulatory expectations and ethical standards.

1. Test AI Internally Before Use in Engagements

Before you bring AI into your audits, you’ll need to put it through its paces. The starting point is an internal review and certification process, ideally led by your firm’s risk or national office. That office should evaluate the AI tool’s design, logic, and controls, and may require your vendor to share documentation and control reports and to allow independent testing.

A great way to do this is by running the AI on historical data from past audits with known results. That helps confirm whether the AI delivers the same conclusions auditors already reached.

Scenario analysis is another smart move. Challenge the AI with tricky edge cases like known fraud or anomalies. This can expose blind spots or bias in the model.
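A simplified sketch of both ideas follows, assuming a hypothetical score_entries() interface standing in for whatever the vendor tool actually exposes, and a handful of prior-period entries whose correct disposition is already known; the data and threshold are invented for illustration:

    # Back-test: compare the tool's flags against conclusions auditors already reached,
    # paying special attention to known fraud or anomaly cases (false negatives).

    def score_entries(entries):
        # Stand-in for the vendor tool's scoring call; here we simply flag large amounts.
        return [e["amount"] >= 100000 for e in entries]

    historical_entries = [
        {"id": "JE-001", "amount": 120000, "known_exception": True},   # known fraud case
        {"id": "JE-002", "amount": 3250,   "known_exception": False},
        {"id": "JE-003", "amount": 87500,  "known_exception": True},   # edge case below the threshold
    ]

    flags = score_entries(historical_entries)
    false_negatives = [e["id"] for e, flagged in zip(historical_entries, flags)
                       if e["known_exception"] and not flagged]
    false_positives = [e["id"] for e, flagged in zip(historical_entries, flags)
                       if flagged and not e["known_exception"]]

    print("Missed known exceptions (follow up before relying on the tool):", false_negatives)
    print("Clean entries flagged (assess over-flagging):", false_positives)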

Be sure to maintain a complete audit trail of how the tool was tested and what controls were in place. If any issues pop up during testing, document and resolve them.

And before you roll it out firm-wide, get an independent review of the tool. Think of it like a second set of eyes, similar to a concurring partner review. Only once your firm is fully confident in the tool should it be used in your accounting processes.

2. Develop AI Governance Policies

Strong policies lay the foundation for responsible AI use. These should outline your standards for data inputs, risk reviews, decision-making responsibilities, and transparency.

Deloitte recommends a universal governance policy that applies to all AI technologies across the firm. This policy should define acceptable (and prohibited) use cases, require approval for new AI tools, and establish review intervals.

Ethical usage also needs to be a priority. That means clear guidelines around privacy, bias, and legal compliance, with transparency as a core value. Stakeholders inside and outside the firm should understand when and how AI is being used, which is what builds trust in its adoption.

To oversee this, consider forming a dedicated AI GRC (Governance, Risk, Compliance) team. Roles might include a Chief AI Risk Officer, Data Protection Manager, AI Project Manager, and an AI Governance Committee.

Need help building your framework? Look to proven models like NIST AI RMF and ISO 42001. COSO’s recent guide Realize the Full Potential of AI shows how to extend COSO’s ERM framework to AI, and it’s a great place to start.

3. Implement Data Quality Controls

AI tools are only as reliable as the data they process. The old adage “garbage in, garbage out” underscores the importance of data quality in AI-driven accounting. To minimize the risk of inaccurate or biased AI outputs, firms should implement data validation, cleansing, and standardization processes. High-quality data improves AI performance and supports more reliable audit conclusions.
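As a minimal sketch of the kind of validation this implies for journal entry data before it reaches an AI tool, the example below assumes illustrative field names and shows only two of the many possible checks:

    from datetime import date

    REQUIRED_FIELDS = {"entry_id", "posting_date", "total_debits", "total_credits"}

    def validate_entry(entry, period_start, period_end):
        """Return a list of data quality issues for one journal entry."""
        issues = []
        missing = REQUIRED_FIELDS - set(entry)
        if missing:
            issues.append(f"missing fields: {sorted(missing)}")
            return issues
        if round(entry["total_debits"] - entry["total_credits"], 2) != 0:
            issues.append("entry does not balance")
        if not (period_start <= entry["posting_date"] <= period_end):
            issues.append("posting date outside the period under audit")
        return issues

    entry = {"entry_id": "JE-104", "posting_date": date(2025, 1, 15),
             "total_debits": 5000.00, "total_credits": 5000.00}
    print(validate_entry(entry, date(2025, 1, 1), date(2025, 3, 31)))   # -> []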

Protecting sensitive data is also crucial. Firms should limit access to confidential information using role-based access controls (RBAC) and multi-factor authentication (MFA). Audit logs tracking data access provide an added layer of oversight, helping firms monitor and secure critical information.
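The sketch below is a deliberately simplified illustration of role-based access paired with an audit log, using an in-memory role map; a real deployment would rely on the firm’s identity provider, MFA, and centralized logging rather than anything hand-rolled like this:

    import logging

    logging.basicConfig(filename="data_access_audit.log",
                        format="%(asctime)s %(message)s", level=logging.INFO)

    ROLE_PERMISSIONS = {
        "engagement_auditor": {"read_client_data"},
        "ai_admin": {"read_client_data", "upload_training_data", "change_model_config"},
    }

    def authorize(user, role, action):
        """Allow or deny an action and record the decision in the audit log."""
        allowed = action in ROLE_PERMISSIONS.get(role, set())
        logging.info("user=%s role=%s action=%s allowed=%s", user, role, action, allowed)
        return allowed

    authorize("jdoe", "engagement_auditor", "read_client_data")     # permitted, logged
    authorize("jdoe", "engagement_auditor", "change_model_config")  # denied, logged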

Data lifecycle management is equally important. Retention and deletion policies should be in place to ensure outdated data does not become a liability. While GDPR is an EU regulation, it sets a high standard for data management and serves as a strong benchmark for firms looking to enhance their data governance practices.
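A short sketch of a retention check under an assumed seven-year policy follows; the period and file names are illustrative, and the actual retention schedule should come from the firm’s records policy and applicable law:

    from datetime import date, timedelta

    RETENTION_YEARS = 7   # assumed for illustration; confirm against firm policy and regulation

    def past_retention(created_on, today=None):
        today = today or date.today()
        return today - created_on > timedelta(days=365 * RETENTION_YEARS)

    datasets = [("fy2016_trial_balance.csv", date(2016, 12, 31)),
                ("fy2024_je_extract.csv", date(2024, 12, 31))]

    for name, created in datasets:
        if past_retention(created):
            print(f"{name}: past the retention period - review for secure deletion")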

Mini Case Study: Regulatory Write Up

The Situation

A large, national accounting and consulting firm auditing a publicly traded company used an AI platform for journal entry testing. The platform was SOC 2 Type 2 compliant, indicating that the vendor had implemented and maintained effective operational controls over a specified period. The certification provided stakeholders with assurance that the platform upheld rigorous standards for data protection and internal controls in support of its AI functionality.

The Result

Despite the platform’s compliance status, the PCAOB issued a regulatory finding because the firm could not sufficiently demonstrate the reliability of the AI-generated results. As a result, the firm chose to discontinue use of the platform for public company audits to mitigate further regulatory risk.  This decision reduced the return on its AI investment and limited the use of AI in its most risk-intensive audit engagements.

The Lesson

A vendor’s SOC 2 report alone was not sufficient to satisfy regulatory expectations regarding the reliability of AI tools.  Firms must also perform independent validations of the data processed and results generated by the AI platform.  This could involve testing the tool against historical data with known outcomes, or providing clear documentation that explains the tool’s purpose, how it is used in the audit process, and what internal controls are in place to support the accuracy and completeness of its outputs.  Independent evaluations like these can enhance regulatory confidence and provide the necessary support for integrating AI into critical audit procedures.

How to Build Your Universal AI Governance Policy

This template provides a foundational structure for firms to develop a universal AI governance policy.

Policy Title: Universal AI Governance and Compliance Policy

Effective Date: [Date]

Approved By: [Managing Partner / Board]

1. Purpose

  • Clearly state the intent of the policy (e.g., ensuring ethical, effective, and compliant use of AI across the firm).
  • Reference the guiding frameworks (e.g., NIST AI RMF, ISO 42001, COSO) that inform the policy.

2. Scope

  • Define the scope of the policy, including what AI tools, systems, and personnel it covers.
  • Specify that it applies to both internally developed models and third-party AI tools used in client engagements.

3. Governance Structure & Responsibilities

  • Establish an AI Governance Committee or designate roles (e.g., Chief AI Risk Officer, Data Protection Manager) responsible for oversight.
  • Outline the responsibilities of the committee, including policy enforcement, risk assessments, and continuous improvement.

4. Risk Assessment & Approval

  • Require pre-implementation risk assessments for all new AI tools.
  • Include criteria for data risks, model risks, and regulatory risks.
  • Set a process for leadership approval before AI tools are used in client work.

5. Data Management & Privacy

  • Define data classification and security requirements for AI inputs.
  • Mandate role-based access controls (RBAC), encryption, and data retention policies.
  • Require vendor reviews for data protection and privacy practices.

6. Explainability & Transparency

  • Require documentation that explains AI outputs and decision logic.
  • Include guidelines for communicating AI use to clients and stakeholders.

7. Validation, Testing & Monitoring

  • Mandate testing on historical or sample data before first use in client work.
  • Require ongoing validation and performance monitoring to ensure reliability.
  • Document all testing procedures and results for audit trails.

8. Vendor Management

  • Require due diligence on third-party AI vendors, including security and compliance reviews.
  • Set contract requirements for data protection, audit rights, and model transparency.

9. Change Management

  • Establish a process for reviewing and approving AI model changes.
  • Include documentation requirements for model updates and version control.

10. Training & Awareness

  • Require regular training for staff on AI governance and responsible AI use.
  • Include onboarding requirements for new employees working with AI.

11. Compliance, Audit & Enforcement

  • Define procedures for internal audits of AI compliance.
  • Outline disciplinary actions for non-compliance.

12. Review and Continuous Improvement

  • Require regular policy reviews to keep pace with changing technology and regulations.
  • Set a timeline for policy updates (e.g., annually or as needed).

Supplemental References

  • Include a list of guiding frameworks and resources that support the policy (e.g., NIST AI RMF, ISO/IEC 42001, COSO ERM).

AI Governance and Compliance Checklist

Use this checklist to guide your firm’s efforts in implementing sound AI governance and compliance practices.

Governance & Oversight

Have you designated an AI Governance Officer or committee to oversee AI tools and compliance?

Is there a formal AI governance policy that aligns with industry standards like NIST, COSO, and ISO?

Are AI use cases approved by firm leadership before deployment?

AI Inventory & Documentation

Have you created and maintained a complete inventory of all AI tools in use, including purpose, data sources, and vendor details?

Is there a process for documenting model logic, data inputs, outputs, and changes?

Are your AI systems mapped to specific regulatory requirements (e.g., PCAOB, SEC, AICPA)?

Explainability & Transparency

Can your AI tools provide clear, explainable outputs that meet audit evidence standards?

Are you documenting how AI tools reach their conclusions to support audit transparency?

Are engagement teams aware of the limitations and appropriate uses of AI tools?

Data Privacy & Security

Are data inputs for AI tools classified by sensitivity and protected by strong access controls?

Have you implemented encryption, role-based access (RBAC), and audit logs for AI data?

Are vendor data protection practices verified before use in client engagements?

Bias & Fairness Checks

Are you regularly testing AI tools for bias or unfair outcomes?

Are you adjusting models or their use if bias is detected?

Are you following emerging best practices for bias mitigation and fairness in AI?

Validation & Testing

Are AI tools tested on historical data before deployment?

Are you performing scenario analyses with edge cases to identify potential weaknesses?

Are you documenting all testing procedures and results for audit trails?

Change Management

Is there a formal process for reviewing and approving AI model changes?

Are you documenting all changes to AI models, including rationale and expected impact?

Are engagement teams informed of significant AI model updates that could affect results?

Regulatory Alignment

Have you mapped AI usage to relevant standards (e.g., PCAOB AS 1105, AICPA Code of Conduct)?

Are you maintaining documentation to demonstrate AI compliance to regulators?

Are you tracking regulatory developments to update your AI practices as needed?

Ongoing Training & Review

Are staff regularly trained on AI governance policies and best practices?

Are you conducting periodic reviews of AI use and compliance?

Are you updating your governance approach as AI technology and regulations evolve?

This checklist can be incorporated into your firm's quality control program, supporting the responsible use of AI tools and alignment with U.S. regulatory expectations.

Control Environment for AI Tools in 4 Steps

If you’re already using AI or software tools in your workflow, but aren’t sure if the right controls are in place, here are four steps to help establish a strong control environment.

1. Tool Inventory

Identify all of the AI tools used by the firm and collect the relevant information about each tool and how it is used in accounting processes.
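As one illustrative way to keep that inventory consistent, the sketch below uses a simple record structure with assumed fields; the attributes shown are examples, not a required schema:

    from dataclasses import dataclass, field

    @dataclass
    class AIToolRecord:
        name: str
        vendor: str
        purpose: str                 # e.g., journal entry testing, contract term extraction
        data_sources: list = field(default_factory=list)
        used_in: list = field(default_factory=list)   # engagements or accounting processes
        soc_report_on_file: bool = False
        certified_by_national_office: bool = False

    inventory = [
        AIToolRecord(name="ExampleJE-Analyzer", vendor="Example Vendor",
                     purpose="Journal entry testing",
                     data_sources=["General ledger extracts"],
                     used_in=["Public company audits"]),
    ]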

2. Engagement Team Memo & Training

Policies and procedures should be developed and put into place to guide how engagement teams use AI tools, ensuring they understand the associated risks.

3. National Office Policy

A firm's national office should implement procedures for certifying AI tools that will be used in engagements, including identifying the relevant risks and controls and an approach for testing those controls.

4. Initial Certification

Once the national office gains confidence in the AI tool through its testing, it should certify the tool, allowing engagement teams to rely on the national office's control testing for that tool.

Chapter Summary

As AI becomes more integrated into accounting, firms must navigate regulatory complexities to ensure compliance and reliability.  This chapter delves into key governance challenges, including regulatory uncertainty, explainability, and data privacy risks.  It outlines best practices for AI implementation, such as internal testing, developing governance policies, and implementing data quality controls.  Through a real-world case study and structured methodology for building a control environment for AI tools, readers will gain a comprehensive roadmap for establishing a compliant AI framework that fosters trust and strengthens audit integrity.
