Practical guidance for managing AI risks in accounting firms
Kevin Lassar, Founder & CEO, ForgePath
Dani Lensky, Chief Information Security Officer, ForgePath
Sandy Buchanan, Chief Security Officer, Mirai Security
accountingaiplaybook.com
2025
AI introduces meaningful security and confidentiality risks for accounting firms, particularly for those handling sensitive client data without a formal governance plan. Without clear policies, AI tools can expose firms to data leaks, compliance violations, or reputational damage.
This chapter outlines simple, effective ways to manage those risks. You'll learn how to assess AI tools, define acceptable use, and put lightweight controls in place, all tailored for firms without dedicated IT or security teams.
Employees are adopting AI tools faster than leadership can respond. A 2024 workplace survey found that over half of employees were already using AI at work, while less than half of their companies had any formal AI policy in place. Without guidance, well-meaning staff may input confidential client data into unsecured tools, use AI for the wrong types of tasks, or overtrust AI-generated outputs.
Because these actions often happen in browsers, emails, or side apps, leadership may not even be aware they're occurring. This creates serious risks: data leaks, inaccuracies, and noncompliance, all without a clear line of accountability. Without guardrails, every employee becomes their own AI policy maker.
Large firms often have cybersecurity teams and vendor vetting protocols. Most small to mid-sized firms do not, and without a risk framework, they may not know what questions to ask when evaluating AI tools. Who owns the data? Is it being used for training? Can it be deleted? Without a structured approach, many firms trust vendors by default, which is risky when AI platforms vary widely in how they handle data.
Even well-intentioned vendors may log user inputs or train on customer data unless you are using enterprise-grade versions. But smaller firms rarely verify those terms. Without a formal checklist or due diligence process, trust gaps can lead to blind spots that expose firms to unnecessary security, legal, and ethical risk.
AI use does not present just one risk. It triggers multiple, overlapping concerns: data classification, compliance, access control, output reliability, and client confidentiality. Even a simple AI-assisted task, like drafting part of a deliverable, can create uncertainty across legal, technical, and ethical dimensions.
Most firms are not equipped to manage these risks holistically. One missed detail, such as a tool retaining client data or generating unreviewed outputs, can lead to confidentiality breaches or compliance issues. Without clear roles, layered controls, or structured workflows, firms are left guessing what is safe and hoping nothing slips through.
The first and most critical step to managing AI-related risk is documenting a firmwide AI usage policy. Even a short, clear policy can prevent staff from unintentionally misusing AI or exposing sensitive client information. A strong baseline policy should define which tools are approved, what data is off-limits, and how AI-generated outputs must be reviewed before they are used.
You don't need legal jargon or technical detail. This document should be readable by all staff and updated regularly. The goal is to empower employees with confidence and clarity while protecting your firm from misuse.
Governance should serve your firm's strategy, not the other way around. Start by identifying your most important goals with AI: increased efficiency, better documentation, faster turnaround, or stronger client service. Then, focus your AI guardrails and risk controls on supporting these specific outcomes.
For example, if your firm prioritizes operational efficiency, your governance model should make it easy to adopt low-risk tools that automate internal processes. If your firm's reputation is built on accuracy, you'll want stronger controls around review and approval of AI-generated work. Always lead with the business case and build your governance around it.
YHB is a mid-sized accounting and advisory firm serving clients across the Mid-Atlantic region. As the firm began to explore AI use cases across departments, leadership recognized the need to create guardrails that would allow for innovation while minimizing risk. To help navigate this challenge, YHB partnered with ForgePath, a cybersecurity and risk advisory firm.
ForgePath worked closely with YHB's leadership to assess risk factors, define acceptable use cases, and ensure alignment with legal, ethical, and data protection standards.
The resulting document outlines practical firmwide guidance across several categories, from acceptable use cases to legal, ethical, and data protection standards.
This framework now serves as YHB's foundational policy for responsible AI adoption and is accessible to all employees. You can view a redacted version of YHB's AI Guardrails document here.
You don't need to roll out AI across your entire firm at once. In fact, it's safer and smarter to start small. Pick a low-risk use case where AI can provide value without touching sensitive client data, such as internal research, first drafts of internal communications, or marketing content.
Pilot tools in these areas, track results, and gather feedback from your team. Use these insights to improve your policy, refine training, and build buy-in. Once you’ve found success, expand use to other departments and use cases with clear guidelines in place.
AI tools introduce multiple opportunities for data leakage, especially when integrated into day-to-day workflows. You can reduce this risk by applying a layered defense strategy: limit use to approved tools, anonymize or redact data before it is entered, restrict who can access sensitive inputs, and require human review before outputs leave the firm.
These small, accessible steps are effective even for firms without in-house IT teams. Vendor-provided dashboards, admin panels, or plug-and-play data masking tools can offer simple solutions to major risks.
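As a concrete illustration of the data-masking layer, here is a minimal Python sketch of a pre-submission redaction step. The regex patterns and the redact helper are assumptions for illustration, not a feature of any particular vendor's product; a real deployment would extend the pattern list to the firm's own client identifiers.

```python
import re

# Illustrative patterns only; extend with client IDs, account numbers,
# engagement codes, and other identifiers specific to your firm.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EIN": re.compile(r"\b\d{2}-\d{7}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive identifiers with labeled placeholders before the
    text is submitted to any external AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize: client filed under EIN 12-3456789, contact pat@example.com."
    print(redact(prompt))
    # Summarize: client filed under EIN [EIN REDACTED], contact [EMAIL REDACTED].
```

A simple step like this will not catch everything (client names, for instance, require a firm-maintained list), which is why it works best as one layer alongside tool approval and human review.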
Adopting AI doesn't require eliminating every risk, but you do need to know your limits. Defining your firm's risk tolerance allows you to make proactive decisions about where AI can be used and how much oversight is needed. Consider creating a simple Risk Tolerance Matrix.
How to build a Risk Tolerance Matrix:
This tool clarifies expectations, supports consistent decision-making, and gives leadership a simple way to approve or reject new AI uses. Use it to define what level of AI-related risk is acceptable at your firm, align leadership, and prevent gray areas.
Use these categories to evaluate all AI use cases. These five areas tend to carry the greatest potential for compliance, security, and reputational harm.
- Confidentiality: Will the AI process client information, financial records, or personal identifiers?
- Accuracy: Could inaccurate or misleading outputs cause errors in work product?
- Compliance: Could the tool violate professional standards, contracts, or data privacy laws?
- Reputational: Could the output reflect poorly on the firm or be seen as unprofessional?
- Operational: Will your team over-rely on AI, skip review steps, or lose visibility into key work?
Define what’s considered low, moderate, or high risk in each category. This keeps the framework clear and usable across different departments.
Once you’ve defined your risk levels, clarify what is allowed and what is not. Keep this in a shared policy document or table.
For example, entries might read:
- Acceptable: for internal use; a human must review every output
- Prohibited: compliance and accuracy risk is too high
- Acceptable: data must be anonymized before input
- Prohibited: confidentiality and vendor risk is too high
- Conditional: must be reviewed and follow firm branding policy
This framework should be formally reviewed and approved by a designated group. In a small or mid-sized firm, that group may be as simple as firm leadership plus any outside IT, security, or risk advisors you already rely on.
Use this step as a calibration tool to assess new use cases before rollout. You can copy and adapt this table to build your own firm-specific registry of pre-approved or prohibited use cases.
Risk category | Risk level | Decision
Reputational | Low | Acceptable with brand review
Confidentiality | Moderate | Allowed internally with anonymized input
Accuracy & Compliance | High | Not permitted
Operational & Accuracy | High | Only with review and disclaimers
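To make this kind of registry easy to consult, a firm could also keep it in a small machine-readable form. The sketch below is a minimal Python example; the use-case names and the lookup helper are placeholders to adapt, while the risk categories, levels, and decisions mirror the example rows above.

```python
from dataclasses import dataclass

@dataclass
class RegistryEntry:
    risk_category: str  # e.g., "Confidentiality" or "Accuracy & Compliance"
    risk_level: str     # "Low", "Moderate", or "High"
    decision: str       # the firm's ruling and any conditions

# Example rows mirroring the table above; the use-case names are placeholders.
REGISTRY = {
    "draft marketing copy": RegistryEntry("Reputational", "Low", "Acceptable with brand review"),
    "summarize internal notes": RegistryEntry("Confidentiality", "Moderate", "Allowed internally with anonymized input"),
    "prepare client tax filings": RegistryEntry("Accuracy & Compliance", "High", "Not permitted"),
    "draft client advisory memos": RegistryEntry("Operational & Accuracy", "High", "Only with review and disclaimers"),
}

def check_use_case(name: str) -> str:
    """Return the firm's decision for a proposed use case, or flag it for review."""
    entry = REGISTRY.get(name.lower())
    if entry is None:
        return "Not yet assessed: route to the approval group before use."
    return f"{entry.decision} (risk: {entry.risk_category}, level: {entry.risk_level})"

print(check_use_case("Prepare client tax filings"))
# Not permitted (risk: Accuracy & Compliance, level: High)
```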
Every AI tool or new use case should be reviewed through the lens of risk. AI-specific risk assessments help identify vulnerabilities before a tool is implemented. This doesn’t require an in-house security team, just a thoughtful checklist.
Firms can use a lightweight assessment template to review each new AI integration. Repeating this process across tools creates consistency and reduces reliance on vendor claims.
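One way to keep those reviews consistent is to record each assessment in the same structure. The sketch below shows a minimal assessment record in Python; every field name and example value is an assumption to adapt to your own checklist, not a prescribed standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolAssessment:
    """One completed assessment per AI tool or new use case.
    Field names are illustrative; align them with your own checklist."""
    tool_name: str
    vendor: str
    intended_use: str
    handles_client_data: bool
    trains_on_inputs: bool       # does the vendor train models on your data?
    retention_policy: str        # e.g., "deleted on request", "retained 30 days"
    human_review_required: bool
    decision: str                # "approved", "conditional", or "rejected"
    reviewed_by: str
    review_date: date = field(default_factory=date.today)

# Hypothetical example: an assessment filed before piloting a drafting assistant.
assessment = AIToolAssessment(
    tool_name="Example drafting assistant",
    vendor="Example Vendor, Inc.",
    intended_use="First drafts of internal memos",
    handles_client_data=False,
    trains_on_inputs=False,
    retention_policy="Prompts deleted after 30 days per vendor terms",
    human_review_required=True,
    decision="conditional",
    reviewed_by="Operations lead",
)
print(assessment)
```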
Your people are both your front line and your biggest variable. Without proper training, even well-meaning staff may use AI incorrectly. Provide employees with guidance on which tools are approved, what data is off-limits, and how to verify AI-generated content.
The more confident employees feel about what’s allowed, the less likely they are to take risky shortcuts or fall back on unauthorized tools.
AI governance isn’t static. Once you roll out AI tools, create a simple monitoring structure that keeps usage safe, aligned, and productive. If your platform provides audit trails or dashboards, check them monthly. If it doesn’t, hold informal check-ins with departments to learn what’s working and what’s not.
Maintain a live inventory of which AI tools are in use, who has access, and what tasks they're being used for. Periodically review that list to identify gaps, risks, or underutilized licenses. Monitoring doesn't have to be a full-time job, but without it, even the best policies eventually break down.
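If that inventory lives in a simple spreadsheet, a short script can flag entries that are overdue for review. The sketch below is one hedged example; the CSV column names and the 90-day review window are assumptions, not requirements.

```python
import csv
from datetime import date, timedelta

REVIEW_WINDOW = timedelta(days=90)  # assumed quarterly cadence; adjust to your policy

def overdue_tools(inventory_path: str) -> list[str]:
    """List tools whose last review is older than the review window.

    Assumes a CSV with columns: tool, owner, approved_uses, last_reviewed (YYYY-MM-DD).
    """
    overdue = []
    with open(inventory_path, newline="") as f:
        for row in csv.DictReader(f):
            last_reviewed = date.fromisoformat(row["last_reviewed"])
            if date.today() - last_reviewed > REVIEW_WINDOW:
                overdue.append(f'{row["tool"]} (owner: {row["owner"]})')
    return overdue

if __name__ == "__main__":
    for item in overdue_tools("ai_tool_inventory.csv"):
        print("Review overdue:", item)
```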

Use this comprehensive checklist before approving any AI tool or workflow. Ideal for non-technical firms looking to identify blind spots across data, compliance, and vendor risks.

Business purpose
- Is the AI tool solving a specific, meaningful business problem?
- Will the tool be used for internal operations, client services, or marketing?
- Is this a high-impact task (e.g., client deliverables) or low-impact (e.g., research)?

Data sensitivity
- Will client data, financials, or PII be entered into the tool?
- Can the data be anonymized or redacted before input?
- Does this involve any regulated data (e.g., tax filings, audit records)?

Data handling and retention
- Does the tool store or log prompts and responses?
- Does it train its model using your data?
- Can data be deleted from the system upon request?
- Is it clear how long data is retained and by whom?

Vendor credibility
- Is this vendor known in the accounting or legal space?
- Do they disclose how they handle and protect user data?
- Is there a documented security program (e.g., SOC 2 Type II, ISO 27001)?
- Do they comply with privacy laws (e.g., GDPR, CCPA)?

Access and permissions
- Who at the firm will use the tool, and for what?
- Are admin roles defined, with controls over who can input or access sensitive data?
- Is there multi-user or enterprise-level access available?

Output quality and review
- Will the output affect clients, reports, or financial guidance?
- Will a human review every AI-assisted deliverable before it is sent externally?
- Could inaccurate or biased outputs cause reputational or compliance issues?

Monitoring and oversight
- Is there a usage log or audit trail available?
- Can usage be tracked by department or individual?
- Are there regular reviews planned to assess how the tool is being used?

Exit planning
- If the vendor shuts down or changes terms, is there a backup tool or workflow?
- Can data and settings be exported?
- Is there a documented procedure for ending use and revoking access?
This chapter provides accounting firms with practical guidance to manage the security, compliance, and confidentiality risks that come with AI adoption. It outlines common challenges, from premature employee use to vendor trust gaps, and offers best practices tailored to firms without dedicated IT teams. Readers will learn how to create baseline AI policies, define firmwide risk tolerance, apply layered safeguards, and assess tools using simple checklists. A real-world case study from YHB highlights how firms can implement AI guardrails in collaboration with risk advisors, with ready-to-use templates included to help firms take immediate action.