Practical guidance for managing AI risks in accounting firms
Kevin Lassar, Founder & CEO, ForgePath
Dani Lensky, Chief Information Security Officer, ForgePath
Sandy Buchanan, Chief Security Officer, Mirae Security
accountingaiplaybook.com
2025
AI introduces meaningful security and confidentiality risks for accounting firms, particularly for those handling sensitive client data without a formal governance plan. Without clear policies, AI tools can expose firms to data leaks, compliance violations, or reputational damage.
This chapter outlines simple, effective ways to manage those risks. You'll learn how to assess AI tools, define acceptable use, and put lightweight controls in place, all tailored for firms without dedicated IT or security teams.
Employees are adopting AI tools faster than leadership can respond. A 2024 workplace survey found that over half of employees were already using AI at work, while fewer than half of their companies had any formal AI policy in place. Without guidance, well-meaning staff may input confidential client data into unsecured tools, apply AI to tasks it isn't suited for, or place too much trust in AI-generated output.
Because these actions often happen in browsers, emails, or side apps, leadership may not even be aware they're occurring. This creates serious risks: data leaks, inaccuracies, and noncompliance, all without a clear line of accountability. Without guardrails, every employee becomes their own AI policy maker.
Large firms often have cybersecurity teams and vendor vetting protocols. Most small to mid-sized firms do not, and without a risk framework, they may not know what questions to ask when evaluating AI tools. Who owns the data? Is it being used for training? Can it be deleted? Without a structured approach, many firms trust vendors by default, which is risky when AI platforms vary widely in how they handle data.
Even well-intentioned vendors may log user inputs or train on customer data unless you are using enterprise-grade versions. But smaller firms rarely verify those terms. Without a formal checklist or due diligence process, that default trust creates blind spots that expose firms to unnecessary security, legal, and ethical risk.
AI use does not present just one risk. It triggers multiple, overlapping concerns: data classification, compliance, access control, output reliability, and client confidentiality. Even a simple AI-assisted task, like drafting part of a deliverable, can create uncertainty across legal, technical, and ethical dimensions.
Most firms are not equipped to manage these risks holistically. One missed detail, such as a tool retaining client data or generating unreviewed outputs, can lead to confidentiality breaches or compliance issues. Without clear roles, layered controls, or structured workflows, firms are left guessing what is safe and hoping nothing slips through.
The first and most critical step in managing AI-related risk is documenting a firmwide AI usage policy. Even a short, clear policy can prevent staff from unintentionally misusing AI or exposing sensitive client information. A strong baseline policy should define:
You don't need legal jargon or technical detail. This document should be readable by all staff and updated regularly. The goal is to empower employees with confidence and clarity while protecting your firm from misuse.
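If it helps to keep those decisions visible, the sketch below shows one hypothetical way a small firm might record the key fields of its policy alongside the written document. The field names and example values are illustrative assumptions, not a prescribed standard.

```python
# Hypothetical, minimal record of a firmwide AI usage policy.
# Field names and example values are illustrative, not a prescribed standard.
AI_USAGE_POLICY = {
    "approved_tools": ["Firm-licensed enterprise AI assistant (work account only)"],
    "off_limits_data": [
        "Client names, SSNs/EINs, and account numbers",
        "Unredacted financial statements or tax returns",
    ],
    "review_rule": "All AI-assisted client deliverables require manager or partner review",
    "policy_owner": "Managing partner (or a designated AI policy owner)",
    "last_reviewed": "2025-01-01",  # update at each scheduled policy review
}

def print_policy_summary(policy: dict) -> None:
    """Print a plain-language summary that staff can read at a glance."""
    print("Approved tools:", "; ".join(policy["approved_tools"]))
    print("Never enter:", "; ".join(policy["off_limits_data"]))
    print("Review rule:", policy["review_rule"])

if __name__ == "__main__":
    print_policy_summary(AI_USAGE_POLICY)
```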
Governance should serve your firm's strategy, not the other way around. Start by identifying your most important goals with AI: increased efficiency, better documentation, faster turnaround, or stronger client service. Then, focus your AI guardrails and risk controls on supporting these specific outcomes.
For example, if your firm prioritizes operational efficiency, your governance model should make it easy to adopt low-risk tools that automate internal processes. If your firm's reputation is built on accuracy, you'll want stronger controls around review and approval of AI-generated work. Always lead with the business case and build your governance around it.
You don't need to roll out AI across your entire firm at once. In fact, it's safer and smarter to start small. Pick a low-risk use case where AI can provide value without touching sensitive client data. Common starting points include:
Pilot tools in these areas, track results, and gather feedback from your team. Use these insights to improve your policy, refine training, and build buy-in. Once you've found success, expand use to other departments and use cases with clear guidelines in place.
AI tools introduce multiple opportunities for data leakage, especially when integrated into day-to-day workflows. You can reduce this risk by applying a layered defense strategy:
Process-based controls:
Tool-based controls:
These small, accessible steps are effective even for firms without in-house IT teams. Vendor-provided dashboards, admin panels, or plug-and-play data masking tools can offer simple solutions to major risks.
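For firms comfortable with a small amount of scripting, a redaction step like the sketch below illustrates the kind of lightweight masking mentioned above: simple pattern matching that strips obvious identifiers before text is pasted into an AI tool. The patterns are illustrative assumptions and will not catch everything; client names, for example, would still need manual redaction or a dedicated masking tool.

```python
import re

# Illustrative patterns only; real masking tools cover far more identifier types.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EIN": re.compile(r"\b\d{2}-\d{7}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_client_identifiers(text: str) -> str:
    """Replace obvious identifiers with placeholders before sending text to an AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    draft = "Client John Doe (SSN 123-45-6789, EIN 12-3456789) emailed jdoe@example.com."
    # Note: the client's name is not caught; pattern matching alone is not enough.
    print(mask_client_identifiers(draft))
```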
Using AI doesn't require eliminating every risk, but you do need to know your limits. Defining your firm's risk tolerance allows you to make proactive decisions about where AI can be used and how much oversight is needed. Consider creating a simple Risk Tolerance Matrix:
This tool clarifies expectations, supports consistent decision-making, and gives leadership a simple way to approve or reject new AI uses.
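As one illustration, the matrix can be as simple as a lookup from data sensitivity and task type to a decision. The categories and decisions below are hypothetical examples, not a prescribed matrix; each firm should substitute its own.

```python
# Hypothetical risk tolerance matrix: (data sensitivity, task type) -> decision.
# Categories and decisions are illustrative; each firm defines its own.
RISK_TOLERANCE = {
    ("public", "drafting internal templates"): "allowed",
    ("internal", "summarizing firm procedures"): "allowed with review",
    ("client-confidential", "drafting client deliverables"): "approved tools only, partner review",
    ("client-confidential", "tax or audit conclusions"): "prohibited without explicit approval",
}

def check_use_case(sensitivity: str, task: str) -> str:
    """Look up a proposed AI use case; unknown combinations default to leadership review."""
    return RISK_TOLERANCE.get((sensitivity, task), "escalate to leadership for review")

if __name__ == "__main__":
    print(check_use_case("client-confidential", "drafting client deliverables"))
    print(check_use_case("internal", "something new"))  # falls back to escalation
```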
Every AI tool or new use case should be reviewed through the lens of risk. AI-specific risk assessments help identify vulnerabilities before a tool is implemented. This doesn't require an in-house security team, just a thoughtful checklist.
Firms can use a lightweight assessment template to review each new AI integration. Repeating this process across tools creates consistency and reduces reliance on vendor claims.
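A lightweight version can even be scripted so every tool is measured against the same questions. The sketch below reuses the vendor questions raised earlier (data ownership, training use, deletion) and adds a couple of illustrative items; the pass/flag logic is an assumption, not a required method.

```python
# A lightweight AI tool assessment checklist. The first three questions come from
# the vendor-vetting questions above; the remaining items are illustrative additions.
CHECKLIST = [
    "Does the firm retain ownership of data submitted to the tool?",
    "Does the vendor commit in writing not to train on customer data?",
    "Can submitted data be deleted on request?",
    "Is access restricted to firm-managed accounts?",
    "Is there a named internal owner responsible for this tool?",
]

def assess_tool(tool_name: str, answers: list[bool]) -> None:
    """Print any checklist items that were not satisfied for a proposed tool."""
    gaps = [question for question, ok in zip(CHECKLIST, answers) if not ok]
    if gaps:
        print(f"{tool_name}: {len(gaps)} open item(s) before approval:")
        for question in gaps:
            print(" -", question)
    else:
        print(f"{tool_name}: all checklist items satisfied.")

if __name__ == "__main__":
    assess_tool("Example AI drafting tool", [True, False, True, True, False])
```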
Your people are both your front line and your biggest variable. Without proper training, even well-meaning staff may use AI incorrectly. Provide employees with guidance on which tools are approved, what data is off-limits, and how to verify AI-generated content.
The more confident employees feel about what's allowed, the less likely they are to take risky shortcuts or fall back on unauthorized tools.
AI governance isn't static. Once you roll out AI tools, create a simple monitoring structure that keeps usage safe, aligned, and productive. If your platform provides audit trails or dashboards, check them monthly. If it doesn't, hold informal check-ins with departments to learn what's working and what's not.
Maintain a live inventory of which AI tools are in use, who has access, and what tasks they're being used for. Periodically review that list to identify gaps, risks, or underutilized licenses. Monitoring doesn't have to be a full-time job, but without it, even the best policies eventually break down.
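The inventory itself doesn't need special software; a spreadsheet works, and so does a short script like this sketch, in which the fields and the 90-day review window are illustrative assumptions rather than recommended settings.

```python
from datetime import date

# Illustrative AI tool inventory: tool, owner, tasks, and last review date.
INVENTORY = [
    {"tool": "AI drafting assistant", "owner": "Tax team lead",
     "tasks": "Client letter first drafts", "last_review": date(2025, 1, 15)},
    {"tool": "Meeting summarizer", "owner": "Operations manager",
     "tasks": "Internal meeting notes", "last_review": date(2024, 9, 1)},
]

REVIEW_INTERVAL_DAYS = 90  # assumed quarterly review cadence

def overdue_reviews(inventory: list[dict], today: date) -> list[dict]:
    """Return inventory entries whose last review is older than the review interval."""
    return [item for item in inventory
            if (today - item["last_review"]).days > REVIEW_INTERVAL_DAYS]

if __name__ == "__main__":
    for item in overdue_reviews(INVENTORY, date(2025, 6, 1)):
        print(f"Review overdue: {item['tool']} (owner: {item['owner']})")
```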