Risk Management, Security, and Confidentiality in AI Systems

Practical guidance for managing AI risks in accounting firms

CONTRIBUTORS

Kevin Lassar, Founder & CEO, ForgePath

Dani Lensky, Chief Information Security Officer, ForgePath

Sandy Buchanan, Chief Security Officer, Mirae Security

THE ACCOUNTING AI PLAYBOOK

accountingaiplaybook.com

FIRST EDITION

2025

Juggling the Security Risks of AI

AI introduces meaningful security and confidentiality risks for accounting firms, particularly for those handling sensitive client data without a formal governance plan. Without clear policies, AI tools can expose firms to data leaks, compliance violations, or reputational damage.

This chapter outlines simple, effective ways to manage those risks. You'll learn how to assess AI tools, define acceptable use, and put lightweight controls in place, all tailored for firms without dedicated IT or security teams.

Key Barriers to Managing AI Risk & Security

1

Employees Are Using AI Before Your Policies Are Ready

Employees are adopting AI tools faster than leadership can respond. A 2024 workplace survey found that over half of employees were already using AI at work, while less than half of their companies had any formal AI policy in place. Without guidance, well-meaning staff may input confidential client data into unsecured tools, use AI for the wrong types of tasks, or overtrust AI-generated outputs.

Because these actions often happen in browsers, emails, or side apps, leadership may not even be aware they're occurring. This creates serious risks: data leaks, inaccuracies, and noncompliance, all without a clear line of accountability. Without guardrails, every employee becomes their own AI policy maker.

2

Small Firms Lack Mature Risk Frameworks and Vendor Trust

Large firms often have cybersecurity teams and vendor vetting protocols. Most small to mid-sized firms do not, and without a risk framework, they may not know what questions to ask when evaluating AI tools. Who owns the data? Is it being used for training? Can it be deleted? Without a structured approach, many firms trust vendors by default, which is risky when AI platforms vary widely in how they handle data.

Even well-intentioned vendors may log user inputs or train on customer data unless you are using enterprise-grade versions. But smaller firms rarely verify those terms. Without a formal checklist or due diligence process, trust gaps can lead to blind spots that expose firms to unnecessary security, legal, and ethical risk.

3

Overlapping Concerns Create Complexity

AI use does not present just one risk. It triggers multiple, overlapping concerns: data classification, compliance, access control, output reliability, and client confidentiality. Even a simple AI-assisted task, like drafting part of a deliverable, can create uncertainty across legal, technical, and ethical dimensions.

Most firms are not equipped to manage these risks holistically. One missed detail, such as a tool retaining client data or generating unreviewed outputs, can lead to confidentiality breaches or compliance issues. Without clear roles, layered controls, or structured workflows, firms are left guessing what is safe and hoping nothing slips through.

Best Practices for Risk Management, Security, and Confidentiality

1

Start with a Baseline AI Policy

The first and most critical step to managing AI-related risk is documenting a firmwide AI usage policy. Even a short, clear policy can prevent staff from unintentionally misusing AI or exposing sensitive client information. A strong baseline policy should define:

  • Which AI tools are approved for use
  • What types of tasks are acceptable (e.g., summarizing internal content vs. drafting client-facing documents)
  • What types of data are strictly off-limits (e.g., financial statements, tax IDs, PII)
  • Where human review is required

You don't need legal jargon or technical detail. This document should be readable by all staff and updated regularly. The goal is to empower employees with confidence and clarity while protecting your firm from misuse.
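
The written policy should stay plain-language, but if you later want to make parts of it enforceable in software, the rules translate naturally into a simple check. The sketch below is purely illustrative: the tool names, task labels, and data patterns are hypothetical placeholders, not recommendations.

```python
import re

# Hypothetical, machine-readable slice of a baseline AI policy.
# Tool names, task labels, and patterns are illustrative placeholders.
APPROVED_TOOLS = {"ChatGPT Enterprise", "Microsoft 365 Copilot"}
ACCEPTABLE_TASKS = {"summarize_internal_content", "draft_client_document"}
TASKS_REQUIRING_REVIEW = {"draft_client_document"}

# Data that is strictly off-limits, expressed as simple patterns.
OFF_LIMITS_PATTERNS = {
    "SSN":    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "tax ID": re.compile(r"\b\d{2}-\d{7}\b"),  # EIN-style numbers
}

def check_request(tool: str, task: str, text: str) -> list[str]:
    """Return any policy problems with a proposed AI request."""
    problems = []
    if tool not in APPROVED_TOOLS:
        problems.append(f"'{tool}' is not an approved AI tool")
    if task not in ACCEPTABLE_TASKS:
        problems.append(f"task '{task}' is not covered by the policy")
    for label, pattern in OFF_LIMITS_PATTERNS.items():
        if pattern.search(text):
            problems.append(f"input appears to contain off-limits data ({label})")
    if task in TASKS_REQUIRING_REVIEW:
        problems.append("human review is required before this output is used")
    return problems

print(check_request("ChatGPT Enterprise", "summarize_internal_content",
                    "Summarize the new revenue recognition guidance."))  # []
```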

2

Align Governance with Business Objectives

Governance should serve your firm's strategy, not the other way around. Start by identifying your most important goals with AI: increased efficiency, better documentation, faster turnaround, or stronger client service. Then, focus your AI guardrails and risk controls on supporting these specific outcomes.

For example, if your firm prioritizes operational efficiency, your governance model should make it easy to adopt low-risk tools that automate internal processes. If your firm's reputation is built on accuracy, you'll want stronger controls around review and approval of AI-generated work. Always lead with the business case and build your governance around it.

3

Start Small, Then Expand

You don't need to roll out AI across your entire firm at once. In fact, it's safer and smarter to start small. Pick a low-risk use case where AI can provide value without touching sensitive client data. Common starting points include:

  • Drafting internal communications
  • Summarizing technical content or guidance
  • Formatting templates or engagement letters

Pilot tools in these areas, track results, and gather feedback from your team. Use these insights to improve your policy, refine training, and build buy-in. Once you've found success, expand use to other departments and use cases with clear guidelines in place.

4

Use Layered Safeguards to Protect Sensitive Data

AI tools introduce multiple opportunities for data leakage, especially when integrated into day-to-day workflows. You can reduce this risk by applying a layered defense strategy:

Process-based controls:

  • Require staff to remove client names, tax IDs, and other identifying details before entering text into AI tools
  • Require human review of AI-generated content before it reaches a client
  • Limit AI use to the task types approved in your baseline policy

Tool-based controls:

  • Use enterprise versions of AI platforms with data encryption, logging, and admin controls
  • Deploy browser extensions or middleware that anonymize sensitive input before it reaches the model
  • Limit access to approved AI tools via role-based permissions

These small, accessible steps are effective even for firms without in-house IT teams. Vendor-provided dashboards, admin panels, or plug-and-play data masking tools can offer simple solutions to major risks.
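
As one concrete example of a tool-based control, input anonymization can start as a few masking rules applied before text ever leaves your firm. This is a minimal sketch, assuming regex patterns and a client list you maintain yourself; a vetted data-masking product would be far more robust, and every pattern and name below is illustrative.

```python
import re

# Illustrative masking rules; a real deployment would rely on a vetted
# data-masking product or maintained library, not ad hoc regexes.
MASKING_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),   # Social Security numbers
    (re.compile(r"\b\d{2}-\d{7}\b"), "[EIN]"),         # Employer ID numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]

# Hypothetical client names to redact before text reaches an AI tool.
CLIENT_NAMES = ["Acme Manufacturing", "Rivera & Sons"]

def anonymize(text: str) -> str:
    """Mask sensitive identifiers before text is sent to an AI tool."""
    for pattern, placeholder in MASKING_RULES:
        text = pattern.sub(placeholder, text)
    for name in CLIENT_NAMES:
        text = text.replace(name, "[CLIENT]")
    return text

print(anonymize("Acme Manufacturing (EIN 12-3456789) owes Q2 estimates."))
# -> "[CLIENT] (EIN [EIN]) owes Q2 estimates."
```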

5

Define Risk Tolerance Before Problems Arise

Using AI safely doesn't mean eliminating every risk, but you do need to know your limits. Defining your firm's risk tolerance allows you to make proactive decisions about where AI can be used and how much oversight is needed. Consider creating a simple Risk Tolerance Matrix:

How to build a Risk Tolerance Matrix:

  • List categories of risk: confidentiality, output accuracy, regulatory compliance, and reputational impact
  • Score each category as High, Medium, or Low tolerance based on your firm's appetite
  • Use the matrix to evaluate each proposed AI use and decide how much oversight it needs

This tool clarifies expectations, supports consistent decision-making, and gives leadership a simple way to approve or reject new AI uses.
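
In practice, the matrix can be as small as a lookup table. Here is a minimal sketch, assuming the four risk categories above and tolerance levels that your leadership team would set; the values shown are placeholders, not recommendations.

```python
# Hypothetical Risk Tolerance Matrix. Each level reflects the firm's
# appetite in that category and should be set by leadership.
RISK_TOLERANCE = {
    "confidentiality":       "Low",     # little appetite for data exposure
    "output_accuracy":       "Medium",
    "regulatory_compliance": "Low",
    "reputational_impact":   "Medium",
}

RANK = {"Low": 0, "Medium": 1, "High": 2}

def approve(use_case_risks: dict[str, str]) -> bool:
    """Approve a proposed AI use only if its risk in every category
    stays at or below the firm's stated tolerance."""
    return all(
        RANK[risk] <= RANK[RISK_TOLERANCE[category]]
        for category, risk in use_case_risks.items()
    )

# An internal drafting task that touches no client data:
print(approve({"confidentiality": "Low", "output_accuracy": "Medium"}))   # True
# The same task pointed at client financials:
print(approve({"confidentiality": "High", "output_accuracy": "Medium"}))  # False
```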

6

Conduct AI-Specific Risk Assessments

Every AI tool or new use case should be reviewed through the lens of risk. AI-specific risk assessments help identify vulnerabilities before a tool is implemented. This doesn't require an in-house security team, just a thoughtful checklist.

AI Risk Assessment Questions:

  • What data will the tool access?
  • Does it store or train on user input?
  • Are client names, numbers, or financials involved?
  • Can we verify the tool complies with SOC 2, GDPR, or other frameworks?
  • What happens if the tool produces incorrect output?

Firms can use a lightweight assessment template to review each new AI integration. Repeating this process across tools creates consistency and reduces reliance on vendor claims.
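
A lightweight template for this can be very simple. The sketch below is one hypothetical shape for it: the questions mirror the checklist above, while the tool name, reviewer, and example answer are illustrative.

```python
from dataclasses import dataclass, field

# The assessment questions from the checklist above.
QUESTIONS = [
    "What data will the tool access?",
    "Does it store or train on user input?",
    "Are client names, numbers, or financials involved?",
    "Can we verify the tool complies with SOC 2, GDPR, or other frameworks?",
    "What happens if the tool produces incorrect output?",
]

@dataclass
class RiskAssessment:
    tool_name: str
    reviewer: str
    answers: dict[str, str] = field(default_factory=dict)

    def unanswered(self) -> list[str]:
        """Questions still open; the review isn't done until this is empty."""
        return [q for q in QUESTIONS if q not in self.answers]

# Hypothetical example review.
review = RiskAssessment(tool_name="ExampleSummarizer", reviewer="A. Partner")
review.answers[QUESTIONS[0]] = "Internal guidance documents only"
print(review.unanswered())  # four questions remain open
```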

7

Train and Involve Employees

Your people are both your front line and your biggest variable. Without proper training, even well-meaning staff may use AI incorrectly. Provide employees with guidance on which tools are approved, what data is off-limits, and how to verify AI-generated content.

Tips for training and enablement:

  • Include AI policies in new hire onboarding
  • Host short training sessions using real-life examples
  • Create a shared "Do's and Don'ts" list
  • Involve staff in policy feedback and improvement

The more confident employees feel about what's allowed, the less likely they are to take risky shortcuts or fall back on unauthorized tools.

8

Monitor Use and Adjust Over Time

AI governance isn't static. Once you roll out AI tools, create a simple monitoring structure that keeps usage safe, aligned, and productive. If your platform provides audit trails or dashboards, check them monthly. If it doesn't, hold informal check-ins with departments to learn what's working and what's not.

Maintain a live inventory of which AI tools are in use, who has access, and what tasks they're being used for. Periodically review that list to identify gaps, risks, or underutilized licenses. Monitoring doesn't have to be a full-time job, but without it, even the best policies eventually break down.
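
The inventory itself needs nothing fancier than a shared spreadsheet or a short script. Here is a minimal sketch that flags tools that are unused or overdue for review; the entries and the 90-day review cycle are illustrative assumptions, not prescriptions.

```python
from datetime import date

# Hypothetical AI tool inventory; in practice this could live in a
# shared spreadsheet reviewed on a fixed schedule.
INVENTORY = [
    {"tool": "ChatGPT Enterprise", "owner": "Tax", "users": 12,
     "tasks": "summarizing guidance", "last_reviewed": date(2025, 1, 15)},
    {"tool": "ExampleDraftBot", "owner": "Audit", "users": 0,
     "tasks": "engagement letters", "last_reviewed": date(2024, 9, 1)},
]

def needs_attention(entry: dict, today: date, max_age_days: int = 90) -> bool:
    """Flag tools that are unused or overdue for a periodic review."""
    overdue = (today - entry["last_reviewed"]).days > max_age_days
    unused = entry["users"] == 0
    return overdue or unused

for entry in INVENTORY:
    if needs_attention(entry, today=date(2025, 3, 1)):
        print(f"Review {entry['tool']} (owner: {entry['owner']})")
```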

© 2025 Accounting AI Playbook. All rights reserved.