Risk Management, Security, and Confidentiality in AI Systems

Practical guidance for managing AI risks in accounting firms

CONTRIBUTORS

Kevin Lassar, Founder & CEO, ForgePath

Dani Lensky, Chief Information Security Officer, ForgePath

Sandy Buchanan, Chief Security Officer, Mirai Security

THE ACCOUNTING AI PLAYBOOK

accountingaiplaybook.com

FIRST EDITION

2025

Juggling the Security Risks of AI

AI introduces meaningful security and confidentiality risks for accounting firms, particularly for those handling sensitive client data without a formal governance plan. Without clear policies, AI tools can expose firms to data leaks, compliance violations, or reputational damage.

This chapter outlines simple, effective ways to manage those risks. You'll learn how to assess AI tools, define acceptable use, and put lightweight controls in place, all tailored for firms without dedicated IT or security teams.

Key Barriers to Managing AI Risk & Security

1

Employees Are Using AI Before Your Policies Are Ready

Employees are adopting AI tools faster than leadership can respond. A 2024 workplace survey found that over half of employees were already using AI at work, while less than half of their companies had any formal AI policy in place. Without guidance, well-meaning staff may input confidential client data into unsecured tools, use AI for the wrong types of tasks, or overtrust AI-generated outputs.

Because these actions often happen in browsers, emails, or side apps, leadership may not even be aware they're occurring. This creates serious risks: data leaks, inaccuracies, and noncompliance, all without a clear line of accountability. Without guardrails, every employee becomes their own AI policy maker.

2

Small Firms Lack Mature Risk Frameworks and Vendor Trust

Large firms often have cybersecurity teams and vendor vetting protocols. Most small to mid-sized firms do not, and without a risk framework, they may not know what questions to ask when evaluating AI tools. Who owns the data? Is it being used for training? Can it be deleted? Without a structured approach, many firms trust vendors by default, which is risky when AI platforms vary widely in how they handle data.

Even well-intentioned vendors may log user inputs or train on customer data unless you are using enterprise-grade versions. But smaller firms rarely verify those terms. Without a formal checklist or due diligence process, trust gaps can lead to blind spots that expose firms to unnecessary security, legal, and ethical risk.

3

Overlapping Concerns Create Complexity

AI use does not present just one risk. It triggers multiple, overlapping concerns: data classification, compliance, access control, output reliability, and client confidentiality. Even a simple AI-assisted task, like drafting part of a deliverable, can create uncertainty across legal, technical, and ethical dimensions.

Most firms are not equipped to manage these risks holistically. One missed detail, such as a tool retaining client data or generating unreviewed outputs, can lead to confidentiality breaches or compliance issues. Without clear roles, layered controls, or structured workflows, firms are left guessing what is safe and hoping nothing slips through.

Best Practices for Risk Management, Security, and Confidentiality

1

Start with a Baseline AI Policy

The first and most critical step to managing AI-related risk is documenting a firmwide AI usage policy. Even a short, clear policy can prevent staff from unintentionally misusing AI or exposing sensitive client information. A strong baseline policy should define:

  • Which AI tools are approved for use
  • What types of tasks are acceptable (e.g., summarizing internal content vs. drafting client-facing documents)
  • What types of data are strictly off-limits (e.g., financial statements, tax IDs, PII)
  • Where human review is required

You don't need legal jargon or technical detail. This document should be readable by all staff and updated regularly. The goal is to empower employees with confidence and clarity while protecting your firm from misuse.

2

Align Governance with Business Objectives

Governance should serve your firm's strategy, not the other way around. Start by identifying your most important goals with AI: increased efficiency, better documentation, faster turnaround, or stronger client service. Then, focus your AI guardrails and risk controls on supporting these specific outcomes.

For example, if your firm prioritizes operational efficiency, your governance model should make it easy to adopt low-risk tools that automate internal processes. If your firm's reputation is built on accuracy, you'll want stronger controls around review and approval of AI-generated work. Always lead with the business case and build your governance around it.

Mini Case Study: YHB's AI Guardrails Framework

The Situation

YHB is a mid-sized accounting and advisory firm serving clients across the Mid-Atlantic region. As the firm began to explore AI use cases across departments, leadership recognized the need to create guardrails that would allow for innovation while minimizing risk. To help navigate this challenge, YHB partnered with ForgePath, a cybersecurity and risk advisory firm.

The Result

ForgePath worked closely with YHB's leadership to assess risk factors, define acceptable use cases, and ensure alignment with legal, ethical, and data protection standards.

The resulting document outlines practical firmwide guidance across several categories, including:

  • Acceptable and unacceptable uses of AI
  • Data protection protocols for different types of tools (e.g., assistants vs. meeting recorders)
  • A formal evaluation and approval process for new AI tools
  • Human validation standards for AI-generated outputs
  • Data sanitization procedures and redaction techniques

This framework now serves as YHB's foundational policy for responsible AI adoption and is accessible to all employees. You can view a redacted version of YHB's AI Guardrails document here.

The Lesson

Start Small, Then Expand

You don’t need to roll out AI across your entire firm at once. In fact, it's safer and smarter to start small. Pick a low-risk use case where AI can provide value without touching sensitive client data. Common starting points include:

  • Drafting internal communications
  • Summarizing technical content or guidance
  • Formatting templates or engagement letters

Pilot tools in these areas, track results, and gather feedback from your team. Use these insights to improve your policy, refine training, and build buy-in. Once you’ve found success, expand use to other departments and use cases with clear guidelines in place.

Use Layered Safeguards to Protect Sensitive Data

AI tools introduce multiple opportunities for data leakage, especially when integrated into day-to-day workflows. You can reduce this risk by applying a layered defense strategy:

Process-based controls:

●    Require human review of all AI-generated content

●    Prohibit client data in public-facing AI tools (e.g., free versions of ChatGPT)

●    Establish prompt-writing guidelines to avoid oversharing information

Tool-based controls:

●    Use enterprise versions of AI platforms with data encryption, logging, and admin controls

●    Deploy browser extensions or middleware that anonymize sensitive input before it reaches the model

●    Limit access to approved AI tools via role-based permissions

These small, accessible steps are effective even for firms without in-house IT teams. Vendor-provided dashboards, admin panels, or plug-and-play data masking tools can offer simple solutions to major risks.
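
To make the anonymization idea above concrete, here is a minimal Python sketch of the kind of redaction pass a middleware layer or pre-submission script might run on a prompt before it reaches any AI tool. The patterns and placeholder labels are illustrative assumptions, not a complete solution; commercial data-masking and DLP tools cover far more cases.

```python
import re

# Hypothetical, minimal redaction pass run on a prompt before it is sent to an
# AI tool. Patterns are illustrative only and will not catch every identifier.
REDACTION_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # U.S. Social Security numbers
    "ein": re.compile(r"\b\d{2}-\d{7}\b"),                   # Employer Identification Numbers
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),     # email addresses
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), # common phone formats
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize the memo for John (john.doe@client.com, SSN 123-45-6789)."
    print(redact(prompt))
    # -> "Summarize the memo for John ([EMAIL REDACTED], SSN [SSN REDACTED])."
```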

Define Risk Tolerance Before Problems Arise

Using AI doesn’t have to be risk-free, but you do need to know your limits. Defining your firm’s risk tolerance allows you to make proactive decisions about where AI can be used and how much oversight is needed. Consider creating a simple Risk Tolerance Matrix:

How to build a Risk Tolerance Matrix:

1.  List categories of risk: confidentiality, output accuracy, regulatory compliance, and reputational impact

2.  Score each category as High, Medium, or Low tolerance based on your firm's appetite

3.  Map these scores to real use cases (e.g., “Drafting social content” = low risk; “Preparing tax memos” = high risk)

This tool clarifies expectations, supports consistent decision-making, and gives leadership a simple way to approve or reject new AI uses.
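
As one illustration of how small this tool can be, the sketch below captures a Risk Tolerance Matrix as plain data and flags use cases that exceed the firm's tolerance. The categories, scores, and use-case mappings are example values only, not recommendations; substitute your firm's own decisions.

```python
# Illustrative only: a Risk Tolerance Matrix captured as plain data, following
# the three steps above. Replace the example values with your firm's decisions.
RISK_TOLERANCE = {
    "confidentiality": "Low",          # low tolerance = strictest controls
    "output accuracy": "Low",
    "regulatory compliance": "Low",
    "reputational impact": "Medium",
}

USE_CASE_RISK = {
    "Drafting social content": {"reputational impact": "Low"},
    "Preparing tax memos": {"output accuracy": "High", "regulatory compliance": "High"},
}

LEVELS = {"Low": 1, "Medium": 2, "High": 3}

def needs_leadership_review(use_case: str) -> bool:
    """Flag a use case when any risk rating exceeds the firm's tolerance for that category."""
    for category, risk in USE_CASE_RISK.get(use_case, {}).items():
        if LEVELS[risk] > LEVELS[RISK_TOLERANCE[category]]:
            return True
    return False

print(needs_leadership_review("Preparing tax memos"))      # True: exceeds low tolerance
print(needs_leadership_review("Drafting social content"))  # False: within tolerance
```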

AI Risk Tolerance Framework

Use this framework to define what level of AI-related risk is acceptable at your firm. This tool helps align leadership, prevent gray areas, and guide AI tool approvals.

Step 1: Identify the Key Risk Categories

Use these categories to evaluate all AI use cases. These five areas tend to carry the greatest potential for compliance, security, and reputational harm.

Risk Category | What to Evaluate
Confidentiality | Will the AI process client information, financial records, or personal identifiers?
Accuracy & Reliability | Could inaccurate or misleading outputs cause errors in work product?
Compliance | Could the tool violate professional standards, contracts, or data privacy laws?
Reputational | Could the output reflect poorly on the firm or be seen as unprofessional?
Operational | Will your team over-rely on AI, skip review steps, or lose visibility into key work?

Step 2: Define Risk Tolerance Levels for Each Category

Define what’s considered low, moderate, or high risk in each category. This keeps the framework clear and usable across different departments.

Confidentiality

●    Low Risk: Using anonymized or dummy data with internal AI tools

●    Moderate Risk: Using client-related info with enterprise-secure tools under policy

●    High Risk: Inputting real client data into public tools (e.g., free ChatGPT)

Accuracy & Reliability

●    Low Risk: Using AI to generate draft outlines, lists, or templates with human review

●    Moderate Risk: AI-generated technical summaries, marketing copy, or calculations with oversight

●    High Risk: Relying on AI for tax positions, financial analysis, or audit findings without review

Compliance

●    Low Risk: Internal productivity use cases with no client data

●    Moderate Risk: Client communication assistance that follows internal policy

●    High Risk: AI use involving tax records, PII, or financial documents without explicit controls

Reputational

●    Low Risk: Internal drafts or brainstorming tools for staff

●    Moderate Risk: Marketing, website content, or social media (requires final review)

●    High Risk: AI-generated client deliverables or proposals without review

Operational

●    Low Risk: Limited use for one-off tasks or personal productivity

●    Moderate Risk: Expanded use across teams with oversight

●    High Risk: AI fully replacing staff judgment in recurring tasks without auditability

Step 3: Document Acceptable vs. Unacceptable Use

Once you’ve defined your risk levels, clarify what is allowed and what is not. Keep this in a shared policy document or table.

Use Case | Status | Notes
Drafting an email reply using AI | Acceptable | For internal use, human must review
Drafting a tax position with AI | Prohibited | Too high in compliance and accuracy risk
Using AI for client report formatting | Acceptable | Data must be anonymized
Entering client info into ChatGPT (free version) | Prohibited | High confidentiality and vendor risk
Creating client proposals with Copilot | Conditional | Must be reviewed and follow firm branding policy

Step 4: Review and Approve the Risk Tolerance Framework

This framework should be formally reviewed and approved by a designated group. In a small or mid-sized firm, that may include:

●    Managing partner or practice leader

●    Department heads (e.g., tax, audit, operations)

●    Risk, compliance, or legal advisor (if applicable)

Once finalized:

●    Share it across the firm

●    Include it in onboarding and AI training

●    Revisit and revise quarterly or as new tools are introduced

Step 5: Apply to Real-World Scenarios

Use this step as a calibration tool to assess new use cases before rollout. You can copy and adapt this table to build your own firm-specific registry of pre-approved or prohibited use cases.

Scenario | Risk Category | Risk Level | Firm Policy
Using AI to write client birthday cards | Reputational | Low | Acceptable with brand review
AI helping with staff performance review summaries | Confidentiality | Moderate | Allowed internally with anonymized input
Using AI to draft financial advice | Accuracy & Compliance | High | Not permitted
Client asks for AI-driven analysis in their deliverable | Operational & Accuracy | High | Only with review and disclaimers
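
If you prefer to keep this registry somewhere staff or an intake workflow can check, the sketch below shows the same kind of table captured as a simple CSV file. The file name, columns, and rows are assumptions for illustration; a shared spreadsheet serves the same purpose.

```python
import csv

# Hypothetical registry file mirroring the scenario table above.
# Columns and rows are examples; adapt them to your firm's decisions.
ROWS = [
    {"scenario": "Using AI to write client birthday cards", "risk_category": "Reputational",
     "risk_level": "Low", "firm_policy": "Acceptable with brand review"},
    {"scenario": "Using AI to draft financial advice", "risk_category": "Accuracy & Compliance",
     "risk_level": "High", "firm_policy": "Not permitted"},
]

with open("ai_use_case_registry.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["scenario", "risk_category", "risk_level", "firm_policy"])
    writer.writeheader()
    writer.writerows(ROWS)

# Staff (or an intake form) can then look up a scenario before trying a new use case.
with open("ai_use_case_registry.csv") as f:
    for row in csv.DictReader(f):
        print(f"{row['scenario']}: {row['firm_policy']}")
```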

Conduct AI-Specific Risk Assessments

Every AI tool or new use case should be reviewed through the lens of risk. AI-specific risk assessments help identify vulnerabilities before a tool is implemented. This doesn’t require an in-house security team, just a thoughtful checklist.

AI Risk Assessment Questions:

●    What data will the tool access?

●    Does it store or train on user input?

●    Are client names, numbers, or financials involved?

●    Can we verify the tool complies with SOC 2, GDPR, or other frameworks?

●    What happens if the tool produces incorrect output?

Firms can use a lightweight assessment template to review each new AI integration. Repeating this process across tools creates consistency and reduces reliance on vendor claims.
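
A lightweight template can be as simple as a structured record that captures the answers to these questions for each tool. The sketch below is one possible shape; the field names and the escalation rule are assumptions to adapt, not a prescribed format.

```python
from dataclasses import dataclass, field

# An illustrative assessment record based on the questions above.
# Rename or extend the fields to fit your own review process.
@dataclass
class AIRiskAssessment:
    tool_name: str
    data_accessed: str                  # What data will the tool access?
    stores_or_trains_on_input: bool     # Does it store or train on user input?
    client_identifiers_involved: bool   # Are client names, numbers, or financials involved?
    compliance_evidence: str            # e.g., "SOC 2 Type II report reviewed 2025-03"
    incorrect_output_impact: str        # What happens if the tool produces incorrect output?
    open_questions: list[str] = field(default_factory=list)

    def requires_escalation(self) -> bool:
        """Escalate to leadership when input is retained or client identifiers are involved."""
        return self.stores_or_trains_on_input or self.client_identifiers_involved

assessment = AIRiskAssessment(
    tool_name="Example meeting summarizer",
    data_accessed="Internal meeting audio and transcripts",
    stores_or_trains_on_input=True,
    client_identifiers_involved=False,
    compliance_evidence="Vendor security page reviewed; no audit report yet",
    incorrect_output_impact="Internal notes only; low impact with human review",
)
print(assessment.requires_escalation())  # True: vendor retains input
```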

Train and Involve Employees

Your people are both your front line and your biggest variable. Without proper training, even well-meaning staff may use AI incorrectly. Provide employees with guidance on which tools are approved, what data is off-limits, and how to verify AI-generated content.

Tips for training and enablement:

●    Include AI policies in new hire onboarding

●    Host short training sessions using real-life examples

●    Create a shared "Do’s and Don’ts" list

●    Involve staff in policy feedback and improvement

The more confident employees feel about what’s allowed, the less likely they are to take risky shortcuts or fall back on unauthorized tools.

Monitor Use and Adjust Over Time

AI governance isn’t static. Once you roll out AI tools, create a simple monitoring structure that keeps usage safe, aligned, and productive. If your platform provides audit trails or dashboards, check them monthly. If it doesn’t, hold informal check-ins with departments to learn what’s working and what’s not.

Maintain a live inventory of which AI tools are in use, who has access, and what tasks they’re being used for. Periodically review that list to identify gaps, risks, or underutilized licenses. Monitoring doesn’t have to be a full-time job, but without it, even the best policies eventually break down.
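
For firms that want a nudge rather than a formal process, even the inventory can be kept as plain data with a simple "due for review" check, as in the sketch below. The tool names, owners, and 90-day interval are placeholder assumptions; a shared spreadsheet with a reminder works just as well.

```python
from datetime import date

# Illustrative inventory structure for tracking AI tools in use.
# Tool names, owners, and dates are placeholders.
AI_TOOL_INVENTORY = [
    {"tool": "Enterprise chat assistant", "owner": "Operations", "users": 12,
     "tasks": "Internal drafting, research", "last_reviewed": date(2025, 1, 15)},
    {"tool": "Meeting recorder", "owner": "Advisory", "users": 5,
     "tasks": "Internal meeting notes", "last_reviewed": date(2024, 9, 30)},
]

REVIEW_INTERVAL_DAYS = 90  # assumed cadence; quarterly is a common choice

def tools_due_for_review(today: date) -> list[str]:
    """Return tools whose last review is older than the review interval."""
    return [t["tool"] for t in AI_TOOL_INVENTORY
            if (today - t["last_reviewed"]).days > REVIEW_INTERVAL_DAYS]

print(tools_due_for_review(date.today()))
```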

AI Risk Assessment Checklist

Use this comprehensive checklist before approving any AI tool or workflow. Ideal for non-technical firms looking to identify blind spots across data, compliance, and vendor risks.

Purpose & Use Case

Is the AI tool solving a specific, meaningful business problem?

Will the tool be used for internal operations, client services, or marketing?

Is this a high-impact task (e.g., client deliverables) or low-impact (e.g., research)?

Data Sensitivity

Will client data, financials, or PII be entered into the tool?

Can the data be anonymized or redacted before input?

Does this involve any regulated data (e.g., tax filings, audit records)?

Tool Behavior & Retention

Does the tool store or log prompts and responses?

Does it train its model using your data?

Can data be deleted from the system upon request?

Is it clear how long data is retained and by whom?

Vendor Trustworthiness

Is this vendor known in the accounting or legal space?

Do they disclose how they handle and protect user data?

Is there a documented security program (e.g., SOC 2 Type II, ISO 27001)?

Do they comply with privacy laws (e.g., GDPR, CCPA)?

Access & Permissions

Who at the firm will use the tool, and for what?

Are admin roles defined, with controls over who can input or access sensitive data?

Is there multi-user or enterprise-level access available?

Output Risk & Review

Will the output affect clients, reports, or financial guidance?

Will a human review every AI-assisted deliverable before it is sent externally?

Could inaccurate or biased outputs cause reputational or compliance issues?

Auditability

Is there a usage log or audit trail available?

Can usage be tracked by department or individual?

Are there regular reviews planned to assess how the tool is being used?

Fallback & Exit Plan

If the vendor shuts down or changes terms, is there a backup tool or workflow?

Can data and settings be exported?

Is there a documented procedure for ending use and revoking access?

Chapter Summary

This chapter provides accounting firms with practical guidance to manage the security, compliance, and confidentiality risks that come with AI adoption. It outlines common challenges, from premature employee use to vendor trust gaps, and offers best practices tailored to firms without dedicated IT teams. Readers will learn how to create baseline AI policies, define firmwide risk tolerance, apply layered safeguards, and assess tools using simple checklists. A real-world case study from YHB highlights how firms can implement AI guardrails in collaboration with risk advisors, with ready-to-use templates included to help firms take immediate action.


© 2025 Accounting AI Playbook. All rights reserved.