Decide before you deploy: Implementing AI in accounting and finance

Photo credit: narvo vexar/iStock/Getty Images

Technology risk and AI governance advisor Mary Carmichael, CPA, CMA, shares a practical, structured approach that organizations can use to evaluate and implement AI initiatives.


Artificial intelligence is one of the biggest shifts the accounting profession has experienced, yet not all initiatives bring value. With Gartner predicting that over 40% of agentic AI projects will be cancelled by the end of 2027 due to factors such as unclear business value, it’s evident that organizations need to invest in a thorough, thoughtful approach.

If you’ve been wondering how to operationalize AI governance, manage risks, or measure the value of AI initiatives, here are the pitfalls to avoid and practical, structured ways to move ahead.

Learn how to navigate innovation, risk, and real-world ROI at PD Nexus: AI Insights on May 21, 2026.

What organizations get wrong early on

One of the biggest early missteps is starting with the technology instead of the business problem. Teams often ask, “How can we use AI in accounting and finance?” That question is too broad and can ultimately reduce a project’s chances of success. Instead, a better starting point is, “What are we trying to improve, and why does it matter?”

Other early mistakes include:

  1. Unclear or inconsistent use cases: Take management reporting – a team might want AI to summarize performance or explain variances. This sounds like a strong use case, but underneath, margin definitions might differ across business units, cost categories could be inconsistent, and forecast assumptions could vary from team to team.
  2. Weak foundations: Finance depends heavily on judgment. Forecasting, reconciliations, reporting, and analysis are structured processes that rely on review, escalation, materiality, and skepticism. If those aren’t built into your AI design, your outputs won’t be decision-ready.
  3. Fuzzy ownership: AI often gets framed as a technology initiative. However, if it influences financial decisions, finance should own the business case, use case, and result.

The risks of adopting AI too quickly

The problem is not necessarily speed itself, but speed without structure and discipline. Organizations feel pressure to pilot AI, but it’s risky to start using the technology without thinking through controls, human reviews, and accountability. Especially as CPAs, we need to be able to explain, support, and stand behind these AI outputs.

For example, AI-drafted commentary for board reporting might save time and sound polished, but it becomes risky if no one has defined who will validate the numbers or challenge the narrative when something feels off. Without proper controls and processes in place, you can end up with a weak audit trail, inconsistent reviews, and unclear sign-off.

Revising initiatives after implementation is another risk. Moving too fast often leads to redesigning controls, revisiting workflows, retraining users, or even pulling back pilots, which is expensive.

Start with a structured decision-making approach

To minimize risk and increase the likelihood of success, it’s important to make key business decisions before AI tools become embedded in day-to-day work. Answering these questions will help.

  1. What problem are we trying to solve? Examples include improving forecasting quality, making reconciliations more efficient, or producing more consistent reporting. If answers are vague, the use case isn’t ready.
  2. What data is being used? Teams need to assess where data is coming from and if it is reliable, complete, consistent, timely, and appropriate for the intended use. If the data is fragmented or inconsistent, AI often just escalates these issues.
  3. Where does human review sit? Even if AI can speed up your process, you still need to define where judgment, challenges, and approvals happen. Accounting and finance teams will need to determine these controls:
    • Where approval points and reviews happen;
    • What evidence will be retained;
    • How exceptions will be handled; and
    • What sign-off looks like.

    Further, if AI supports reconciliation, who reviews exceptions, who can override issues, and how is the audit trail retained? If AI helps with reporting, who will confirm the output is complete, consistent, and appropriate to use?

  4. Who owns the outputs? Teams will need to evaluate if AI outputs are useful for decision-making. Can they be explained clearly enough for a finance leader to rely on them? Can they be supported with evidence if challenged? Outputs can affect reporting, forecasting, and analysis, so ownership cannot stay vague.
  5. How will success be measured? What’s the baseline today and what should improve – cycle time, rework, exception rates, quality of analysis, confidence in the output?

Answering these questions will also allow staff to understand the AI tool’s purpose, boundaries, and what success looks like.

Identifying accountability and ownership for AI initiatives

Since AI initiatives can touch finance, IT, and risk, organizations need to define ownership. Generally, finance and accounting should own the business use case and decision outcomes.

IT should own the platform, access, security, integration, and technical enablement. Separating tool ownership from outcome ownership is important.

Risk, compliance, legal, and governance should help define controls. An AI impact assessment and a joint committee are useful to review the use case, challenge whether the controls are strong and oversight is clear, and ensure appropriate monitoring is in place.

How to define and measure AI’s value

Many AI initiatives are described in broad terms like “productivity” or “transformation.” CPAs can strengthen the value case by making it measurable and decision-useful with these steps.

  1. Define the value driver clearly. Are you aiming to reduce manual effort? Strengthen reporting quality? Expand capacity? Those are stronger starting points than saying, “AI will make us more innovative.”
  2. Establish a baseline. For example, if AI is used in the monthly close, you can build your baseline by determining current cycle time, hours devoted to manual preparation, how often rework occurs, and exception rate. Without a baseline, it’s difficult to show whether value is being created.
  3. Separate efficiency value from decision value. Efficiency value includes time saved, cost avoided, and faster turnaround. Decision value includes whether the team spots issues earlier, sees patterns more clearly, or makes better calls because analysis is more timely and consistent.

Overall, a credible ROI case should be measurable, realistic, and tied to a specific finance process.
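The baseline-versus-pilot comparison above can be made concrete with simple arithmetic. The sketch below is a minimal illustration, not part of the article: the metric names, the hypothetical `efficiency_value` helper, and all of the numbers are assumptions chosen to show the shape of the calculation, and real figures would come from your own close process.

```python
from dataclasses import dataclass

@dataclass
class CloseMetrics:
    """Metrics for one monthly close (baseline or post-pilot). Hypothetical fields."""
    cycle_time_days: float    # end-to-end close duration
    manual_prep_hours: float  # hours of manual preparation per close
    rework_rate: float        # fraction of outputs that needed rework
    exception_rate: float     # fraction of reconciliations flagged

def efficiency_value(baseline: CloseMetrics, current: CloseMetrics,
                     hourly_rate: float) -> dict:
    """Quantify efficiency gains against the baseline (illustrative formula only)."""
    hours_saved = baseline.manual_prep_hours - current.manual_prep_hours
    return {
        "days_faster": baseline.cycle_time_days - current.cycle_time_days,
        "hours_saved": hours_saved,
        "cost_avoided": hours_saved * hourly_rate,
        "rework_delta": baseline.rework_rate - current.rework_rate,
        "exception_delta": baseline.exception_rate - current.exception_rate,
    }

# Illustrative numbers only -- replace with measured values.
baseline = CloseMetrics(cycle_time_days=7, manual_prep_hours=60,
                        rework_rate=0.15, exception_rate=0.08)
pilot = CloseMetrics(cycle_time_days=5, manual_prep_hours=40,
                     rework_rate=0.10, exception_rate=0.06)
print(efficiency_value(baseline, pilot, hourly_rate=75.0))
```

Note that this captures only efficiency value (time and cost); decision value, such as spotting issues earlier, still needs qualitative evidence alongside the numbers.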

Diving deeper into operationalizing AI governance

Internal reflection is important. Ask yourself and your teams: What decisions are we trying to improve? Where does manual effort consume skilled finance time? Where do we need stronger evidence or consistency? These questions are a productive way to start discussing AI initiatives.

Interested in learning more? CPABC offers seminars and resources. The NIST AI Risk Management Framework gives organizations a structured way to think about trust, risk, and oversight, while COSO’s AI guidance covers internal control, accountability, and decision support.


Mary Carmichael, CPA, CMA, is a technology risk and AI governance advisor and principal director at Momentum Technology. She helps public and private sector organizations operationalize AI adoption through strategy, governance, and control frameworks.
