What role does AI have in tax advice?

Written by Robert John

Across the UK, businesses are racing to try out AI tools, encouraging teams to ‘have a go’ at creating efficiencies and automating previously manual processes.

Without organised, strategic implementation, however, AI tools can become embedded in an organisation’s processes in a disjointed, undocumented way that creates significant risk.

Generative AI (Gen AI) tools like ChatGPT are rapidly becoming the new “I’ll Google it”, and it’s easy to see why: they’re fast, interactive and articulate. Tax and finance in particular may seem like natural domains for AI-based tools, as they are built on rule-based workflows, word-heavy analysis, structured data and repeatable processes.

But alongside the many, well-documented opportunities that AI brings in these areas, there is a growing risk that finance leaders need to understand and manage.

A recent study by Dext reported that 77% of the accountants and bookkeepers surveyed saw an increase in clients using public AI tools “to seek financial, tax or bookkeeping advice”. 72% reported having seen AI used to “question or challenge professional advice” and 68% noted clients suggesting that AI “could replace the need for professional accounting services”.

The frequency of errors reported in the study was, however, striking. 7% of respondents in the Dext survey said they see incorrect AI-generated financial or tax advice daily, 31% weekly, and 28% monthly.

How can in-house teams manage AI use effectively?

For many businesses, the primary issue they need to manage is that AI, particularly Gen AI, is only ‘probably right’. In some areas, this might be acceptable, but when it comes to the tax and finance world, ‘probably right’ could mean missing a zero on a filing, misinterpreting a reverse charge scenario or making up a tax treaty.

So, for in-house finance and tax teams, AI use needs guardrails and governance to ensure it is implemented safely. AI isn’t going to get the blame if an organisation submits incorrect data to HMRC; it will be the business and its leaders who are held responsible.

Business leaders need to understand the various risks that different AI solutions pose. For example:

  • Machine learning-based solutions are highly sensitive to changes in input data. If the profile of the data changes (eg new types of invoices that weren’t in your original models), the solution may not produce the same quality of results,
  • Gen AI/large language models (LLMs) have a broader risk profile. Because they rely on a knowledge base and the quality of the prompt they’re given, they are prone to ‘hallucination’ (ie generating incorrect or fabricated information), and
  • LLMs can differ in the way they approach problems. For example, Anthropic’s Claude models are designed to guess less, and some tax-specific research tools draw only on specific, reliable knowledge bases.

There are ways to limit errors, including the use of ‘systemised’ prompting. For example, custom GPTs/Copilot agents use defined knowledge bases and build in rules and guardrails to reduce LLMs’ tendency to guess. In an agentic workflow, you can limit the amount of ‘free thinking’ the LLM is allowed by detailing prescriptive steps and interweaving more certain technologies (eg rules-based engines, specific code/automation, APIs). You can also employ multiple AI agents as a line of defence against a single LLM’s errors. These, and other approaches, can contribute to the way you govern AI use in an organisation.
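As an illustration only, the layered approach above can be sketched in code. Everything here is hypothetical – the VAT rules, function names and the stubbed LLM call are placeholders for a deterministic rules engine, a constrained Gen AI step and a second ‘reviewer’ agent, not a real tax engine or vendor API:

```python
# Hypothetical sketch of a guarded agentic workflow. The rules and the
# stubbed LLM call are illustrative placeholders, not real tax logic.

ALLOWED_UK_VAT_RATES = {0.0, 5.0, 20.0}  # zero, reduced and standard rates

def rules_check(invoice: dict) -> list[str]:
    """Step 1: deterministic, rules-based validation - no 'free thinking'."""
    errors = []
    if invoice.get("vat_rate") not in ALLOWED_UK_VAT_RATES:
        errors.append(f"Unrecognised VAT rate: {invoice.get('vat_rate')}")
    expected_vat = round(invoice["net"] * invoice["vat_rate"] / 100, 2)
    if abs(invoice["vat"] - expected_vat) > 0.01:
        errors.append(f"VAT {invoice['vat']} != expected {expected_vat}")
    return errors

def llm_classify(invoice: dict) -> str:
    """Step 2: placeholder for a Gen AI step (eg a custom GPT/Copilot agent
    with a defined knowledge base). Stubbed so the sketch is self-contained."""
    return "reverse charge" if invoice.get("supplier_country") != "GB" else "domestic"

def reviewer_agent(invoice: dict, classification: str) -> bool:
    """Step 3: a second 'line of defence' agent independently re-derives
    the answer and compares it with the first agent's output."""
    independent = "reverse charge" if invoice.get("supplier_country") != "GB" else "domestic"
    return independent == classification

def process_invoice(invoice: dict) -> dict:
    errors = rules_check(invoice)               # deterministic guardrail first
    if errors:
        return {"status": "rejected", "errors": errors}
    classification = llm_classify(invoice)      # constrained Gen AI step
    if not reviewer_agent(invoice, classification):
        return {"status": "needs human review", "classification": classification}
    return {"status": "accepted", "classification": classification}

invoice = {"net": 100.0, "vat": 20.0, "vat_rate": 20.0, "supplier_country": "GB"}
print(process_invoice(invoice))  # {'status': 'accepted', 'classification': 'domestic'}
```

The design point is the ordering: deterministic checks reject obviously bad data before any LLM is involved, and the LLM’s output is only accepted once a second, independent check agrees; anything else is routed to a human.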

Therefore, the safe use of AI in finance and tax includes:

  • The ability to spot hallucinations and inaccuracies,
  • A solid understanding of LLM limitations, and
  • Awareness of when human expertise must override AI output.

HMRC is itself beginning to use Gen AI across its work, including for analysing data and preparing casework materials. HMRC’s own guidelines for building AI-supported and AI-enhanced tools are based on “designing with human oversight and control”, noting that AI “should support, not replace, human judgement” – an approach commonly referred to as having a ‘human-in-the-loop’, and a view shared by professional bodies such as ICAEW.

What all of this boils down to is that you need to manage AI more like a member of your team than you would a traditional technology: set clear objectives, check on progress and monitor output.

Don’t let the pitfalls of AI put you off trying

Being aware of the pitfalls will allow experimentation with Gen AI in a more structured and self-aware way. It’s also important to understand that AI getting something wrong doesn’t mean it’s useless: understanding the weak points will better enable you to harness its strengths.

Experimenting with the prompts you use is equally important. Research has consistently identified a strong correlation between the quality of AI output and the user’s skill in writing prompts, so the more experience you have with AI tools, the better your outputs will be.

How can finance teams get started with AI?

AI won’t replace the need to use expert tax and finance professionals, but it can be used to create operational efficiencies. We recommend a three-phase approach for in-house teams to safely move forward and bring AI into their day-to-day processes:

Phase 1: assess your current position

  • Understand whether any of your current suppliers offer solutions – for example, many SaaS tools are releasing AI functionality, and you may already subscribe to a product with powerful capabilities. This is the easiest place to start, as the vendor will already be onboarded and it’s likely you already have a secure environment in which to begin experimenting.
  • Identify who is using AI in your organisation and what they’re doing. Connecting and sharing with others using AI day-to-day is a great way to upskill. From a risk management perspective, it’s also important you know where AI is being used to ensure you have appropriate controls in place.

Phase 2: experiment with what you have

  • Prioritise initial use-cases that you want to experiment with. If working solo, start with small use cases to support personal productivity. These are low cost and an easily achievable way to learn prompting skills and create confidence.
  • Safe enablement – define your initial guardrails and policy around the use of AI. It’s likely that your organisation already has a policy in place about the type of data you’re comfortable sharing with your preferred LLM/Gen AI tool.
  • Skills gap analysis – identify what initial training is needed. There’s an abundance of resources available, often for free.

The best place to start is with tasks that play to LLM strengths, such as those that are:

  • Repetitive, time-consuming and high-volume – this could be reviewing invoices, or transactional VAT analysis, and
  • Knowledge work – such as research, document review or summarising.

Phase 3: measure, share and scale

  • Measure and share – even small gains add up: saving 60-90 minutes per week amounts to nearly two working weeks over a year (90 minutes across roughly 47 working weeks is about 70 hours). As noted in DWP’s analysis, users learned most from their peers, so sharing is vital.
  • Scale – encourage continuous development and embed AI use and innovation by repeating the above steps, staying aware of updates and monitoring usage so that you don’t erode confidence or invite risk.

Put the process in place before technology

Much of the commentary on failed AI implementations attributes them as much to poor understanding of the underlying processes and poor documentation as to any lack of quality in the AI tools themselves. Experienced tax and finance professionals are great at ‘filling in the process gaps’ intuitively; a process map that says ‘prepare draft analysis’ or ‘use the Excel macro’ is unlikely to cause them problems. AI solutions don’t naturally have the same knowledge, so ensuring a process is well documented will help ensure a successful agentic AI implementation by reducing the amount of ‘guessing’ required.

How Saffery supports smarter AI integration

As the use of agentic AI grows in finance, those organisations that can successfully integrate AI into their systems and processes and invest early in safe, skilled AI adoption will be the ones that outperform their peers as the landscape shifts.

At Saffery, we work with businesses that are looking to accelerate their AI adoption, providing subject matter relevant use cases, tools, governance frameworks and advice on how best to proceed, based on a business’s specific requirements and the skills of their in-house teams.

If you have any queries, please get in touch with Robert John.

Robert John is a Director in Edinburgh and leads the Tax Technology and Transformation offering at Saffery.