AI Tools Are Changing Tax Enforcement Policy in Latin America


Using technology to streamline tax processes is nothing new. For years, tax administrations have integrated enterprise resource planning into their systems.

Accounting firms seem to pour everything they have into IT capacity, and tax management tools have become increasingly popular as enterprises and nations try to tackle challenges that arise from digital economies.

In turn, tax bureaus have taken a similar approach, using AI to manage the overflow of data they receive from taxpayers, other tax administrations, financial institutions, and so on.

A fine example in Europe is the Irish tax administration, which has successfully operated a chatbot since 2022 that answers roughly 600 queries a day. AI has enabled the government to direct taxpayers' questions to the right expert within the authority, raising content-classification accuracy from 70% to 97% and allowing experts to be assigned more quickly than when taxpayers had to classify their own queries manually.

Latin American nations are no exception, but implementing AI in tax compliance has already proven challenging when it comes to taxpayer rights, transparency, and data accuracy.

Take Argentina. Its tax administration, known as AFIP, has implemented AI extensively through SIPER, a system that grades taxpayers based on their tax history. SIPER rates taxpayers on factors such as failing to file certain affidavits, being party to criminal cases, or simply being labeled a non-reliable taxpayer by a tax official, to name a few.
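To make the idea concrete, here is a minimal, purely hypothetical sketch in Python of what a rule-based grading pass over factors like these might look like. The field names, thresholds, and letter grades are invented for illustration and do not reflect AFIP's actual SIPER methodology.

```python
# Purely hypothetical illustration: a rule-based grading pass over the kinds of
# factors the article mentions (missed filings, criminal cases, reliability
# flags set by officials). Field names, thresholds, and letter grades are
# invented; they do not reflect AFIP's actual SIPER methodology.
from dataclasses import dataclass


@dataclass
class TaxpayerRecord:
    missed_affidavits: int    # affidavits the taxpayer failed to file
    open_criminal_cases: int  # criminal cases the taxpayer is party to
    flagged_unreliable: bool  # "non-reliable" label set manually by an official


def grade(record: TaxpayerRecord) -> str:
    """Map a taxpayer record to a letter grade, worst factor first."""
    if record.flagged_unreliable or record.open_criminal_cases > 0:
        return "E"  # worst grade: manual flag or pending criminal case
    if record.missed_affidavits >= 3:
        return "D"
    if record.missed_affidavits >= 1:
        return "C"
    return "A"      # clean history


print(grade(TaxpayerRecord(missed_affidavits=1,
                           open_criminal_cases=0,
                           flagged_unreliable=False)))  # -> "C"
```

Even in a toy version like this, a single manual flag dominates the outcome, which is precisely why the question of how such grades are reviewed matters.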

While knowing the taxpayer is undoubtedly essential to effective tax administration, the Argentine experience appears headed in a direction where a bad grade determined by AI implies someone is guilty until proven innocent. That assumption would directly affect a taxpayer's eligibility for lower interest rates when paying taxes, or simply increase their administrative burden.

Said differently, a low grade doesn’t equal sanctions but leads to a more difficult tax life. Would a court of law justify this discrimination?

The case of Colombia seems significantly different. The country has used AI to promote taxpayer education via a 24/7 chatbot called DIANA, enabling users to address straightforward questions related to electronic invoicing, taxation of individuals, and taxation of small and medium enterprises.

However, it’s unclear whether the Colombian tax administration, or DIAN, will begin to use AI alongside audit procedures as other administrations, such as Mexico’s, have done. AI-powered chatbots are interesting because they can resolve urgent or straightforward queries more quickly without taking valuable time from government tax experts.

But what would happen if a taxpayer acts in good faith based on the information provided by DIANA, and that data turns out to be incorrect?

Further north, Mexico implemented a so-called Master Inspection Plan, which uses AI through graph analytics models and machine learning to classify risky taxpayers.

Along those lines, the Mexican tax administration, SAT, intends to use AI to help identify complex avoidance and evasion networks and detect inconsistencies in electronic invoices that might be associated with smuggling and shell companies.
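As an illustration only, the sketch below shows one pattern that graph analytics can surface: circular invoicing among a small cluster of firms. It uses the networkx library with fabricated company names and amounts, and it is not a representation of SAT's actual models.

```python
# A minimal sketch, not SAT's actual Master Inspection Plan: build a directed
# graph of who invoices whom and flag simple invoicing cycles, one pattern that
# graph analytics can surface when looking for shell-company or carousel
# schemes. Company names and amounts are fabricated.
import networkx as nx

invoices = [                    # (issuer, receiver, amount)
    ("CoA", "CoB", 120_000),
    ("CoB", "CoC", 118_000),
    ("CoC", "CoA", 121_000),    # closes a loop back to CoA
    ("CoD", "CoA", 15_000),
]

G = nx.DiGraph()
for issuer, receiver, amount in invoices:
    G.add_edge(issuer, receiver, amount=amount)

# Invoices that circle back through a small set of firms are a classic red flag.
for cycle in nx.simple_cycles(G):
    if len(cycle) >= 3:
        print("Cluster worth reviewing:", " -> ".join(cycle))
```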

If the assumption is true that machines tend to err less than humans, what kind of impact might this strategy have on a taxpayer’s reputation should behavior that AI deems tax-aggressive become public? And how would such reputational damage be compensated if SAT’s AI were to make a mistake?

Not everyone is keen on disclosing their use of AI. Peru isn’t openly using AI for tax audit procedures but does favor it for preemptive compliance purposes. The Peruvian tax administration, known as SUNAT, uses AI to warn taxpayers that certain expenses, known to it through the electronic invoicing system, might not be deductible, or that a taxpayer is showing revenue below market level for its business sector.
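By way of illustration, a warning of this kind could in principle be as simple as comparing a taxpayer's declared revenue against a sector benchmark built from e-invoicing data. The sketch below assumes a median-based benchmark and a 70% threshold, neither of which reflects SUNAT's actual parameters.

```python
# Illustrative only: how a preemptive warning like SUNAT's might look in
# principle, comparing declared revenue against a sector benchmark built from
# e-invoicing totals. The sector data and the 70% threshold are assumptions,
# not SUNAT's actual parameters.
from statistics import median

sector_revenues = {              # fabricated e-invoicing totals per firm
    "restaurants": [450_000, 510_000, 480_000, 530_000, 495_000],
}


def below_market_warning(sector: str, declared_revenue: float,
                         threshold: float = 0.70) -> str | None:
    """Return a warning if declared revenue falls well below the sector median."""
    benchmark = median(sector_revenues[sector])
    if declared_revenue < threshold * benchmark:
        return (f"Declared revenue {declared_revenue:,.0f} is below "
                f"{threshold:.0%} of the sector median ({benchmark:,.0f}).")
    return None


print(below_market_warning("restaurants", 300_000))
```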

This sort of heads-up from the Peruvian tax administration, allowing taxpayers to reconsider the deductibility of certain expenses or their revenue recognition policies, for example, can indeed be useful, as it provides a live warning of what could become a future audit headache.

In a different jurisdiction, Chile has successfully implemented electronic compliance procedures via e-filing, e-invoicing, and remote audits. On the AI front, the Chilean tax administration, or SII, is fine-tuning its capacity to establish preemptive anti-tax-fraud measures, such as limiting e-invoicing when the taxpayer’s behavior is deemed risky by its systems.

That said, and as in Argentina, the prospect of the administration preemptively grading a taxpayer or limiting its capacity to operate puts pressure on policymakers to create adequate and swift dispute resolution mechanisms for taxpayers. These would help demonstrate when a taxpayer’s conduct is in fact correct and allow it to keep operating its business. If AI is expected to act quickly, so are the mechanisms that challenge it.

Are We Ready?

Even though AI adds unmistakable value to data gathering and transparency, its implementation poses one of the biggest challenges to taxpayers today.

If AI were to play a greater role with taxpayers in that regard, by selectively increasing their administrative burden, blocking their operations, or blacklisting them in a way that could become known to the public, then fundamental taxpayer rights would be threatened, including non-discrimination, the freedom to participate legally in economic activities, and the right to privacy.

Because AI is most commonly used to characterize or rate taxpayers, one of the most obvious risks is that a taxpayer is characterized incorrectly and that this information becomes publicly known.

Split-second judgments based on online sources and a trending passion for tax sustainability risk associating taxpayer employees or business partners with a company disliked by the tax authority, sometimes incorrectly.

What emergency recourse do taxpayers need to fend off what could be severe attacks on their rights? Even more importantly, what resources do tax administrations and courts have to respond swiftly to those challenges?

The issue with AI is that it operates faster than humans, faster than institutions like tax administrations, and certainly years ahead of remedies decreed by courts of law. So when AI gets it wrong, how do we catch up to it?

The answer isn’t simple, and we can’t expect tax administrations to devise ways to halt their own technological progress. But giving AI freedom to act on taxpayers should lead to the conclusion that new emergency tools must be created, ones through which taxpayers, tax administrations, and courts of law can act as quickly as technology is expected to work. Otherwise, the system will catch up with us.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Ignacio Gepp is a partner with Puente Sur in Chile.

Florencia Fernández, Laura Sanint, Alfredo Palacios, Trevor Glavey, and Renzo Chumacero contributed to this article.
