Only half trust AI to help with taxes – Accounting Today


Tax solution providers that have woven artificial intelligence into their products still have a long way to go in earning people's trust, with only about half of taxpayers saying they would trust such a company.

This is according to a survey about tax filing preferences and the role of technology, released by the business review site Trustpilot and conducted by Attest with a sample of 999 respondents across the U.S. on March 28-29. It found that 50% of respondents are unlikely to trust companies that use AI to support tax filing, and 55% are unlikely to trust AI to give accurate advice on filing taxes.

“These insights help show the diverse concerns that taxpayers hold, especially when technology and personal finances intersect more than ever,” said the report from Trustpilot. “Even without putting things like AI into the mix, it’s no secret that taxes are incredibly challenging and confusing for consumers in the U.S. These trends can help both businesses and consumers by helping consumers understand where their confusions and concerns align with other consumers and helping businesses better educate and guide their customers through each tax season.”

Recent news events may have soured some portions of the public on AI in the tax realm. In particular, last month both Intuit and H&R Block were faulted over the effectiveness of their AI models.

The Washington Post tested the accuracy of the tax advice given by AI models recently deployed by Intuit's TurboTax and H&R Block. Both companies were faulted for providing answers that were either irrelevant or flat-out wrong. For instance, when asked whether tax credits were available for installing a new air conditioner, TurboTax's AI responded with information about educational expenses. Meanwhile, when asked whether wash sale rules apply to cryptocurrency, H&R Block's AI responded that, yes, they do, when, in fact, they do not.

Spokespeople from Intuit and H&R Block said the tests were not accurate representations of their software. Both gave statements to the effect that these tools are meant to supplement the tax preparation process, not serve as the final word on one's taxes. Both also said that real-time monitoring on their end did not find a significant number of users experiencing problems with their AI tools and that, even if there were, both companies offer an accuracy guarantee that extends beyond what the AI tools might say.

Outside the realm of taxes specifically, though, there does seem to be a wider problem with trusting AI systems to deliver accurate information. The Associated Press, for example, recently reported that when AI models were asked for relevant election information, such as where to vote, 40% of their answers were considered outright harmful, including perpetuating dated and inaccurate information that could limit voting rights.

Meanwhile, there was also the matter of a generative AI chatbot used by Air Canada giving a customer inaccurate refund information: the chatbot said the customer could claim a refund after purchasing tickets when, in fact, the airline's policy was not to give refunds once the flight had been booked. While the airline argued that the chatbot was a separate legal entity and that the customer should never have trusted what it said, a court nonetheless ordered the company to give the customer a partial refund, plus additional damages to cover interest on the airfare and tribunal fees.

Incidents like these might explain the findings of a KPMG survey in November, which found that, in the U.S., just 40% of respondents are willing to trust AI while 60% are unwilling or unsure. Meanwhile, 43% of survey takers reported a low acceptance of AI, and about 49% said they are fearful and worried about it.

