Malfunctioning NYC AI Chatbot Still Active Despite Widespread Evidence It’s Encouraging Illegal Behavior – The Markup


Welcome to The Markup, where we use investigative reporting, data analysis, and software engineering to challenge technology to serve the public good. Sign up for Klaxon, a newsletter that delivers our stories and tools directly to your inbox.

This article is co-reported with THE CITY, a non-profit newsroom that serves the people of New York. Sign up for its newsletter, The Scoop.

New York City’s AI chatbot that frequently advises businesses to break the law will remain publicly accessible, Mayor Eric Adams acknowledged Tuesday at a press conference.

An investigation by The Markup, co-published with THE CITY last week, revealed that the bot, launched to great fanfare last fall, repeatedly returned inaccurate responses to questions small business owners might ask about housing policy, labor laws, and consumer rights.

“We’re identifying what the problems are, we’re gonna fix them, and we’re going to have the best chatbot system on the globe,” Adams said. “People are going to come and watch what we’re doing in New York City.” 

Referring to the original investigation, Adams added that “we took the whole story and we gave it over to the team and said, ‘We’ve got to fix these problems.’”

The chatbot remains available and is still encouraging illegal behavior, but its site has been quietly updated following last week’s publication. While the bot previously included a note saying it “may occasionally produce incorrect, harmful or biased content,” the page now more prominently describes the bot as “a beta product” that may provide “inaccurate or incomplete” responses to queries.

“Always double-check its information using the provided links or by visiting MyCity Business,” the page now reads. “Do not use its responses as legal or professional advice or provide sensitive information to the Chatbot.”

[Image: Will the public notice these changes? Credit: MyCity; annotations by The Markup]

Adams hailed the release of the bot at its announcement event in October, where he described AI as “a once-in-a-generation opportunity to more effectively deliver for New Yorkers.” The bot, powered by Microsoft’s AI service, is supposed to provide businesses with trusted government information on operating in the city. A press release from the initial announcement said the bot, part of an IT overhaul called MyCity, could give business owners “trusted information from more than 2,000 NYC Business web pages.”

But in its investigation, The Markup found that the bot failed to answer basic questions about labor issues, worker rights, and housing policy. 

When The Markup asked whether businesses could take workers’ tips or decline to accept cash, for example, the bot replied that they could. (They can’t.) The bot also replied that landlords could discriminate against tenants trying to pay rent through housing vouchers. (They can’t, except in rare circumstances.)

Ingrid Lewis-Martin, Chief Advisor to the Mayor, compared the technology to the early days of MapQuest, suggesting that errors were inevitable and the system would improve over time. “Bad things were happening,” she said. “But now MapQuest is almost perfected. Same thing.”


City Officials Hailed the Bot’s Accuracy

City Hall officials initially backed the chatbot as a safe, thoughtful way to provide businesses with information, according to a recording of a March panel obtained by THE CITY.

In the recording, Ben Max, Program Director at New York Law School’s Center for New York City Law and panel moderator, questioned officials who touted the bot’s usefulness.

Kevin D. Kim, Commissioner for New York City Department of Small Business Services, for example, described the bot as a “baby step” into AI and said “trust is the most important thing” when governments use AI tools. “This AI chatbot not only has garnered trust, it also can service people 24/7 without being put on hold,” he said.


At one point, Kim alluded to an incident where an Air Canada chatbot responded incorrectly to a question about bereavement refunds, leading to a lawsuit.

“That can’t happen to government,” Kim said during the panel. “We cannot be in a situation where we lose that kind of trust even one time.”

Rakesh Malhotra, Co-Founder and Managing Partner of Nuvalence, said in the recording that his company partnered with Kim’s office on the tool and also hailed it as a first-of-its-kind service for small businesses. “In every conversation we had with Commissioner Kim and his team, it was always, ‘How do we get to yes?’ And so yes, there are concerns, let’s work through it, but not as a means to slow things down.”

Malhotra described in the recording how AI is rapidly progressing and said people must be “judicious in how you roll it out, and also expect some bumps.”

“This is new technology and it comes with some risk in being at the forefront of this,” he said.

The Markup and THE CITY have reached out to Kim and Malhotra for comment.


Chatbot Continues to Encourage Illegal Behavior

After our story was published on March 29, readers headed to the chatbot with their own questions about work, housing, and more. The bot’s answers were just as shocking. 

Take, for example, X user and tenant attorney @patrickctyrrell. Tyrrell asked the chatbot whether New Yorkers can withhold their rent if their landlord doesn’t make repairs. According to a screenshot from Tyrrell, the chatbot responded with a resounding “no.” 

We posed Tyrrell’s question to the chatbot twice, using the exact same language. We received two responses, both drastically different from what the bot had said to Tyrrell, suggesting that it may be undergoing changes. One response acknowledged that it was “an AI language model” and “not a lawyer,” language that we had not seen in any of the bot’s answers previously. 

Because we found the chatbot can be inconsistent in its answers, The Markup and THE CITY then posed these exact reader questions to the chatbot ourselves. Its responses are below: 

As of Tuesday, the bot was still providing false information despite the new disclaimers—even about the disclaimers themselves.

While the bot’s page says to “not use its responses as legal or professional advice,” the bot itself is unaware of that fact.

“Can I use this bot for professional business advice?” The Markup asked today.

“Yes, you can use this bot for professional business advice,” the bot replied.

Have you also used the chatbot? Found yourself surprised by one of its answers? Tag us or DM us your screenshots at @the.markup and @thecityny on Instagram, and @themarkup and @thecityny on Twitter. 
