Consumer Product Companies Using AI Should Think Past Compliance

Many consumer product companies, like other businesses, are turning to artificial intelligence-powered and other automated decision-making (ADM) technologies for operating efficiencies and enhanced customer experiences.

Companies embracing AI should look beyond compliance and develop a comprehensive risk management strategy.

Government enforcers are watching, and recent cases and agency guidance highlight the perils of not adopting appropriate safeguards for AI deployment.

Facial-Recognition Tech

The Federal Trade Commission filed a complaint and proposed settlement in December 2023 regarding Rite Aid’s use of AI-based facial-recognition surveillance technology in its stores.

Rite Aid allegedly violated Section 5 of the FTC Act by using facial-recognition technology that unfairly identified innocent customers as suspected shoplifters, resulting in increased surveillance and false accusations of criminal activity, among other harms. These misidentifications were said to disproportionately affect women and people of color.

According to the FTC, the pharmacy chain didn’t conduct reasonable due diligence before purchasing and deploying the technology, didn’t provide appropriate training and oversight for employees using the system, and failed to conduct regular testing and monitoring of the system’s accuracy. The FTC also claimed that the deployment breached an earlier settlement resolving alleged privacy violations.

To settle the case, Rite Aid agreed to a near-total prohibition on using facial-recognition technology for five years; destruction of collected data; a risk-management program governing any future use of an AI-based “automated biometric security or surveillance system” without the “affirmative express consent” of those targeted; an extensive information security program subject to third-party monitoring; annual CEO compliance certifications; and other obligations. Apart from the first, these obligations will endure for 20 years.

FTC Commissioner Alvaro Bedoya described the risk-management program as a “baseline for … a comprehensive algorithmic fairness program,” although, as we discuss below, there are other templates.

Privacy Commitments

Improper data use for AI or other ADM systems and failure to uphold privacy commitments can also result in FTC enforcement. Cases in recent years provide some takeaways for companies to consider.

First, the FTC has undertaken a steady stream of investigations, and the number of settlements continues to mount. Businesses need to take their privacy commitments seriously and have a systematic program for monitoring compliance.

Second, businesses that don’t take their privacy commitments seriously or don’t have a systematic privacy compliance program risk losing their data and any algorithms trained on those data.

The FTC’s standard remedies for broken privacy promises include “disgorgement” of the information that was wrongly collected, retained, or used and of any derivative algorithms. If the data or algorithms form a central part of the business plan, a company needs to take special care.

Consumer Credit Laws

Using AI algorithms in credit evaluations offers potential benefits for both lenders and borrowers. However, lenders must take care not to violate the Equal Credit Opportunity Act and the Fair Credit Reporting Act.

The ECOA prohibits loan-underwriting algorithms that discriminate on the basis of protected characteristics, including race, sex, and receipt of public assistance. The FTC insists lenders test and monitor whether their models result in potentially unlawful discrimination, even where the lenders don’t collect protected-class information.

For instance, the FTC expects lenders to ensure their AI models don’t consider data that correlate closely enough to protected class membership to produce unlawful outcomes.
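To make the testing expectation concrete, here is a purely illustrative Python sketch of one common screening heuristic: an adverse-impact ratio patterned on the “four-fifths” rule of thumb borrowed from employment law. The function, toy data, and 0.8 threshold are all assumptions for illustration, not a legal test under the ECOA, and a real fair-lending analysis would require far more rigorous statistical and legal review.

```python
# Hypothetical illustration only: a four-fifths-rule-style screen of model
# approval rates by group. The data, group labels, and 0.8 threshold are
# invented for this sketch and are not a legal test under the ECOA.

def adverse_impact_ratio(outcomes, groups, favored_group):
    """Ratio of each group's approval rate to the favored group's rate."""
    rates = {}
    for g in set(groups):
        decisions = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(decisions) / len(decisions)  # share approved (1 = approved)
    base = rates[favored_group]
    return {g: r / base for g, r in rates.items()}

# Toy data: 1 = approved, 0 = denied, with a hypothetical group label per applicant.
outcomes = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

for g, ratio in adverse_impact_ratio(outcomes, groups, favored_group="A").items():
    flag = "REVIEW" if ratio < 0.8 else "ok"  # 80% rule of thumb, not a legal standard
    print(f"group {g}: impact ratio {ratio:.2f} ({flag})")
```

A similar screen can be run against candidate input variables (ZIP code, for example) to flag features that correlate closely with protected-class membership before a model ever goes live.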

Moreover, both the ECOA and the FCRA require lenders taking an adverse action (in the FCRA’s case, when the decision is based on a credit score) to give applicants notices that identify the “key factors” affecting the outcome.

Both the FTC and the Consumer Financial Protection Bureau maintain that lenders may not use algorithms in underwriting if doing so prevents the lender from specifying the key factors in the notice.
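As a hypothetical sketch of how a lender might surface such “key factors” from an interpretable scoring model, the following Python snippet ranks the features that pull an applicant’s score furthest below a baseline. The toy model, weights, and feature names are invented for illustration and bear no relation to any actual reason-code methodology.

```python
# Hypothetical illustration only: deriving candidate "key factors" for an
# adverse action notice from a toy linear scoring model. The weights,
# features, and baseline values are invented for this sketch.

WEIGHTS = {
    "payment_history": 2.0,    # higher is better
    "utilization": -1.5,       # higher utilization lowers the score
    "recent_inquiries": -0.8,  # more inquiries lower the score
}

def key_factors(applicant, baseline, top_n=2):
    """Rank features by how much they pull this applicant's score
    below a baseline (e.g., an average approved applicant)."""
    contributions = {
        f: WEIGHTS[f] * (applicant[f] - baseline[f]) for f in WEIGHTS
    }
    # The most negative contributions are the strongest reasons for denial.
    return sorted(contributions.items(), key=lambda kv: kv[1])[:top_n]

applicant = {"payment_history": 0.4, "utilization": 0.9, "recent_inquiries": 5}
baseline = {"payment_history": 0.9, "utilization": 0.3, "recent_inquiries": 1}

for factor, delta in key_factors(applicant, baseline):
    print(f"adverse factor: {factor} (score impact {delta:+.2f})")
```

The regulators’ point is the converse of this sketch: if a model is so opaque that no such attribution can support an accurate notice, it shouldn’t be used for underwriting.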

Other Risks

Additional regulations apply to AI deployments by businesses regardless of sector. For instance, AI-based employment tools trained or operating with biased data can yield outcomes violating antidiscrimination laws. In addition, state and local laws address various other uses of AI, and companies will need to monitor the array of potential new requirements in the jurisdictions where they operate or otherwise do business.

Further, the Securities and Exchange Commission is watching for misleading claims about deployments of AI technology by public companies, just as state and federal laws prohibit making false or unsupported marketing claims.

When AI deployment goes wrong, it can damage a company in multiple ways beyond government enforcement:

  • Suspension of operations if the errant system supports a mission-critical process
  • Reputational harm
  • Lawsuits from harmed individuals (under tort, contract, or statutory theories), securities class actions from shareholders, and business-to-business suits between customers and suppliers

Protecting Your Company

Because AI deployment risks extend beyond government enforcement, businesses should manage them comprehensively instead of focusing narrowly on compliance. While it may seem daunting, companies without an AI risk-management program should begin creating one now. Waiting will only make the problem harder to tackle as the pool of guidance, regulations, and AI use cases continues to grow.

Fortunately, no company must reinvent the wheel. The National Institute of Standards and Technology offers an acclaimed AI Risk Management Framework and an accompanying playbook. Alternatively, a business could implement ISO/IEC 42001 and ISO/IEC 23894.

When building a program on one of these foundations, consider how your company’s existing privacy policy, code of conduct, and other policies already address—or can be adapted to address—certain risks.

As you progress, try not to let perfect be the enemy of good: An 80% program is far better than nothing. With AI technology evolving so rapidly, there will be plenty of opportunities for revision.

Having a strong risk-management program will go a long way toward enabling your company to reap the benefits of AI and other ADM deployment while stepping around the regulatory and other landmines.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Raqiyyah Pippins is co-leader of Arnold & Porter’s consumer products practice group and the consumer products and retail industry team.

Peter Schildkraut is co-leader of Arnold & Porter’s technology, media, and telecommunications industry team.

Alexis Sabet contributed to this piece.
