New AI Regulations Are Coming. Is Your Organization Ready?

In recent weeks, government bodies — including U.S. financial regulators, the U.S. Federal Trade Commission, and the European Commission — have announced guidelines or proposals for regulating artificial intelligence. Clearly, the regulation of AI is rapidly evolving. But rather than wait for more clarity on what laws and regulations will be implemented, companies can take action now to prepare, because three clear trends are emerging from governments’ recent moves.

Over the last few weeks, regulators and lawmakers around the world have made one thing clear: New laws will soon shape how companies use artificial intelligence (AI). In late March, the five largest federal financial regulators in the United States released a request for information on how banks use AI, signaling that new guidance is coming for the finance sector. Just a few weeks after that, the U.S. Federal Trade Commission (FTC) released an uncharacteristically bold set of guidelines on “truth, fairness, and equity” in AI — defining unfairness, and therefore the illegal use of AI, broadly as any act that “causes more harm than good.”

The European Commission followed suit on April 21, releasing its own proposal for the regulation of AI, which includes fines of up to 6% of a company’s annual revenues for noncompliance — penalties even higher than the fines of up to 4% of global turnover that can be levied under the General Data Protection Regulation (GDPR).

For companies adopting AI, the dilemma is clear: On the one hand, evolving regulatory frameworks will significantly affect their ability to use the technology; on the other, with new laws and proposals still taking shape, it can be hard to know what companies can and should do right now. The good news is that three central trends unite nearly all current and proposed laws on AI, which means there are concrete actions companies can take today to help ensure their systems don’t run afoul of existing and future laws and regulations.

The first is the requirement to conduct assessments of AI risks and to document how those risks have been minimized (and ideally, resolved). A host of regulatory proposals refer to these risk assessments as “algorithmic impact assessments” — also sometimes called “IA for AI” — and they have become increasingly popular across a range of AI and data-protection frameworks.

Indeed, some of these requirements are already in place, such as Virginia’s Consumer Data Protection Act — signed into law last month, it requires assessments for certain types of high-risk algorithms. In the EU, the GDPR already requires similar impact assessments for high-risk processing of personal data. (The UK’s Information Commissioner’s Office maintains plain-language guidance on how to conduct these assessments on its website.)

Unsurprisingly, impact assessments also form a central part of the EU’s new proposal on AI regulation, which requires an eight-part technical document for high-risk AI systems that outlines “the foreseeable unintended outcomes and sources of risks” of each AI system, along with a risk-management plan designed to address such risks. The EU proposal should be familiar to U.S. lawmakers — it aligns with the impact assessments required by the Algorithmic Accountability Act, a bill introduced in both chambers of Congress in 2019. Although that bill languished on both floors, it would have mandated similar reviews of the costs and benefits of AI systems in light of their risks. The bill continues to enjoy broad support in both the research and policy communities, and Senator Ron Wyden (D-Oregon), one of its cosponsors, reportedly plans to reintroduce it in the coming months.

While the specific requirements for impact assessments differ across these frameworks, all of them share a two-part structure: a clear description of the risks generated by each AI system, and a clear description of how each individual risk has been addressed. Ensuring that documentation exists for each AI system and captures both elements is a clear way to prepare for compliance with new and evolving laws.
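To make that two-part structure concrete, here is a minimal sketch of how such documentation might be represented in code. It is purely illustrative: the class names, fields, and the example system and risk are hypothetical, not a format prescribed by any of the frameworks discussed above.

    # Hypothetical risk register pairing each identified risk with the step
    # taken to address it -- the two-part structure described above.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class RiskEntry:
        description: str            # the foreseeable risk or unintended outcome
        mitigation: str             # how this individual risk has been addressed
        residual_risk: str = "low"  # what, if anything, remains after mitigation

    @dataclass
    class ImpactAssessment:
        system_name: str
        intended_use: str
        risks: List[RiskEntry] = field(default_factory=list)

    assessment = ImpactAssessment(
        system_name="loan-screening-model",  # hypothetical system
        intended_use="rank loan applications for manual review",
        risks=[
            RiskEntry(
                description="Scores may disadvantage applicants in protected groups",
                mitigation="Disparate-impact testing before release, repeated quarterly",
            )
        ],
    )

However the information is ultimately stored, the point is that every documented risk is paired with a documented mitigation that an independent reviewer can verify.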

The second trend is accountability and independence, which, at a high level, requires both that each AI system be tested for risks and that the data scientists, lawyers, and others evaluating the AI have different incentives than those of the frontline data scientists. In some cases, this simply means that AI be tested and validated by different technical personnel than those who originally developed it; in other cases (especially higher-risk systems), organizations may seek to hire outside experts to be involved in these assessments to demonstrate full accountability and independence. (Full disclosure: bnh.ai, the law firm that I run, is frequently asked to perform this role.) Either way, ensuring that clear processes create independence between the developers and those evaluating the systems for risk is a central component of nearly all new regulatory frameworks on AI.

The FTC has been vocal on exactly this point for years. In its April 19 guidelines, it recommended that companies “embrace” accountability and independence and commended the use of transparency frameworks, independent standards, independent audits, and opening data or source code to outside inspection. (This recommendation echoed similar points on accountability the agency made publicly in April of last year.)

The last trend is the need for continuous review of AI systems, even after impact assessments and independent reviews have taken place. This makes sense: because AI systems are brittle and subject to high rates of failure, their risks inevitably grow and change over time, which means that AI risks are never fully mitigated at a single point in time.

For this reason, lawmakers and regulators alike are sending the message that risk management is a continual process. In the eight-part documentation template for AI systems in the new EU proposal, an entire section is devoted to describing “the system in place to evaluate the AI system performance in the post-market phase” — in other words, how the AI will be continuously monitored once it’s deployed.

For companies adopting AI, this means that auditing and review of AI systems should occur regularly, ideally within a structured process that ensures the highest-risk deployments are monitored most thoroughly. Documenting the details of that process — who performs the review, on what timeline, and which parties are responsible for acting on the findings — is a central aspect of complying with these new regulations.
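As a purely hypothetical illustration, the sketch below records the details just mentioned: who reviews a system, on what timeline, and who owns remediation. The names, risk tiers, and 90-day interval are assumptions made for the example, not requirements drawn from any specific regulation.

    # Hypothetical continuous-review record: who performs the review, on what
    # timeline, and which party is responsible for acting on the findings.
    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class ReviewSchedule:
        system_name: str
        risk_tier: str      # e.g., "high" systems get the shortest interval
        reviewer: str       # independent of the team that built the system
        owner: str          # party responsible for remediation
        interval_days: int
        last_review: date

        def next_review(self) -> date:
            return self.last_review + timedelta(days=self.interval_days)

    schedule = ReviewSchedule(
        system_name="loan-screening-model",
        risk_tier="high",
        reviewer="model validation team",
        owner="head of consumer lending",
        interval_days=90,
        last_review=date(2021, 4, 1),
    )
    print(schedule.next_review())  # 2021-06-30

The particular data structure matters far less than the two choices it encodes: the review cadence scales with risk, and the reviewer is independent of the team responsible for building and running the system.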

Will regulators converge on other approaches to managing AI risks outside of these three trends? Surely.

There are a host of ways to regulate AI systems — from explainability requirements for complex algorithms to strict limitations for how certain AI systems can be deployed (e.g., outright banning certain use cases such as the bans on facial recognition that have been proposed in various jurisdictions throughout the world).

Indeed, lawmakers and regulators have still not even arrived at a broad consensus on what “AI” itself is, a clear prerequisite for developing a common standard to govern it. Some definitions, for example, are tailored so narrowly that they apply only to sophisticated uses of machine learning, which are relatively new to the commercial world; other definitions (such as the one in the recent EU proposal) appear to cover nearly any software system involved in decision-making, which would apply to systems that have been in place for decades. Diverging definitions of artificial intelligence are just one sign among many that we are still in the early stages of global efforts to regulate AI.

But even in these early days, the ways that governments are approaching the issue of AI risk have clear commonalities, meaning that the standards for regulating AI are already becoming clear. So organizations adopting AI right now — and those seeking to ensure their existing AI remains compliant — need not wait to start preparing.
