To Spur Growth in AI, We Need a New Approach to Legal Liability

The existing liability system in the United States and other countries can’t handle the risks related to AI. That’s a problem, because it will slow AI innovation and adoption. The answer is to revamp the system: revising standards of care; changing who compensates injured parties when inevitable accidents occur, through insurance and indemnity; changing default liability rules; creating new adjudicators; and crafting regulations that prevent mistakes and exempt certain kinds of liability.

Artificial intelligence (AI) is sweeping through industries ranging from cybersecurity to environmental protection, and the Covid-19 pandemic has only accelerated this trend. AI may improve the lives of millions, but it will also inevitably cause accidents that injure people or other parties; indeed, it already has, through incidents like autonomous-vehicle crashes. The outdated liability system in the United States and other countries, however, cannot manage these risks, and that is a problem because unmanaged liability risk can impede AI innovation and adoption. It is therefore crucial that we reform the liability system. Doing so will help speed AI innovation and adoption.

Misallocated liability can hamper innovation in several ways. All else being equal, an AI designer looking to implement a system in one of two industries will avoid the industry that places more liability on the designer. Similarly, the end users of an AI system will resist adopting it if the algorithm exposes them to additional liability risk without offsetting compensation. Liability reforms are needed to address these issues. Many of the changes we advocate involve rebalancing liability among the players, shifting it from end users (physicians, drivers, and other consumers of AI) to more upstream actors (e.g., designers and manufacturers).

We discuss these reforms in order of ease of implementation, from easiest to most difficult. Although we focus on the U.S. liability system, the principles underlying our recommendations can be applied in many countries. Indeed, ignoring liability anywhere could lead both to unsafe deployment of AI and to hampered innovation.

Revising Standards of Care

One of the most straightforward ways an industry can change its liability burden is to adopt and revise standards of care: the conduct that the law or other professional standards require in response to a particular situation. These effects are strongest in medicine, law, and other professions. To the extent that current industry players have the expertise and foresight about their own industry and how AI fits into it, changing the standard of care can make the difference between fighting AI and facilitating it.

For instance, instead of the radiologist providing the only read of an image, an AI system can provide an initial read, with the radiologist providing a subsequent secondary read. Once this becomes a standard of care in radiology practice, the potential liability burden of AI falls less heavily on an individual physician who complies with that standard. As AI gains a larger foothold in medical practice, clinicians and health systems, acting together, can facilitate the safe introduction of AI by integrating it into their standards of care.
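To make the workflow concrete, here is a minimal sketch of such an AI-first, radiologist-second double read. It is purely illustrative: the function names, model score, and review logic are assumptions, not a description of any deployed radiology system.

```python
# Hypothetical AI-first, radiologist-second double read (illustrative only).
from dataclasses import dataclass

@dataclass
class Read:
    finding: str        # e.g., "nodule detected" or "no finding"
    confidence: float   # 0.0 to 1.0
    reader: str         # "ai" or "radiologist"

def ai_initial_read(image) -> Read:
    """Stand-in for an AI model's first-pass read of the image."""
    score = 0.87  # placeholder for a real model's output
    finding = "nodule detected" if score > 0.5 else "no finding"
    return Read(finding, score, "ai")

def radiologist_final_read(image, ai_read: Read) -> Read:
    """The physician reviews the AI read and issues the controlling report.

    Under the standard of care described above, the physician's exposure
    turns on whether this secondary review was performed, not on the AI's
    raw output alone.
    """
    confirmed = True  # placeholder for the physician's independent judgment
    finding = ai_read.finding if confirmed else "no finding"
    return Read(finding, 1.0, "radiologist")

image = object()  # stand-in for pixel data
final_report = radiologist_final_read(image, ai_initial_read(image))
print(final_report)
```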

Changing Who Pays: Insurance and Indemnity

Insurance and indemnity offer other solutions to rebalance liability. These two concepts are related but distinct. Insurance allows many policyholders to pool resources to protect themselves. Indemnity allows two or more parties to define, divide, and distribute liability in a contract, essentially shifting liability among themselves. Both allow AI stakeholders to negotiate directly with each other to sidestep liability rules.

Insurers make it their business to understand every nuance of the industry to which they provide protection. Indeed, they often are exposed to the best — and the worst — practices of a particular field. Because of their data gathering, insurers could mandate practices such as AI testing requirements and bans on particular algorithms. These can shift over time as an industry develops and the insurers gather data.

Indeed, some automobile insurers have already sponsored data-gathering efforts for new AI technologies such as autonomous-vehicle-guidance software. Insurers could reward users with lower rates for selecting certain more-effective AI programs, just as insurers already reward drivers for selecting safer cars and avoiding accidents. Thus, insurers would facilitate AI adoption through two methods: 1) blunting liability costs by spreading the risk across all policyholders, and 2) developing best practices for companies looking to use AI.

Indemnity, on the other hand, could provide some liability certainty between two parties. Indemnity clauses have already been used to apportion liability in clinical trials between health systems and pharmaceutical or device companies.

Revamping the Rules: Changing Liability Defaults

Insurance and indemnity take the current liability system and allow participants to tinker around its edges. But AI may necessitate more than just tinkering; it may require changing default liability rules. For example, the default rule for automobile accidents in most states is that the driver who rear-ends another car is liable for the accident. In a world where “self-driving” cars are intermixed with human-driven cars, that rule may no longer make sense. An AI system could be programmed to protect a car’s occupants from such liability and may thus try to swerve into another lane, even into a more dangerous situation (a lane with debris, for instance).

Who is responsible when we trust an AI to overrule a human? Traditional liability assumes that people cause accidents, so traditional default rules of liability need to be altered. Courts can do some of the work as they decide cases arising from accidents, but legislatures and regulators may need to craft new default rules to contend with AI accidents. These can be blunt but clear, such as attributing any AI error to the user, or more nuanced, like apportioning liability beforehand between a user and a designer.

Creating New Adjudicators: Special Courts and Liability Systems

Liability for an injury may also be difficult for traditional mechanisms to handle because of large data sets, specialized processing, or niche technical concerns. One solution is to route disputes over certain types of algorithms, industries, or accidents to specialized tribunals, which can exempt certain activities from ordinary liability, simplify the issues, and channel cases into one place. One could imagine a specialized tribunal that develops the skill to adjudicate, say, pathology algorithms or accidents that result from two algorithms interacting.

At its best, a tribunal system funnels disputes into simpler processes supported by taxpayers or user fees, with specialized adjudicators, simpler rules, and (one hopes) lower transaction costs than the current legal system. And specialized adjudication can coexist with a traditional liability scheme.

Florida and Virginia have built a specialized adjudication system for certain neonatal neurologic injuries. The U.S. federal government has established its countermeasures program to provide compensation to those injured by drugs and devices used to combat public health emergencies, a system many may come to experience due to the Covid-19 pandemic. And outside of health care, many states provide workers’ compensation benefits that are determined outside the formal court system.

Ending Liability Completely: Total Regulatory Schemes

Even the drastic solution of pulling some disputes out of the traditional liability system may not be enough. For instance, some AI applications may be deemed so critical that we will attempt to prevent mistakes, and exempt liability, through a comprehensive web of regulations. AI systems that regulate the transmission of power across states, guide airplanes to land, or perform other critical functions could be exempted from liability entirely by a comprehensive regulatory scheme that preempts tort actions.

Regulation may be the modality best suited to a “black-box” algorithm: a constantly updating algorithm that is generated by a computer learning directly from data rather than by humans specifying inputs. To account for external factors that change after training, black-box algorithms continuously refine their predictions with ever more data in order to improve their accuracy. However, the precise identity and weighting of the variables they rely on cannot be determined. No one, whether the user, the designer, or the injured party, can “look under the hood” and know how a black-box algorithm came to a particular determination. This opacity may make regulation that governs the development, testing, or implementation of the black box a better fit than a court case every time an injury arises.
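As a rough illustration (assuming scikit-learn and NumPy, which the article itself does not mention), the sketch below trains a small neural network on a stream of batches. Predictions are easy to obtain and keep improving as more data arrives, but the learned parameters are just thousands of numeric weights that neither the user nor the designer can translate into how any individual variable was weighed.

```python
# Illustrative sketch of a "black-box" model that keeps updating as new data
# arrives and whose parameters carry no human-readable explanation.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
model = MLPClassifier(hidden_layer_sizes=(32, 32), random_state=0)
classes = np.array([0, 1])

# Simulate a stream of new examples; the model refines itself on each batch.
for batch in range(5):
    X = rng.normal(size=(200, 10))            # 10 input variables per example
    y = (X[:, 0] * X[:, 3] > 0).astype(int)   # hidden interaction to be learned
    model.partial_fit(X, y, classes=classes)

# Predictions are easy to obtain...
x_new = rng.normal(size=(1, 10))
print("prediction:", model.predict(x_new))

# ...but the "explanation" is only a pile of numeric weights: nothing here
# states how any individual variable was identified or weighed.
print("number of learned weights:", sum(w.size for w in model.coefs_))
```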

Granted, a regulatory scheme that attempts to specify an AI system completely will almost certainly hamper innovation. But those costs may be acceptable in particular areas such as drug development, where comprehensive Food and Drug Administration regulatory schemes can replace liability completely.

Given the tremendous innovation engendered by AI, it is often easy to ignore liability concerns until an offering makes it to market. Policymakers, designers, and end users of AI should develop a balanced liability system to facilitate AI, rather than merely react to it. Building this 21st-century liability system will ensure that 21st-century AI flourishes.
