Proposed EU regulations aim to restrict AI use based on its risk to public safety or liberty


In context: As artificial intelligence systems become more ubiquitous, the need to regulate their use grows more apparent. We have already seen how systems like facial recognition can be unreliable at best and biased at worst, and how governments can misuse AI to impinge on individual rights. The European Union is now considering formal regulation of AI use.

On Wednesday, the European Commission proposed regulations that would restrict and guide how companies, organizations, and government agencies use artificial intelligence systems. If approved, it would be the first formal legislation governing AI usage. The EC says the rules are necessary to safeguard "the fundamental rights of people and businesses." The legal framework would consist of four tiers of regulation.

The first tier covers AI systems deemed an "unacceptable risk": algorithms considered a "clear threat to safety, livelihoods, and rights of people." The law would outright ban applications like China's social scoring system, or any others designed to modify human behavior.


The second tier consists of AI technology considered "high risk." The EC's definition of high-risk applications is broad, covering a wide range of software, some of which is already in use. AI-powered law enforcement software that may interfere with human rights would be strictly controlled; facial recognition is one example. In fact, all remote biometric identification systems fall into this category.

These systems would be highly regulated, requiring high-quality training datasets, activity logs to trace back results, detailed documentation, and "appropriate human oversight," among other things. The European Union would forbid the use of most of these applications in public areas, though the rules would include concessions for matters of national security.

The third tier is "limited risk" AI. This mainly covers chatbots and personal assistants such as Google's Duplex. These systems must be transparent enough that users can identify them as non-human, and the end user must be allowed to decide whether to continue interacting with the AI.
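To make that transparency requirement concrete, here is a minimal sketch in Python of what a compliant chatbot session might look like: the bot discloses its non-human nature up front and lets the user opt out. The proposal does not prescribe any particular implementation, so the wording and function names below are purely illustrative assumptions.

```python
# Illustrative sketch only: the "limited risk" tier requires that users can
# identify the system as non-human and choose whether to interact with it.

def start_session() -> bool:
    """Disclose the AI's non-human nature and ask the user to opt in."""
    print("Notice: you are chatting with an automated assistant, not a human.")
    answer = input("Continue with the AI assistant? [y/n] ").strip().lower()
    return answer == "y"

if __name__ == "__main__":
    if start_session():
        print("Great, how can I help you today?")
    else:
        # A real deployment might route the user to a human agent here.
        print("Okay, connecting you to a human agent instead.")
```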

Finally, there are programs considered "minimal risk": AI software that poses little to no harm to human safety or freedoms. Email filtering algorithms or AI used in video games, for example, would be exempt from regulation.
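Taken together, the four tiers can be summarized in a short sketch. The tier names and their consequences come from the proposal as described above; the example applications and the mapping itself are illustrative assumptions, not the Commission's wording.

```python
# A rough model of the proposal's four risk tiers. The tier labels follow
# the article; the example applications are hypothetical illustrations.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"  # e.g. social scoring; outright banned
    HIGH = "high risk"                  # e.g. remote biometric identification
    LIMITED = "limited risk"            # e.g. chatbots; transparency required
    MINIMAL = "minimal risk"            # e.g. spam filters; exempt

EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "facial recognition": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for app, tier in EXAMPLES.items():
    print(f"{app}: {tier.value}")
```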

Enforcement would consist of fines of up to six percent of a company's global annual turnover. However, it could take years for anything to go into effect as EU member states debate and hammer out the details.
