EU proposal for ‘Regulation on Artificial Intelligence’


The European Union (EU) has proposed allowing biometric identification systems to be used only in limited cases, such as to prevent a terrorist threat, find missing children, prosecute a serious crime or identify a suspect.

The European Commission announced the world’s first legislative proposal setting out a framework of rules on artificial intelligence. Under the proposal, artificial intelligence systems are divided into four main groups: “unacceptable risk”, “high risk”, “limited risk” and “minimal risk”.

AI systems considered a clear threat to people’s safety, livelihoods and rights fall into the unacceptable-risk group, and their use is banned.

Applications that manipulate human behavior, circumvent users’ free will or enable social scoring by governments are placed in the “unacceptable risk” group.

The high-risk group covers AI used in critical infrastructure such as transportation, in education, in robot-assisted surgery, in CV screening during recruitment, in credit scoring, in assessing the reliability of evidence, in migration, asylum and border management (including the verification of travel documents), and in the administration of justice and democratic processes.

Artificial intelligence systems in this group are subject to strict obligations before they can be put on the market.

These systems must not produce discriminatory results, their outputs must be traceable, and they must remain under adequate human oversight.

All remote biometric identification systems are included in the high-risk group and are subject to stringent conditions.

Use permitted in specific situations

Law enforcement authorities will be able to use biometric identification systems in public spaces only in specific cases, such as preventing a terrorist threat, searching for missing children, prosecuting a serious crime or identifying a suspect. Such uses will be strictly limited and subject to authorization by a judicial or other independent body.

Artificial intelligence systems in the limited-risk group will be subject to specific transparency obligations. Users must be made aware that they are interacting with a machine, for example when talking to chatbots in this group, so that they can make informed decisions.

Applications such as AI-powered video games or spam filters fall into the minimal-risk group. Artificial intelligence systems in this group, which pose minimal or no risk to citizens’ rights or safety, face no intervention under the proposal.
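As a purely illustrative aid rather than anything contained in the proposal itself, the following minimal Python sketch maps the example applications named in this article to the four risk tiers it describes; the RiskTier enum, the EXAMPLE_APPLICATIONS table and the classify helper are hypothetical names introduced here for illustration only.

```python
# Illustrative sketch only: the tier names and example applications come from
# this article's summary of the EU proposal; RiskTier, EXAMPLE_APPLICATIONS
# and classify are hypothetical names, not part of any official text or library.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"  # use banned outright
    HIGH = "high risk"                  # strict pre-market obligations
    LIMITED = "limited risk"            # transparency obligations
    MINIMAL = "minimal risk"            # no intervention


# Example applications mentioned in the article, mapped to the tier
# the proposal places them in.
EXAMPLE_APPLICATIONS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "behavior-manipulating application": RiskTier.UNACCEPTABLE,
    "CV screening in recruitment": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "remote biometric identification": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "AI-powered video game": RiskTier.MINIMAL,
    "spam filter": RiskTier.MINIMAL,
}


def classify(application: str) -> RiskTier:
    """Look up the risk tier of a known example application."""
    return EXAMPLE_APPLICATIONS[application]


if __name__ == "__main__":
    for app, tier in EXAMPLE_APPLICATIONS.items():
        print(f"{app}: {tier.value}")
```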

Supervisory responsibility and enforcement authority in the field of artificial intelligence will rest with national authorities, while a European Artificial Intelligence Board will be established to develop practices and standards in the field.

Approval by the European Parliament (EP) and the member states is required for the proposed regulation to take effect.

Once the regulation becomes law, it will also restrict the activities of US technology companies operating in EU countries.