EU takes ambitious lead in regulating AI

Omar Adan

Global Courant

The word ‘risk’ is often seen in the same sentence as ‘artificial intelligence’ these days. While it is encouraging to see global leaders reflecting on the potential problems of AI, along with its industrial and strategic benefits, we should not forget that not all risks are created equal.

On Wednesday, June 14, the European Parliament voted to approve its own draft proposal for the AI Act, a piece of legislation two years in the making, with the ambition to shape global standards in the regulation of AI.

After a final phase of negotiations to reconcile the separate drafts of the European Parliament, Commission and Council, the law should be approved before the end of the year. It will be the first legislation in the world dedicated to regulating AI in almost all sectors of society, although defense is exempt.


Of all the ways AI regulation could be approached, it is worth noting that this legislation is entirely based on the concept of risk. It is not AI itself that is regulated, but rather the way it is used in specific domains of society, each of which poses different potential problems. The four risk categories, for which different legal obligations apply, are: unacceptable, high, limited and minimal.

Systems deemed to threaten fundamental rights or EU values will be categorized as an “unacceptable risk” and banned. One example of such a risk is AI systems used for “predictive policing”: using AI to make risk assessments of individuals, based on personal information, in order to predict whether they are likely to commit crimes.

A more controversial case is the use of facial recognition technology on live street camera feeds. This has also been added to the list of unacceptable risks and would only be allowed after the commission of a crime and with court authorization.

Those systems classified as “high risk” will be subject to disclosure obligations and are expected to be registered in a special database. They will also be subject to different monitoring or auditing requirements.

Among the types of applications classified as high risk are AI systems that control access to services in education, employment, finance, healthcare and other critical areas. The use of AI in such areas is not seen as undesirable, but oversight is essential because of its potential to negatively affect safety or fundamental rights.


The idea is that we should be able to trust that any software that makes decisions about our mortgage is carefully checked for compliance with European law to ensure we are not discriminated against on the basis of protected characteristics such as gender or ethnic background – at least if we live in the EU.

AI systems with a “limited risk” will be subject to minimal transparency requirements. Similarly, operators of generative AI systems — bots that produce text or images, for example — will have to disclose that users are interacting with a machine.

During the long journey through the European institutions, which started in 2019, the legislation has become increasingly specific and explicit about the potential risks of deploying AI in sensitive situations – and how to monitor and mitigate them. There is still a lot of work to be done, but the idea is clear: we have to be specific if we want to get things done.


Risk of extinction?

By contrast, we have recently seen petitions calling for the mitigation of a supposed “risk of extinction” posed by AI, without giving any further details. Several politicians have echoed these views. This generic, very long-term risk is quite different from what the AI Act addresses, because it gives no detail about what we should watch out for, nor what we should do now to protect against it.

If ‘risk’ is the ‘expected damage’ that could result from something, then we would do well to focus on possible scenarios that are both harmful and probable because they carry the greatest risk. Highly improbable events, such as an asteroid collision, should not take precedence over more probable events, such as the effects of pollution.
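To make that comparison concrete, one common reading of risk is probability multiplied by the severity of the harm. The short sketch below uses purely hypothetical figures, which come from no real assessment and are not taken from this article, simply to show how a probable, moderate harm can carry more expected damage than a far less likely but more severe one.

```python
# A minimal sketch of risk as expected damage: probability x severity.
# All figures are hypothetical placeholders chosen for illustration only;
# they are not estimates from the article or from any real risk assessment.

def expected_damage(probability: float, severity: float) -> float:
    """Expected damage of a scenario: probability of it occurring
    multiplied by the damage it would cause if it did."""
    return probability * severity

scenarios = {
    "asteroid collision (very improbable, severe)": (1e-8, 1_000_000.0),
    "effects of pollution (probable, moderate)": (0.9, 1_000.0),
}

for name, (p, severity) in scenarios.items():
    print(f"{name}: expected damage = {expected_damage(p, severity):g}")
```

On these made-up numbers the probable, moderate scenario dominates, which is the point of the paragraph above; different assumed figures would of course change the comparison.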

Generic risks, such as the potential for human extinction, are not mentioned in the law. Photo: Zsolt Biczo / Shutterstock via The Conversation

In that sense, the bill just passed by the EU Parliament has less flash but more substance than some of the recent warnings about AI. It tries to walk the fine line between protecting rights and values and not hindering innovation, and it specifically addresses both dangers and remedies. While far from perfect, it at least offers concrete actions.

The next stage in the journey of this legislation will be the trilogues – three-way dialogues – where the separate drafts of the Parliament, Commission and Council are merged into a final text. Compromises are expected at this stage. The resulting law will be put to a vote, probably at the end of 2023, before the campaign for the next European elections starts.

After two or three years, the law will come into force, and every company operating within the EU will have to comply. This long timeline does raise some questions, because we don’t know what AI, or the world, will look like in 2027.

Let us not forget that the President of the European Commission, Ursula von der Leyen, first proposed this regulation in the summer of 2019, just before a pandemic, a war and an energy crisis. This was also before ChatGPT had politicians and the media regularly talking about an existential risk from AI.

However, the law is so broadly written that it may remain relevant for a while. It will potentially affect how researchers and companies approach AI outside Europe.

What is clear, however, is that every technology carries risks, and rather than waiting for something negative to happen, academic and policy institutions are trying to think ahead about the implications of research. Compared to the way we adopted previous technologies, such as fossil fuels, this represents some progress.

Nello Cristianini is Professor of Artificial Intelligence at the University of Bath.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
