
Artificial intelligence: a European law is increasingly urgent

As fascinating as it is frightening, artificial intelligence is one of the most controversial fields of technology and a potential danger to humanity, yet it could also open unknown frontiers and bring great benefits and breakthroughs to society. It all depends on how it is used.

Europe is aware of this, as are the other large developed economies, and is therefore working on a body of rules that would allow the advantages of AI to be exploited without the risk of it taking over from humans.

Here is a closer look at the fields in which AI could bring benefits to humanity.

A regulation for Europe

While work continues on the drafting of a definitive regulation, which cannot reasonably be ready for another year or two, the idea is gaining ground of adopting some provisional rules, grounded in the fundamental values of humanity, in order to keep pace with the tumultuous speed of technological development.

The Artificial Intelligence Act (AI Act), recently approved by the European Parliament, is the first legislation in the world on artificial intelligence and covers topics such as facial recognition in public places, biometric identification systems and a ban on emotion-recognition software. The highly complex legislation will not be ready before the end of the year, after which the approval process will begin.

The need for “temporary” rules

Competition Commissioner Margrethe Vestager also spoke recently about "temporary" rules involving European states on a voluntary basis, as well as the international community, following a meeting with Google's chief executive, Sundar Pichai, who travelled to Brussels to commit to complying with all European rules – from the one on the processing of personal data to those on digital services – and to fighting disinformation.

"There's a shared sense of urgency. To make the most of this technology, barriers are needed," explained Vice-President Vestager, referring to the negotiations under way at the G7.

The G7 in Hiroshima

The latest G7 summit, held in Japan, in fact discussed the idea of drawing up technical rules or standards to set temporary boundaries for AI and ensure that it is "reliable".

The Seven, meeting in Hiroshima, agreed that governance has failed to keep pace with the development of the technology, and that it is necessary that "artificial intelligence systems are accurate, reliable, safe and non-discriminatory, regardless of their origin" and, above all, "in line with our shared democratic values".

"We recognize the need to immediately take stock of the opportunities and challenges of generative AI," reads the final communiqué of the G7 in Hiroshima, which set up a working group with the competent ministers to investigate the risks associated with generative AI by the end of this year.

From opportunities to risks

While on the one hand artificial intelligence can be seen as a source of progress, welfare and work, on the other it poses serious risks to mankind, especially when used to manipulate people.

AI could be a source of employment, since it requires qualified personnel for the development and management of the technology, but it could also be a cause of unemployment if the associated productivity gains were used to lay off existing staff.

Even so-called AI experts have warned of the risks inherent in this technology. Recently, the eclectic Elon Musk also raised the alarm on AI, urging a six-month pause in the development of the most powerful systems.

Sam Altman, CEO of OpenAI and father of the revolutionary ChatGPT, has also admitted that, over the next ten years, "AI systems will surpass the level of expertise of experts in most sectors" and that "superintelligence will be more powerful than other technologies that humanity has had to deal with in the past".

At the same time, however, Altman has warned that OpenAI is considering leaving Europe if the rules prove too stringent.
