Artificial intelligence is entering our daily lives through connected accessories such as watches, and even home automation. These AIs seek to learn ever more about us to make our lives easier, but they can do so at the expense of sometimes highly sensitive personal data. To keep them in check, Europe is working to strengthen the GDPR specifically for AI.
While the European Day dedicated to the protection of personal data seeks to raise public awareness so that people can take control of their private data on the Web, the way that data is handled may be on the verge of a revolution: the dramatic arrival of powerful artificial intelligences (AI) such as ChatGPT. While these AIs can impress with their rhetorical talents, they will also increasingly be used to process the many pieces of personal data handed over to platforms, often without users realizing it. From this standpoint, AI is not without risks.
For this reason, Europe wants to supplement its General Data Protection Regulation (GDPR) with a set of harmonized rules on the use of AI. This AI is now everywhere: worn on the wrist night and day in connected watches and bracelets that collect health data and can even detect certain pathologies. Yet consumers are not always aware that asking a conversational tool personal questions, of a medical nature for example, means handing the companies behind that AI sensitive information that could be exploited for commercial purposes. Nor is this the only concern: artificial intelligence involves many actors, whether the developer, the supplier, the importer, the distributor or the user. This chain remains rather opaque to the consumer, making it difficult to know who actually has access to personal data and who would be liable if problems arise.
Better information on AI algorithms
As the use of these AIs proliferates, the risk of leaks or loss of control over personal data grows as well. To protect themselves, consumers should therefore find out which company collects their data and what its policy is for processing that personal information. This is not always easy, even though some players in the sector are more virtuous than others. This is notably the case of Apple, which positions itself as a champion of data confidentiality, for example by requiring application developers to request consent before collecting data.
To better protect users, the European Union has therefore proposed three texts: a regulatory framework on artificial intelligence, a directive on AI liability, and a directive on product liability. Among these additional rules, the EU wants to oblige digital giants, platforms and social networks, for example, to better inform users about their algorithms. To enforce this, the texts provide for significant sanctions, ranging from 10 to 30 million euros or 2 to 4% of turnover in the event of a breach of these new obligations. It now remains for the institution to adopt these texts quickly enough, before AIs take even more liberties.