EU legislators are currently negotiating an Artificial Intelligence Act, but it could take up to three years before the law is fully implemented.
The European Commission and Google have committed to creating a Voluntary Agreement for Artificial Intelligence to mitigate the most serious risks associated with this rapidly developing technology, until appropriate legislation is enacted.
The pledge was announced after Google CEO Sundar Pichai met with several European commissioners during a visit to Brussels, where the topic of AI featured prominently in the conversation.
“We expect technology in Europe to respect all our rules, on data protection, online security and artificial intelligence. In Europe, it’s not pick-and-choose,” Thierry Breton, European commissioner for the internal market, said on Wednesday, according to a brief read-out of the meeting.
“Sundar and I agreed that we cannot wait until AI regulation is actually implemented, and work together with all AI developers to develop an AI treaty on a voluntary basis well before the legal deadline.”
Breton said the voluntary agreement, the specific details of which are still unclear, would include “all major” companies working in the AI sector, both from Europe and outside.
Google did not immediately respond to a request for comment.
Although AI has long been on Brussels’ policy radar, the market explosion of ChatGPT, a chatbot developed by OpenAI, has shaken up the debate and put so-called foundation models under the microscope.
Foundation models are those trained on vast repositories of data, such as text, images, music, speech and code, with the goal of performing an ever-expanding set of tasks rather than a single, fixed objective.
Chatbots such as OpenAI’s GPT and Google’s Bard are some early examples of this technology, which is expected to develop further in the coming years.
While investors have happily jumped on chatbots, critics have decried their unchecked growth, raising alarms about bias, hate speech, fake news, state propaganda, impersonation, IP infringement and labor redundancy.
ChatGPT was temporarily banned in Italy after authorities there raised data privacy concerns.
A preamble to legislation
In Brussels, a sense of urgency has spread as a result of the chatbot phenomenon.
EU legislators are currently negotiating the Artificial Intelligence Act, the world’s first attempt to regulate the technology based on a human-centred approach, which divides AI systems into four categories according to the risk they pose to society.
The act was proposed by the European Commission two years ago and is being amended to reflect the latest developments, such as the rapid rise of foundation models.
Negotiations between the European Parliament and member states are due to conclude before the end of the year.
However, the law includes a grace period to allow tech companies to adapt to the new legal framework, meaning it could take up to three years for the act to be fully implemented across the bloc.
The newly announced treaty is meant to serve as a prelude and fill a legislative void, even if its voluntary nature will inevitably limit its reach and effectiveness.
Speaking to MEPs after his meeting with Pichai, Commissioner Breton defended the need for an intermediate rulebook containing a “broad outline” of the AI Act.
“I already have a general vision of what can be kept in anticipation and that can allow us to have some element of security,” Breton told the parliamentary committee, referring to the possibility of “labelling” AI systems.
“We have to manage exigencies, but we also must not slow down innovation, so we have to find the means, the right means, and we also have to be firm on some elements that have to be overseen, and some that have to be anticipated, within the limits of the AI Act.”