AI models don’t comply with EU’s AI Act, according to Stanford study

Tech updates · solo · 11 July 2023

A recent Stanford study revealed that most AI models, including Google’s PaLM 2 and OpenAI’s GPT-4, do not comply with the EU AI Act.

A new study has found that the artificial intelligence (AI) models of leading tech companies do not comply with the requirements of the upcoming EU AI Act.

The EU has been actively working on establishing comprehensive rules governing AI technologies and has been developing the AI Act for the past two years.

The act was recently voted on in the European Parliament, receiving overwhelming support with 499 votes in favor, 28 votes against and 93 abstentions.

The legislation is set to impose clear obligations on foundation model providers such as OpenAI and Google, in an effort to regulate the use of AI and limit the dangers of the new technology.

However, as AI systems have become widely accessible, legislators have struggled to keep pace with the technology’s rapid development, and the AI Act has come to the fore again with a more pressing need to regulate.

The study, conducted by researchers at Stanford University’s Center for Research on Foundation Models (CRFM), focuses on the European Parliament’s version of the Act.

Of the 22 requirements directed to foundation model providers, the researchers selected 12 requirements that could be assessed using publicly available information.

These requirements were grouped into four categories: data resources, compute resources, the model itself, and deployment practices.

To evaluate compliance, the researchers devised a 5-point rubric for each of the 12 requirements. Their assessment involved examining 10 leading model providers, including OpenAI, Google, and Meta, and assigning a score from 0 to 4 based on their adherence to each requirement.

The study revealed a significant discrepancy in compliance levels, with some providers scoring as low as 25 percent. It also revealed that there is a severe lack of transparency among model providers.
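The rubric arithmetic behind figures like "25 percent" can be made concrete. A minimal sketch, using hypothetical scores rather than the study's actual data: with 12 requirements each scored 0 to 4, the maximum total is 48, so a provider's compliance percentage is its total score over 48.

```python
# Illustrative sketch only (not the study's actual data or methodology code):
# converting per-requirement rubric scores (0-4) into a compliance percentage.
MAX_SCORE_PER_REQUIREMENT = 4
NUM_REQUIREMENTS = 12  # the 12 publicly assessable requirements

def compliance_percentage(scores):
    """Total rubric score as a percentage of the maximum possible (12 * 4 = 48)."""
    if len(scores) != NUM_REQUIREMENTS:
        raise ValueError("expected one score per requirement")
    return 100 * sum(scores) / (MAX_SCORE_PER_REQUIREMENT * NUM_REQUIREMENTS)

# A hypothetical provider scoring 1 on every requirement lands at 12/48 = 25%.
print(compliance_percentage([1] * 12))  # 25.0
```

This also shows why a single undisclosed area (a 0 on, say, copyrighted-data status) drags the overall percentage down even when other requirements score well.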

Several areas of non-compliance were identified, including failure to disclose the copyright status of training data, which could play an important role in determining how copyright law applies to AI-generated content.

Furthermore, most providers do not disclose the energy use and emissions associated with model training, nor do they provide transparent methodologies for mitigating potential risks, both of which are important parts of the AI Act.

The research also noted disparities between open and closed AI model providers, with open releases, such as Meta’s LLaMA, offering more comprehensive disclosure of resources than restricted or closed releases such as OpenAI’s GPT-4.

Challenges of complying with the AI Act within the European Union

According to the study, none of the foundation models examined achieved a perfect score, indicating that none of them fully complies with the requirements outlined in the draft AI Act.

While the study notes “considerable room for improvement” for providers to align themselves more closely with the requirements, the high-level obligations established in the AI Act may pose challenges for many companies.

In a recent development, executives from 150 leading companies, including Siemens, Renault and Heineken, expressed their concerns about the tighter regulations in an open letter addressed to the European Commission, the Parliament and the Member States.

“In our assessment, the draft law will put Europe’s competitiveness and technological sovereignty at risk without effectively tackling the challenges we are and will face,” the letter states, as reported by the Financial Times.

The signatories further claim that the proposed rules would impose heavy regulations on foundation models, which would, in turn, burden companies involved in the development and deployment of AI systems.

As a result, they warn that these restrictions may prompt companies to consider leaving the EU, and investors to withdraw their support for AI development in Europe, which could leave the EU behind the United States in the race for AI development.

The study further suggests that there is an urgent need for increased collaboration between policymakers and model providers in the EU to effectively address these gaps and challenges, and to find common ground that ensures the proper implementation and effectiveness of the AI Act.
