One of the “godfathers” of artificial intelligence (AI) has said he feels “lost” as experts warned the technology could lead to the extinction of humanity.
Professor Yoshua Bengio told the BBC that all companies producing AI products should be registered and people working on the technology should receive ethics training.
It comes after dozens of experts put their names to a letter organized by the Center for AI Safety, warning that the technology could wipe out humanity and that the risks should be treated with the same urgency as a pandemic or nuclear war.
This morning, it was also reported that a “fantastic froth” has built up around artificial intelligence (AI) firms as investors scramble to capitalize on the surge in interest in recent months.
The warning came from a City analyst as AI-related stocks hit record highs this week, with US chipmaker Nvidia, a top maker of AI hardware and software, soaring to a $1tn valuation yesterday.
Prof Bengio said: “It is challenging, emotionally speaking, for those who are in (the AI field).
“You can say I feel lost. But you have to keep going and you have to engage, discuss, encourage others to think with you.”
Senior executives from companies such as Google DeepMind and Anthropic signed the letter, along with another AI pioneer, Geoffrey Hinton, who resigned from his job at Google earlier this month, saying that in the wrong hands AI could be used to harm people and could spell the end of humanity.
Experts were already warning that the technology could take jobs from humans, but the new statement raises deeper concerns, saying AI could be used to develop new chemical weapons and enhance aerial warfare.
AI apps like Midjourney and ChatGPT have gone viral on social media, with users posting fake images of celebrities and politicians, and students using ChatGPT and other large language models to produce university-grade essays.
But AI can also perform life-saving tasks: algorithms analyze medical images like X-rays, scans and ultrasounds, helping doctors diagnose diseases such as cancer and heart conditions more quickly and accurately.
Last week Prime Minister Rishi Sunak spoke about the importance of driving innovation as well as ensuring the right “guard rails” to protect against potential threats ranging from disruption and national security to “existential threats”.
He retweeted the statement from the Center for AI Safety on Wednesday, saying: “The government is looking very carefully at this. Last week I stressed to AI companies the importance of putting guardrails in place so that development is safe and secure. But we need to work together. That’s why I raised it at the @G7 and will do so again when I visit the US.”
Prof Bengio told the BBC that all companies making powerful AI products must be registered.
“Governments need to track what they’re doing, they need to be able to audit them, and that’s the bare minimum we do for any other sector, like making airplanes or cars or pharmaceuticals,” he said.
“We also need people who are close to these systems to have some sort of certification … We need ethical training here. Computer scientists generally don’t understand that.”
Prof Bengio said of the current state of AI: “It is never too late to improve.
“It’s just like climate change. We’ve put too much carbon into the atmosphere. And it would have been better if we didn’t do that, but let’s see what we can do now.”
Sir Nigel Shadbolt, president of the London-based Open Data Institute and a professor at Oxford University, told the BBC: “There’s an enormous amount of AI around us right now; it has become almost ubiquitous and unremarked. There’s software in our phones that recognizes our voices and has the ability to recognize faces.
“In fact, if we stop and think about it, we realize there are ethical dilemmas in the use of those technologies. I think what’s different now, with so-called generative AI, things like ChatGPT, is that these are systems that can be specialized from the general to many, many specialized tasks, and the engineering is in some sense ahead of the science.
“We don’t know how to fully explain the consequences of this technology, but we share a common belief that we need to innovate responsibly, that we need to think about the ethical dimension and the values that these systems embody.
“We have to understand that AI can be a great force for good. We have to appreciate that, not assume the worst, (but) there are a lot of existential challenges that we are facing… Our technologies could be equal to the things that weigh us down, whether it’s the climate or other challenges we face.
“But it seems to me that if we think ahead, if we take the steps that people like Yoshua are advocating for, that’s a good first step. It’s a chance to come together and understand that this is a powerful technology that has a dark side and a light side, a yin and a yang, and we need lots of voices in that debate.”