Be afraid. Be very afraid.

Artificial intelligence could one day develop itself without human involvement, with the potential of taking over from humans.

That’s the essence of a message sent by dozens of artificial intelligence (AI) industry executives, academics and celebrity influencers in a brief statement released this week.

The statement, published by the Center for AI Safety, reads, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” It was signed by 350 people, including OpenAI CEO Sam Altman; the so-called “godfather” of AI, Geoffrey Hinton; top executives and researchers from Google DeepMind and Anthropic; Kevin Scott, Microsoft’s chief technology officer; Bruce Schneier, the internet security and cryptography pioneer; climate advocate Bill McKibben; and the musician Grimes, among others, reports CNN.

At its core, the statement highlights the potential danger of AI if left unchecked. While most experts say programmers and developers are a long way from the kind of artificial general intelligence that’s the stuff of science fiction, today’s cutting-edge chatbots largely reproduce patterns based on training data they’ve been fed and do not think for themselves, says CNN.

The rapid growth of OpenAI’s ChatGPT and its developing abilities has set off a high-tech arms race in the tech industry over artificial intelligence, amid which “a growing number of lawmakers, advocacy groups and tech insiders have raised alarms about the potential for a new crop of AI-powered chatbots to spread misinformation and displace jobs,” says CNN.

Hinton, whose work has helped shape today’s AI systems, left his job in early May, telling the news network, “I’m just a scientist who suddenly realized these things are getting smarter than us. I want to sort of blow the whistle and say we should worry seriously about how we stop these things getting control over us.”

Dan Hendrycks, director of the Center for AI Safety, compared Tuesday’s statement to atomic scientists “issuing warnings about the very technologies they’ve created.”

“Societies can manage multiple risks at once; it’s not ‘either/or’ but ‘yes/and,’” Hendrycks tweeted. “From a risk management perspective, just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well.”

Axios says what makes the statement demand attention is that the people who know the technology best are the ones asking to be regulated, warning that “the risk of obliterating humanity isn’t zero.”

Canada’s public broadcaster, CBC, entered the fray, saying recent developments in AI have created tools that supporters say can be used in applications from medical diagnostics to writing legal briefs, but that have sparked fears the technology could lead to privacy violations, powerful misinformation campaigns and machines thinking for themselves.

“There are many ways that AI could go wrong,” said Hendrycks, who believes there is a need to examine which AI tools may be used for generic purposes and which could be used with malicious intent.

AI could one day develop autonomously, with no human input, adds Hendrycks.

“It would be difficult to tell if an AI had a goal different from our own because it could potentially conceal it,” he said. “This is not completely out of the question.”