Elon Musk is ringing major alarm bells about artificial intelligence (AI) and the dangers it presents.

About a decade ago Musk helped found the nonprofit project OpenAI, based on the idea that AI wasn't going away, so it should be open to ensure it was used for good and not evil.

But AI went beyond the non-profit stage and is being used and abused. Just this week, the Western Standard reported on a frightened mother hearing her daughter's voice saying she'd been kidnapped. She hadn't. The voice was generated by AI.

At the other end of the scale was someone generating pornography that appeared to feature real people. It didn't. The people were AI creations.

"It is fundamentally profound in that the smartest creatures, as far as we know on this earth, are humans and intelligence is a defining characteristic," said Musk on the Tucker Carlson Show on Fox.

"So now what happens when something vastly smarter than the smartest person comes along? It's hard to predict, so I think we should be cautious with AI and there should be some government oversight because it's a danger to the public."

Musk spoke of agencies in the US, such as the Food and Drug Administration, the Federal Aviation Administration and the Federal Communications Commission, all designed to provide protection to the public where needed.

"We have these agencies to oversee things that affect the public where they could wreak public harm," said Musk. "AI is perhaps more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production, in the sense it has the potential of civilizational destruction."

Musk is not necessarily a fan of regulation, but he understands its importance.

"It's sort of arduous to be regulated (and) I have a lot of experience with regulated industries, because automotive is highly regulated. You could fill this room with all the regulations that are required for a production car just in the United States," he said. "The same thing is true with rockets. You can't just shoot rockets off, because the FAA oversees that."

AI needs to be taken as a serious threat, said Musk.

"We should have a range of agencies, and I think it needs to start with a group that initially seeks insight into AI, then solicits opinion from industry, and then has proposed rule making. Then those will probably gradually be accepted by the major players in AI, and that makes a better chance of advanced AI being beneficial to humanity in that circumstance," he said.

A fear Musk has is that regulations won't be put into place until "after something terrible has happened."

"If that's the case for AI and we're only putting regulations in after something terrible has happened, it may be too late to actually put the regulations in place," he said. "AI may be in control at that point."

Carlson talked about "the cool parts of artificial intelligence," such as writing college papers and songs: "There's a lot there that's fun and useful. Can you be more precise about what's potentially dangerous and scary, and what specifically are you worried about?"

"The pen is mightier than the sword, so if you have a super intelligent AI that's capable of writing incredibly well and in a way that is very influential, (it's) convincing and it's constantly figuring out what is more convincing to people over time," said Musk.
"Then enter social media, for example Twitter, but also Facebook and others, you know, and it potentially manipulates public opinion in a way that's very bad, and how would we ever know?"

Musk said Microsoft and Google are the "AI heavyweights" and the world needs a third option.

"I think I will create a third option, although I'm starting very late in the game of course," he said. "I don't know (if it can be done), but I'll try to create a third option, and that third option hopefully does more good than harm."

"I'm worried about the fact AI is being trained to be politically correct, which is another way of being untruthful."

"So this will lead to a path to train AI. I'm going to start something which I call TruthGPT, or a maximum truth-seeking AI that tries to understand the nature of the universe. I think this might be the best path to safety, in the sense that an AI that cares about understanding the universe is unlikely to annihilate humans, because we are an interesting part of the universe."