Speakers at the World Economic Forum gathering in Davos, Switzerland, called for allied efforts by governments, businesses and big tech to deal with misinformation.

The President of the European Commission, Ursula von der Leyen, told "dear Klaus" Schwab, the WEF chairman, that tackling disinformation had been her consistent priority.

"Disinformation and misinformation: tackling this has been our focus since the very beginning of my mandate. With our Digital Services Act, we defined the responsibility of large internet platforms on the content they promote and propagate," she said. "A responsibility to children and vulnerable groups targeted by hate speech, but also a responsibility to our societies as a whole. Because the boundary between online and offline is getting thinner and thinner, and the values we cherish offline should also be protected online."

The president, selected for her role by secret ballot in 2019, warned Davos attendees that misinformation and polarization were the pre-eminent threats of the immediate future.

“For the global business community, the top concern for the next two years is not conflict or climate,” she said, but “disinformation and misinformation, followed closely by polarization within our societies.”

Business and government have to work together to solve the problem, von der Leyen suggested, as 2024 would be “the biggest electoral year in history,” with people going to the polls in many nations.

“Many of the solutions lie not only in countries working together but, crucially, on businesses and governments, businesses and democracies working together,” she said.

Von der Leyen said attempts to put the public "off track" with "misinformation and disinformation" about Ukraine were a prime example of the problem. She said Russia had failed economically and militarily.

Naomi Oreskes, professor of the History of Science at Harvard University, complained about disinformation on social media.
Moderator Urs Gredig asked Oreskes if Donald Trump exemplified how a leader can mislead the public.

“President Trump, it's been well written about, he had over 30,000 false or misleading claims in his four years as a president. If that comes from the top, is that part of the issue and the problem as well?” asked Gredig.

“Yeah, absolutely,” Oreskes replied. “It's one of the reasons this issue has become so vexed in the United States, because now ordinary people are getting a lot of disinformation in social media, much of which is coming from private sector interests, but then it gets amplified when it gets reposted and resent by ordinary people who think, oh this is interesting, I'll pass this on.”

Oreskes, a vocal critic of Twitter (“X”), complained that “disinformation” was being “amplified at the top” by social media companies and said that the “business community has a big role to play" in changing this. Oreskes alleged big companies were “biting their tongue” regarding Trump due to expectations he would “cut corporate taxes” if elected.

“I really hope that the World Economic Forum will take this issue on board and think harder about the role that the private sector can play in standing up against disinformation, even if they might like the fact that that politician would cut their taxes,” Oreskes said.

“Even as the insidious spread of misinformation and disinformation threatens the cohesion of societies, there is a risk that some governments will act too slowly, facing a trade-off between preventing misinformation and protecting free speech, while repressive governments could use enhanced regulatory control to erode human rights.”

DeepLearning.AI founder Andrew Ng told the Davos audience that artificial intelligence itself should not be regulated, but its applications should be.

“We should take a tiered approach to regulating AI applications, according to their degree of risk.
Doing this effectively requires clear identification of what is actually risky (medical devices, for example, or chat systems potentially spewing disinformation),” Ng explained.

“Some regulatory proposals use an AI model’s size, or the amount of computation used to develop the model, to determine related risk. But this is a flawed approach. Both small and large AI models are capable of doing things like providing bad medical advice or generating disinformation.”