A contributor to the World Economic Forum has a solution for online abuse: using Artificial Intelligence (AI) to automatically detect and censor those who spread it.

"By uniquely combining the power of innovative technology, off-platform intelligence collection and the prowess of subject-matter experts who understand how threat actors operate, scaled detection of online abuse can reach near-perfect precision," wrote Inbal Goldberger, VP of Trust and Safety at ActiveFence.

Goldberger said an ever-growing list of online harms, such as extremism, disinformation, hate speech and child abuse, is becoming too much for trust and safety teams to moderate. But she said the solution should not be "hiring another roomful of content moderators or building yet another block list."

While AI is a powerful technology that relies on training sets to quickly identify abusive behaviours online, it is less effective at detecting nuanced language and understanding context. An example is an AI flagging Renaissance paintings as pornography, or taking down a "violent" video of a chef wielding a knife.

In contrast, while human moderators and subject-matter experts can detect nuanced online abuse, their precision is limited by their specific area of expertise. A human moderator who is an expert in European white supremacy might not be able to recognize harmful content in India or misinformation narratives in Kenya.

To overcome the barriers of traditional detection methodologies, Goldberger proposes a hybrid AI-human system of "human-curated, multi-language, off-platform intelligence." She said this would allow AI to detect nuanced, novel online abuses before they reach mainstream platforms. She suggests supplementing this automated detection with human expertise to review cases and identify errors, before feeding the results back into the AI's training sets. "This more intelligent AI gets more sophisticated with each moderation decision, eventually allowing near-perfect detection, at scale," she said.

According to Goldberger, the time lag between the advent of new abuse tactics and AI being able to detect them is what allows online abuse to spread.

"Incorporating intelligence into the content moderation process allows teams to significantly reduce the time between when new online abuse methods are introduced and when AI can detect them," she said.

"In this way, trust and safety teams can stop threats rising online before they reach users."

In response to the article being shared on sites that "routinely misrepresent content and spread disinformation," a disclaimer was placed at the top of the article stressing that "the content of this article is the opinion of the author, not the World Economic Forum."

"The Forum is committed to publishing a wide array of voices and misrepresenting content only diminishes open conversations," they said.
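For readers unfamiliar with the pattern Goldberger describes, the sketch below shows a generic human-in-the-loop moderation cycle: an automated classifier flags content, human reviewers correct its mistakes, and the corrections are fed back into the model's training data. This is only an illustration under assumed, simplified names; it is not ActiveFence's or the World Economic Forum's actual system, and the keyword "model" stands in for a real machine-learning classifier.

```python
# Illustrative sketch of a human-in-the-loop moderation feedback cycle.
# Not ActiveFence's or the WEF's implementation; all names are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, List, Set, Tuple


@dataclass
class AbuseClassifier:
    """Toy stand-in for an AI model whose training set grows with each review."""
    keywords: Set[str] = field(default_factory=lambda: {"abuse", "threat"})
    training_set: List[Tuple[str, bool]] = field(default_factory=list)

    def predict(self, text: str) -> bool:
        # Crude keyword matching in place of a real trained model.
        return any(word in text.lower() for word in self.keywords)

    def retrain(self, reviewed: List[Tuple[str, bool]]) -> None:
        # Fold human-verified labels back into the training data, mirroring the
        # loop the article describes: expert corrections update the model.
        self.training_set.extend(reviewed)
        for text, is_abusive in reviewed:
            if is_abusive:
                self.keywords.update(text.lower().split())


def moderation_cycle(
    model: AbuseClassifier,
    items: List[str],
    human_review: Callable[[str, bool], bool],
) -> List[Tuple[str, bool]]:
    """One pass: the AI flags content, humans correct errors, results retrain the AI."""
    predictions = [(text, model.predict(text)) for text in items]
    reviewed = [(text, human_review(text, label)) for text, label in predictions]
    model.retrain(reviewed)
    return reviewed


if __name__ == "__main__":
    model = AbuseClassifier()
    # A human expert who recognizes a coded term the model has never seen.
    expert = lambda text, label: label or "dogwhistle" in text
    moderation_cycle(model, ["a new dogwhistle slogan", "harmless recipe"], expert)
    print(model.predict("another dogwhistle slogan"))  # True after retraining
```

The point of the toy example is the shape of the loop, not the classifier: each review pass shrinks the gap between a new abuse tactic appearing and the automated system being able to detect it, which is the lag Goldberger argues allows abuse to spread.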