Google To Use AI, Human Experts To Fight Online Extremism
Silicon Valley has come to the conclusion that it will take a combination of advanced technologies and human intelligence to help find and control extremist content online. Google and Google-owned YouTube are the latest companies to embrace this approach, announcing on Sunday that they are taking four new steps to fight terrorist content on the Internet.
The steps the companies plan to take are: ramping up their use of video analysis models and other technology to identify extremist videos; "greatly" increasing the number of experts who flag questionable content on YouTube; toughening their stances on videos that violate content policies; and stepping up their collaborative efforts with other tech companies such as Facebook, Microsoft, and Twitter.
'No Place for Terrorist Content'
"There should be no place for terrorist content on our services," Google general counsel Kent Walker wrote Sunday in a blog post that was also published as an opinion piece in the Financial Times. "While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now."
While using technology to identify extremist content "can be challenging," Walker said Google has used video analysis models to help identify more than half of the terrorism-related content it has removed over the past six months. He added the company plans to "apply our most advanced machine learning research to train new 'content classifiers'" for identifying and removing extremist video content.
Walker added that Google also plans to add 50 more independent non-governmental organizations to the 63 groups already working with YouTube's Trusted Flagger program, and will support them with grant funding.
Google will also put new restrictions on videos with "inflammatory religious or supremacist content," placing them behind a warning message and preventing them from being monetized, recommended, commented on, or endorsed by users.
"That means these videos will have less engagement and be harder to find," Walker said. "We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints."
Other Strategies: Targeted Ads, Data Sharing
Walker added that Google also plans to expand its efforts to fight online radicalization, something it already targets through programs such as Creators for Change, which promotes anti-hate voices on YouTube. He said the company is working through its Jigsaw initiative to expand use of the "Redirect Method" across Europe.
That method uses targeted online ads to reach potential recruits to the terrorist organization ISIS, then redirects them to videos aimed at countering radicalization.
"In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages," Walker noted.
In December, YouTube began working with Facebook, Microsoft, and Twitter to share data with the goal of reducing the spread of terrorist content online. Their announcement followed a European Union study that found the companies were failing to meet the voluntary compliance standards on hate speech they had agreed to earlier in 2016. Last week, Facebook also announced a two-pronged approach to fighting terrorist content with both artificial intelligence and human experts.