Google Hiring Thousands of Moderators To Clean Up YouTube
Google is hiring thousands of new moderators after facing widespread backlash for allowing child abuse videos and other violent and offensive content to flourish on YouTube.
Google, which owns YouTube, announced on Monday that next year it would expand its total workforce to more than 10,000 people responsible for reviewing content that could violate its policies. The news from YouTube's CEO, Susan Wojcicki, followed a steady stream of negative press surrounding the site's role in spreading harassing videos, misinformation, hate speech and content that is harmful to children.
Wojcicki said that in addition to an increase in human moderators, YouTube is continuing to develop advanced machine-learning technology to automatically flag problematic content for removal. The company said its new efforts to protect children from dangerous and abusive content and block hate speech on the site were modeled after the company's ongoing work to fight violent extremist content.
"Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content," the CEO wrote in a blog post, saying that moderators have manually reviewed nearly 2 million videos for violent extremist content since June, helping train machine-learning systems to identify similar footage in the future.
In recent weeks, YouTube has used machine learning technology to help human moderators find and shut down hundreds of accounts and hundreds of thousands of comments, according to Wojcicki.
YouTube faced heightened scrutiny last month in the wake of reports that it was allowing violent content to slip past the YouTube Kids filter, which is supposed to block any content that is not appropriate for young users. Some parents recently discovered that YouTube Kids was allowing children to see videos with familiar characters in violent or lewd scenarios, along with nursery rhymes mixed with disturbing imagery, according to the New York Times.
Other reports uncovered "verified" channels featuring child exploitation videos, including viral footage of screaming children being mock-tortured and webcams of young girls in revealing clothing.
YouTube has also repeatedly sparked outrage for its role in perpetuating misinformation and harassing videos in the wake of mass shootings and other national tragedies. The Guardian found that survivors and the relatives of victims of numerous shootings have been subject to a wide range of online abuse and threats, some tied to popular conspiracy theory ideas featured prominently on YouTube.
Some parents of people killed in high-profile shootings have spent countless hours trying to report abusive videos about their deceased children and have repeatedly called on Google to hire more moderators and to better enforce its policies. It's unclear, however, how the expansion of moderators announced on Monday might impact this kind of content, since YouTube said it was focused on hate speech and child safety.
Although the recent scandals have illustrated the current limits of the algorithms in detecting and removing violating content, Wojcicki made clear that YouTube would continue to rely heavily on machine learning, which she cast as a necessity given the scale of the problem.
YouTube said machine learning was helping its human moderators remove nearly five times as many videos as they had previously, and that 98% of videos removed for violent extremism are now flagged by algorithms. Wojcicki claimed that advances in the technology allowed the site to take down nearly 70% of violent extremist content within eight hours of its being uploaded.
The statement also said YouTube was reforming its advertising policies, saying it would apply stricter criteria, conduct more manual curation and expand its team of ad reviewers. Last month, a number of high-profile brands suspended YouTube and Google advertising after reports revealed that their ads were placed alongside videos filled with exploitative and sexually explicit comments about children.
In March, a number of corporations also pulled their YouTube ads after learning that they were linked to videos with hate speech and extremist content.
© 2017 Guardian Web under contract with NewsEdge/Acquire Media. All rights reserved.
Image credit: YouTube/iStock/Artist's concept.