YouTube’s AI removes over 11 million rule-breaking videos

YouTube has revealed that it took down more videos in the past few months than ever before, thanks to a greater reliance on machine learning.

The video-sharing platform leaned heavily on automated systems from April to June because of work-from-home orders, removing more than 11.4 million videos containing misleading or abusive content.

Before the shift toward automation, YouTube’s human moderators had identified only some five million videos between January and March that went against its policies.

YouTube employed the technology when parts of the US went into coronavirus lockdown, as allowing staff to review content outside the office could lead to sensitive data being exposed. 

‘We normally rely on a combination of people and technology to enforce our policies,’ YouTube said in a blog post. 

‘Machine learning helps detect potentially harmful content, and then sends it to human reviewers for assessment.’

‘Human review is not only necessary to train our machine-learning systems, it also serves as a check, providing feedback that improves the accuracy of our systems over time.’
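
In rough terms, the division of labor YouTube describes might look something like the hypothetical sketch below: a model scores videos, borderline ones are routed to human reviewers, and reviewers’ verdicts are kept as feedback for retraining. The class names, scores, and threshold here are illustrative assumptions, not details of YouTube’s actual system.

```python
# Purely illustrative sketch of an ML-flagging + human-review pipeline.
# All names and numbers are hypothetical, not YouTube's real implementation.

from dataclasses import dataclass, field

@dataclass
class Video:
    video_id: str
    harm_score: float  # model's estimated probability the video violates policy

@dataclass
class ModerationQueue:
    review_threshold: float = 0.5              # scores above this go to human review
    training_feedback: list = field(default_factory=list)

    def triage(self, video: Video) -> str:
        # Machine learning detects potentially harmful content...
        if video.harm_score < self.review_threshold:
            return "keep"
        # ...and sends it to human reviewers for assessment.
        return "send_to_human_review"

    def record_human_decision(self, video: Video, violates_policy: bool) -> None:
        # Human verdicts become labeled examples used to improve the model over time.
        self.training_feedback.append((video.video_id, video.harm_score, violates_policy))
```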

Videos are removed for a variety of reasons, including child safety concerns, depictions of sex/nudity, spreading scams or misinformation, harassment or cyberbullying, or promoting violent extremism or hate speech. 

Millions of videos flagged by YouTube’s AI are later determined by a human reviewer to not have violated any policy.

YouTube’s parent company, Google, instituted a work-from-home policy during the pandemic. But allowing human moderators to review videos outside of the office could risk sensitive content – and user data – being shared, according to The Verge. 

‘When reckoning with greatly reduced human review capacity due to COVID-19, we were forced to make a choice between potential under-enforcement or potential over-enforcement,’ YouTube said. 

It chose to ‘cast a wider net’ so potentially harmful content would be removed quickly.
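
One way to picture ‘casting a wider net’ is as a lower confidence threshold for automatic removal when fewer human reviewers are available. The snippet below is a purely hypothetical illustration with made-up numbers, but it captures the trade-off YouTube describes: more borderline videos come down quickly, at the cost of more wrongful removals that may later be reinstated on appeal.

```python
# Hypothetical illustration of the under- vs. over-enforcement trade-off.

def auto_remove(harm_score: float, reduced_review_capacity: bool) -> bool:
    """Return True if a video should be removed without waiting for a human reviewer."""
    threshold = 0.6 if reduced_review_capacity else 0.9  # illustrative values only
    return harm_score >= threshold

# A borderline video (score 0.7) stays up under normal staffing,
# but is removed automatically when review capacity is reduced.
print(auto_remove(0.7, reduced_review_capacity=False))  # False
print(auto_remove(0.7, reduced_review_capacity=True))   # True
```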

According to YouTube’s latest Community Guidelines Enforcement report, issued Tuesday, more than 11.4 million videos were removed in the second quarter of 2020.  

In the second quarter of 2019, YouTube removed just under 9 million videos.

Particular focus was placed on videos that were potentially harmful to children, which were removed at more than three times the usual rate.

Flagged clips included ‘dares, challenges, or other innocently posted content that might endanger minors.’

However, content creators whose videos were removed without human review were not issued strikes against their accounts, except in extreme circumstances.

YouTube also ramped up resources to handle the expected increase in appeals; the number of appeals doubled during the second quarter.

Fully half of those appeals resulted in videos being reinstated, up from 25 percent in the first quarter.

‘The impact of COVID-19 has been felt in every part of the world, and in every corner of our business,’ the company stated. ‘Through these challenging times, our commitment to responsibility remains steadfast.’

YouTube is hardly the only social media platform faced with a content-moderation crisis during the pandemic. 

In April, Facebook came under fire when posts about making DIY face masks were blocked by an algorithm designed to weed out coronavirus scams and misinformation.

‘We apologize for this error and are working to update our systems to avoid mistakes like this going forward,’ the company said in a statement to The New York Times. ‘We don’t want to put obstacles in the way of people doing a good thing.’

At the same time, Facebook blamed the pandemic for hampering efforts to remove posts about suicide and self-harm.

The company revealed that between April and June it took action on fewer posts containing such content because fewer human reviewers were working due to COVID-19.

Facebook sent its moderators home in March, but CEO Mark Zuckerberg warned that enforcement requiring human intervention could be affected.
