
YouTube Returns to Using Human Content Moderators (Take That, Machines!)

Sorry AI… looks like you’ve just been demoted.

We all know many companies rely heavily on algorithms and artificial intelligence (AI) to keep things flowing smoothly.

But YouTube is learning the hard way that humans are still essential when it comes to content moderation.

How did AI manage to land this job?

You see, at the beginning of the pandemic, YouTube left its AI in charge at the office, while everyone who normally moderates inappropriate content was at home baking banana bread.

That’s why, between April and June, the AI removed a record-breaking 11 million videos for not following the guidelines (during the same period in 2019, it was about 9 million). Problem is, a lot of the videos it removed didn’t actually break any rules.

The reason for this? YouTube programmed the AI to be more “responsible” and to err on the side of caution, causing it to take down videos that merely looked like they broke the rules.
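If you want to picture why “erring on the side of caution” sweeps up innocent videos, here’s a minimal Python sketch. It is not YouTube’s actual system: the `moderate` function, the scores, and the filenames are all made up for illustration. The idea is just that lowering a removal threshold catches more real violations at the cost of more false positives.

```python
# Toy illustration only: made-up "violation scores", not YouTube's model.
videos = {
    "actual_violation.mp4": 0.92,   # clearly breaks the rules
    "borderline_prank.mp4": 0.55,   # looks iffy, but compliant
    "cat_compilation.mp4":  0.48,   # harmless, just mis-scored
    "cooking_tutorial.mp4": 0.10,   # obviously fine
}

def moderate(threshold):
    """Remove every video whose score meets or exceeds the threshold."""
    return [name for name, score in videos.items() if score >= threshold]

# Normal operation: only remove what the model is confident about.
print(moderate(0.90))  # ['actual_violation.mp4']

# "Err on the side of caution": lower the bar, and compliant videos
# start getting swept up alongside the real violations.
print(moderate(0.45))  # adds the borderline prank and the cat video
```

With fewer humans double-checking the output, the only safe move was to set that bar low, which is exactly the trade-off YouTube made.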

This opened the floodgates to thousands of appeals.

Naturally, many content creators felt like they had been unfairly censored.

You could say people were more than annoyed with this. That’s why YouTube has hired extra (human) staff to sort through the appeals and judge each video against the actual guidelines rather than whatever the AI flagged as problematic. (Big win for human intellect here!)

The number of reinstated videos jumped to 160,000, compared to 41,000 in the first quarter. Now more human moderators are stepping into the appeals process to make sure wrongly removed videos get back online quickly.

So, when will AI stop taking down the wrong content?

Since YouTube’s parent company Google told its employees to work from home for the rest of the year, it’s been a challenge to figure out how to get human moderators back to sifting through content.

And unfortunately, it’s not as simple as bringing a laptop home. Reviewing sensitive content from home would create a risk of data breaches and exposed user data.

So, for the time being, it looks like the AI will keep thinking that cat videos are offensive (and if you hate cats, they just might be).

It looks like no matter how advanced AI gets… we’ll still need human judgment to make sure things are working properly, especially for subjective calls like content moderation.

Do you think YouTube has made the right decision by leaving the AI system as is? Would you ever want to work as a human content moderator?

Share your thoughts with us in the comment box below!