YouTube Faces New Questions Over its Algorithm-Selected Content Recommendations

The online space has become increasingly dominated by algorithms – digital systems which ‘learn’ from your behavior, then recommend more content along similar lines to keep you engaged and on-platform.
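
To make that concrete, here's a deliberately simplified sketch – not YouTube's, or anyone's, actual code – of the 'more of the same' logic at the heart of these systems. The catalog, tags, and scoring are invented purely for illustration.

```python
# A toy, illustrative sketch of "more of the same" recommendation logic.
# This is not YouTube's actual system - the catalog and tags are invented.
from collections import Counter

# Hypothetical catalog: video id -> set of topic tags
CATALOG = {
    "v1": {"gaming", "speedrun"},
    "v2": {"gaming", "review"},
    "v3": {"cooking", "vegan"},
    "v4": {"cooking", "baking"},
    "v5": {"conspiracy", "history"},
}

def recommend(watch_history, k=3):
    """Suggest the k unwatched videos whose tags best match what the user already watched."""
    # The interest profile is built purely from past behavior - no judgment about the content itself.
    profile = Counter(tag for vid in watch_history for tag in CATALOG[vid])

    def score(vid):
        return sum(profile[tag] for tag in CATALOG[vid])

    candidates = [v for v in CATALOG if v not in watch_history]
    return sorted(candidates, key=score, reverse=True)[:k]

print(recommend(["v1"]))  # whatever you watched, you get more of - the system never asks whether it should
```

The point of the sketch is what's missing: there's no line that asks whether the content ought to be recommended at all. The system only optimizes for similarity and engagement.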

That makes sense from the perspective of the companies which benefit from keeping you locked into their apps, but the problem with algorithms is that they don’t use any form of judgment. They simply recommend more of what you like – so if you like racist, hate-filled conspiracy theories, guess what you see more of? And if you’re a pedophile who’s looking to watch videos of underage children…

That’s the issue that YouTube has been battling over the last year or so, amid criticism around how its machine learning systems essentially facilitate pedophile networks within the app.

Back in February, YouTuber Matt Watson revealed how YouTube’s system had enabled such activity, which prompted YouTube to implement a range of new measures, including deactivating comments on “tens of millions of videos that could be subject to predatory behavior”.

Evidently, however, the issue remains – according to a new report in The New York Times, YouTube’s system has been recommending home movies that feature children – often uploaded innocently by family members – to these same online pedophile networks.

As per NYT:

“Any individual video might be intended as nonsexual, perhaps uploaded by parents who wanted to share home movies among family. But YouTube’s algorithm, in part by learning from users who sought out revealing or suggestive images of children, was treating the videos as a destination for people on a different sort of journey. And the extraordinary view counts – sometimes in the millions – indicated that the system had found an audience for the videos, and was keeping that audience engaged.”

That’s a deeply concerning trend, and yet another element in YouTube’s content battle.

For its part, YouTube has explained that it’s constantly improving its recommendation systems – which drive up to 70% of its views – and that it’s implemented a range of new processes to tackle this specific type of misuse. In a separate announcement following the publication of the NYT piece, YouTube has also confirmed that minors will now be banned from live-streaming on the platform unless they’re joined by an adult on-screen.

But the real issue this reveals is with algorithms themselves. While it makes sense to use an algorithm to show users more of the same, and keep them on-platform, it may not actually be the best thing for society more broadly, with algorithmic recommendations playing a part in several of the most concerning trends of recent times.

Take Facebook, for example, whose algorithm further indoctrinates users into certain ideologies by showing them more of what they’ll likely ‘Like’ – i.e. more of what they’ll agree with, and less of what they won’t.
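
As a hedged illustration of that dynamic – again, a toy sketch rather than Facebook's real ranking code, with posts and 'predicted_like' scores made up for the example – simply ordering a feed by predicted engagement and truncating it is enough to push challenging content out of view.

```python
# Toy engagement-ranked feed - not Facebook's real ranking code.
# Each post carries a hypothetical predicted probability that this user will 'Like' it.
posts = [
    {"id": 1, "stance": "agrees with user",  "predicted_like": 0.92},
    {"id": 2, "stance": "neutral",           "predicted_like": 0.55},
    {"id": 3, "stance": "challenges user",   "predicted_like": 0.12},
    {"id": 4, "stance": "agrees with user",  "predicted_like": 0.88},
]

def rank_feed(posts, limit=3):
    """Order posts by predicted engagement and keep only the top few."""
    ranked = sorted(posts, key=lambda p: p["predicted_like"], reverse=True)
    return ranked[:limit]

for post in rank_feed(posts):
    print(post["id"], post["stance"])
# The challenging post never makes the cut, so existing views go largely unchallenged.
```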

That plays into human psychology – our minds are hard-wired to cater to our inherent biases, essentially seeking out shortcuts to process information and selectively choosing which parts we’ll believe and which we’ll ignore.

As explained by psychologist and author Sia Mohajer:

“We look for evidence that supports our beliefs and opinions about the world, but excludes those that run contrary to our own… In an attempt to simplify the world and make it conform to our expectations, we have been blessed with the gift of cognitive biases.”

Facebook’s algorithm feeds into this instinct, which is likely why we’ve seen the rise of movements like anti-vaxxers and flat earthers – non-evidence-based standpoints which align with certain fringe beliefs, and which are then reinforced and restated by Facebook’s recommendation systems.

Is that good for society more broadly?

It might not seem like a major concern – a few people sharing memes here and there. But Europe saw a record number of measles cases in 2018, due, at least in part, to a growing number of parents refusing vaccinations for their children. At the same time, in America – where measles was officially declared eliminated in 2000 – reports of outbreaks are, once again, becoming common.

Then there are the issues related to political messaging, and the radicalization of users through hate speech. 

It’s not social media that’s the problem in each of these cases, it’s the algorithms – the systems which show you more and more on the topics you’re likely to agree with, and remove opposing viewpoints from your sphere. You can downplay the influence of Facebook and YouTube, or the potential of such processes, but the evidence is clear: algorithms, which cannot exercise judgment, will always be problematic, and will always work, without any element of conscience, to fuel inherent biases and concerning habits.

Because that’s what they’re designed to do – and we’re letting them define entire movements in the back-end of our digital systems.

If you really want to eliminate such issues, the algorithms need to be removed entirely. Let users conduct searches and decide what they want to see.

Will that stop such misuse entirely? No, but it’ll certainly slow it down, while also making it easier to detect users who are deliberately seeking out concerning content, and stopping the inadvertent amplification of such material.

Algorithms help boost business interests, no doubt, but they operate without human judgment – which, at times, is clearly needed. As YouTube is now finding, this kind of misuse is almost impossible to stop, unless you remove the recommendation element completely.

Digital literacy is now reaching the point where, arguably, users could get by without algorithmic recommendations. Maybe it’s time to re-examine this aspect.
