How Algorithms Use Our Online Identities

Writing by Bec Savage. Illustration by Hannah Purdom.


This year, our lives have been largely confined to our screens. Our world is now online, but the online world is not a facsimile of the offline world. It’s more like a carnival mirror: distorting reality into feedback loops and filter bubbles, and condensing the complexities of our identities into engagement data. Recommendation algorithms identify us by what we click, what we watch, and what we engage with, and use this data to curate our feeds and keep us scrolling. Tech companies tout this as an improvement to their products, an optimisation of our online experience. But does it really ‘optimise’ anything, or is it actually insidious?

It’s important to first consider what incentivises the structure of the recommendation systems employed across social media platforms: boosting engagement. Selling our attention is the nexus of Big Tech’s business model. The longer we’re logged in, the more money is made. But it is very rarely, if ever, in our actual interest to stay online.

In fact, recommendation systems engineered to maximise our engagement have come under fire in recent years for promoting terrorist content, foreign state-sponsored propaganda, hate speech, content inappropriate for children, and countless conspiracy theories. Guillaume Chaslot, an ex-YouTube employee who worked on YouTube’s ‘recommended for you’ feature, explained that he could have easily predicted any of these scandals by ‘looking at the engagement metrics’, which show that ‘misinformation, rumours, and salacious or divisive content drives significant engagement’.1

In 2019, researchers at Google’s DeepMind looked into the impact of recommendation systems used by YouTube and other platforms and concluded that ‘feedback loops in recommendation systems can give rise to “echo chambers” and “filter bubbles,” which can narrow a user’s content exposure and ultimately shift their worldview’.1 The consequence is that conspiracies are lent credence because they’re afforded the same platform as legitimate viewpoints, and bigots are emboldened because they find a community wherein their hate-filled rhetoric is not only accepted, but encouraged.
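The narrowing effect the DeepMind researchers describe can be illustrated with a toy simulation. This is a minimal sketch under assumed parameters, not any platform’s actual system: a recommender that weights topics by past clicks lets a small early bias snowball until one topic crowds out the rest of the feed.

```python
import random

# A toy sketch of an engagement-driven recommendation feedback loop -- a
# hypothetical illustration, not any real platform's algorithm. Each round
# the recommender shows five topics, weighted by the user's past clicks,
# and the simulated user clicks each recommended topic with a fixed
# probability. A small early burst of clicks on one topic is enough
# to let it dominate.

random.seed(42)  # fixed seed so the run is reproducible

TOPICS = ["news", "sport", "music", "conspiracy", "cooking"]

def recommend(engagement, k=5):
    """Sample k topics, weighted by past clicks (+1 so unseen topics can still surface)."""
    weights = [engagement[t] + 1 for t in TOPICS]
    return random.choices(TOPICS, weights=weights, k=k)

def simulate(rounds=200):
    engagement = {t: 0 for t in TOPICS}
    engagement["conspiracy"] = 3  # one small early burst of clicks on a single topic
    for _ in range(rounds):
        for topic in recommend(engagement):
            if random.random() < 0.5:  # the user clicks about half of what is shown
                engagement[topic] += 1
    return engagement

final = simulate()
print(final)
print("dominant topic:", max(final, key=final.get))
```

The rich-get-richer dynamic here is the point: the recommender never asks what the user wants, it only amplifies whatever they already clicked, so exposure narrows round by round.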

We can perhaps see the tangible and tragic outcome of these systems in the 2019 shooting in Christchurch, New Zealand which was livestreamed for 17 minutes on Facebook. It continued to spread on Facebook, YouTube, and Twitter for more than a day after as propaganda for far-right white nationalists. 

While YouTube stated on its social media accounts that it was ‘heartbroken’, many people were quick to share their fury at the company’s performance of sorrow. YouTube ‘[profits] enormously from sending regular people down rabbit holes that radicalize them into having these kinds of beliefs’, Jackie Luo, an engineer in the Silicon Valley tech industry, pointed out, adding, ‘YouTube is complicit … because of the huge role it’s played and continues to play in normalizing and spreading this kind of violent rhetoric.’2 Algorithms identify extremists and curate content recommendations based on what like-minded users are engaging with, so the online environment of someone like the Christchurch shooter will likely reinforce and validate their extremist ideology and provide a space for their violent rhetoric.

In the aftermath of the Christchurch shooting, tech companies were quick to promise to address online radicalisation, but when asked in March of this year what had been done, David Neiwert, a hate crime expert and author, said, ‘Nothing, other than there's been more murders.’3 This is not necessarily surprising, considering Facebook only this year decided to ban Holocaust denial on its platform.

Big Tech is fundamentally driven by profit and, thus, fundamentally driven by our engagement. So long as this is the case, these companies are disinclined to change their business models to align with the public interest. There is something vitally wrong, though, with essentially six people controlling the online economy of information and deciding how it should be disseminated to users. An overhaul is long overdue.

1.    https://www.wired.com/story/the-toxic-potential-of-youtubes-feedback-loop/

2.    https://www.gizmodo.com.au/2019/03/big-techs-favourite-excuse-for-letting-hate-go-viral-is-a-great-argument-against-big-tech/

3.    https://www.abc.net.au/news/2020-03-14/white-nationalism-a-year-after-christchurch-mosque-massacre/12046412




©2020 by The Rattlecap