To understand the dehumanizing systems behind our laptops, phones, auto-fill searches, and advertisements, we must ask:
What are algorithms?
An algorithm is a set of mathematical instructions: a tool that can be used with good or bad intentions. Like any other commercial industry, online market companies employ algorithmic techniques to “hack” consumers’ feelings, nudging them toward decisions that are not necessarily good for them. These strategies exploit well-known principles of decision-making psychology: our fallible, fast-thinking “autopilot,” our irrational aversion to loss, and our deference to the social proof of people “like” us. Tech companies meticulously engineer our interactions with their platforms, whether social media, video streaming, or shopping sites, all the while subjecting us, unconsciously, to a feedback loop.
As social media gains ever more traction in today’s technology-driven world, these platforms often serve as weapons for mass propaganda and political threats. Such dynamics magnify the divisions in present-day societies and produce real-world conflict and violence, all stemming from the algorithm. We might ask ourselves: do media platforms accurately reflect our societies, or are our lives largely shaped by stretched, skewed, and distorted media portrayals? Below, I introduce the algorithms and applications of three widely used technology platforms: Facebook, YouTube, and Amazon.
FACEBOOK
On Facebook, users might see a post or story about a real-world issue and interact with it through a like, comment, repost, or share. Each interaction boosts the content’s engagement, which in turn amplifies potential controversy, especially around social issues. To gauge the scale of the platform’s impact on real-world issues, consider two of the many experiments Facebook has run on its users: the 2010 Voter Experiment and the Emotional Contagion Study.
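To make that feedback loop concrete, here is a minimal sketch of engagement-driven feed ranking, assuming a simple weighted score. The field names and weights are invented for illustration; Facebook’s actual ranking model is proprietary and far more complex.

```python
# Minimal sketch of an engagement-driven feed-ranking loop.
# All weights and names below are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    likes: int = 0
    comments: int = 0
    shares: int = 0

def engagement_score(post: Post) -> float:
    # Assumed weighting: interactions that spread content (comments,
    # shares) count more than passive likes, which is one reason
    # controversial posts can come to dominate a feed.
    return 1.0 * post.likes + 4.0 * post.comments + 8.0 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Higher-scoring posts surface first, attract more interactions,
    # and score even higher on the next pass: the feedback loop.
    return sorted(posts, key=engagement_score, reverse=True)
```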
In 2010, Facebook experimented on 61 million users during the US congressional elections, showing them a political mobilization message topped with an “I Voted” button. The data revealed that users who saw the message were more likely to click the “I Voted” button, more likely to seek information about polling locations, and more likely to actually head to the polls than the control group. The study demonstrated the power of social influence: an algorithm displaying one simple message measurably increased voter turnout. If Facebook can code a simple message onto its users’ screens and move more people to the polls, what other algorithms might it implement, and how might they affect our world? And had Facebook not revealed the study to the public, we would never have known about this power; we may well wonder what other algorithmic experiments the company has conducted unbeknownst to us.
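In outline, the experiment is a randomized trial: split users into groups, show one group the message, and compare turnout. The sketch below is a simplification under assumed names and splits; the real study also included multiple message variants.

```python
import random

def assign_condition(user_id: int) -> str:
    """Deterministic per-user split; the 2% control share is an
    illustrative assumption, not the study's exact design."""
    rng = random.Random(user_id)  # same user always gets same group
    return "control" if rng.random() < 0.02 else "banner"

def turnout_lift(voted: dict[str, int], shown: dict[str, int]) -> float:
    """Turnout-rate difference between banner and control groups."""
    return voted["banner"] / shown["banner"] - voted["control"] / shown["control"]
```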
Later, in 2012, Facebook conducted a study on emotional contagion, altering the news feeds of nearly 700,000 users to show either more “positive” or more “negative” content. A platform with a user base approaching a billion at the time and massive global impact, the company “played” with human emotions, then monetized that data to grow its own corporation. The manipulation undermined users’ trust: feeds were changed without any notice whatsoever, on the grounds that all users had agreed to Facebook’s general terms of data use when first joining the platform. In other words, users had unknowingly consented to the company’s psychological and sociological experimentation. Such exploitative methods compromise the company’s credibility and carry potentially detrimental implications for data collection and privacy.
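Mechanically, the intervention amounts to filtering a feed by sentiment. Here is a minimal sketch, assuming precomputed per-post sentiment scores and an invented drop rate; the actual study classified posts by word lists and omitted them probabilistically.

```python
import random

def skew_feed(posts, sentiment, mode="positive", drop_rate=0.5):
    """posts: list of post ids; sentiment: id -> score in [-1, 1].
    Randomly omits posts whose sentiment opposes the chosen mode."""
    kept = []
    for post in posts:
        opposes = sentiment[post] < 0 if mode == "positive" else sentiment[post] > 0
        if opposes and random.random() < drop_rate:
            continue  # silently dropped from the user's feed
        kept.append(post)
    return kept
```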
YOUTUBE
YouTube is infamous for algorithmic recommendations that send users down so-called rabbit holes. Within a few clicks, a user might start with a guitar tutorial, be nudged toward a conspiracy-theory video, and end up watching political campaign content. Often, YouTube’s algorithm recommends videos with “shock” value: grotesque, sexualized, or controversial content.
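The rabbit-hole dynamic can be illustrated with a toy autoplay chain that always picks whichever candidate maximizes predicted watch time. Everything here is an assumption for illustration; YouTube’s real recommender is a proprietary, large-scale machine-learning system.

```python
def autoplay_chain(start, candidates, predicted_watch_time, steps=5):
    """Greedily follow the highest-predicted-watch-time video.
    predicted_watch_time(current, candidate) -> expected minutes."""
    chain, pool = [start], list(candidates)
    for _ in range(steps):
        if not pool:
            break
        nxt = max(pool, key=lambda c: predicted_watch_time(chain[-1], c))
        pool.remove(nxt)
        chain.append(nxt)  # if shock value holds attention best,
    return chain           # the chain drifts toward extreme content
```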
The Free Brazil Movement, for example, is a conservative and economically liberal Brazilian movement founded in 2014 by Kim Kataguiri and Renan Santos. By producing viral videos tailored to YouTube’s recommendation algorithm, the movement stoked outrage and protest to recruit people to its cause. Throughout the campaign, Kataguiri and Santos spread messages equating violence with “entertainment” to boost their publicity on YouTube.
As a video-streaming platform, YouTube should act as a passive provider of information to its users, much as Google should as a search engine. Yet we have seen users disproportionately recommended right- or left-wing videos based on their geographic location, young children recommended videos containing nudity or rape, and viewers of all kinds steered toward content even more dangerous, discriminatory, and harmful.
AMAZON
Since its beginning, data collection has been one of Amazon’s largest businesses. A user’s navigation through the site, current and past searches, pauses while scrolling across merchandise, items placed in the basket but never ordered: all customer behavior that flows through the site is recorded and tracked as a valuable commodity. Amazon uses this data to predict what its customers would like best or be most likely to purchase. These massive data collections “marketize” customers in an exploitative way, analogous to “extracting the maximum amount of milk from a cow.”
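In skeletal form, such a pipeline is an event log plus a scoring model. The event names and weights below are invented for illustration; Amazon’s actual tracking and prediction systems are proprietary and vastly larger.

```python
from collections import Counter

# Hypothetical signal strengths for tracked behaviors.
EVENT_WEIGHTS = {
    "search": 1.0,
    "view_item": 2.0,
    "pause_on_item": 3.0,      # hesitation while scrolling
    "add_to_basket": 5.0,
    "remove_from_basket": -2.0,
}

def record(log, user, event, item):
    log.append((user, event, item))  # every behavior is retained

def purchase_affinity(log, user):
    """Score items by weighted interactions: a crude stand-in for
    predicting what a customer is most likely to purchase."""
    scores = Counter()
    for u, event, item in log:
        if u == user:
            scores[item] += EVENT_WEIGHTS.get(event, 0.0)
    return scores

log = []
record(log, "u1", "search", "headphones")
record(log, "u1", "pause_on_item", "headphones")
record(log, "u1", "add_to_basket", "headphones")
print(purchase_affinity(log, "u1").most_common(3))
```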
SOURCES: Weapons of Influence; Merchants of Cool; The Facebook Dilemma; What is YouTube Pushing You to Watch Next?; Amazon Empire