
Ensuring User Predictability on Social Media Platforms

How are users algorithmically controlled while online?

If, fifteen years ago, you had been told that by 2020 nearly everyone would carry a device in their pocket that tracked their lifestyle, friends, prejudices, purchases, videos, political opinions, facial expressions, and physical location, you might have laughed and dismissed the speaker as a “conspiracy theorist.” But today’s online reality is well described by Jaron Lanier in a recent book: “Algorithms gorge on data about you, every second. What kinds of links do you click on? What videos do you watch all the way through? How quickly are you moving from one thing to the next? Where are you when you do these things? Who are you connecting with in-person and online? What facial expressions do you make? How does your skin tone change in different situations? What were you doing just before you decided to buy something or not? Whether to vote or not?”1 The book is Ten Arguments for Deleting Your Social Media Accounts Right Now, and that title may well encapsulate the most effective strategy for protecting ourselves from online behavioral conditioning. This article explores the economic forces that drive this conditioning and how it is being used to reinforce user predictability in order to serve the interests of large platforms, corporations, and government agencies.

The first question is: how is user data collected? Put simply, as we use a platform such as Facebook, our metadata and interactions are tracked and stored in huge data farms. Each user’s online history is collected and analyzed using machine learning techniques that search for behavioral patterns, such as which posts earn our clicks on the “Like” button. Machine learning is “… the study of computer algorithms that improve automatically through experience. Machine learning algorithms build a mathematical model based on sample data, known as ‘training data’, to make predictions or decisions without being explicitly programmed to do so.”2 This technique has proven extremely successful in tasks such as facial recognition. Facebook and other platforms currently use it to compile packages that predict future user behavior based on profile information gathered from online activity and demographic details such as your neighborhood, profession, or the car you drive.
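The pattern just described, fitting a model to logged interactions and then predicting the next one, can be sketched in a few lines. The features, the toy interaction history, and the simple perceptron learner below are all invented for illustration; production systems use far richer signals and far larger models.

```python
# Minimal sketch (hypothetical data and features): learn from past
# interactions whether a user will click "Like" on a new post.
# This illustrates the supervised-learning pattern described above,
# not any platform's actual pipeline.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights from (feature_vector, clicked_like) training pairs."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # 0 if correct, +1/-1 if wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """1 = model expects a 'Like', 0 = it does not."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Invented per-post features:
# [is_from_close_friend, contains_video, hour_posted / 24]
history = [([1, 0, 0.3], 1), ([1, 1, 0.8], 1),
           ([0, 0, 0.5], 0), ([0, 1, 0.2], 0)]
w, b = train_perceptron([x for x, _ in history], [y for _, y in history])
```

On this toy data the model simply learns that posts from close friends get liked, which is exactly the kind of behavioral regularity the article says these systems extract at scale.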

To predict user behavior, analysts often employ statistical methods to classify users so that they can be accurately targeted by advertisers or other interested parties. To understand how Facebook data could be used for this purpose, Professor Michal Kosinski at the Stanford Graduate School of Business used 3 million Facebook user profiles as “… a massive database that connects how the things we write, share and like on Facebook relate to our behaviour, our views and our personality.”3 He used this data to generate predictions such as “People with lots of friends liked Jennifer Lopez” or “People with only a few friends liked the computer game Minecraft.”4 He found that by using 40 to 100 dimensions such as personality type, intelligence, and lifestyle choices, users could be accurately classified for marketing purposes. On Facebook, each user is tracked using about 100,000 elements of profile data, enough to allow persuasive techniques to be customized for highly focused user groups or even specific individuals.
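Classification of this kind amounts to grouping users who sit near each other in a feature space. As a hedged sketch, the bare-bones k-means below clusters users described by just two invented “dimensions” into marketing segments; real systems, as the article notes, use 40 to 100 dimensions, and the naive initialization here (first k points as centroids) is only adequate for a toy example.

```python
# Toy user segmentation via k-means. Feature names and values are
# hypothetical stand-ins for the psychographic dimensions described above.

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Component-wise mean of a list of points."""
    n = len(pts)
    return [sum(p[d] for p in pts) / n for d in range(len(pts[0]))]

def assign(p, centroids):
    """Index of the nearest segment centroid for user p."""
    return min(range(len(centroids)), key=lambda c: dist2(p, centroids[c]))

def kmeans(points, k=2, iters=10):
    centroids = points[:k]  # naive initialization: first k users
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[assign(p, centroids)].append(p)
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Invented dimensions: [normalized friend count, normalized extraversion]
users = [[0.9, 0.8], [0.1, 0.2], [0.8, 0.9], [0.2, 0.1]]
segments = kmeans(users)  # two centroids: "social" vs "reserved" users
```

A new user can then be routed to a segment with `assign`, which is the step that lets an advertiser buy access to, say, the “highly social” cluster.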

According to confidential documents uncovered by The Intercept, Facebook “… offers the ability to target [users] based on how they will behave, what they will buy, and what they will think. These capabilities are the fruits of a self-improving, artificial intelligence-powered prediction engine … dubbed ‘FBLearner Flow.'”5 Using this tool, machine learning models can accurately personalize News Feed stories, rank search results according to user interests, and otherwise discover the most effective methods for engaging a particular user’s attention. From the resulting models, predictive analytics packages are compiled and sold to those who can profit from knowing how a user would likely respond to specific conditioning methods. Advertisers and other interested parties use this data to tailor a specific user’s experience by, for instance, changing a screen color, inserting a special phrase into the page text, or adding a precisely timed and positioned “buy” button in order to nudge user choices in the preferred direction. Using these methods, by 2019 Facebook had achieved annual revenue of $70.7 billion, of which 98.5% came from advertising.6
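The personalization step such a prediction engine performs can be caricatured as scoring candidate stories against a per-user model and sorting. Everything below (the weights, the tags, the story IDs) is hypothetical; this shows the ranking pattern, not Facebook's actual code.

```python
# Hedged sketch of engagement-based feed ranking: a per-user weight for
# each content category stands in for a learned prediction model.

def rank_feed(user_weights, stories):
    """Return stories sorted by predicted engagement, highest first."""
    def score(story):
        return sum(user_weights.get(tag, 0.0) for tag in story["tags"])
    return sorted(stories, key=score, reverse=True)

# Invented profile: this user engages heavily with sports content.
profile = {"sports": 0.9, "politics": 0.2, "pets": 0.6}
candidates = [
    {"id": "a", "tags": ["politics"]},
    {"id": "b", "tags": ["sports", "pets"]},
    {"id": "c", "tags": ["pets"]},
]
feed = rank_feed(profile, candidates)
# story "b" scores 1.5, "c" scores 0.6, "a" scores 0.2
```

Because the ordering is driven entirely by the user's own predicted engagement, the same candidate pool produces a different feed for every profile, which is the mechanism behind the personalized News Feed described above.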

The major platforms such as Google, Facebook, and YouTube soon discovered that the new economic order could not reach its full potential until they moved beyond merely predicting user behavior and began to shape it. As the acclaimed technology writer Nicholas Carr quipped, “… the best way to predict behavior is to script it.”7 Carr describes the fundamental breakthrough: “… it was Facebook, with its incredibly detailed data on people’s social lives, that grasped digital media’s full potential for behavior modification. By using what it called its ‘social graph’ to map the intentions, desires, and interactions of literally billions of individuals, it saw that it could turn its network into a worldwide Skinner box, employing psychological triggers and rewards to program not only what people see but how they react.”8 The companies that use this data to condition user behavior can increase profits by pushing the accuracy of their predictions to higher levels. But in order to achieve the peak rewards, they need to directly condition user behavior: “Behavioral surplus must be vast and varied, but the surest way to predict behavior is to intervene at its source and shape it.”9 Ultimately, according to Carr, “What the industries of the future will seek to manufacture is the self.”10

Social media molds users’ behavior by manipulating the informational environment that guides their decisions. Facebook’s overall AI-based strategy is summarized by Stuart Russell, a leading expert in artificial intelligence: “… because AI systems can track an individual’s online reading habits, preferences, and likely state of knowledge, they can tailor specific messages to maximize impact on that individual while minimizing the risk that the information will be disbelieved. Second, the AI system knows whether the individual reads the message, how long they spend reading it, and whether they follow additional links within the message. It then uses these signals as immediate feedback on the success or failure of its attempt to influence each individual; in this way, it quickly learns to become more effective in its work.”11 Learning user preferences in this way allows the algorithm to fine-tune the advertising pitches that, in turn, govern the options presented to the user (known as the “choice architecture”) so as to subconsciously suggest the desired action. For instance, the default option in a set of purchase choices is often the one that brings in the most profit to the company offering the product. Just as a neural network can learn to accurately recognize faces, so Facebook’s machine learning models can determine which choices will most strongly captivate a particular user’s attention.
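The feedback loop Russell describes maps naturally onto a multi-armed bandit: show a message variant, observe whether the user clicks, and shift future traffic toward whatever works. The sketch below uses an epsilon-greedy strategy; the variant names and click probabilities are invented for the demonstration.

```python
import random

# Epsilon-greedy bandit as a stand-in for the "immediate feedback" loop:
# each message variant is an arm, a click is the reward.

class MessageOptimizer:
    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.shows = {v: 0 for v in variants}
        self.clicks = {v: 0 for v in variants}

    def choose(self):
        if random.random() < self.epsilon:  # explore occasionally
            return random.choice(list(self.shows))
        # otherwise exploit the best observed click-through rate;
        # untried variants start optimistically at 1.0
        return max(self.shows, key=lambda v:
                   self.clicks[v] / self.shows[v] if self.shows[v] else 1.0)

    def record(self, variant, clicked):
        self.shows[variant] += 1
        self.clicks[variant] += int(clicked)

random.seed(0)  # fixed seed so the simulation is reproducible
true_ctr = {"fear": 0.08, "outrage": 0.30, "neutral": 0.02}  # invented
opt = MessageOptimizer(list(true_ctr))
for _ in range(2000):
    v = opt.choose()
    opt.record(v, random.random() < true_ctr[v])  # simulated user click
best = max(opt.shows, key=opt.shows.get)
```

Note that the optimizer never needs to know *why* a variant works; it simply learns, from click feedback alone, which emotional trigger is most effective, which is precisely the dynamic the article warns about.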

What is critical to understand is not simply that social media users are being conditioned to increase advertising sales or to trigger compulsive user engagement in order to generate rich supplies of machine learning data. Those are merely contributory factors to the main goal, which is, as Douglas Rushkoff put it, “… to get users to behave more consistently with their profiles and the consumer segment to which they’ve been assigned … The better it does at making [us] conform to [our] algorithmically determined destiny, the more the platform can boast both its predictive accuracy and its ability to induce behavior change.”12 In other words, the goal is to ensure that we conform as closely as possible to the behaviors assigned to our profile classification. These behaviors include the type and quality of information we are being conditioned to seek.13 Corporate leaders are less concerned with the specifics of how we support the profit margins of the tech giants than with whether we are programmable enough to serve any objectives found convenient by those who wish to influence our subconscious motivations.

At this point, we have established the mechanisms currently being used by the major platforms to collect user data and influence our behavior through AI-driven behavior modification. For readers who wish to explore the well-established science behind online user behavior prediction and manipulation, a good place to begin is The Age of Surveillance Capitalism14 by Shoshana Zuboff. In the next section, we will see that these techniques are not only used for commercial advertising but can also influence political opinions and voting behavior.

Online behavioral conditioning can create specific messages to influence targeted groups. For instance, if African American voters generally disliked a particular candidate, it would be to that candidate’s advantage if those voters stayed home on election day. In the 2016 election, the Trump campaign stood to benefit from suppressing the Black vote. “A week before the election … [a] Russian group paid Facebook to aim an ad at users interested in African-American history, the civil rights movement, Martin Luther King Jr. and Malcolm X with a seemingly benign post. The ad included a photo of Beyoncé’s backup dancers. ‘Black girl magic!’ the ad said … Then on Election Day, the same Russian group sent the same Facebook user demographic an ad urging them to boycott the presidential election. ‘No one represents Black people. Don’t go to vote,’ the ad said.”15 While it is difficult to measure the exact impact of the attempted voter suppression, such tactics may have contributed to Trump’s margin of victory in the tight election of 2016.

Unfortunately, these anti-democratic maneuvers are among the most effective campaign strategies currently available. Political microtargeting allows voter suppression, enhanced turnout for favorable voter segments, disinformation campaigns, and much more. These tactics work because modern media conditions voters not to think, but to join carefully targeted emotion-based campaigns. The head of Cambridge Analytica described his recommended strategy as “appealing to people on an emotional level to get them to agree on a functional level”16 while explicitly invoking Hitler as his political model. Fortunately, Hitler did not have access to machine learning tools capable of identifying exactly which emotional triggers are the most effective for a specific individual.

Somehow, we must summon the courage to stop believing in the inevitability of the current trend toward algorithmic manipulation of our emotions and thoughts. Surrendering to the victory of machine learning over our human autonomy is not simply a mistake; it is an intellectual disease similar to the one George Orwell described in reference to James Burnham’s prediction of Nazi victory during World War II: “‘In each case,’ Orwell thundered, ‘he was obeying the same instinct: the instinct to bow down before the conqueror of the moment, to accept the existing trend as irreversible.’”17 Instead, “Orwell’s courage demands that we refuse to cede the future to illegitimate power. He asks us to break the spell of enthrallment, helplessness, resignation, and numbing. We answer his call when we bend ourselves toward friction, rejecting the smooth flows of coercive confluence.”18 The AI-based techniques described above work best with users who have given up the effort to think for themselves.

Reclaiming our humanity requires that we regain control over the platforms designed to exploit our emotions so that we act in ways that benefit advertisers or other interested parties. As one commentator wrote, “An outer cultural revolution can only happen when an inner cultural revolution has occurred inside the sanctum of human interiority. We can restore freedom outside only to the extent we have conquered fear and released freedom in the inside.”19 All the tools necessary to see through official misinformation can be found within our own minds once we begin to seek the truth that makes us free.



  1. Jaron Lanier, Ten Arguments for Deleting Your Social Media Accounts Right Now, (Henry Holt and Co., Kindle Edition), pp. 5–6.
  2. Wikipedia, “Machine learning”.
  3. David Sumpter, Outnumbered: From Facebook and Google to Fake News and Filter Bubbles - The Algorithms that Control Our Lives, (London: Bloomsbury Sigma, Scribd Edition, 2018.) p. 38.
  4. Ibid., p. 39.
  5. Sam Biddle, “Facebook Uses Artificial Intelligence to Predict Your Future Actions for Advertisers.” The Intercept, 13 April 2018. <>.
  6. “Facebook’s advertising revenue worldwide from 2009 to 2019.” 28 February 2020. Statista. 2020. <>.
  7. Nicholas Carr, “Thieves of Experience: How Google and Facebook Corrupted Capitalism.” Los Angeles Review of Books, 15 January 2019. <>.
  8. Ibid.
  9. Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, (New York: Hatchette Book Group, 2019. Kindle Edition), p. 202.
  10. Op. Cit., note 8.
  11. Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control, (Penguin Random House, Kindle Edition, 2019) p. 105.
  12. Douglas Rushkoff, Team Human, (New York: W. W. Norton & Company, Kindle Edition, 2019) p. 69.
  13. Google has become particularly adept at controlling what users click on in the search results list. See the article “How to Free Your Memory from Attentional Serfdom” if you would like to see a case study of how attention entrapment works and how you can free yourself from it.
  14. Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, (New York: Hatchette Book Group, 2019. Kindle Edition)
  15. Natasha Singer, “‘Weaponized Ad Technology’: Facebook’s Moneymaker Gets a Critical Eye.” New York Times 16 August 2018. <>.
  16. Op. Cit., note 13.
  17. Op. Cit., note 9, p. 523.
  18. Ibid., p. 524.
  19. Nicanor Perlas. “The Pandemic of Censorship.” 10 May 2020. Covid Call To Humanity. <>.
