The Problem:

Digital platforms—Facebook, Google, Twitter—have grown at a rate that is unprecedented in human history. These technologies have fundamentally changed how people across the world access information and communicate. The algorithms that drive the platforms make crucial choices for us: what we see next, which group we join, which product we buy. At the same time, these digital platforms prioritize growth and profit over social good. Communication may be “easier,” but we no longer share the same “reality.” The result is an erosion of truth that leaves us, as a society, ever more vulnerable to anyone who can game the system.

The Goal:

The goal of #BigTechIsBroken is education and activation. We don’t need to understand the complexities of technology to understand the realities of digital platforms and the attention economy that drives them. We need to be smart consumers and activated constituents. We need to be educators for our friends, families, and neighbors. Our elected representatives do not necessarily prioritize big tech legislation or understand its urgency. We need to sound the alarms now.

#BigTechIsBroken

A Case Study: “Stop the Steal”

  • Stop the Steal is a racist narrative that was popularized as a disinformation campaign by Roger Stone in 2016, primarily to generate donations.

  • Stop the Steal was re-packaged in November 2020 by the group “Women for America First”, and amplified with the help of Steve Bannon.

  • A Stop the Steal group on Facebook gained more than 320,000 members in less than 22 hours before Facebook removed the group.

  • Stop the Steal was the battle cry in the violent storming of the US Capitol on January 6, 2021.

Stop the Steal was a successful disinformation campaign. Social media disinformation in general — and this campaign specifically — is especially powerful and dangerous because of its incredible reach. To understand what causes disinformation and its corrosive effect on society, we need to look at the platforms, which function as both the distribution and amplification systems. Social media companies — particularly Facebook — profit from an “attention economy.” In simple terms: the longer people stay on Facebook, the more advertising they view, and the more revenue Facebook makes. And the more data the platform collects about us — our likes, our behaviors. Yes, that’s right. Our data. (Hold that thought.)

Overview

In this excerpt from her 2017 TED Talk, We’re building a dystopia just to make people click on ads, techno-sociologist Zeynep Tufekci describes how the system works and how these companies’ algorithms form the core of their business model.

As Tufekci summarizes, the personalized content feed doesn’t just deliver advertising. In fact, most of what we see is posts and shares from the people and groups in our networks. The only “middleman,” or gatekeeper, is the algorithm that generates the endless newsfeed. While it is great to keep up with friends and groups we like, the algorithm-generated experience is designed expressly to drive the clicks and views that feed the business model.
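As a rough illustration of the idea above — this is a toy sketch, not Facebook’s actual code, which is proprietary and vastly more complex — an engagement-optimized feed amounts to sorting candidate posts by a predicted-engagement score:

```python
# Toy sketch of an engagement-ranked feed. All names, weights, and
# numbers here are illustrative assumptions, not any real platform's model.

def predicted_engagement(post):
    """Crude stand-in for a machine-learned engagement model:
    posts that provoke reactions (comments, shares) score highest."""
    return post["clicks"] + 2 * post["comments"] + 3 * post["shares"]

def rank_feed(candidate_posts, limit=10):
    """Order a user's candidate posts by predicted engagement —
    not by accuracy, recency, or importance."""
    return sorted(candidate_posts, key=predicted_engagement, reverse=True)[:limit]

posts = [
    {"id": "news",    "clicks": 50, "comments": 5,  "shares": 2},
    {"id": "outrage", "clicks": 40, "comments": 30, "shares": 25},
    {"id": "family",  "clicks": 20, "comments": 10, "shares": 1},
]
feed = rank_feed(posts)  # the most provocative post rises to the top
```

The point of the sketch: nothing in the ranking function rewards truth. Whatever maximizes the engagement score — often the most provocative content — rises to the top.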

Additional danger comes from the collateral damage of what this self-teaching artificial intelligence feeds us. In the absence of meaningful human editing and structures for oversight, some of those stories and many of the ads peddle disinformation, including content from groups inciting violence that would otherwise never get “free” air time. Over and over again, the Big Tech companies have favored their business model over responsibility when deciding what content to feed us. Recently, the influence of disinformation spread on the Big Tech platforms has been most visible in politics and the global Covid pandemic.

In 2010, Eli Pariser, digital activist and former Executive Director of MoveOn, was disturbed to notice how different his own Facebook feed was from those of his friends, despite their having many friends in common. He coined the term “filter bubbles” to describe what he saw happening. Each of us is fed our own personalized stream of information. The result is a ubiquitous state of social isolation, in which we are “alone together” in our networks.

That was more than ten years ago. At the Federal level, there has been a lack of clarity, talent, and will to address the harm that arises from Big Tech platforms, which are designed for massive growth and viral engagement. Unregulated, social platforms are too easily exploited by anyone who can game the system: white supremacists radicalizing teens with YouTube videos, Covid disinformation peddlers, and far-right operatives pushing false narratives to sow public doubt in our elections and democracy.

The primary form of governance over social media companies is, by default, self-regulation. The same (mostly) young men who built the platforms with a “move fast and break things” mantra more than 15 years ago have been the only ones in charge of regulating the behemoths they created. Consider how different that is from other information industries: the FCC regulates the airwaves; the FDA regulates medical advertising. In 2018, Facebook announced its much-publicized Oversight Board, which only began hearing cases in 2021. Contrast this slow activation with the “speed to market” philosophy that allowed Facebook to develop without consumer protection guardrails.

How can concerned people make a difference?

As constituents, we need to demand that our elected officials at the state and Federal level work toward meaningful policies to address social media platforms. The artificial intelligence is getting smarter by the second, while current legislation is decades old and cannot address the core of the problem: social platforms are driven by unregulated artificial intelligence models that, in the pursuit of profit, are too easily exploited and then expose consumers to potentially false or dangerous information.

In order to advocate, we don’t need to understand what makes the algorithm tick. We need to know just enough to demand that our legislators prioritize policies for reining in social media. If our legislators don’t understand the issue, and most do not, it is their responsibility to hire staffers who can translate it and to engage professionals who specialize in these disciplines. The experts are there; it is a matter of demanding that our Congressional members engage with them to develop meaningful, future-facing policy.

Concerned but not sure where to start?

The following resources will help you better understand these issues so that you can start conversations. We need more people to be concerned and in real conversation. For an overview of the issue, we recommend the 2018 Frontline two-part series The Facebook Dilemma (Part 1, Part 2). This series addresses the issues inherent in the data-driven social platform business model.

Another helpful video resource is The Social Dilemma, a docudrama released in August 2020 and available on Netflix.

The Social Dilemma presents the issue — from the perspective of the people who created these technologies — in a very accessible format. Watch The Social Dilemma with friends and family, especially your social media-engaged children, and compare your impressions. It is also worth asking your elected officials if they have seen the movie, as many have. Tristan Harris, Founder of the Center for Humane Technology, was one of the forces behind the making of this film. Here are some notable resources from the Center for Humane Technology’s website:

  • Ledger of Harms – this working document describes the often “invisible” harms that attention-economy-driven social media platforms are causing to individuals and society.

  • For Policy Makers – this section of the website addresses policy issues and includes podcast episodes and articles.

If you want to understand the very real threat that runaway disinformation poses to democracy, we recommend the Frontline documentary A Thousand Cuts. It tells the story of Maria Ressa, founder of Rappler, an online social news network based in the Philippines, where Facebook has near 100% penetration. The story of Ressa and Rappler shows the very real threats to freedom of the press — and democracy — that we all face in a social media-driven world where amplification matters more than facts.

Stay Informed

The fact that social media platforms played a role in the events leading up to and on January 6 is not disputed, and it has prompted some legislators to action. With the Biden administration in place, we can expect to see more changes. The recent announcement that Biden will likely appoint “Big Tech trustbuster” Lina Khan to the Federal Trade Commission is likely to spur change. Thankfully, there are brilliant minds at work, keeping pace with and making sense of the ever-shifting ground around Big Tech. Here is a very short list of newsletters to help us stay informed:

Not surprisingly, the legal approaches for addressing Big Tech are complex and multi-dimensional, which we summarize in Legal Approaches. Legislation will be evolving, and a goal of #BigTechIsBroken is to be informed enough to advocate for sane, effective legislation that reverses the very real damages caused by social platforms.

Get Active

Check out the Actions page for a messaging campaign for the week of March 22, leading up to March 25, when Big Tech comes to DC for a hearing. Let our Representatives know that we are watching, and that we expect them to communicate to us and to their committees their plans to address the lack of accountability in Big Tech.

If you use Big Tech, be an educated consumer. Educate friends, family, and neighbors. We need to sound the alarms about the social platforms — which are big money makers and big channels for disinformation — and tell our representatives #BigTechIsBroken.

“The real problem of humanity is the following: we have paleolithic emotions; medieval institutions; and god-like technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall.”

—E.O. Wilson, 2009