Legal Approaches
The legal approaches to addressing the power of social media platforms are complex and multi-dimensional. The following provides high-level summaries of a few of the different approaches to addressing the runaway power of Big Tech. Each solution comes with its own perspectives, caveats, limitations and risks. The fact that internet law is complex cannot be a blocker to creating meaningful legislation and standards. Hence, it is worth becoming familiar with the different approaches to addressing Big Tech.
Antitrust
Antitrust is one of the more prominent and visible efforts to address the power of Big Tech. Following the January 6 insurrection at the U.S. Capitol, there seemed to be bipartisan support for breaking up or controlling the power of the major social media companies, albeit for different reasons. The Federal Trade Commission and attorneys general from 48 states and territories are calling for the breakup of Facebook by unwinding its acquisitions of WhatsApp and Instagram. By purchasing these companies, Facebook eliminated its competition and came to control an entire segment of the social networking market. In the video below, New York Attorney General Letitia James discusses the case for antitrust action as a means of reining in Facebook.
In February, Senator Amy Klobuchar, an antitrust advocate, introduced a comprehensive antitrust bill, the Competition and Antitrust Law Enforcement Reform Act (text), which seeks to increase the Justice Department’s resources, strengthen existing legislation (the Clayton Act and the Sherman Antitrust Act, both of which have been watered down by the courts), and implement additional reforms. The House is currently considering legislative proposals of its own, and we should expect news in the coming weeks. David Cicilline, the Chair of the House Judiciary Subcommittee on Antitrust, Commercial, and Administrative Law, describes Big Tech as a “cancer metastasizing across our economy and our country. Mark my words: Change is coming. Laws are coming.”
Section 230
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” – Section 230, Communications Decency Act, 1996
Section 230 of the Communications Decency Act is the legal clause that Mark Zuckerberg rests on when evading responsibility for the harmful and/or false content on Facebook. The law was sponsored in 1996 by Ron Wyden (D-OR) and Chris Cox (R-CA) in response to lawsuits that were being brought against the then-fledgling company Prodigy for content that was posted on one of its bulletin boards. The law was written to protect internet-based companies from an endless barrage of lawsuits over content posted by users. It was the very early days of the internet: Facebook and Twitter were not yet imagined, and Google was a lab project at Stanford. Algorithms, as we know them today, did not yet exist.
Critics believe that Section 230 provides too much of a shield, protecting the owners of social media companies from the very real harm that comes from the content they distribute. Proponents of the law equate it with an internet “free speech” law and believe strongly that Section 230 protects our freedoms. Stay tuned as arguments for and against Section 230 unfold. Everything You Need to Know About Section 230 from The Verge is a “living guide” and will be updated on an ongoing basis. The video below is a 60 Minutes segment on Section 230 and its implications from the perspective of victims of online crime and harassment.
Legislation is currently in the works to address aspects of Section 230. One approach involves carve-outs to Section 230, which would exempt certain types of content from its immunity, such as child sexual abuse, terrorism and cyber-stalking. Another approach, Protecting Americans from Dangerous Algorithms (H.R. 8636), seeks to amend Section 230 by adding a paragraph on “Algorithmic Amplification,” limiting the immunity the law provides when content amplified via algorithms results in civil rights violations or international terrorism. This approach raises the question of domestic terrorism: would amplified content leading to domestic terrorism still be protected?
Another, more recent approach is S. 299, the Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act, introduced by Senators Warner, Hirono, Klobuchar and Kaine in February. The SAFE TECH Act limits immunity for any content for which the platform company has been paid. This would not be limited to political advertising, but would cover all forms of paid or sponsored content. Further, the bill puts the burden of proof on the “publisher” of the content, which would be the social platform. The inclusion of the word “publisher” is noteworthy, as Big Tech companies, and particularly Facebook, have long insisted they are merely distribution channels and not publishers. By accepting a fee for the service, however, they would be redefined as publishers under the SAFE TECH Act.
Data Privacy
The European General Data Protection Regulation (GDPR) was rolled out in May 2018 and applies to the collection of personal information belonging to any person living in the European Union. This was the first of a new generation of privacy laws, and its introduction had very real implications for companies such as Facebook that are in the business of collecting personal data from users across geographical boundaries. TechCrunch’s A flaw-by-flaw guide to Facebook’s new GDPR privacy changes walks through Facebook’s predictably minimal responses to GDPR, which primarily put the onus back on the user to click and accept Facebook’s terms.
A similar law, the California Consumer Privacy Act (CCPA), was passed in California in 2018, and in 2019 New York passed the Stop Hacks and Improve Electronic Data Security (SHIELD) Act. Many other states have similar laws on the books. It is also possible that we could see laws take shape at the national level. In 2019, Senator Maria Cantwell (D-WA) introduced the Consumer Online Privacy Rights Act (COPRA), which addressed the collection of personal data and consumer privacy protection. With a new administration, we should expect to see this or similar bills re-introduced.
Platform Transparency and Accountability
This is the area of policy that addresses the “magic” of social media platforms. These are the algorithms that drive the delivery and placement of content on our newsfeeds, group recommendations, the targeting of advertising, and the results that are returned from a seemingly innocuous search. As with data privacy, the EU appears to be several important steps ahead of the US in unifying behind policies that target this unseen and very powerful aspect of social platforms.
These two EU strategic initiatives are part of a larger EU strategy, “A Europe fit for a digital age”:
The Digital Services Act – These rules seek to protect consumers, require transparency and accountability, and allow for greater democratic oversight of how platforms function and the risks they pose.
The Digital Markets Act – These rules apply to “large online platforms” and address how the platforms function as gatekeepers for other services and businesses. The rules would serve to provide a more competitive environment for business developers and more real choice for consumers.
Both of these are highly readable, summary-style documents. The ideas are presented at a high level and can be understood by a non-technical audience. While the devil is in the details, this type of presentation helps demystify and knock down barriers that have guarded technology systems for so long. The US should prioritize similar comprehensive, accessible and future-facing strategies.
In her 2018 article, Free Speech Is Not the Same As Free Reach, Renee DiResta addresses the algorithms that amplify content, as opposed to the content itself, bringing responsibility back to Big Tech.
But in this moment, the conversation we should be having—how can we fix the algorithms?—is instead being co-opted and twisted by politicians and pundits howling about censorship and miscasting content moderation as the demise of free speech online. It would be good to remind them that free speech does not mean free reach. There is no right to algorithmic amplification. In fact, that’s the very problem that needs fixing. — Renee DiResta