Expert Perspectives

  • Questions Congress Should Ask the Tech CEOs on Disinformation and Extremism – Yaël Eisenstat and Justin Hendrix (Tech Policy Press, March 20, 2021) – Tech Policy Press asked ten experts what questions they would have Congress put to the tech CEOs. The questions go both wide and deep, covering a range of relevant areas, including First Amendment issues, the reach of recommendation and amplification tools, and progress and results from promised internal reviews. Several questions for Google's Sundar Pichai focus on YouTube, whose CEO is absent from the hearing – a notable omission, given YouTube's large role in spreading Stop the Steal disinformation.

  • How Facebook Got Addicted to Spreading Disinformation by Karen Hao (MIT Technology Review, March 11, 2021) – This article features Joaquin Quiñonero Candela, Facebook's director of AI and later head of its Responsible AI team. While the Cambridge Analytica scandal and a growing public acknowledgement that Facebook was a haven for hate groups led the company to set up initiatives such as a “Responsible” AI team, Facebook leadership placed growth above all other priorities, even when that meant blatant disregard of the team's recommendations.

  • How Social Media’s Obsession with Scale Supercharged Disinformation by Joan Donovan (Harvard Business Review, January 13, 2021) – This article explores how the growth-at-any-cost mindset created the platforms where disinformation would be amplified at meteoric rates, leading to what we witnessed at the Capitol on January 6.

  • Free Speech is Not the Same as Free Reach – In her article (Wired, 2018), Renee DiResta argues that addressing platform algorithms cannot be equated with censorship. “There is no right to algorithmic amplification.”

  • Center for Humane Technology – This group works to educate the public on the effects of social media platforms on our society. They approach the topic as technologists – one of the group's founders, Tristan Harris, started as a design ethicist at Google, studying human persuasion in technology design. The Center for Humane Technology's website resources focus on the effects that “inhumane” technology design is having on society at large. For example, the Ledger of Harms documents the effects that technology platforms are having in different areas, such as human attention and cognition, younger generations, and systemic oppression. Their podcast episodes often offer thoughtful, compelling perspectives on technology and society.

  • AlgoTransparency Manifesto – AlgoTransparency was founded by former YouTube AI developer Guillaume Chaslot. A very brief read, its Manifesto makes a clear case for algorithmic transparency and provides the basic rationale for demanding transparency from Big Tech algorithms.

  • Stopping Fake News – Anil Dash's Function Podcast – An interview with Fadi Quran, Campaign Director at Avaaz. The conversation offers reasons for hope, along with a sense of urgency and a call to duty: we must press our representatives to take a stance on Big Tech.

Continuing Education

One of the most effective ways to learn about the many dimensions and effects of big tech is to follow trustworthy sources who study online culture and business models. Below are Twitter accounts worth following; they alone justify creating a Twitter account if you don't already have one:

  • @BostonJoan – Joan Donovan, Research Director at Harvard's Shorenstein Center and creator of MediaManipulation.org.

  • @beccalew – Becca Lewis, Stanford Research Affiliate and expert on social media as a tool for radicalization, particularly YouTube.

  • @noUpside – Renee DiResta – Policy Researcher at Stanford Internet Observatory

  • @justinhendrix – Justin Hendrix – Founder of Tech Policy Press

  • @zeynep – Zeynep Tufekci – Techno-sociologist focusing on the social implications of new technologies

  • @HumaneTech_ – The Center for Humane Technology

  • @FBoversight – The Real Facebook Oversight Board, working to hold Facebook to account

  • @themarkup – Investigators dedicated to watching and reporting on big tech.

  • @shoshanazuboff – Shoshana Zuboff, author of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.

  • @timnitGebru – Timnit Gebru – Computer scientist who studies algorithmic bias and data mining, co-founder of Black in AI

  • @chrisinsilico – Christopher Wylie – whistleblower for Cambridge Analytica

  • @gchaslot – Guillaume Chaslot – Former Google/YouTube developer and advisor for Center for Humane Technology and founder of AlgoTransparency

  • @mer__edith – Meredith Whittaker – Former Google developer researching big tech power and AI

  • @BrandyZadrozny – Brandy Zadrozny – NBC News reporter on technology, platforms, and politics.

  • @alexstamos – Alex Stamos – Researcher at Stanford Internet Observatory and Election Integrity Partnership

  • @mariaressa – Maria Ressa – Founder of Rappler reporting from the Philippines

  • @kevinroose – Kevin Roose – New York Times technology columnist, author of the forthcoming Future Proof.

  • @JuliaAngwin – Julia Angwin – Editor in Chief of the Markup

This list will continue to grow. If you’re on Twitter, click here to follow the list.