How to Hold Social Media Accountable for Undermining Democracy

The problem with social media isn’t just what users post — it’s what the platforms decide to do with that content. Far from being neutral, social media companies are constantly making decisions about which content to amplify, elevate, and suggest to other users. Given their business model, which promotes scale above all, they’ve often actively amplified extreme, divisive content — including dangerous conspiracy theories and misinformation. It’s time for regulators to step in. A good place to start would be clarifying who should benefit from Section 230 of the Communications Decency Act, which has been vastly over-interpreted to provide blanket immunity to all internet companies — or “internet intermediaries” — for any third-party content they host. Specifically, it’s time to redefine what an “internet intermediary” means and create a more accurate category to reflect what these companies truly are, such as “digital curators” whose algorithms decide what content to boost, what to amplify, and how our feeds are curated.

The storming of the U.S. Capitol Building on Wednesday by a mob of pro-Trump insurrectionists was shocking, but it was not surprising to anyone who has followed the growing prominence of conspiracy theorists, hate groups, and purveyors of disinformation online.

While the blame for President Trump’s incitement to insurrection lies squarely with him, the biggest social media companies — most prominently my former employer, Facebook — are absolutely complicit. Not only have they allowed Trump to lie and sow division for years; their business models have exploited our biases and weaknesses and abetted the growth of conspiracy-touting hate groups and outrage machines. They have done this without bearing any responsibility for how their products and business decisions affect our democracy; in this case, by allowing an insurrection to be planned and promoted on their platforms.

This isn’t new information. I, for one, have written and spoken about how Facebook profits by amplifying lies, providing dangerous targeting tools to political operatives seeking to sow division and distrust, and polarizing and even radicalizing users. As we neared the 2020 election, a chorus of civil rights leaders, activists, journalists, and academics wrote recommendations, publicly condemned Facebook, and privately back-channeled content policy proposals; employees resigned in protest; advertisers boycotted; legislators held hearings.

The events of last week, however, cast these facts in a new light — and demand an immediate response. In the absence of any U.S. laws to address social media’s responsibility to protect our democracy, we have ceded the decision-making about which rules to write, what to enforce, and how to steer our public square to CEOs of for-profit internet companies. Facebook intentionally and relentlessly scaled to dominate the global public square, yet it does not bear any of the responsibilities of traditional stewards of public goods, including the traditional media.

It is time to define responsibility and hold these companies accountable for how they aid and abet criminal activity. And it is time to listen to those who have shouted from the rooftops about these issues for years, as opposed to allowing Silicon Valley leaders to dictate the terms.

We need to change our approach not only because of the role these platforms have played in crises like last week’s, but also because of how CEOs have responded — or failed to respond. The reactive decisions about which content to take down, which voices to downgrade, and which political ads to allow have amounted to tinkering around the margins of the bigger issue: a business model that rewards the loudest, most extreme voices.

Yet there does not seem to be the will to reckon with that problem. Mark Zuckerberg did not choose to block Trump’s account until after the U.S. Congress certified Joe Biden as the next president of the United States. Given that timing, this decision looks more like an attempt to cozy up to power than a pivot toward more responsible stewardship of our democracy. And while the decision by many platforms to silence Trump is an obvious response to this moment, it fails to address how millions of Americans have been drawn into conspiracy theories online and led to believe this election was stolen — an issue that social media leaders have never truly addressed.

A look through the Twitter feed of Ashli Babbitt, the woman who was killed while storming the Capitol, is eye-opening. A 14-year Air Force veteran, she spent the last months of her life retweeting conspiracy theorists such as Lin Wood — who was finally suspended from Twitter the day after the attack (and has therefore disappeared from her feed) — QAnon followers, and others calling for the overthrow of the government. A New York Times profile paints her as a vet who struggled to keep her business afloat and who was increasingly disillusioned with the political system. The likelihood that social media played a significant part in steering her down the rabbit hole of conspiracy theories is high, but we will never truly know how her content was curated, what groups were recommended to her, or whom the algorithms steered her toward.

If the public, or even a restricted oversight body, had access to the Twitter and Facebook data to answer those questions, it would be harder for the companies to claim they are neutral platforms that merely show people what they want to see. Guardian journalist Julia Carrie Wong wrote last June about how Facebook’s algorithms kept recommending QAnon groups to her. Wong was one of a chorus of journalists, academics, and activists who relentlessly warned Facebook that these conspiracy theorists and hate groups were not only thriving on its platforms, but that its own algorithms were amplifying their content and recommending their groups to users. The key point is this: This is not about free speech and what individuals post on these platforms. It is about what the platforms choose to do with that content: which voices they decide to amplify, and which groups are allowed to thrive and even grow with the help of the platforms’ own algorithms.

So where do we go from here?

I have long advocated that governments must define responsibility for the real-world harms caused by these business models, and impose real costs for the damaging effects they are having on our public health, our public square, and our democracy. As it stands, there are no laws governing how social media companies treat political ads, hate speech, conspiracy theories, or incitement to violence. This issue is unduly complicated by Section 230 of the Communications Decency Act, which has been vastly over-interpreted to provide blanket immunity to all internet companies — or “internet intermediaries” — for any third-party content they host. Many argue that to solve some of these issues, Section 230, which dates back to 1996, must at least be updated. But how, and whether it alone will solve the myriad issues we now face with social media, is hotly debated.

One solution I continue to push is clarifying who should benefit from Section 230 to begin with, which often breaks down into the publisher vs. platform debate. To still categorize social media companies — which curate content, whose algorithms decide what speech to amplify, which nudge users toward the content that will keep them engaged, which connect users to hate groups, and which recommend conspiracy theorists — as “internet intermediaries” that should enjoy immunity from the consequences of all this is beyond absurd. The notion that the few tech companies that steer how more than 2 billion people communicate, find information, and consume media should enjoy the same blanket immunity as a truly neutral internet company makes it clear that it is time to upgrade the rules. These companies are not neutral intermediaries.

However, that doesn’t mean we need to completely rewrite or kill Section 230. Instead, why not start with a narrower step: redefining what an “internet intermediary” means? We could then create a more accurate category to reflect what these companies truly are, such as “digital curators” whose algorithms decide what content to boost, what to amplify, and how our feeds are curated. From there, we can discuss how to regulate them appropriately, focusing on transparency and regulatory oversight of tools such as recommendation engines, targeting tools, and algorithmic amplification, rather than on the non-starter of regulating actual speech.

By insisting on real transparency about what these recommendation engines are doing, and how the curation, amplification, and targeting are happening, we could separate the question of whether Facebook should be responsible for what a user posts from its responsibility for how its own tools treat that content. I want us to hold the companies accountable not for the fact that someone posts misinformation or extreme rhetoric, but for how their recommendation engines spread it, how their algorithms steer people toward it, and how their tools are used to target people with it.
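To make that distinction concrete, here is a minimal, purely illustrative sketch in Python. The class names, fields, and scoring logic are entirely hypothetical, not any platform’s actual system. It shows how an engagement-maximizing ranker will surface the most provocative post regardless of what it says, and how a simple record of each ranking decision is the kind of artifact that transparency rules could require platforms to expose to auditors or regulators.

```python
# Hypothetical sketch: the post itself vs. the ranking decision that amplifies it.
from dataclasses import dataclass
from typing import List


@dataclass
class Post:
    author: str
    text: str
    predicted_engagement: float  # the platform's own estimate of clicks, shares, reactions


@dataclass
class RankingDecision:
    post: Post
    score: float
    reason: str  # the disclosure a transparency rule could require for every amplification


def rank_feed(posts: List[Post]) -> List[RankingDecision]:
    """An engagement-maximizing ranker: it surfaces whatever is predicted to keep users
    engaged, regardless of whether the content is true or divisive. The 'reason' field
    is the audit trail an oversight body could inspect."""
    decisions = [
        RankingDecision(
            post=p,
            score=p.predicted_engagement,
            reason=f"boosted: predicted engagement {p.predicted_engagement:.2f}",
        )
        for p in posts
    ]
    return sorted(decisions, key=lambda d: d.score, reverse=True)


if __name__ == "__main__":
    feed = rank_feed([
        Post("user_a", "Local bake sale this weekend", predicted_engagement=0.12),
        Post("user_b", "Outrage-bait conspiracy claim", predicted_engagement=0.87),
    ])
    for d in feed:
        print(f"{d.post.text!r} -> {d.reason}")
```

In this toy example, the user is responsible for writing the second post; the decision to push it to the top of the feed, and the record explaining why, are the platform’s own doing. That second part is what I am arguing should be visible and accountable.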

To be clear: Creating the rules for how to govern online speech and define platforms’ responsibility is not a magic wand to fix the myriad harms emanating from the internet. This is one piece of a larger puzzle of things that will need to change if we want to foster a healthier information ecosystem. But if Facebook were obligated to be more transparent about how it amplifies content, how its targeting tools work, and how it uses the data it collects on us, I believe that would change the game for the better.

As long as we continue to leave it to the platforms to self-regulate, they will continue to merely tinker around the margins of content policies and moderation. We’ve seen that the time for that is long past — what we need now is to reconsider how the entire machine is designed and monetized. Until that happens, we will never truly address how platforms are aiding and abetting those intent on harming our democracy.
