It’s on Digital Platforms to Make the Internet a Better Place

For years, users of digital technology have shouldered sole responsibility for navigating misinformation, negativity, privacy risks, digital abuse, and other online hazards. But maintaining digital well-being is a heavy weight to place on an individual's shoulders. What if we didn't have to carry quite as much of that burden ourselves? What if we expected a bit more of the digital platform providers that host our virtual interactions?

There are three key responsibilities we should expect of all of our digital platform providers to help make more positive digital spaces. First, establish meaningful norms and standards for participation in virtual spaces — and communicate them clearly to users. Second, verify human users and weed out the bots. Third, improve content curation by addressing posts that incite racism, violence, or illegal activity; identifying misinformation; and encouraging users to be content moderators.

We live in a world of unprecedented access to technology. Even before the coronavirus pandemic, technology allowed us to stay connected with family and friends, stream videos to our homes, and learn new skills at the tap of a finger. When the pandemic forced us to be socially distant, technology provided a way for many of our most important life activities to continue as school, work, church, family gatherings, doctor’s appointments, and more moved to virtual spaces.

Yet, like any powerful tool, technology also comes with dangers. In addition to connecting families and accelerating learning, our digital world can also be a source of misinformation, negativity, privacy risks, and digital abuse, among other harms. Even good apps and websites, if overused, can crowd out other healthy digital and physical activities in our lives. We have all felt the increasing pressure of trying to maintain our well-being in the face of these digital challenges. Of course, we, the citizens of our digital world, have a responsibility for ensuring our own digital health. It's on us to find accurate sources of information, decide what personal data we are willing to trade for access to online experiences, and strike a balance among different online activities. These responsibilities roll over to our families, where we feel pressure to create the right digital culture for our children and other family members to thrive as well. Maintaining digital well-being is a heavy weight to place on an individual's shoulders.

But what if we didn’t have to carry quite as much of the burden of maintaining our digital well-being? What if we expected a bit more of the digital platform providers that hosted our virtual interactions?

Author and entrepreneur Eli Pariser says we should expect more from our digital platform providers in exchange for the power we give them over our discourse. He believes we should ask not just how we make digital tools user-friendly, but also how we make digital tools public-friendly. In other words, it’s our responsibility to make sure our digital platforms never serve individuals at the expense of the social fabric on which we all depend.

With that in mind, let’s look at three key responsibilities we should expect of all of our digital platform providers.

Establish Meaningful Norms

Virtual platforms must establish and clearly communicate standards for participation in their virtual spaces. Some already do a good job of this, including Flickr, Lonely Planet, and The Verge. Flickr’s community norms are simple, readable guidelines that are clearly designed for community members (not just lawyers) to understand. They include some clear “dos” like:

Play nice. We’re a global community of many types of people, who all have the right to feel comfortable and who may not think what you think, believe what you believe, or see what you see. So, be polite and respectful in your interactions with other members.

And they also include some clear “don’ts”:

Don’t be creepy. You know the guy. Don’t be that guy. If you are that guy, your account will be deleted.

All digital platforms should establish a clear code of conduct, and it should be actively embedded throughout the virtual space. Even the examples I mentioned have their norms buried pretty deeply in a back corner of their sites. One way to embed them is through sign-posting: creating messages and reminders of the norms of behavior throughout the platform. Imagine if, instead of one more ad for new socks on Pinterest, a reminder appeared to “post something kind about someone else today.” Or imagine if, instead of watching yet another car insurance ad before a YouTube video plays, we were presented with tips for how to respectfully disagree with the content of someone else’s video. Sure, this would cost the platform providers a fraction of a percentage of advertising revenue, but that’s a very reasonable expectation if they’re to run a responsible, trusted platform.

Verify Human Users

A second expectation is for platform providers to take more seriously the responsibility of identifying the users of their platforms that are not human. Some of the most divisive posts that flood the virtual world each day are generated by bots, which are capable of arguing their digital positions with unsuspecting humans for hours on end. One study found that during the height of the Covid-19 pandemic, nearly half of the accounts tweeting about the virus were bots. YouTube and Facebook both have about as many robot users as human users. In a three-month period in 2018, Facebook removed over 2 billion fake accounts, but until additional verification is added, new fake accounts, many of them created by bots, will appear almost as quickly as the old ones are removed.

In addition to clearly labeling bots as bots, platform providers should do more to verify the identity of human users as well, particularly those that are widely followed. Many of the dark and creepy parts of our virtual world exist because online platforms have been irresponsibly lax in verifying that users are who they say they are. This doesn’t mean platforms couldn’t still allow anonymous users, but such accounts should be clearly labeled as unverified so that when your “neighbor” asks your daughter for information about her school online, she can quickly recognize if she should be suspicious. The technology to do this sort of verification exists and is fairly straightforward (banks and airlines use it all the time). Twitter piloted this approach through verified accounts but then stopped, claiming it didn’t have the bandwidth to continue. The lack of expectation for verified identities enables fraud, cyberbullying, and misinformation. If digital platforms want us to trust them to be the host of our virtual communities, we should expect them to identify and call out users who are not who they say they are.

Improve Content Curation

The third responsibility of digital platforms is to be more proactive in curating the content they host. This starts with quickly addressing posts that incite racism, violence, or terrorist activity, as well as features that facilitate buying illegal drugs, identity theft, or human trafficking. In 2019, Twitter began adding warning labels to bullying or misleading tweets from political leaders. A notable example is when a tweet from former President Donald Trump was flagged for claiming that mail-in ballots lead to widespread voter fraud. Apple has also taken this responsibility seriously with a rigorous review process for apps added to its mobile devices. Unlike the open web, Apple does not permit apps on its devices that distribute porn, encourage consumption of illegal drugs, or encourage minors to consume alcohol or smoke. Apple and Google have both begun requiring apps on their respective stores to have content-moderation plans in place in order to remain listed.

Effective content moderation also means doing more to empower human moderators. Reddit and Wikipedia are the largest examples of platforms that rely on human moderators to make sure their community experiences are in line with their established norms. In both cases, humans are not just playing a policing role but taking an active part in developing the content on the platform. Both rely on volunteer curators, but we could reasonably expect human moderators to be compensated for their time and energy in making virtual community spaces more effective. This could be done in a variety of ways. For instance, YouTube currently incentivizes content creators to upload videos to its platform by offering them a percentage of advertising revenue; a similar incentive could be offered to users who help curate the content on these platforms. YouTube’s current approach, though, is to use bots to moderate and curate. As author and technologist James Bridle points out, content on YouTube that is created by bots is also policed by bots, and human users of the platform are left paying the price.

Another simple way to empower users as moderators is to provide more nuanced options for reacting to each other’s content. Right now, “liking” and “disliking” are about the only options we have to respond to content on shared platforms. Some platforms have added a happy face, a heart, and most recently a hug, but that is still an incredibly limited set of response options for the variety of content flowing around our digital world.

In the physical world, soft-negative feedback is a critical tool for helping people learn the norms of a community space. Most of the feedback we give in the physical world is much more subtle than what we can do online. If we were in a conversation with someone who said they were not going to get a vaccine because it contains a secret tracking microchip, we might respond with an “I don’t know about that” or a “hmmm, you might want to check your facts.” But in the virtual world, our only option might be to click the “thumbs down” button, if that button exists on that platform at all. In a world where very subtle reactions carry great significance, giving a big “thumbs down” to a friend is the social equivalent of a full-frontal assault. On the other hand, if you choose to sidestep the awkward moment by unfollowing your friend, you have just made sure they never hear your feedback again, likely reducing their sounding-board pool to people with similar views, which is even less helpful for establishing shared societal norms. What if, instead of just “liking” or “disliking,” we could tag things as “I question the source of this post”?
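To make the idea a bit more concrete, here is a minimal sketch, in TypeScript, of what a richer reaction taxonomy might look like as a data model. Everything here is hypothetical: the names (ReactionKind, reactTo) and the option to send feedback privately to the author are illustrative assumptions, not any existing platform’s design.

```typescript
// Hypothetical sketch: a reaction taxonomy that goes beyond "like"/"dislike".
// None of these names correspond to a real platform's API.

type ReactionKind =
  | "like"
  | "dislike"
  | "question-source"      // "I question the source of this post"
  | "needs-context"        // soft-negative: "you might want to check your facts"
  | "respectful-disagree"; // disagreement without a public thumbs-down

interface Reaction {
  postId: string;
  userId: string;
  kind: ReactionKind;
  // Soft feedback could be surfaced only to the post's author rather than as a public score.
  visibility: "public" | "author-only";
  createdAt: Date;
}

// Record a reaction; a real platform would persist it and decide how to display it.
function reactTo(
  postId: string,
  userId: string,
  kind: ReactionKind,
  visibility: Reaction["visibility"] = "author-only"
): Reaction {
  return { postId, userId, kind, visibility, createdAt: new Date() };
}

// Example: gently flag a friend's post instead of giving a public thumbs-down.
const feedback = reactTo("post-123", "user-456", "question-source");
console.log(feedback.kind); // "question-source"
```

The design choice worth noticing is the visibility field: it lets a platform offer the online equivalent of a quiet “I don’t know about that,” delivered to the author alone, rather than forcing every reaction to be a public verdict.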

Digital platform providers care what their users think; their continued existence depends on our continued trust. We should expect digital platforms to establish clear norms of behavior and to infuse their environments with reminders that teach those norms. We should call for them to do a better job of clearly labeling nonhuman users of their platforms and to empower their users to be more involved in content curation.

Adapted from the book Digital for Good: Raising Kids to Thrive in an Online World (Harvard Business Review Press, 2021).
