It took Facebook two months to realize “Stop the Steal” might turn violent


Coordinated harm —

Social media giant was apparently unprepared for “authentic” calls for violence.

Tim De Chant


It took Facebook less than two days to shut down the original “Stop the Steal” group, but two months to realize that the group and its offshoots had coalesced into a “harmful movement” that thrived on the platform and would ultimately lead to violence.

The news comes from a Facebook internal report analyzing the company’s response to the events leading up to and culminating in the January 6 insurrection at the US Capitol. Reporters at BuzzFeed News obtained the report, titled “Stop the Steal and Patriot Party: The Growth and Mitigation of an Adversarial Harmful Movement” and published the document today after Facebook reportedly began restricting employees’ access to it.

The social media company was apparently unprepared for the idea that people would use their own, real accounts to spread misinformation, calls for violence, and other antidemocratic content. Among its conclusions, the report acknowledged that while Facebook had prepared tools to combat “inauthentic behavior,” such as provocations from a fake account run by Russian intelligence operatives, the company was woefully unprepared to confront “coordinated authentic harm.” (Emphasis Facebook’s.)

When compared with other civic groups, groups affiliated with “Stop the Steal” were 48 times more likely to have at least five pieces of content classified as “violence and incitement” and 12 times more likely to have at least five pieces of hateful content.

The original “Stop the Steal” group was created on election night, November 3, by Kylie Jane Kremer, a pro-Trump activist and the daughter of Amy Kremer, a political operative and Tea Party organizer. The group spread disinformation about the US election results, falsely claiming there was enough voter fraud to change the outcome. It quickly grew to 320,000 members, with a reported million more on the waitlist by the time it was shut down on November 5.

But despite taking the group down for “high levels of hate and violence and incitement,” Facebook did not appear to think the group’s motivation was terribly harmful. “With our early signals, it was unclear that coordination was taking place, or that there was enough harm to constitute designating the term”—presumably an action that would have designated similar groups as harmful or hateful.

Because there was no designation, splinter groups quickly popped up and thrived for two months. Even a couple of days after the insurrection, 66 groups were still active. The largest group was private, but it bragged that it had 14,000 members.

Super-inviters

The rapid growth of those groups was due to what Facebook calls “super-inviter” accounts, which sent more than 500 invites each, according to the report. Facebook identified 137 such accounts and said they were responsible for attracting two-thirds of the groups’ members.
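The report does not describe Facebook’s internal tooling, but the arithmetic it cites is straightforward: count invites per account, apply the 500-invite threshold, and measure how much of the growth those accounts drove. A minimal sketch of that calculation, using an entirely hypothetical invite log, might look like this:

```python
from collections import Counter

# Hypothetical invite log: (inviter_account_id, invited_account_id) pairs.
# In practice this would come from platform data; here it is synthetic.
invite_log = (
    [("acct_001", f"user_{i}") for i in range(1200)]
    + [("acct_002", f"user_{i}") for i in range(1200, 1800)]
    + [("acct_003", f"user_{i}") for i in range(1800, 1850)]
)

SUPER_INVITER_THRESHOLD = 500  # per the report: more than 500 invites each

invites_per_account = Counter(inviter for inviter, _ in invite_log)

super_inviters = {
    account: count
    for account, count in invites_per_account.items()
    if count > SUPER_INVITER_THRESHOLD
}

total_invites = sum(invites_per_account.values())
super_inviter_share = sum(super_inviters.values()) / total_invites

print(f"Super-inviter accounts: {len(super_inviters)}")
print(f"Share of invites sent by super-inviters: {super_inviter_share:.0%}")
```

In this toy example, two of three accounts clear the threshold and account for roughly 97 percent of all invites, which mirrors the lopsided pattern the report describes: 137 accounts drawing in two-thirds of the groups’ members.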

Many of those “super-inviter” accounts appeared to be coordinating across the different groups, communicating both on and off Facebook’s various platforms. One user employed disappearing stories, which vanish from the platform after 24 hours, and chose his words carefully to avoid detection, presumably by automated moderation.

The Facebook report suggests that future moderation should look more closely at groups’ ties to militias and hate organizations. “One of the most effective and compelling things we did was to look for overlaps in the observed networks with militias and hate orgs. This worked because we were in a context where we had these networks well mapped.”
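The report gives no implementation details, but the overlap check it describes is, at its core, an intersection between two mapped networks of accounts. A simplified illustration, using invented account sets for both the observed groups and a previously mapped organization, would be:

```python
# Hypothetical sets of account IDs. In the report's framing, one set is the
# membership of the observed "Stop the Steal" groups and the other is a
# previously mapped militia or hate organization; both are invented here.
stop_the_steal_members = {"acct_101", "acct_102", "acct_103", "acct_104", "acct_105"}
mapped_militia_network = {"acct_103", "acct_105", "acct_200", "acct_201"}

overlap = stop_the_steal_members & mapped_militia_network
overlap_rate = len(overlap) / len(stop_the_steal_members)

print(f"Accounts in both networks: {sorted(overlap)}")
print(f"Share of group members tied to the mapped network: {overlap_rate:.0%}")
```

As the report notes, this kind of check only works when the comparison networks have already been mapped well, which is exactly the context its authors say they had.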

While Facebook may have mapped the networks, it has had a spotty record in taking action against them. In fact, as recently as last month, the site was found to be autogenerating pages for white supremacist and militia movements when users listed those groups as their employer.

The report makes clear that this was a learning experience for the company. One of the main conclusions is that the investigators “learned a lot” and that a task force has developed a set of tools to identify coordinated authentic harm. It also notes that there is a team “working on a set of cases in Ethiopia and Myanmar to test the framework in action.”

“We’re building tools and protocols and having policy discussions to help us do this better next time,” the report says.
