Trump and Twitter are on a likely showdown path with expanded misinfo rules


Social media firms are trying to stop making things even worse before it’s too late.

Kate Cox


With just over 50 days to go before the 2020 US presidential election, everything is—predictably—hitting the fan. Foreign interference is of course an ever-present threat, with known actors attempting both to amplify social discord and to literally hack campaigns. But good old homegrown deliberate misinformation is also a significant threat to this year’s entire electoral process.

Misinformation spreads rapidly thanks to the advent of social media—especially Facebook, Google, and Twitter. Facebook already announced its (weak) plan for combating rampant falsehoods, and this week Twitter and Google both made their plans public as well.

Tweet, tweet…

Twitter’s existing policy prohibits users from posting content that includes “false claims on how to participate in civic processes” or “content that could intimidate or suppress participation.” In other words, at a very high level you’re not allowed to use Twitter to lie about voting or tell people not to vote.

False information can come from anywhere, and plenty of us have seen some spread—sometimes unwittingly—by our own friends and family online. Unfortunately, one of the most prevalent and high-profile sources of false election claims has been sitting US president and 2020 candidate Donald Trump, and his favorite platform is Twitter.

Twitter first began appending fact-check labels to Trump’s tweets in May, when the president alleged that all mail-in voting is inherently fraudulent (it’s not). Twitter added a small label to the bottom of the tweets with a link, reading, “Get the facts about mail-in ballots.”

That kind of fact-check, however, is not exactly detailed or obvious, and Twitter appears to have recognized that.

“People who use our service have told us that non-specific, disputed information that could cause confusion about an election should be presented with more context,” Twitter wrote in a company blog post. So beginning September 17, Twitter will be providing a whole lot more context, and it will “label or remove false or misleading information intended to undermine public confidence in an election or other civic process.”

The list of content Twitter will now label or delete includes:

1. False or misleading information that causes confusion about the laws and regulations of a civic process, or officials and institutions executing those civic processes.

2. Disputed claims that could undermine faith in the process itself, e.g. unverified information about election rigging, ballot tampering, vote tallying, or certification of election results.

3. Misleading claims about the results or outcome of a civic process which calls for or could lead to interference with the implementation of the results of the process, e.g. claiming victory before election results have been certified, inciting unlawful conduct to prevent a peaceful transfer of power or orderly succession.

It seems likely, then, to be only a matter of when, not if, the expanded policy once again puts Twitter on a collision course with the president and other high-profile administration or conservative figures. But Twitter is very clear that the policy will apply to all (even if some get warnings and others get banned). “We will not permit our service to be abused around civic processes, most importantly elections,” the post concludes. “Any attempt to do so—both foreign and domestic—will be met with strict enforcement of our rules, which are applied equally and judiciously for everyone.”

Searching for answers

Google’s new policy, meanwhile, is less about what people say and more about what people find. The company is fully aware that many of its search features, including autocomplete, can direct users to places they otherwise might not have gone. As such, it’s trying to reduce the amount of misinformation a search might serve to you.

Google will be beefing up its fact checks in search, Google News, and Google Images, the company said in a blog post. So far in 2020, users have seen more than four billion fact checks come up, the company says, which is more than in all of 2019.

Not all users, however, are going to Google News and looking at the helpful “fact check” and “full coverage” modules when they seek information. So Google’s also tweaking search.

“We have improved our automated systems to not show predictions if we detect that the query may not lead to reliable content,” Google said, admitting that AI can’t catch everything and that sometimes human enforcers need to help. Google is also enacting changes to election-related search queries:

We will remove predictions that could be interpreted as claims for or against any candidate or political party. We will also remove predictions that could be interpreted as a claim about participation in the election—like statements about voting methods, requirements, or the status of voting locations—or the integrity or legitimacy of electoral processes, such as the security of the election. What this means in practice is that predictions like “you can vote by phone” as well as “you can’t vote by phone,” or a prediction that says “donate to” any party or candidate, should not appear in Autocomplete.

That said, anyone can still search for those terms or any others if they’re willing to type out whole words instead of relying on autocomplete. And the reliability of the links anyone follows through is just as big a grab bag as ever.
