Facebook is rating the trustworthiness of its users on a scale from zero to one

By Elizabeth Dwoskin
Silicon Valley reporter
August 21, 2018 at 10:00 AM

https://www.washingtonpost.com/technology/2018/08/21/facebook-is-rating-trustworthiness-its-users-scale-zero-one/

SAN FRANCISCO -- Facebook has begun to assign its users a reputation score, 
predicting their trustworthiness on a scale from zero to one.

The previously unreported ratings system, which Facebook has developed over the 
last year, shows that the fight against the gaming of tech systems has evolved 
to include measuring the credibility of users to help identify malicious actors.

Facebook developed its reputation assessments as part of its effort against 
fake news, Tessa Lyons, the product manager who is in charge of fighting 
misinformation, said in an interview. The company, like others in tech, has 
long relied on its users to report problematic content - but as Facebook has 
given people more options, some users began falsely reporting items as untrue, 
a new twist on information warfare that it had to account for.

It’s “not uncommon for people to tell us something is false simply because they 
disagree with the premise of a story or they’re intentionally trying to target 
a particular publisher,” said Lyons.

Users’ trustworthiness score between zero and one isn’t meant to be an absolute 
indicator of a person’s credibility, Lyons said, nor is there a single unified 
reputation score that users are assigned. Rather, the score is one
measurement among thousands of new behavioral clues that Facebook now takes 
into account as it seeks to understand risk. Facebook is also monitoring which 
users have a propensity to flag content published by others as problematic, and 
which publishers are considered trustworthy by users.

It is unclear what other criteria Facebook measures to determine a user’s 
score, whether all users have a score, and in what ways the scores are used.

The reputation assessments come at a moment when Silicon Valley, faced with 
Russian meddling, fake news, and ideological actors who abuse the companies’ 
policies, is recalibrating its approach to risk - and is finding untested, 
algorithmically driven ways to understand who poses a threat. Twitter, for 
example, now factors in the behavior of other accounts in a person’s network as 
a risk factor in judging whether a person’s tweets should be spread.

But how these new credibility systems work is highly opaque, and the companies 
are wary of discussing them, in part because doing so might invite further 
gaming -- a predicament that the firms increasingly find themselves in as they 
weigh calls for more transparency around their decision-making.

“Not knowing how [Facebook is] judging us is what makes us uncomfortable,” said 
Claire Wardle, director of First Draft, a research lab within the Harvard 
Kennedy School that focuses on the impact of misinformation and is a 
fact-checking partner of Facebook, of the efforts to assess people’s 
credibility. “But the 
irony is that they can’t tell us how they are judging us - because if they do 
the algorithms that they built will be gamed.”

The system Facebook built for users to flag potentially unacceptable content 
has in many ways become a battleground. The activist Twitter account Sleeping 
Giants called on followers to take technology companies to task over the 
conservative conspiracy theorist Alex Jones and his Infowars site, leading to a 
flood of reports about hate speech that resulted in him and Infowars being 
banned from Facebook and other tech companies’ services. At the time, 
executives at the company questioned whether the mass-reporting of Jones’ 
content was part of an effort to trick Facebook’s systems. False reporting has 
also become a tactic in far-right online harassment campaigns, experts say.

Tech companies have a long history of using algorithms to make predictions 
about people, from how likely they are to buy products to whether they are 
using a false identity. But with the backdrop of increased misinformation, now 
they are making increasingly sophisticated editorial choices about who is 
trustworthy.

In 2015, Facebook gave users the ability to report posts they believe to be 
false. A tab in the upper right-hand corner of every Facebook post lets people 
report problematic content for a variety of reasons, including pornography, 
violence, unauthorized sales, hate speech, and false news.

Lyons said that she soon realized that many people were reporting posts as 
false simply because they did not agree with the content. Because Facebook 
forwards posts that are marked as false to third-party fact-checkers, she said 
it was important to build systems to assess whether the posts were likely to be 
false in order to make efficient use of fact-checkers’ time. That led her team 
to develop ways to assess whether the people who were flagging posts as false 
were themselves trustworthy.

“One of the signals we use is how people interact with articles,” Lyons said in 
a follow-up email. “For example, if someone previously gave us feedback that an 
article was false and the article was confirmed false by a fact-checker, then 
we might weight that person’s future false news feedback more than someone who 
indiscriminately provides false news feedback on lots of articles, including 
ones that end up being rated as true.”

The score is one signal among many that the company feeds into more algorithms 
to help it decide which stories should be reviewed.
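Facebook has not published the formula behind the score, but the weighting 
Lyons describes resembles a smoothed accuracy estimate over a user’s past 
reports. The sketch below, in Python, is only an illustration of that idea; the 
names, the data structure, and the smoothing prior are all assumptions for the 
sake of the example, not Facebook’s actual system.

    # Hypothetical sketch -- not Facebook's method. It illustrates the idea
    # described in the article: weight a user's false-news reports by how
    # often their past reports were confirmed by fact-checkers.
    from collections import defaultdict
    from dataclasses import dataclass


    @dataclass
    class FlagHistory:
        confirmed: int = 0   # reports later rated false by a fact-checker
        rejected: int = 0    # reports on articles later rated true

        def trust_score(self, prior_hits: float = 1.0,
                        prior_misses: float = 1.0) -> float:
            """Smoothed accuracy in [0, 1]; new users start near 0.5."""
            return (self.confirmed + prior_hits) / (
                self.confirmed + self.rejected + prior_hits + prior_misses
            )


    # One history record per user (keyed by a hypothetical user id).
    histories: dict[str, FlagHistory] = defaultdict(FlagHistory)


    def record_fact_check(user_id: str, article_was_false: bool) -> None:
        """Update a user's history once a fact-checker rules on an article
        that the user had flagged as false."""
        if article_was_false:
            histories[user_id].confirmed += 1
        else:
            histories[user_id].rejected += 1


    def review_priority(flagger_ids: list[str]) -> float:
        """Sum of flaggers' trust scores -- one possible signal, among many,
        for deciding which flagged articles fact-checkers see first."""
        return sum(histories[uid].trust_score() for uid in flagger_ids)

In a scheme like this, a user whose flags are repeatedly confirmed drifts 
toward 1 while an indiscriminate flagger drifts toward 0, matching the behavior 
Lyons describes, and the resulting number would be just one input among many 
into the review queue.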

“I like to make the joke that, if people only reported things that were 
[actually] false, this job would be so easy!” said Lyons in the interview. 
“People often report things that they just disagree with.”

She declined to say what other signals the company used to determine 
trustworthiness, citing concerns about tipping off bad actors.