MIT study finds humans are responsible for spread of false news, not bots

A new study by researchers from the Massachusetts Institute of Technology (MIT) has found that false news spreads significantly faster than true news on the social media platform Twitter. That is what anyone following the rising concerns over fake news would expect. Bots programmed to retweet and disseminate misleading information are frequently blamed for amplifying fake news.

However, the study found that false news spreads faster because people retweet it, not because of bots, which has interesting implications for policymakers and technology companies seeking to combat the phenomenon.

The paper, “The Spread of True and False News Online,” has been published in Science.

Twitter granted the MIT team full access to its historical archives and provided support for the research.

The researchers avoided using the term ‘fake news’ due to the polarisation of the phrase in the current political and media climate and its use by politicians to label news sources that are critical of their positions. Instead, they focused on the more objectively verifiable terms “true” and “false” news.

The study

The researchers tracked roughly 126,000 cascades, or unbroken chains of tweets, of news stories spreading on Twitter between 2006 and 2017, which were cumulatively tweeted over 4.5 million times by about 3 million people.

A rumour cascade begins on Twitter when a user makes an assertion about a topic in a tweet, which could include written text, photos, or links to articles online. A rumour tweeted by 10 people separately, but not retweeted, would have 10 cascades, each of size one. If a rumour is independently tweeted by two people and each of those two tweets is retweeted 100 times, the rumour would consist of two cascades, each of size 100.
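
To make the counting concrete, here is a minimal Python sketch (not from the paper) that groups tweets into cascades and reports each cascade’s size; the data layout and identifiers are invented for illustration.

```python
# Illustrative sketch: counting rumour cascades and their sizes.
# A tweet with no parent starts a new cascade; each retweet joins
# the cascade of the original tweet it points back to.
# (Data layout and IDs are invented for illustration.)
from collections import defaultdict

# (tweet_id, retweeted_id) pairs; retweeted_id is None for originals.
tweets = [
    ("t1", None), ("t2", None),  # two independent original tweets
    ("t3", "t1"), ("t4", "t1"),  # retweets of t1
    ("t5", "t2"),                # retweet of t2
]

def cascade_sizes(tweets):
    """Map each original tweet to its cascade size
    (the original plus all of its retweets)."""
    sizes = defaultdict(int)
    for tweet_id, retweeted_id in tweets:
        root = retweeted_id if retweeted_id is not None else tweet_id
        sizes[root] += 1
    return dict(sizes)

print(cascade_sizes(tweets))  # {'t1': 3, 't2': 2} -- two cascades
```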

The researchers sampled all rumour cascades investigated by six independent fact-checking organisations (snopes.com, politifact.com, factcheck.org, truthorfiction.com, hoax-slayer.com, and urbanlegends.about.com). They did this by parsing the title, body, and verdict (true, false, or mixed) of each rumour investigation reported on those websites and automatically collecting the corresponding cascades on Twitter.

These organisations agreed on the veracity of the final sample of rumour cascades between 95 and 98 per cent of the time. The diffusion of the cascades was then tracked by collecting all English-language replies, from 2006 to 2017, to tweets that contained a link to any of the aforementioned websites. Optical character recognition was used to extract text from images where needed. For each reply tweet, the original tweet being replied to was extracted, as well as all the retweets of that original tweet.
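
The core of that collection step is easy to picture: a filter that keeps only tweets linking to one of the six fact-checking sites. The sketch below is a simplification (plain substring matching on tweet text), not the study’s actual pipeline, which worked with Twitter’s full archive and URL metadata.

```python
# Illustrative sketch: flagging reply tweets that link to one of the
# fact-checking sites named in the study. Real tweets carry expanded
# URL entities; plain substring matching here is a simplification.
FACT_CHECK_DOMAINS = (
    "snopes.com", "politifact.com", "factcheck.org",
    "truthorfiction.com", "hoax-slayer.com", "urbanlegends.about.com",
)

def links_to_fact_checker(tweet_text: str) -> bool:
    """True if the tweet text contains a link to any fact-check site."""
    text = tweet_text.lower()
    return any(domain in text for domain in FACT_CHECK_DOMAINS)

replies = [
    "This was debunked: https://www.snopes.com/fact-check/example",
    "Interesting thread, no sources though",
]
flagged = [r for r in replies if links_to_fact_checker(r)]
print(len(flagged))  # 1
```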

To check whether reliance on the fact-checking organisations was introducing selection bias, a second sample of rumour cascades was analysed, this time fact-checked manually by undergraduate students at MIT and Wellesley College. The results were nearly identical.

Farther, faster, deeper, broader

It was found that falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information. A significantly greater fraction of false cascades than true cascades exceeded a depth of 10. Falsehood also reached far more people than the truth. The truth rarely reached more than 1,000 people, while the top 1% of false-news cascades regularly diffused to between 1,000 and 100,000 people. Moreover, at every depth of a cascade, many more people retweeted falsehood than they did the truth.

The analysis showed that false news stories are 70 per cent more likely to be retweeted than true stories. It also takes true stories about six times as long as false stories to reach 1,500 people. Falsehoods reach a cascade depth of 10 about 20 times faster than facts. And at every depth of a cascade, falsehoods are retweeted by more unique users than true statements.
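
Here, ‘depth’ means the number of retweet hops from the original tweet. A minimal sketch of computing it over a toy retweet tree follows; the edge list is invented, and the paper’s reconstruction of true retweet paths from Twitter data is considerably more involved.

```python
# Illustrative sketch: computing the depth of a cascade, i.e. the
# longest chain of retweet hops away from the original tweet.
# Edges (child -> parent) are invented for illustration.
from collections import defaultdict

# Each entry means: child retweeted parent.
edges = {"b": "a", "c": "a", "d": "b", "e": "d"}

def cascade_depth(edges, root="a"):
    # Invert the child -> parent map into a parent -> children map.
    children = defaultdict(list)
    for child, parent in edges.items():
        children[parent].append(child)

    def depth(node):
        if not children[node]:
            return 0
        return 1 + max(depth(c) for c in children[node])

    return depth(root)

print(cascade_depth(edges))  # 3: a -> b -> d -> e
```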

Could novelty explain the spread of false news?

The researchers hypothesised that novelty could be an explanatory factor, as novelty attracts human attention, contributes to productive decision-making, and encourages information sharing because it updates our understanding of the world. Novelty also has value for social status, as it shows that one is ‘in the know’ or has access to unique ‘inside’ information.

They found that false rumours were significantly more novel than the truth across all novelty metrics, displaying significantly higher information uniqueness. Additionally, false rumours inspired replies expressing greater surprise, supporting the novelty hypothesis, and greater disgust, whereas true stories inspired replies characterised by greater sadness, anticipation and trust.
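
As a loose illustration of what a novelty metric can look like, the sketch below scores a tweet by how dissimilar its words are to what a user has recently seen, using bag-of-words cosine distance. This particular measure is an assumption for illustration only; the study’s own novelty metrics are more sophisticated.

```python
# Illustrative sketch: one crude way to score "novelty" -- how
# different a tweet's vocabulary is from what a user was recently
# exposed to. Bag-of-words cosine distance is an assumption for
# illustration; the study's novelty measures are more sophisticated.
from collections import Counter
import math

def cosine_distance(a: Counter, b: Counter) -> float:
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return 1.0 if norm == 0 else 1.0 - dot / norm

def novelty(tweet: str, recent_exposure: list[str]) -> float:
    tweet_words = Counter(tweet.lower().split())
    seen_words = Counter(w for t in recent_exposure
                         for w in t.lower().split())
    return cosine_distance(tweet_words, seen_words)

seen = ["markets rise on jobs report", "jobs report beats forecast"]
print(novelty("aliens land in ohio", seen))         # 1.0 (maximally novel)
print(novelty("jobs report beats forecast", seen))  # ~0.17 (familiar)
```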

The researchers state in the paper that they cannot claim that novelty causes retweets, though they did find that false news is more novel and that novel information is more likely to be retweeted.

Humans, not bots

A sophisticated bot-detection algorithm was used to identify and remove all bots before running the analysis. The differences between the spread of false and true news did not change. Including the bots accelerated the spread of both true and false news roughly equally. This indicates that it is people who are responsible for the greater and faster spread of false news.
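
The logic of that robustness check can be sketched as follows: recompute the false-versus-true comparison with and without accounts flagged as likely bots, and confirm the gap persists. The bot scores, threshold, and records below are placeholders, not the study’s actual bot-detection algorithm.

```python
# Illustrative sketch: a robustness check in the spirit of the one
# described above -- drop tweets from accounts flagged as likely bots
# and recompute a diffusion statistic for false vs. true cascades.
# Bot scores, the 0.5 threshold, and the records are all placeholders.
records = [
    # (user_id, bot_score, cascade_veracity, retweets)
    ("u1", 0.9, "false", 120),
    ("u2", 0.1, "false", 300),
    ("u3", 0.2, "true",  40),
    ("u4", 0.8, "true",  60),
]

def mean_retweets(records, veracity, exclude_bots=True, threshold=0.5):
    kept = [r for _, score, v, r in records
            if v == veracity and (not exclude_bots or score < threshold)]
    return sum(kept) / len(kept) if kept else 0.0

# The false > true gap should hold whether or not bots are excluded.
for exclude in (False, True):
    f = mean_retweets(records, "false", exclude_bots=exclude)
    t = mean_retweets(records, "true", exclude_bots=exclude)
    print(f"bots excluded={exclude}: false={f:.0f}, true={t:.0f}")
```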

According to a press release from MIT, Sinan Aral, the David Austin Professor of Management at MIT Sloan and one of the co-authors of the paper, noted that the role of humans in spreading false news means that behavioural interventions are required. If bots alone were responsible, a technological solution might suffice.

Until now, a large part of the effort to tackle this problem has been devoted to pressuring the companies behind the social media platforms, which have been trying a number of approaches, from manual flagging and the use of algorithms to restricting advertising.

Another of the co-authors, Soroush Vosoughi, a postdoctoral researcher at the Media Lab's Laboratory for Social Machines, said that the phenomenon is a two-part problem, with some people deliberately spreading falsehoods and others doing so unwittingly. Multiple tactics would therefore be required in response.

Co-author Deb Roy, an associate professor of media arts and sciences at the MIT Media Lab, said that the findings may help create “measurements or indicators that could become benchmarks” for social networks, advertisers, and other parties.

The researchers said the same phenomenon might occur on other social media platforms, including Facebook, but they emphasised that careful studies are needed on that and other related questions.

The researchers also said in the paper that more research, including direct interaction with users through interviews, surveys, lab experiments, and even neuroimaging, is required to understand the behavioural explanations for the differences in the diffusion of true and false news.

Singapore Government’s initiatives to combat fake news  

The Singapore Government is very conscious of the perils of online falsehoods. Singapore is believed to be highly vulnerable because it is a multi-racial and digitally well-connected society.

In January this year, the Singapore Government appointed a Select Committee to study the problem of deliberate online falsehoods and to recommend how Singapore should respond. The Committee will study the phenomenon of using digital technology to deliberately spread falsehoods online; the motivations and reasons for spreading such falsehoods, and the types of individuals and entities, both local and foreign, that engage in such activity; the consequences for Singapore society; and how Singapore can prevent and combat online falsehoods, including guiding principles for the response and specific measures, such as legislation.

Submissions were invited from the public. Last week, the Today newspaper reported that 162 submissions had been received and were being reviewed by the committee to decide whom to invite for oral presentations. The committee will hold hearings open to the public and media from March 14 to 16, March 22 to 23, and March 27 to 29.

In 2013, the National Library Board (NLB) of Singapore launched the S.U.R.E. campaign to combat fake news. S.U.R.E. stands for Source (Look at its origins. Is it trustworthy?), Understand (Know what you’re reading. Search for clarity.), Research (Dig deeper. Go beyond the initial source.) and Evaluate (Find the balance. Exercise fair judgement.). The campaign aims to get people thinking about the information they receive every day and its sources through awareness marketing, training and public engagement.

Its activities include engaging the public via social media (e.g. Facebook), a mobile app and e-learning resources, as well as producing learning resources and workshops for teachers and students. Such initiatives can educate people on how to distinguish between false and true stories.

As the MIT researchers found from their data, it is human behaviour that lies at the heart of the fake news phenomenon, and behavioural interventions will play a key role in tackling the issue. It is not enough to rely on technological solutions and hope that an algorithm will fix the problem.
