Twitter recently reported that many internet users rarely read the news stories they retweet. That's why the tech giant began issuing banner warnings this summer to encourage people to read articles before sharing them. The move seems to be paying off: users in the experiment opened the linked articles 40% more often.
Twitter recently shared the encouraging results of an experiment it launched last May among Android users. "Headlines don't tell the whole story. You can read the article on Twitter before Retweeting," read the Twitter banner.
After seeing these warnings, users opened the news links they were about to share 40% more often. While a click does not necessarily mean the user read the article in its entirety, Twitter also revealed a 33% increase in people opening articles before retweeting them.
The study also showed that some users changed their minds about sharing certain stories on their account after reading them.
"What's next:
– Making the prompt smaller after you've seen it once, because we get that you get it
– Working on bringing these prompts to everyone globally soon"
— Twitter Comms (@TwitterComms) September 24, 2020
"It's easy for articles to go viral on Twitter. At times, this can be great for sharing information, but can also be detrimental for discourse, especially if people haven't read what they're Tweeting," Twitter Director of Product Management Suzanne Xie told TechCrunch.
Twitter has announced that this new feature will be rolled out to all users in the coming weeks.
Reducing harmful and misleading content
Researchers have shown in the past that Twitter users rarely read the links they share. A 2016 study by Columbia University and the French National Institute (Inria) found that 59% of the links shared in tweets had never been clicked at all.
Twitter's new feature is rolling out at a time when the tech giants are under fire for the harmful medical misinformation and conspiracy theories spreading on their platforms. For instance, misleading health content on Facebook drew an estimated half a billion views last April.
"This suggests that just when citizens needed credible health information the most, and while Facebook was trying to proactively raise the profile of authoritative health institutions on the platform, its algorithm was potentially undermining these efforts," an Avaaz report said.