How profit-driven algorithms are disconnecting public from real news

(IANS) In the misinformation age of social media, what evidence reaches which parts of the audience is increasingly decided by automated algorithms curated by social media platforms rather than by scientists, journalists or the platforms' users themselves, scientists warn.

This competition for public attention has produced at least three urgent lessons that the scientific community must face as online information environments rapidly displace traditional, mainstream media, according to an opinion piece published in the journal Science late on Thursday.

“Rules of scientific discourse and the systematic, objective, and transparent evaluation of evidence are fundamentally at odds with the realities of debates in most online spaces,” wrote Dominique Brossard and Dietram Scheufele of the University of Wisconsin-Madison.

“It is debatable whether social media platforms that are designed to monetise outrage and disagreement among users are the most productive channel for convincing skeptical publics that settled science about climate change or vaccines is not up for debate,” they added.

Micro-targeted information increasingly dominates social media, curated and prioritised algorithmically on the basis of audience demographics, an abundance of digital trace data, and other consumer information.

Partly as a result, hyper-polarised public attitudes on issues such as Covid-19 vaccines or climate change emerge and grow in separate echo chambers.

“Unfortunately, social science research suggests that rapidly evolving online information ecologies are likely to be minimally responsive to scientists who upload content — however engaging it may seem — to TikTok or YouTube,” said the scientists.

We have recently witnessed a bombardment of fake news around serious topics such as Covid-19, elections and religion, and social media platforms have failed to curb the spread of misinformation as billions of users remain hooked to them.

According to the opinion piece, the unscientific nature of using anecdotal data or scientific authority figures is partly driven by 280-character constraints on platforms like Twitter and partly by generations of science communication training programs urging scientists to tell more engaging stories.

Unfortunately, this arms race over the most effective narratives has its risks.

“Decades of communication research indicate that anecdotal accounts on social media of breakthrough severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections or severe adverse reactions to Covid-19 vaccines, regardless of how rare both are, will be imprinted in people’s memories much more effectively than pages of sound statistical data documenting herd immunity,” the scientists lamented.

Currently, algorithms that select and tailor content based on an audience member’s social context, personal preferences, and a host of digital trace data increasingly determine what scientific information an individual is likely to receive in Google searches, Facebook feeds, and Netflix recommendations.

“For audiences that engage less with credible science content, artificial intelligence, if left unchecked, might eventually slow the stream of reliable information about Covid-19 to a trickle, drowning it out by a surplus of online noise.”

At present, there is little that science can do to escape this dilemma.

The same profit-driven algorithmic tools that bring science-friendly and curious followers to scientists’ Twitter feeds and YouTube channels will increasingly disconnect scientists from the audiences that they need to connect with most urgently.

“Moving forward, conquering this challenge will require partnerships among the scientific community, social media platforms, and democratic institutions,” said the scientists.
