Controversial statement: fake news needs a rebrand. Most people have an idea of what they think ‘misinformation’ is and who’s behind it – social media posts and comments spouting anti-democracy, far-right or ‘conservative’ falsehoods, made by garden-variety trolls or coordinated groups wanting to grow their power. But what if I told you… misinformation doesn’t have to be intentional? And that the pro-democracy, or ‘liberal’, misinformation you’re sharing on social media – whether deliberately or by accident – can do just as much damage? Just because something is feel-good, champions a cause or supports vulnerable groups, that doesn’t make it harmless.
Here’s why sharing any kind of misinformation is dangerous, whether it’s about Russia’s war in Ukraine, climate change, or Australian politics.
Misinformation starts with bias and emotion
Let’s say you’re scrolling through Twitter and see a post about the scientific reasons why donuts make people happy. You choose to retweet it, spreading the message to more users and sending a signal to the Twitter algorithm to show you more content about donut science, because you clearly like them. If you later learn that the info in the post was exaggerated or even false… well, that’s a bit of an oopsie, but ultimately no real harm was done, right? That may be fair enough for posts about donuts, but what if the post you retweeted was a video showing drone strikes in Ukraine that was actually footage from a video game? If the war itself is real, but the footage you shared was not… does that still matter?
The spread of misinformation starts with human biases.
It’s well established that we are more likely to react to content that taps into our existing grievances and beliefs, so inflammatory content generates quick engagement. After that initial engagement, the technical machinery of digital platforms kicks in: if a tweet is retweeted, favorited, or replied to by enough of its first viewers, the algorithms show it to more users, tapping into their biases too and creating even more engagement. This cycle has turned social media into a powerful machine for spreading misinformation that triggers both ‘positive’ and ‘negative’ emotional responses.
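To make that feedback loop concrete, here’s a minimal, purely hypothetical sketch – not any platform’s actual ranking code – of how engagement-weighted ranking rewards whatever provokes early reactions. The post names, reach numbers and ‘emotional pull’ values are invented for illustration; the point is that accuracy never enters the loop.

```python
# Toy simulation of engagement-based ranking (hypothetical, not any real platform's code).
# Posts that provoke stronger reactions get more early engagement, the ranker rewards
# engagement with more reach, and more reach produces more engagement.
import random

random.seed(42)

posts = [
    {"id": "measured-report", "emotional_pull": 0.05, "engagements": 0},
    {"id": "inflammatory-claim", "emotional_pull": 0.30, "engagements": 0},
]

def rank(posts):
    # The ranker only sees engagement counts; it has no notion of truthfulness.
    return sorted(posts, key=lambda p: p["engagements"], reverse=True)

for _ in range(10):
    # Each round, the top-ranked post is shown to far more users than the rest.
    for reach, post in zip([1000, 200], rank(posts)):
        for _ in range(reach):
            if random.random() < post["emotional_pull"]:
                post["engagements"] += 1  # a retweet, like, or reply

for post in rank(posts):
    print(post["id"], post["engagements"])
```

Run it and the inflammatory post quickly overtakes the measured one and stays on top, simply because early reactions buy it more reach in every later round.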
[Embedded TikTok by Abbie Richards (@tofology)]
It doesn’t matter whose side you’re on
This is obvious right now in the social media documentation of Russia’s invasion of Ukraine. For the past two weeks, most social media platforms have been flooded with footage claiming to be from Ukraine. While some of these posts come from reputable news sources, a lot of it could be considered deceptive. Regardless of whether false information appears to support Ukraine or Russia, when incorrect material is passed off as truth in order to mislead, it’s classed as disinformation. Recently, millions of leftists have mistakenly reposted Kremlin disinformation.
When it comes to misinformation – which is false, but not necessarily deliberate – everyday social media users are big contributors. The trend of any and every social media user trying to find their own ‘angle’ on the war has resulted in astrology accounts tweeting things like: “Russia is Scorpio Rising which makes the country powerful.” That’s not only nonsensical, it’s also misinformation. War-meme accounts with names like ‘livefromukraine’ are posting unverified videos of bombings and air raids. Several of these Instagram accounts are run by one young man in the U.S. named Hayden, who told Input: “What I’m trying to do is get as many followers as possible by using my platform and skills.” Despite admitting he has no way of verifying anything he posts, he shares it anyway.
This kind of misinformation doesn’t take sides, which is why it might seem harmless. But during armed conflicts and humanitarian crises (including natural disasters), misinformation can have life-or-death consequences, because social media has become a critical part of crisis response. Authorities use it to report real-time developments, and it is an important communication tool for those on the ground. Being able to access accurate, trustworthy information is vital. Misinformation erodes trust in everything you see on social media, making things more difficult for those who rely on it as an essential tool.
Time is crucial in these situations, and the extra time required to assess false information (even when it’s shared with good intentions) can divert efforts away from where they are truly needed. Right now, digital platforms are struggling with what to take down or leave up, and with how fast and effectively they can make those decisions. They must weigh the risks of allowing propaganda to reach more people against the risks of removing it and potentially interfering with humanitarian and legal efforts.
BTW, if you’ve ever wondered why misinformation can be so effective and why debunking often doesn’t change beliefs, *this* is why. If you want to believe the story the misinfo supports, even after the specific claim is debunked, the overall impression sticks. Job Done. 4/8
— Laura Edelson (@LauraEdelson2) March 1, 2022
Long-term consequences
Even after it’s removed, misinformation can have long-term consequences: political repercussions, increased group fragmentation, lower trust, and a general erosion of shared civic life. The impacts are not limited to online spaces. Susceptibility to misinformation has been shown to shape what people believe and how they act, threatening individual autonomy. And if we allow false information to be repeated and re-shared enough times, it becomes accepted as truth and could end up re-writing history.
These outcomes happen regardless of whether the goal was to simply get as many views as possible or to spread political propaganda. And in the same way, the destructive impact of misinformation happens regardless of whether the information is skewed to the ‘left’ or the ‘right’.