Disinformation has become a major problem for humanity, posing serious risks to public health and safety. Combating it requires media literacy, platform regulation, and a reaffirmation of the responsibility of the media outlets and individuals who disseminate information.
The term refers to false, inaccurate, or misleading information that is intentionally disseminated to gain economic or political advantage or to harm a person, social group, organization, or country.
Although the word is used every day and may not sound like something critical to address, the reality is that disinformation can have catastrophic and even lethal consequences.
“Disinformation during the COVID-19 pandemic is a clear example, and, in a similar vein, we are seeing outbreaks of diseases that had been eradicated for years,” says Ana Laura Maltos, research professor at the School of Humanities and Education (EHE) and the Digital Media Observatory at Tecnológico de Monterrey.
During this health crisis, rumors circulated around the world about supposed alternative treatments that had serious consequences. In Iran, more than 800 people died from drinking methanol, and in Mexico, many were poisoned by ingesting chlorine dioxide.
Another example is when then-South African President Thabo Mbeki denied the link between HIV and AIDS and promoted treatments other than antiretrovirals, resulting in more than 300,000 preventable deaths.
Recently, with the expansion of social media, such as Facebook, TikTok, Instagram, and X, the problem has skyrocketed. “Disinformation has always existed, but in the last five years, the landscape has drastically changed,” says Maltos.
Social Media Is the Main Vehicle for Disinformation
At the Observatory, Maltos and her colleagues have studied the role of social media in spreading disinformation and have found that these platforms are the main vehicle through which it circulates, although search engines such as Google and instant messaging services such as WhatsApp are also involved.
A few years ago, the purpose of using social media was to connect with other people and create virtual communities built around shared interests and preferences, but recently this has changed.
“Right now we are seeing a reality where these spaces are being moderated by interests that are not necessarily those of the users,” says Maltos.
With the massive increase in the number of people using them (Facebook alone has three billion monthly active users worldwide), companies, governments, individuals, and other organizations have found the perfect channel for sharing information that benefits them.
“Disinformation can be created for political or economic gain, or sometimes it is simply to create chaos or generate views,” says the expert.
This poses a problem: traditional media outlets, such as radio, newspapers, and magazines, had ways of regulating the information they shared, such as fact-checkers and codes of ethics, whereas social media platforms largely lack equivalent safeguards.
On social media, the owners establish the policies and rules. Earlier this year, Mark Zuckerberg announced that Meta, which owns Facebook and Instagram, would stop using fact-checkers and replace them with community notes.
“This is worrying because it distracts us from the platforms’ responsibility for disinformation,” Maltos reflects. “Giving users the task of debunking information or verifying its accuracy lets the platforms, in a way, wash their hands of it.”
The Sophistication of Disinformation
Combating disinformation has become increasingly difficult because the way it is presented is more subtle. Strategies used to spread it include mixing in some true facts to disguise misleading information, or impersonating supposedly serious and professional sources, using credible-sounding names that often do not exist.
“The way it operates has become more sophisticated. It’s no longer simply a lie; it’s often inaccurate information accompanied by elements that seem very, very reliable, but aren’t,” explains Maltos.
Added to this are influencers: people with a large follower base and a strong presence on social media, able to influence the decisions, opinions, or behaviors of an audience or group of users.
These figures are often, intentionally or unintentionally, vehicles for disinformation. “In general, they don’t have the same goal or commitment to information as a journalist,” says Maltos.
How to Resist Disinformation
According to the expert, there are several steps you can take to avoid falling prey to disinformation. The first is to pause before reacting or sharing something you see or read.
One of the main characteristics of this phenomenon is that the content carries an emotional charge. “There’s something about it that makes you jump in your seat and feel the need to send it to someone to share the emotional burden,” says Maltos. This is one of the reasons why it goes viral so quickly.
Given this, her recommendation is to receive information with skepticism and take the time to verify where it comes from and what its intention is. The Observatory has developed and compiled freely available tools for detecting disinformation.
In the end, the most important thing is to remember that, although social media tells us the algorithm is there to serve us as users, in reality it simply filters the enormous amount of information out there.
“There’s this perception that we have control over the algorithm, but no, it’s the other way around,” says Maltos. “It’s driven by a series of policies and rules determined by whoever owns the platform; the network isn’t neutral; it’s shaped by commercial and political factors.”
Understanding this is a way to resist disinformation and prevent it from having serious impacts on our lives. It can have many serious consequences, such as health risks, misinformed decision-making, the collapse of public services, the erosion of trust in institutions, and the promotion of conflict and social polarization.
“Sometimes it’s disheartening to think that there will be no way back, that discourse in which insults and hate speech are allowed, hiding behind freedom of speech, will come to dominate,” reflects Maltos.
In response, Maltos and her colleagues propose focusing efforts on media and digital literacy, with programs and tools at all educational levels, both in and out of school, that teach people how to navigate virtual spaces on solid foundations and to recognize malicious information. “This is what I’m betting on,” says the researcher.