More and more people are getting their news from social media and digital platforms. In these spaces, misinformation spreads easily and is often shared without being verified. To tackle this problem, researchers at Tecnológico de Monterrey developed VerifactzGPT, an artificial intelligence (AI) tool that not only analyzes content but also teaches people how to spot misleading information.
Misinformation ranks among the top five global risks in the short and medium term, according to the World Economic Forum’s Global Risks Report 2026. Its impact reaches across areas such as politics, the economy, health, and education.
What Is VerifactzGPT and How Does It Detect Misinformation?
Against this backdrop, María del Carmen Fernández and Ana Laura Maltos, journalists and research professors at the School of Humanities and Education (EHE) at Tecnológico de Monterrey, are leading the development of VerifactzGPT, a tool that leverages generative AI to evaluate online information. It also helps users build critical thinking skills to analyze, verify, and contextualize digital content before trusting or sharing it.
“Verifactz is an initiative of the Digital Media Observatory at the School of Humanities and Education at Tec de Monterrey that began in 2024 as a proposal to monitor and verify online information. It is an AI agent that promotes a pedagogical use of technology to strengthen media and information literacy among university students and the general public,” Fernández explains.
The development of this tool has a specific origin: it emerged in the context of Mexico’s 2024 federal elections as a network of fact-checkers made up of Tec students, who worked alongside the National Electoral Institute (INE) to manually track the information generated during the campaigns.
Building on that experience, the team developed a methodology grounded in theoretical frameworks and international guidelines to combat misinformation, including researcher Claire Wardle’s concept of “information disorder,” which classifies three types of problematic content:
Disinformation. False content deliberately created and spread with the intent to deceive.
Misinformation. Incorrect content shared without intent to cause harm.
Malinformation. Genuine information used out of context to inflict harm.
How VerifactzGPT Uses AI to Detect and Verify Information
About a year and a half ago, the researchers integrated AI to develop an agent capable of analyzing content with greater accuracy and efficiency. At the same time, it guides users through a validation process in which, step by step, it explains why a piece of content may be misleading and teaches them to identify warning signs of misinformation in the data.
The tool works as a chatbot with which users can share text, screenshots of social media posts, images, or links.
Based on this input, the model conducts web searches to cross-check the information, uses tools such as Google Fact Check Tools, consults platforms like Verificado MX, and suggests resources such as reverse image searches. It also applies a verification matrix that evaluates different aspects of the content. As a result, it provides a reliability rating, but it also explains the analysis in clear terms so users can understand it and apply the same approach to other content.
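The verification flow described above can be sketched in simplified form. This is an illustrative outline only: the function names and check order are assumptions, and the external lookups (web search, fact-check databases such as Google Fact Check Tools) are stubbed out rather than real API calls.

```python
# Illustrative sketch of a verification flow: cross-check a claim,
# consult a fact-check source, and return a plain-language explanation.
# All external services are stubbed; names here are hypothetical.

def search_web(claim: str) -> list[str]:
    # Stub: a real agent would run web searches to cross-check the claim.
    return ["matching report from a recognized outlet"]

def fact_check_lookup(claim: str) -> dict:
    # Stub: a real agent would query a fact-check database.
    return {"rating": "Mostly true"}

def verify(claim: str) -> str:
    evidence = search_web(claim)
    check = fact_check_lookup(claim)
    # The result pairs a rating with an explanation the user can learn from.
    return (
        f"Found {len(evidence)} corroborating source(s); "
        f"fact-checkers rate this claim: {check['rating']}."
    )

print(verify("Example claim circulating on social media"))
```

The key design point, per the researchers, is that the output is not just a verdict but an explanation the user can apply to other content on their own.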
“The idea is that it’s not just a verification tool (one that simply tells you whether a story is false or not); we see it as a media literacy assistant because it explains to users why a piece of information should be scrutinized, why they might question it, or how they can identify, for example, AI-generated posts such as images used in content designed to be more sensational than informative,” the researcher adds.
The Verification Matrix: Five Keys to Detecting Misinformation
The verification matrix used by VerifactzGPT was designed by the research team at the Digital Media Observatory to analyze content across five dimensions:
Source. Identifies who published the content and where it comes from. It assesses whether the source is trustworthy, such as a recognized news outlet, and whether the origin of the information is clearly cited, for example, a study or an official statement.
Facts. Examines whether the content presents verifiable data and whether it aligns with what other sources report. It helps distinguish between evidence-based claims and unsupported content.
Context. Analyzes whether the information explains what happened, when, and where. It helps detect real content that is presented outside its original context.
Intent. Evaluates the purpose of the message. It identifies whether the goal is to inform objectively or whether it relies on judgment, bias, or sensationalist framing.
Image and video. Assesses whether there are alterations or manipulations in audiovisual content. It considers the use of elements such as music or framing and allows users to rely on tools like reverse image search to verify the material’s origin.
The matrix allows the agent to calculate a reliability percentage based on three criteria applied to each dimension, along with a clear guide to help users interpret the results. It also identifies seven types of problematic content and organizes them along a spectrum according to their potential level of harm: from satire or parody, which can be misleading, through false context and source impersonation, to manipulated or entirely fabricated content intended to deceive.
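A scoring scheme like the one described (three criteria per dimension, aggregated into a percentage) could look roughly as follows. The five dimension names come from the article; the binary criteria, their interpretations, and the equal weighting are illustrative assumptions, not the team's actual rubric.

```python
# Hypothetical sketch of a matrix-based reliability score: five
# dimensions, each scored on three criteria (0 = not met, 1 = met),
# averaged into a percentage. Criteria and weights are assumptions.

DIMENSIONS = ["source", "facts", "context", "intent", "image_video"]

def reliability_score(scores: dict[str, list[int]]) -> float:
    """scores maps each dimension to three 0/1 criterion results."""
    total = sum(sum(scores[d]) for d in DIMENSIONS)
    max_total = 3 * len(DIMENSIONS)  # 15 criteria in all
    return round(100 * total / max_total, 1)

example = {
    "source":      [1, 1, 0],  # named outlet, but cited study missing
    "facts":       [1, 0, 0],  # only partially corroborated elsewhere
    "context":     [1, 1, 1],  # what, when, and where all present
    "intent":      [1, 0, 1],  # some sensational framing detected
    "image_video": [1, 1, 1],  # no signs of manipulation
}
print(reliability_score(example))  # 11 of 15 criteria met -> 73.3
```

An equal-weight average keeps the score easy to explain to users, which fits the tool's pedagogical aim, though a real rubric might weight dimensions differently.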
The tool can also be used for research purposes. With users’ consent, it collects data on how they interact with misinformation, making it possible to assess its impact on media literacy by comparing users’ skills before and after they use the tool.
Limitations of AI in VerifactzGPT
Fernández notes that the tool is still under development; it is currently in an advanced phase of testing and refinement. Although it is already accessible through the observatory’s website for validation purposes, its performance may still reflect common AI limitations, such as biases or errors in analysis.
“That’s why, right from the start, the bot tells you: ‘I can help you, but the result will depend on how much information you provide.’ GPT relies on a good prompt. If the questions are vague, the tool can get confused. We felt it was important to include those caveats, that the result will be approximate and will depend on how much information it can access,” the researcher explains. “But beyond whatever the chat may conclude, the goal is to teach users that they can carry out this kind of verification themselves by paying attention to those details.”
What’s Next for VerifactzGPT: Improvements and Next Steps
Currently, the researchers are working to refine the model and improve its accuracy through ongoing testing and iteration to release a full version this year. This process has also required them to develop technical skills in programming, API usage, and prompt engineering.
Fernández notes that the team’s next steps include expanding the tool’s ability to process multimedia content, improving how it reads links from certain social media platforms that currently impose limitations, and making it more accessible to a wider range of users. Looking ahead, they aim to turn VerifactzGPT into a mobile app and even integrate it into messaging services such as WhatsApp to make it easier to use in everyday contexts.
“Our goal is for people to have free and easy access to a tool that helps us avoid falling for misinformation,” Fernández says. “Much of today’s misinformation is generated with AI, but what we’re trying to do is flip that around and use it as a system to fight it.”
In an environment where misinformation is increasingly fueled by AI, VerifactzGPT aims to reverse the dynamic: using technology not to mislead, but to cultivate more critical and better-informed users.