Researchers at Tecnológico de Monterrey are developing an artificial intelligence (AI) model designed to identify patterns in messages people post on their social media accounts to detect signals associated with suicide risk or suicidal ideation.
The initiative, known as Mindtrack, is led by Mahdi Zareei, a research professor in the Department of Computing at the School of Engineering and Sciences (EIC), and Leivy Patricia González, director of the Department of Clinical Psychology and professor at the School of Medicine and Health Sciences (EMCS), both at Tec’s Guadalajara campus.
Every year, more than 720,000 people die by suicide worldwide; according to the United Nations, it is the third leading cause of death among people between the ages of 15 and 29. In Mexico, more than 8,000 people die by suicide annually—about seven people per 100,000 inhabitants.
Mindtrack stands out because the training of this model is based on data obtained from clinically diagnosed patients and collected under strict ethical protocols. For this work, the research team collaborates with Hospital San Juan de Dios, a private mental health clinic located in Zapopan, Jalisco.
“The literature shows that—due to multiple factors—people living with a mental disorder can take up to seven years to seek help,” González says. “Through this study, we may be able to identify these behavioral patterns much earlier. Informing the public about the kinds of signs that appear on social media among young people could be extremely valuable for health professionals, parents, and teachers.”
Applying AI in Mental Health

It is estimated that between 50 and 70% of people show warning signs before attempting suicide, such as expressing a desire to die, feeling overwhelming guilt, or believing they are a burden to others.
The initiative began when two master’s students became interested in applying AI to mental health challenges related to depression and suicidal ideation. However, they soon realized the issue could not be addressed from an engineering perspective alone and that collaboration with the School of Medicine and Health Sciences (EMCS) was essential.
“We ask patients to sign formal consent forms allowing us to access their data. They provide their account IDs for social media platforms such as Instagram, TikTok, X, or Facebook,” Zareei explains. “We extract posts from up to two years back, and that data is used to train AI algorithms that detect patterns based on what people publish. We don’t tell the algorithm to look for very specific words or phrases, like ‘suicide’ or ‘I want to hurt myself.’ Instead, it identifies patterns in the connections between words—hidden signals that are often much harder for a human to detect.”
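The article does not name the specific model or libraries behind Mindtrack. Purely as an illustration of the difference Zareei describes, the sketch below (with hypothetical posts, labels, and model choice) encodes each post as a dense vector using a multilingual sentence-embedding model and trains a simple classifier on those vectors, so that the prediction depends on the overall pattern of a sentence rather than on any single trigger word.

```python
# Illustrative sketch only: encode posts as dense vectors so a classifier can
# learn distributed patterns across word combinations, rather than matching
# explicit keywords such as "suicide". Model choice, posts, and labels are
# hypothetical and are not details of the Mindtrack project.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Multilingual encoder (handles Spanish-language posts).
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Hypothetical training data: post text plus a clinician-derived risk label.
posts = [
    "hoy no tengo ganas de nada, otra vez",
    "gran día con mis amigos en la playa",
]
labels = [1, 0]  # 1 = flagged as at-risk, 0 = not flagged (assumed labels)

X = encoder.encode(posts)               # each post becomes a dense vector
clf = LogisticRegression().fit(X, labels)

# A new, unseen post is scored from the overall pattern of the sentence,
# not from the presence of any single trigger word.
print(clf.predict_proba(encoder.encode(["ya nadie me necesita aquí"]))[0])
```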
In addition to information from their social media accounts, patients also participate in a follow-up interview where researchers ask about other factors, such as whether they have experienced cyberbullying, whether they talk about their emotions online, or if they belong to groups that discuss mental health.
Researchers also consider how and when people use emojis, colors, or coded phrases such as “the soup is gone,” as well as other expressions that might seem unrelated but can allude to suicidal thoughts.
“In the interviews, we also explore with patients whether there are groups that use a kind of ‘coded’ language when talking about these topics,” the researcher explains. “This can help us interpret what we find. For example, if a common word like ‘chocolate’ appears frequently in the algorithm’s results, the interviews might reveal that in certain groups it’s used to refer to a specific mental health issue, helping us better understand what the model is detecting.”
Participants Are Mostly Young People Who Use Social Media
This data collection effort began about six months ago, González explains, and so far, more than 40 diagnosed patients have agreed to participate. The sample consists of Mexican youth between the ages of 15 and 29 who use social media and who, for the most part, score in the medium-to-high range on the Columbia Suicide Severity Rating Scale (often referred to simply as the Columbia Scale), which measures the severity of suicide risk.
The team has also faced several challenges along the way, from obtaining ethics committee approval for the study to dealing with changes among the specialists who treat the patients. In addition, the researchers have had to train interns and trainees to properly collect and record patient information.
When it comes to confidentiality and ethical principles, the Institutional Research Ethics Committee is responsible for authorizing the project, although approval is granted for only one year at a time. To renew it, the researchers must submit progress reports. The committee also ensures the protection of participants’ rights, including data privacy and the secure handling of sensitive information.
After the data collection phase, the team will feed the information into a natural language processing (NLP) algorithm designed to detect behavioral patterns in text posted on social media. The system will analyze not only the content of what users write on these platforms, but also variables such as the time or day of the week when posts are made.
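To make that idea concrete, here is a minimal sketch, assuming a hypothetical dataset and column names, of how posting time could be folded into such a pipeline: the hour and day of the week are appended to each post’s text embedding before a classifier is trained. This does not reflect the team’s actual implementation.

```python
# Minimal sketch with made-up data: combine a post's text embedding with
# simple temporal features (hour of day, day of week) in one feature matrix.
import numpy as np
import pandas as pd
from sentence_transformers import SentenceTransformer
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({
    "text": ["no puedo dormir otra vez", "feliz cumpleaños a mí"],
    "posted_at": pd.to_datetime(["2024-03-02 03:14", "2024-03-05 18:30"]),
    "label": [1, 0],  # assumed clinician-derived labels
})

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
text_vecs = encoder.encode(df["text"].tolist())

# Temporal features: hour of posting and day of week (0 = Monday).
time_feats = np.column_stack([
    df["posted_at"].dt.hour.to_numpy(),
    df["posted_at"].dt.dayofweek.to_numpy(),
])

# Text and timing are concatenated into a single feature matrix.
X = np.hstack([text_vecs, time_feats])
clf = RandomForestClassifier(n_estimators=50).fit(X, df["label"])
```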
“Our project is still in its early stages. For now, we’re not going to build a tool,” Zareei says. “First, we want to identify these patterns and then validate them. Once we know they’re truly reliable patterns, we’ll be able to do many more things with that knowledge.”

Research in the Mexican Context
The researcher adds that although studies on this topic already exist, most rely on datasets collected from social media rather than from clinical patients, and they are largely based on communities of young Caucasian users in the United States or Europe. Zareei believes it is particularly valuable to conduct this research in Spanish and within the Mexican context, since each culture may display very specific behavioral patterns.
In addition to González and Zareei, the Tecnológico de Monterrey research team includes Enrique Alejandro García from the Department of Computing at the School of Engineering and Sciences (EIC), and Melina Miaja from the Clinical Psychology area at the School of Medicine and Health Sciences (EMCS). The group has also collaborated with professors from the University of Guadalajara (UDG) and the University of Twente in the Netherlands, as well as physicians from the San Juan de Dios Psychiatric Hospital.
At different stages, the project has received support from Microsoft AI for Good, which provided access to cloud services to process data during earlier studies on AI in mental health, and from the Gonzalo Río Arronte Foundation, which awarded funding to help the team spend a year advancing the model’s training and obtaining preliminary results.
In the next phase, the team aims to collect data from around 300 patients to continue training the algorithm and to collaborate with additional clinics. Although the system will initially operate through text analysis, the researchers hope that in the future—and under appropriate ethical protocols—AI will also be able to analyze audio and video to help identify emotional signals.
“Much of what we know about AI applied to mental health comes from studies carried out over the past two years. Professionals in health care, engineering, and other fields need to get involved in this area,” González says. “It’s important to note that Tec is one of the few universities currently conducting this type of research. Right now, there are probably more questions than answers, but that will open the door to further studies.”
In Mexico, people seeking guidance on suicidal thoughts or other mental health concerns can call Línea de la Vida at 800-911-2000, available 24 hours a day, 365 days a year.