Could Artificial Intelligence lead to human extinction?

The Artificial Intelligence Hub's Director reflects on how realistic the fear of AI taking over the world really is.

By: Enrique Cortés Rello

Artificial intelligence (AI) has become a topic of passionate debate in the scientific and technological sphere. Some people express the concern that this technology could come to destroy human beings, while others highlight its usefulness in different industries such as finance, commerce, manufacturing, and health, as well as its huge potential for doing good.

Serious people and organizations are advocating for slowing down or even halting AI development, which they see as a threat to humanity. On March 22 of this year, more than 1,800 people, including Elon Musk and Steve Wozniak, signed a letter calling for a six-month pause on certain types of artificial intelligence development, particularly “systems more powerful than GPT-4.” Additionally, the nonprofit Center for AI Safety issued a statement on May 29 which said that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” This fear rests on the possibility of an intelligent system coming to dominate and, eventually, eliminate us.

Are we really at an inflection point? Is humanity in danger of extinction due to artificial intelligence?

Let’s take a step back… AI refers to the capability of machines to perceive, understand, act, and learn, using mathematical models, software, hardware, and data to solve problems. What’s more, we use it every day. The applications on our phones, digital assistants, music and video platforms, social media, maps, language translation, and electronic commerce all indirectly use artificial intelligence techniques. Users of these apps may not be aware that they’re using it when they receive movie recommendations, get the most efficient route to a destination, or turn off the lights at home.

What has changed in the past six months is that, for the first time, 150 million people are interacting directly with ChatGPT, a generative artificial intelligence application that can create text (there are others, such as DALL-E, which creates images). It is a neural network trained on a huge amount of text from different sources, such as websites, Wikipedia, Reddit, and digital books. The surprising thing about ChatGPT is that the text it generates reads as if it had been written by a human being: it sounds reasonable and apparently true.
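For readers curious about what “generating text with a neural network” looks like in practice, here is a minimal sketch in Python. It does not use ChatGPT itself, whose model is not publicly available; instead it assumes the open-source Hugging Face transformers library and a small, freely downloadable model (GPT-2) that was trained in a broadly similar way, learning to predict the next word from large amounts of internet text.

```python
# Minimal illustration of generative text AI, using the open GPT-2 model
# (an assumption: ChatGPT itself cannot be downloaded and run locally).
# Requires: pip install transformers torch

from transformers import pipeline

# Load a small pretrained language model that predicts the next word
# given the words that came before it.
generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence is a tool that"

# Ask the model to continue the prompt with up to 40 new tokens.
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```

The model simply continues the prompt one word at a time; it has no built-in notion of truth, which is why its output can sound plausible yet be wrong, as discussed below.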

If you ask ChatGPT what it thinks of itself, it replies as follows:

“As I’m a version of ChatGPT, I have no personal opinion about myself. However, I can tell you that language models like me have proven themselves to be powerful tools for a wide variety of tasks… although it’s important to remember that they also have limitations and can occasionally generate incorrect or unreliable responses. It’s essential to use this technology with caution and always verify the information obtained via reliable sources.”

ChatGPT is right. It’s excellent at generating initial ideas as a way of brainstorming for a human to then refine, but it may occasionally provide incorrect answers based on false and unreliable information. When this happens, some people say that ChatGPT “hallucinates.” This is due to its large-scale training on texts that could contain xenophobia, discrimination, conspiracy theories, or simply false data.

Is ChatGPT a threat to humanity? Clearly, it isn’t. As with any tool, it can be used for evil, to deceive, or to commit fraud. However, regardless of how sophisticated these language models are, they cannot destroy humanity. What’s true is that research and engineering related to artificial intelligence are now a long way ahead of regulation, and systems can be built that are deceptive and even dangerous. As a thought experiment, if a system were created to help people with mental health issues talk to a professional, but this “professional” were ChatGPT, it would not only be unethical but also dangerous to the users and should be banned.

There’s a hypothetical concept called artificial general intelligence (AGI): in theory, an autonomous system that surpasses human capabilities in every respect and that would then subjugate and destroy humanity. However, it is important to stress that no such system exists today, except in science fiction. What’s more, we’re a long way from building one and still don’t know how to do so. What we do know is that generative AI such as ChatGPT is not the path to artificial general intelligence.

Let’s not forget that AI is a tool, just like a hammer, a calculator, or a spreadsheet, and tools can be used for good or ill. AI is so powerful that when it’s used correctly, it doesn’t replace our capacities: it enhances them. A pause on artificial intelligence research and development would only delay its social and economic benefits.

At Tec de Monterrey, we’re focused on applying AI to help solve social issues in areas such as health, education, gender violence, poverty, and the environment. It helps doctors improve their patients’ health, teachers accelerate their students’ learning, and scientists and decision-makers address major issues that really are a threat to humanity, such as climate change and access to drinking water.

In conclusion, can AI exterminate human beings? No. However, it can enhance our abilities and turn us into superhumans.
