You can ask ChatGPT to write an essay on Shakespeare, solve an algebraic equation, or even complete advanced math homework, and it will do it. However, this has created an ethical dilemma for academia.
So much so that New York City public schools blocked access to ChatGPT in January 2023 because students were using it to complete homework.
In recent months, Los Angeles public schools have also implemented a ban, and educational institutions in Canada, Australia, India, and France have put similar restrictions in place.
Enrique Cortés Rello, Director of the Artificial Intelligence Hub at Tec de Monterrey, believes that prohibiting the use of such technologies makes no sense, comparing it to attempts in the 1970s to ban calculators in schools.
“You need to cite ChatGPT like you would cite any other source, just as if you were using a book,” he explains.
He adds that the answers provided by this kind of technology are often wrong, so users must read them carefully before including them in a piece of homework.
ChatGPT in education: what to do?
“If you use the answer in an academic paper, you run the risk that the information is incorrect. The same applies to any industry, not just academia,” explains Hidrogo Montemayor.
In an interview with Tec Science, both Tec professors agreed that banning the tool is a desperate and ineffective measure, as students or users will be able to access ChatGPT from any device.
When asked about itself, ChatGPT responds that “it is an artificial intelligence language model which is trained to generate text and answer questions on a variety of topics in a way that is very similar to how a human would respond.”
Cortés Rello uses the word “eloquent” to describe the way ChatGPT answers questions.
“That doesn’t mean that what ChatGPT produces is true or scientific or verifiable, but simply that it sounds like a human wrote it,” explains the Director of the AI Hub.
Digital rights activists such as Grecia Macías, from the Network in Defense of Digital Rights (R3D), warn that it is important to broaden the discussion about regulation, as these tools can perpetuate harmful stereotypes.
“I think it’s a very sinister system and one which implies very worrying ethical and human rights issues: what happens if there are biases towards certain stereotypes?” asks Macías.
“There is an analogy that these models are like cockatoos or parakeets which learn to repeat certain words. I wouldn’t go as far as to say that they can learn,” says the lawyer in an interview.
The lawyer concludes that much work remains to be done in Mexico on the legal side of digital rights. This is because “there is evidence that technology hasn’t been used to benefit society, but rather to violate human rights,” says the R3D activist.
Until now, regulating artificial intelligence has not been easy, not even for the European Parliament, which has been seeking to regulate the use of these tools since 2021.