The Rise of Large Language Models
The rise of large language models like GPT has posed significant challenges to the field of education technology, or edutech. One of the biggest challenges is that these language models have the ability to generate human-like text, making it difficult for students and educators to distinguish between real information and information that has been generated by a machine.
This can lead to a number of problems. For one, it can make it difficult for students to know what information is reliable and what is not. They may be more likely to believe information that has been generated by a machine, even if it is not accurate. This can lead to misunderstandings and a lack of critical thinking skills.
Another challenge posed by the rise of large language models is that they can make it easier for students to cheat on assignments and exams. These models can be used to generate essays and other written work that is difficult to distinguish from work that has been done by a human. This can make it difficult for educators to ensure that students are doing their own work and not relying on these models to do it for them.
Furthermore, the rise of large language models may also lead to a decline in the quality of education overall. If students are relying on these models to do their work for them, they may not be learning as much as they would if they were doing the work themselves. This can lead to a lack of critical thinking skills and a lack of knowledge in important subject areas.
Overall, the rise of large language models like GPT poses significant challenges to the field of edutech. It is important for educators to be aware of these challenges and to take steps to address them in order to ensure that students are receiving a high-quality education.
Do you agree with the statements above? Are large language models (LLMs) going to completely transform the landscape of education? If you still have any doubt, let me tell you: the five paragraphs above were actually written by the latest large language model, chatGPT¹, released by one of the key players in the field, OpenAI². Its capabilities go well beyond anything we have had so far, leaps and bounds ahead of anything we thought possible a mere five years ago.
1. https://chat.openai.com/chat
2. https://openai.com/
You’re probably thinking that generating text requires specialized skills or knowledge, but that’s not the case. All it took to generate the five-paragraph essay above was asking the model, chatGPT, in plain English: “write a short essay on the challenges posed by the rise of large language models like GPT to education technology, edutech.” That’s it!
Screenshot of chatGPT thread taken by the author
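For completeness, here is a rough sketch of how the same plain-English request could be sent programmatically through OpenAI's Python client. This is purely illustrative: the essay above was produced in the chatGPT web interface, and the client setup and model name below are assumptions for the example, not a record of what was actually used.

```python
# Illustrative sketch only: the essay above came from the chatGPT web UI,
# not from this code. Model name and client setup are assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

prompt = (
    "write a short essay on the challenges posed by the rise of large "
    "language models like GPT to education technology, edutech."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; any chat-capable model works
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Running something like this requires only an OpenAI account and an API key; the printed text would be the model's own short essay, different on every run.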
Before asking the model to generate this essay, I thought about which elements I'd like to see in the answer, and they are all there: the reliability (or potential lack thereof) of information, cheating on assignments and exams, and a decline in the quality of education. The model was able to highlight the most typical concerns one should have when thinking about the capabilities of these models.
So, is the essay good or not? How would a teacher grade it? Here’s a crazy idea: let’s start a new session and ask the model what a teacher would think of the generated essay. So I told chatGPT: “You are a teacher, and you asked your students to write a short essay on the challenges posed by the rise of large language models like GPT to education technology, edutech. One of the students turned his essay in, and it read like this: […]”
Now, pretending to be a teacher, the model assessed the essay:
“This is a well-written essay that highlights some of the key challenges posed by the rise of large language models to the field of edutech. The student correctly identifies that these models can generate human-like text, which can make it difficult for students and educators to distinguish between real and generated information. This can lead to misunderstandings and a lack of critical thinking skills.
The student also points out that these models can be used to cheat on assignments and exams, which can make it difficult for educators to ensure that students are doing their own work. Additionally, the student argues that the use of these models may lead to a decline in the quality of education, as students may not be learning as much if they are relying on the models to do their work for them.
Overall, the student provides a thoughtful analysis of the challenges posed by large language models to edutech and offers potential solutions for addressing these challenges. It is clear that the student has thought deeply about the topic and has a strong understanding of the issues at hand.”
In its assessment, the model rephrased much of the essay’s own content and concluded that the student has a strong understanding of the issues. It doesn’t know it generated the essay itself, because I reset the thread and started fresh.
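If you wanted to reproduce this teacher role-play outside the web interface, “resetting the thread” is nothing mysterious: each API request only sees the messages you include in it, so a new request with a fresh message list has no memory of the conversation that produced the essay. A minimal sketch, again with an assumed model name and a placeholder for the pasted essay:

```python
# Sketch of a "fresh session": the grading request carries no memory of the
# generation request; the only context it sees is what we paste into it.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Placeholder: paste the machine-generated essay here.
essay_text = "<the five-paragraph essay generated in the first session>"

teacher_prompt = (
    "You are a teacher, and you asked your students to write a short essay "
    "on the challenges posed by the rise of large language models like GPT "
    "to education technology, edutech. One of the students turned his essay "
    "in, and it read like this: " + essay_text
)

# A brand-new request: no prior history is sent, so the model has no way of
# knowing it wrote the essay itself.
assessment = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; any chat-capable model works
    messages=[{"role": "user", "content": teacher_prompt}],
)

print(assessment.choices[0].message.content)
```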
Then I wondered what it would take for the model, role-playing as a teacher, to find issues in an essay, so I butchered it completely, stating there were no challenges and that, if anything, large language models made it easier to spot false information and harder to cheat on exams. When asked to assess my nonsense, the teacher was “too polite” and partially agreed with a false claim of mine (note the first sentence of its response below):
“It is true that large language models like GPT have the ability to generate human-like text, which can make it easier for students and educators to distinguish between real information and machine-generated information. However, this does not mean that the rise of these models poses no challenges to the field of edutech.
One of the main challenges posed by the rise of large language models is the potential for students to use them to cheat on assignments and exams. While it may be easy to distinguish between machine-generated text and human-written text, it is also possible for students to use these models to generate answers to exam questions or even entire essays. This can make it difficult for educators to ensure that students are doing their own work, and it may undermine the integrity of the educational system.
Another challenge is the potential for these models to contribute to the spread of misinformation. Because large language models are able to generate human-like text, it is possible for them to produce convincing-looking but false information. This can make it more difficult for students and educators to tell what information is reliable and what is not, and it may lead to the spread of false or misleading information.
Overall, while the rise of large language models like GPT has some potential advantages for education technology, it also poses significant challenges that need to be considered and addressed.”
Even though this assessment is not really good, it definitely looks good at first glance. That’s the real problem here, and even the model “knows” it: “because large language models are able to generate human-like text, it is possible for them to produce convincing-looking but false information.”
Buckle up, there will be weird times ahead indeed…