By Euronews
ChatGPT is launching a “Study Mode” to promote responsible academic use of the chatbot, amid concerns over the misuse of artificial intelligence (AI) in schools and universities.
Designed to help students with homework, exam preparation, and learning new topics, the feature guides users through material in an interactive, step-by-step, classroom-like manner.
The goal is to help students understand and analyse the material, rather than relying on ready-made solutions, according to OpenAI, the maker of ChatGPT.
In one example, a user asked for help understanding Bayes’ theorem. The chatbot responded with questions about the user’s level of mathematical literacy and learning goals before proceeding with a step-by-step explanation.
“We want to highlight responsible ways to use ChatGPT in a way that is conducive to learning,” said Jaina Devaney, OpenAI’s head of international education.
The feature’s launch coincides with growing concern in academia about the illicit use of AI tools.
In an investigation published last month, for example, The Guardian identified nearly 7,000 proven cases of university students using AI tools to cheat during the 2023-2024 academic year.
Meanwhile, in the United States, more than a third of college-aged adults use ChatGPT, and the company’s data shows that about a quarter of messages sent to the bot relate to learning, teaching, or homework.
“We don’t believe in using these tools for cheating, and this is a step towards minimising that,” Devaney said.
She added that tackling academic cheating requires a “broad discussion within the educational sector” to reconsider how students’ work is assessed and set clear guidelines on the responsible use of AI.
Through Study Mode, users can upload past exam papers and work through them in collaboration with the tool.
Notably, ChatGPT does not prevent users from ignoring Study Mode and requesting direct answers to their prompts.
The company said the feature was developed in collaboration with teachers, scientists and educational experts, but warned that there could be “inconsistent behaviour and errors in some conversations”.