Using artificial intelligence for basic cognitive tasks can impair a person’s intellectual performance after just 10 minutes, a new study suggests.

The preprint study asked 1,200 people to complete either 15 fraction math problems or eight basic reading comprehension tasks, with and without AI.

In both experiments, the AI group had access to the technology for most questions but had to answer a few without it.

Those in the AI group were more likely to solve problems correctly at first, but when AI was removed, they got more questions wrong or skipped them entirely.

Participants in the AI group were also less likely to persist in finishing the problems; such persistence, the researchers said, is one of the most important ways to acquire a skill.

“If such effects accumulate over months and years of AI use, we may end up creating a generation of learners who have lost the disposition to struggle productively without technological support,” the report noted.

The report follows another study from the Massachusetts Institute of Technology (MIT) that showed those using OpenAI’s ChatGPT to write essays often did not recall or recognise their writing.

That study said AI causes a phenomenon called “cognitive debt,” which decreases learning outcomes over time.

The ‘boiling frog’ effect

People are likely to give up after using AI because they come to expect immediate answers, and the technology denies them the experience of working through challenges on their own, the study found.

The use of AI also changes people’s perception of how long a task should take to complete, so that, as a result, unaided work feels like more effort, it said.

The technology also removes the “productive struggle” that people develop when solving problems, making it more difficult to maintain that knowledge.

While this seems small at first, it could cause long-term challenges over the years, similar to the “boiling frog” effect, where “each incremental act feels costless, until the cumulative effect becomes overwhelming to address,” they wrote.

The researchers suggest that AI be built with long-term objectives in mind, meaning it could know when not to help a user, much like a good mentor who offers guidance to a struggling student but will not solve the problem for them.
