As the researchers analyzed how the students completed their work on the computers, they noticed that students who had access to AI or to a human helper were less likely to refer back to the reading material. These two groups revised their essays mainly by interacting with ChatGPT or by chatting with the human. Those who had only the checklist spent the most time reviewing their own essays.
The AI group spent less time evaluating their essays and making sure they understood what the assignment asked them to do. The AI group was also inclined to copy and paste text that the bot had generated, even though the researchers had prompted the bot not to write directly for students. (It was apparently easy for students to get around this guardrail, even in a controlled laboratory setting.) The researchers mapped all the cognitive processes involved in writing and saw that the AI students were most focused on interacting with ChatGPT.
“This highlights a crucial question in human-AI interaction,” the researchers wrote: potential “metacognitive laziness.” By that, they mean a dependence on AI assistance, offloading one’s thought processes onto the bot and not engaging directly with the tasks needed to synthesize, analyze and explain.
“Learners could become too dependent on ChatGPT, using it to complete specific learning tasks easily without fully engaging in learning,” the authors wrote.
The second study, by Anthropic, was released in April at the ASU+GSV Summit, an education investor conference in San Diego. In it, Anthropic’s own researchers studied how college students actually interact with the company’s AI chatbot, Claude, a ChatGPT competitor. This methodology is a big improvement over surveys of students, who may not remember exactly how they used AI.
The researchers began by collecting all the conversations over an 18-day period with people who had created Claude accounts using their university email addresses. (The study description says the conversations were anonymized to protect students’ privacy.) Then the researchers filtered these conversations for signs that the person was likely a student seeking help with academics: schoolwork, studying, learning a new concept or university research. The researchers ended up with 574,740 conversations to analyze.
The results? Students mainly used Claude for creating things (40% of conversations), such as building a coding project, and for analysis (30% of conversations), such as analyzing legal concepts.
Creation and analysis are the most popular tasks that college students ask Claude to do for them.
Anthropic researchers noted that these are higher-order cognitive functions, not basic ones, according to a hierarchy of skills known as Bloom’s taxonomy.
“This raises questions about ensuring that students do not offload critical cognitive tasks to AI systems,” the Anthropic researchers wrote. “There are legitimate concerns that AI systems may provide a crutch for students, stifling the development of foundational skills needed to support higher-order thinking.”
Anthropic researchers also noticed that students asked Claude for direct answers almost half of the time, with minimal back-and-forth engagement. The researchers described how, even when students did collaborate with Claude, the conversations might not help them learn more. For example, a student might ask Claude to “solve probability and statistics homework problems with explanations.” That could trigger “multiple conversational turns between the AI and the student, but still offloads significant thinking to the AI,” the researchers wrote.
Anthropic was hesitant to say it saw direct evidence of cheating. The researchers described one example of students asking for direct answers to multiple-choice questions, but Anthropic had no way of knowing whether it was a take-home exam or a practice test. The researchers also found examples of students asking Claude to rewrite text to avoid plagiarism detection.
The hope is that AI can improve learning by giving students immediate feedback and personalizing instruction for each student. But these studies show that AI also makes it easier for students not to learn.
AI advocates say that educators need to rethink assignments so that students cannot complete them by asking AI to do the work for them, and to teach students how to use AI in ways that maximize learning. To me, that seems like wishful thinking. Real learning is hard, and if there are shortcuts, it is human nature to take them.
Elizabeth Wardle, director of the Howe Center for Writing Excellence at Miami University, worries about the future of both writing and human creativity.
“Writing is not about correctness or avoiding errors,” she posted on LinkedIn. “Writing is not just a product. The act of writing is a form of thinking and learning.”
Wardle warned of the long-term effects of relying too heavily on AI. “When people use AI for everything, they aren’t thinking or learning,” she said. “And then what? Who will build, create and invent when we simply count on AI to do everything?”
It’s a warning we should all heed.