A new study from Aalto University has revealed an intriguing psychological twist in how humans interact with artificial intelligence systems such as ChatGPT. People typically overestimate their abilities due to the Dunning-Kruger Effect, a well-known cognitive bias in which low performers believe they are more skilled than they really are, yet this effect seems to vanish when AI enters the picture. Instead, the research found something even more surprising: people of all skill levels, including those with high AI literacy, consistently overestimate their cognitive performance when using AI tools.
According to Professor Robin Welsch, who led the study published in Computers in Human Behavior, the results challenge assumptions about digital literacy and self-awareness. “We found that when it comes to AI, the Dunning-Kruger Effect disappears,” Welsch explained. “Surprisingly, higher AI literacy actually led to more overconfidence. We expected tech-savvy users to evaluate their performance more accurately, but that wasn’t the case.”
The research underscores a growing concern in human–AI interaction studies: as people become increasingly reliant on large language models (LLMs) like ChatGPT, their ability to critically evaluate information and recognize their own mistakes may decline. The study suggests that although participants achieved better performance with AI assistance, they consistently overestimated how well they had performed.
Doctoral researcher Daniela da Silva Fernandes, co-author of the study, highlighted the broader implications. “AI literacy is not enough,” she said. “We’re seeing that even technically skilled users are not developing metacognitive awareness—the ability to reflect on and assess one’s own thought processes. Current AI platforms don’t encourage self-reflection, and that’s a major problem.”
To explore this issue, the researchers conducted two large-scale experiments involving 500 participants who were asked to solve logical reasoning questions similar to those on the Law School Admission Test (LSAT). Half of the participants used ChatGPT, while the others worked without AI assistance. After each task, participants were asked to estimate their performance, and accurate self-assessment was incentivized with additional compensation.
Interestingly, most AI users rarely prompted ChatGPT more than once per question. “Many simply copied the question, pasted it into ChatGPT, and accepted the answer without reflection,” Welsch explained. “They weren’t engaging in dialogue with the AI or verifying results. This phenomenon, known as cognitive offloading, shows how people hand over their reasoning processes to AI systems, trusting them blindly.”
The limited interaction prevented users from receiving the feedback necessary to calibrate their confidence levels, resulting in inflated self-assessments. According to the study, this overreliance on AI could lead to a broader “illusion of knowledge,” where users believe they understand more than they actually do.
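The calibration gap described here can be made concrete with a small sketch (the numbers below are invented for illustration and are not data from the study): overconfidence is simply the average difference between each participant's self-estimated score and their actual score, with a positive value indicating inflated self-assessment.

```python
# Hypothetical illustration of confidence calibration (invented numbers,
# not data from the Aalto study). Overconfidence is the mean signed gap
# between self-estimated and actual scores: positive means participants
# believed they solved more items than they actually did.

def overconfidence(estimated, actual):
    """Mean of (self-estimate - actual score) across participants."""
    if len(estimated) != len(actual):
        raise ValueError("score lists must be the same length")
    gaps = [e - a for e, a in zip(estimated, actual)]
    return sum(gaps) / len(gaps)

# Five hypothetical participants, scores out of 10 reasoning items.
estimated_scores = [8, 7, 9, 6, 8]   # what participants believed they scored
actual_scores    = [6, 6, 7, 5, 7]   # what they actually scored

print(overconfidence(estimated_scores, actual_scores))  # 1.4
```

In this toy example the group overestimates its performance by 1.4 items on average; a well-calibrated group would score near zero.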
To mitigate these risks, the researchers propose that AI systems should be redesigned to promote active engagement and critical thinking. For example, instead of simply generating answers, AI tools could prompt users to explain their reasoning or reflect on the logic behind a solution. “This would force users to confront their assumptions and improve their understanding,” Fernandes suggested.
The findings add to growing evidence that while AI tools improve task performance, they can simultaneously diminish cognitive awareness. Overconfidence fueled by automation might not just distort self-perception—it could also lead to poor decision-making in high-stakes fields like law, medicine, and finance, where accuracy and reasoning are critical.
Conclusion:
The Aalto University study is a reminder that while AI can amplify human capabilities, it can also distort our self-perception. True AI literacy must go beyond technical know-how—it should include the ability to question, verify, and reflect. As artificial intelligence continues to evolve, fostering metacognitive engagement will be essential to ensure humans remain thoughtful, critical, and aware participants in the digital age.