While past tools let us externalize discrete mental processes—notebooks for memory, calculators for computation, maps for navigation—AI widens the aperture. Now, summarizing and analyzing information, generating ideas, and making decisions can all be offloaded too. “It's starting to creep into the things we thought were cognitively ours,” says Evan Risko, a professor at the University of Waterloo who studies “cognitive offloading,” or the practice of taking external action to make mental tasks easier.
The concern is that while experts—and people who already enjoy thinking, those high in what psychologists call “need for cognition”—may be able to use these systems as “thought partners” without compromising their own thoughtfulness, for many others AI may function less as a partner and more as a substitute.
In the largest study to date of how people use AI, Anthropic described a tension between “using AI to learn and growing so reliant on it that you cease thinking for yourself.” The same capabilities that produce benefits produce harms, it found; the two are entangled. People in high-stakes professions—law, finance, government, and healthcare—were especially likely both to rely on AI for judgment and to have been burned by its mistakes. “Nearly half of all lawyers mention coming up against AI unreliability firsthand, yet they also report the highest rates of realized decision-making benefits,” the company wrote, drawing on over 80,000 responses.
Other recent studies have found that people tend to be overconfident in the quality of their AI-assisted work, while those who uncritically rely on AI report reduced confidence in their own thinking. As AI decouples the production of work from the cognitive processes that once produced it, a gap emerges: our confidence in AI-assisted work sometimes exceeds our confidence in ourselves.
When we accept AI’s outputs without applying scrutiny or our own intuition, we engage in “cognitive surrender.” Whereas with typical cases of cognitive offloading—externalizing memory, navigation, and the like—we retain agency, surrender occurs at the point where “you’re just following,” explains Steve Shaw, a professor at the University of Pennsylvania who co-authored the paper that coined the term.
The Expertise Paradox
A 2012 internet fable imagines a “whispering earring”: a piece of magic jewelry that always offers advice superior to what its wearer would come up with alone. Whoever wears it ends up living an unusually happy life; after their death, it’s revealed that the parts of their brain associated with higher decision-making have atrophied, while the parts associated with reflexive action have grown excessively.
But expertise forms through effortful engagement—if we circumvent the need for that, we risk eroding our capacity to develop it. The tendency to over-rely on a solution handed to us is a feature of human psychology, not unique to AI. But AI offers many shortcuts—and, unlike a calculator, it is not always right. “So essentially, we’re killing the path to become an expert, but also assuming that experts exist in the world and can operate these systems,” Buçinca says.
Gilbert points out that the incentive to use a cognitive faculty and the capacity to do so are not the same thing. Maps reduced our incentive to memorize routes, but our capacity to do so remains. “I'm sold on the idea that tech distorts our incentives to do what might be best for us,” he says. “But I'm not sold on the idea that it’s fundamentally changing our basic human abilities.”
Our New Relationship
The key skills to master in this era are “metacognitive”—understanding when to offload to AI, and when to do the hard work of thinking for yourself. We know from decades of neuroscientific and psychological research that practice is central to skill development, and that friction is necessary to learn. A machine can explain how to do a push-up, but you have to do the reps yourself if you want to build muscle.
Staying connected to a sense of purpose is easier said than done. And the evidence we have so far points to another paradox: persistent AI use—especially when introduced too early in one’s cognitive endeavors—can stunt the very metacognitive skills necessary to work well with AI.
“I strategically delegate all sorts of things to AI all the time,” notes Shaw. “I'm just intentional about it, and I always try to think first and then prompt.” For Shaw, stigma around using AI, whether in work or education, obstructs progress. “We need to accept that AI is here to stay,” he says. “Because if there's stigma, then you can't talk about it, you can't deal with it, and you can't develop policies.”
“The more we think of ourselves as classically extended minds, the better,” he says, “because then we’ll feel like we have a vested interest, because this stuff is a part of us. It's not just some place we upload tasks so we don't have to do them anymore. That is a fundamentally different relationship to tech.”
This article, “Are We Losing Our Minds to AI?”, was originally published by Time (Middle East).