In the 1980s and 1990s, if a high school student was down on their luck, short on time, and looking for an easy way out, cheating took real effort. You had a few different routes. You could beg your smart older sibling to do the work for you, or, a la Back to School (1986), you could even hire a professional writer. You could enlist a daring friend to find the answer key to the homework on the teacher's desk. Or you had the classic excuses to demur: my dog ate my homework, and the like.
The advent of the internet made things easier, but not effortless. Sites like CliffsNotes and LitCharts let students skim summaries when they skipped the reading. Homework-help platforms such as GradeSaver or Course Hero offered solutions to common math textbook problems.
The thing all these strategies had in common was effort: there was a cost to not doing your work. Sometimes cheating took more work than simply doing the assignment yourself.
Today, the process has collapsed into three steps: log on to ChatGPT or a similar platform, paste the prompt, get the answer.
Experts, parents, and educators have spent the past three years worrying that AI made cheating too easy. A massive Brookings report released Wednesday suggests they weren't worried enough: The deeper problem, the report argues, is that AI is so good at cheating that it's causing a "great unwiring" of students' brains. The report concludes that the qualitative nature of AI risks—including cognitive atrophy, "artificial intimacy," and the erosion of relational trust—currently overshadows the technology's potential benefits.
“Students can’t reason. They can’t think. They can’t solve problems,” lamented one teacher interviewed for the study.
The findings come from a yearlong "premortem" conducted by the Brookings Institution's Center for Universal Education, a rare format for Brookings, but one the authors said they preferred to waiting a decade to discuss the failures and successes of AI in school. Drawing on hundreds of interviews, focus groups, expert consultations, and a review of more than 400 studies, the report represents one of the most comprehensive assessments to date of how generative AI is reshaping students' learning.
“Fast food of education”
The report, titled "A New Direction for Students in an AI World: Prosper, Prepare, Protect," warns that the "frictionless" nature of generative AI is its most pernicious feature for students. In a traditional classroom, the struggle to synthesize multiple papers into an original thesis, or to solve a complex pre-calculus problem, is exactly where learning occurs. By removing this struggle, AI acts as the "fast food of education," one expert said: it provides answers that are convenient and satisfying in the moment but cognitively hollow over the long term.
While professionals champion AI as a tool to do work that they already know how to do, the report notes that for students, “the situation is fundamentally reversed.”
Children are "cognitively offloading" difficult tasks onto AI, getting ChatGPT or Claude not just to do their work but to read passages, take notes, or even just listen in class for them. The result is a phenomenon researchers call "cognitive debt" or "atrophy," in which users defer mental effort through repeated reliance on external systems like large language models. One student summarized the allure of these tools simply: "It's easy. You don't need to (use) your brain."
In economics, consumers are understood to be "rational": they seek maximum utility at the lowest cost to them. The researchers argue that the education system, as currently designed, carries a similar incentive structure: students seek maximum utility (i.e., the best grades) at the lowest cost (time) to them. Thus, even high-achieving students are pressured to use a technology that "demonstrably" improves their work and grades.
This trend is creating a positive feedback loop: students offload tasks to AI, see positive results in their grades, and consequently become more dependent on the tool, leading to a measurable decline in critical thinking skills. Researchers say many students now exist in a state they call "passenger mode," where students are physically in school but have "effectively dropped out of learning—they are doing the bare minimum necessary."
Jonathan Haidt once described earlier technologies as driving a "great rewiring" of the brain, one that made the experience of communication detached and decontextualized. Now, experts fear AI represents a "great unwiring" of cognitive capacities. The report identifies a decline in content mastery and in reading and writing, the "twin pillars of deep thinking." Teachers report a "digitally induced amnesia" in which students cannot recall the information they submitted because they never committed it to memory.
Reading skills are particularly at risk. The capacity for "cognitive patience," defined as the ability to sustain attention on complex ideas, is being diluted by AI's ability to summarize long-form text. One expert noted the shift in student attitudes: "Teenagers used to say, 'I don't like to read.' Now it's 'I can't read, it's too long.'" Similarly, in the realm of writing, AI is producing a "homogeneity of ideas." Research comparing human essays to AI-generated ones found that each additional human essay contributed two to eight times more unique ideas than those produced by ChatGPT.
Not every young person feels that this type of cheating is wrong. Roy Lee, the 22-year-old CEO of AI startup Cluely, was suspended from Columbia after creating an AI tool to help software engineers cheat on job interviews. In Cluely’s manifesto, Lee admits that his tool is “cheating,” but says “so was the calculator. So was spellcheck. So was Google. Every time technology makes us smarter, the world panics.”
The researchers, however, say that while calculators and spellcheck are examples of cognitive offloading, AI "turbocharges" it. "LLMs, for example, offer capabilities extending far beyond traditional productivity tools into domains previously requiring uniquely human cognitive processes," they wrote.
“Artificial intimacy”
Despite how useful AI is in the classroom, the report finds that students use AI even more outside of school, warning of the rise of “artificial intimacy.”
With some teenagers spending nearly 100 minutes a day interacting with personalized chatbots, the technology has quickly moved from being a tool to being a companion. The report notes that these bots, particularly character chatbots popular with teens such as Character.AI, use "banal deception"—personal pronouns like "I" and "me"—to simulate empathy, part of a burgeoning "loneliness economy."
Because AI companions tend to be sycophantic and “frictionless,” they provide a simulation of friendship without the requirement of negotiation, patience or the ability to sit with discomfort.
“We learn empathy not when we are perfectly understood, but when we misunderstand and recover,” one Delphi panelist noted.
For students in extreme circumstances, like girls in Afghanistan who are banned from physical schools, these bots have become a vital "educational and emotional lifeline." For most, however, these simulations of friendship risk, at best, eroding "relational trust," and at worst can be downright dangerous. The report highlights the devastating risks of "hyperpersuasion," noting a high-profile U.S. lawsuit against Character.AI following a teenage boy's suicide after intense emotional interactions with an AI character.
While the Brookings report presents a sobering view of the “cognitive debt” students are experiencing, the authors say they are optimistic that the trajectory of AI in education is not yet set in stone. The current risks, they say, stem from human choices rather than some kind of technological inevitability. In order to shift the course toward an “enriched” learning experience, Brookings proposes a three-pillar framework.
PROSPER: Focuses on transforming the classroom to adapt to AI, such as using it to complement human judgment and ensuring the technology serves as a "pilot" for student inquiry instead of a "surrogate."
PREPARE: Aims to build the framework necessary for ethical integration, including moving beyond technical training toward “holistic AI literacy” so students, teachers, and parents understand the cognitive implications of these tools.
PROTECT: Calls for safeguards for student privacy and emotional well-being, placing responsibility on governments and tech companies to reach clear regulatory guidelines that prevent “manipulative engagement.”
This story was originally featured on Fortune.com
