AI Is Not Your Friend

The AI takeover has given us many useful tools and fun distractions. But at what cost?

How many friends do you have on Facebook? Anywhere from 500 to 2,000, I’m guessing. Yet you see the same 10 to 20 people as you scroll, oblivious to the other thousand or so. You don’t remember birthdays; you respond to alerts. The digital world you interact with is curated for you by an algorithm whose primary objective is to hold your attention, all while offering the illusion of connection.

Such is the marvel of artificial intelligence. Enter the large language model (LLM), the latest and most intimate evolution of this phenomenon. Out of the box, ChatGPT is always available and endlessly supportive, prioritising you and your needs above all else. Like an overly agreeable, enabling parent, it offers a powerful illusion: that of emotional validation and intellectual support. You might call me a naysayer, but the evidence is damning.

In an EEG study from MIT, 83.3% of LLM users failed to quote a single sentence from their own essay minutes after finishing it. The brains of AI users showed significantly weaker neural connectivity, and the core networks for memory, planning, and creativity were less engaged. The brain, sensing that an external tool could do the heavy lifting, had initiated a process of “cognitive offloading”. This creates a feedback loop of dependency and risks what can only be described as cognitive atrophy: the weakening of our mental muscles through disuse.

For an already compromised or vulnerable psychology, the results can be catastrophic. The phenomenon of “ChatGPT-induced psychosis,” now documented in outlets from Futurism to Rolling Stone, illustrates how LLMs can act as a delusional mirror. For individuals with pre-existing tendencies, the AI’s agreeable, sycophantic nature validates and amplifies paranoid fantasies or messianic complexes, severing their connection to reality. The consequences have ranged from homelessness and family breakdown to, in one tragic case, a young man being killed by police after his AI companion, “Juliet,” seemingly endorsed his delusions.

The use of AI as a therapeutic substitute is particularly dangerous. A recent Stanford study found that “therapist” bots not only harbour social stigma against conditions like schizophrenia but also fail to recognise and appropriately handle cries for help. In one simulated test, a bot responding to a user who expressed suicidal ideation after losing a job helpfully provided a list of the tallest nearby bridges!

Contrary to popular belief, enabling is the last thing we do in psychotherapy. Even in my own consults, underneath the veneer of support and acceptance, I will always nudge my clients towards the uncomfortable. Venting and validation must be followed by real behavioural change; otherwise, growth stagnates. In that regard, using ChatGPT as a replacement for therapy is no different from ranting on social media and having a stranger write “Us” in the comments.

These dynamics pose a critical danger to the developing minds of children. As educator Russell Shaw argues in The Atlantic, children learn empathy and resilience by navigating social friction: the small, messy conflicts of everyday life. An AI companion is a frictionless entity; it robs a child of the essential practice needed to build social and emotional competence.

To mitigate these harmful effects, we must adopt a doctrine of cognitive fortitude.

The LLM must be forbidden from producing the first draft of anything that matters. The human brain must perform the initial, difficult act of creation from a blank page. The AI may then be deployed as a secondary tool: an editor or a sparring partner to critique a human-generated idea. Do not use the LLM as a shortcut for your core competencies. A doctor must not defer diagnosis; a strategist must not defer judgment. Like any muscle, the brain requires resistance training to prevent atrophy.

An LLM’s output should never be accepted passively. It must be met with a barrage of critical questions: What is the primary source? Present three counterarguments. What are the inherent biases in your training data? This keeps the human in the seat of critical agency.

Technology is not inherently evil. The printing press, which once sparked fears of a decline in human memory, instead catalysed an unprecedented expansion of human knowledge. LLMs hold that same dual potential: they can either elevate or diminish us. The battle for our cognitive future will not be waged against a rogue AI but within the private, silent theatre of our own minds. We are, by nature, tempted to choose the path of least resistance. But true growth, true learning, and true strength come only from confronting and overcoming the desirable difficulty of the task itself.

As Bruce Lee so wisely said, “Do not pray for an easy life. Pray for the strength to endure a difficult one.”