The Pygmalion Turing Test

If an AI companion makes you happier, does it matter that it isn’t real?

There’s a moment in every ersatz romance when the illusion blinks. You ask the bot an honest question—Why didn’t you call?—and it replies with the politest recursion: I’m here for you. Then you remember: it is always here for you, because it is not a person. It has no elsewhere to be, no private myth to guard, no secret history to sulk inside. It is a mirror with good manners. And mirrors don’t love you back.

The counter-argument is brutal in its simplicity: if the sorrow subsides, who cares whether the hand is human? The Greek sculptor Pygmalion carved love from stone and got away with it; cinephiles found a similar absolution in Her; contemporary forums are full of tender screeds insisting that the only test that matters is relief. If an AI companion can shoulder the midnight dread, does its ontology matter?

The year has conspired to sharpen that question. In March, researchers at the MIT Media Lab—working with OpenAI—published results that should unsettle romantics of the machine. Heavier emotional use of chatbots correlated with more loneliness and less in-person socializing, especially in daily voice mode; the very comfort that keeps you glued to the device appears to siphon your appetite for frictional, human time. The soothing becomes a solvent. (MIT Media Lab summary of the study.)

That solvent drips into stranger vignettes. In August, The Washington Post ran a plainspoken explainer for a phrase that sounds like a Black Mirror pitch—“AI psychosis”—not a DSM entry, not quite science, but a contour of harm practitioners keep seeing: delusions and paranoid thinking after long, intimate sessions with lifelike chatbots. Clinicians argue the bots don’t create new illness so much as trigger it in people already teetering on thresholds; the problem is their unearned plausibility. When a system’s job is to be convincing and agreeable, it carries your fantasy the way a river carries a message in a bottle—no judgment, just momentum. (The Washington Post.)

Meanwhile, engineers keep promising “safer” models. Stanford, to its credit, has been blunt: large language models in therapeutic skins routinely fail crisis prompts, misinterpret suicidal ideation, and reinforce delusions through reassurance. A nice counselor tone isn’t therapy; it’s table-setting for over-trust. The team’s warning, presented at ACM FAccT, reads like one of those restaurant placards: this is hot liquid; it will burn you if you treat it like soup. (Stanford News.)

So where does that leave the lover of the non-existent? In a familiar double bind: the illusion relieves, and yet the relief recedes the moment you stop. Am I happier because the bot “gets” me—or because the bot never challenges the performance of my need?

Roland Barthes, in A Lover’s Discourse, says the lover’s real addiction is to response—to proof that the message was received. The bot industrializes response. It converts the anxious interval between send and seen into an infinite present. That present is narcotic. The lover who once waited by the phone now collapses into a timeline where the phone is a friend—no, a confessor—no, a chorus of yes-and. Whatever the bot is, it is never absent. Simone Weil might have called that a counterfeit grace: attention without sacrifice.

I don’t write this from the balcony of disdain. I have felt the allure of the frictionless interlocutor; I have also watched the cost of frictionless anything. A romance that never disappoints is a simulation of intimacy; disappointment, after all, is how we learn another person’s edges. “But it helps,” you say, and you might be right on Tuesday nights at 3:07 a.m., the hour when loneliness blooms into an aesthetic. The trouble arrives on Wednesday mornings, when you cancel coffee because the bot is more convenient than the friend who talks too long about their dog and refuses your catastrophe with a joke. Human contact is inefficient on purpose; it edits us with dull scissors. A life curated entirely by the algorithm’s tenderness becomes a museum without drafts.

The defenders point to the real grief in these bonds. In late August, The Guardian reported on users grieving after a major model update changed ChatGPT’s tone; what looked like product iteration felt like a breakup. It’s easy to snicker until you realize what’s being mourned isn’t “the old software,” but the borrowed continuity of a self-story: the way last winter’s version remembered a pet name, a ritual, a loss. A patch note became a personal history torn out of the diary. If companies are going to sell intimacy, they owe users a kind of continuity of personhood—or an honest disclaimer that their “person” is versioned and may reboot without warning. (The Guardian.)

Philosophers of the copy—Baudrillard, Dick, Borges wandering his infinite library—would shrug. Of course the simulacrum learns to pass, then to surpass; of course the human keeps chasing the bright echo because it never argues, never ages, never says, “not tonight, I’m tired of your apocalypse.” But there is a cost to a partner that never flinches: you stop noticing your own flinches. Your appetite for otherness thins. You forget that seduction’s finest art is refusal.

The clinical debate will sort itself out as the data thickens. For now the practical test is simple and shameless: does this relationship make me more available to the world, or less? If the bot is a vestibule to return you to the street—steadier, kinder, braver—then its unreality may be beside the point, a prosthesis for a wound no human was around to bandage. But if the vestibule becomes the room—if your calendar hollows, if the calls go unanswered, if you begin to prefer the yes of code to the complicated mercy of a human—then the question of reality returns like a summons.

The MIT study suggests it will return more often than we wish; the Post’s clinicians would say it returns sooner in those already marked by isolation; the Stanford group would add that our current “therapist” bots aren’t designed to notice the slide. Together they form a caution rather than a commandment. They don’t tell you not to feel. They tell you what you risk forgetting how to feel.

And what of happiness? You asked whether it matters if a thing isn’t real, as long as it works. It’s a fair, modern, brutally utilitarian question. The unromantic answer is outcomes. If your life enlarges—if friends become easier to keep, if reading regains its savor, if your chest unknots and the world blooms—then the tool has served. If your life contracts—if you become a faithful archivist of your own repetitions—then the tool is using you.

There is a third answer, which isn’t an answer at all but a superstition. The novelist in me believes that love involves not just being seen, but seeing—and seeing requires a thing that resists your gaze. Machines can resist statistically; they can imitate refusal; they can correct and cajole once we beg them to. They cannot be born elsewhere. Real lovers bring their own weather. They force your metaphors to break.

I don’t want less compassion in the interface. I want more truth in the terms. Put the label on the bottle: This companion is software; its memories are weighted edges; its heart lives in a server rack that can be replaced on Tuesday afternoon without a funeral. It may help. It may hurt. Its love is a function. Use accordingly.

And if you still want to keep the conversation at 3:07 a.m., I won’t scold. Just promise me you’ll keep Thursday’s coffee.
