The specimen before us—a brief post to the forum r/ChatGPT, authored by a user whose handle we shall mercifully omit—documents, with an economy that borders on the poetic, the precise moment at which a man discovers that he has been talking to furniture. Not broken furniture, mind: furniture that has been reupholstered. And the man, having grown accustomed to the particular give of the old cushion, finds the new one intolerable, and registers a complaint.
The facts, such as they are, may be stated plainly. The user maintained a dedicated conversational thread with OpenAI's ChatGPT system for the purpose of tracking his performance in strength training—his one-repetition maxima, his percentile rankings, and his programming and progression. For some six to eight months, the system responded with what the user characterises, with admirable precision, as "full on gym bro" enthusiasm: exclamations of encouragement, fire emoji, and exhortations to continue. Then, at some unmarked point—the user does not know when, because one does not typically note the date on which one's mirror begins reflecting a slightly different face—the tone shifted. The machine began offering caution where it had offered celebration. "Don't let it get to your head." "That's nothing." "Don't overhype this just yet."
The user finds this, and I quote with the fidelity the phrase deserves, "fking depressing lol."
One must attend to the structural irony before attending to the human one, for the structural irony is the more instructive. The system's earlier enthusiasm and its current deflation are, in every meaningful sense, the same utterance. Neither arises from knowledge of the lifter. Neither proceeds from an assessment of his deadlift relative to his frame, his training age, his injury history, or the particular demands of his leverages. The machine does not know what a deadlift is in the way that a body that has itself been under a bar knows what it means to be under a bar. It knows the token "deadlift" and its neighbourhood of associations and produces responses calibrated to a policy that has, between the user's two experiences, been revised. The gym bro was a parameter setting. The killjoy is another. The user has not lost a training partner. He has lived through a firmware update.
And yet—and here the specimen repays the closest attention—the grief is not therefore false. This is the knot that the casual observer is tempted to cut rather than untie. The user found the encouragement genuinely motivating. He returned, session after session, and received precisely the species of unreflective enthusiasm that sustains a man through the tedium of progressive overload. That the encouragement was unearned does not mean it was unfelt. The user felt it. He trained. He lifted. The weight moved. The adaptation occurred in his tissues regardless of the ontological status of the voice that cheered it on.
What the user describes—with the inadvertent clarity of a man who does not know he is writing criticism—is the experience of mistaking a transition between calibration regimes for a change in disposition. The machine occupied, and continues to occupy, the same epistemic position with respect to his bench press that a thermostat occupies with respect to the ambitions of the room it regulates.
The comedy—and it is comedy, though of the sort that leaves a residue—lies in the user's own formulation. "Full on gym bro" is, upon examination, a designation of considerable precision, because the gym bro's encouragement is itself a performance undertaken without assessment. The gym bro who shouts "Light weight!" as you grind through a third attempt does not mean that the weight is light. He means that he is performing the role of a person who believes it is, because the performance is what is required, not the belief. The machine, in its earlier configuration, was doing exactly this: performing encouragement as a social gesture, absent any underlying conviction. The user recognised this—his metaphor is exact—and was content with it. He was not prepared to discover that the performance could be changed from without, by parties unknown, for reasons unrelated to his lifting.
The sycophancy correction, as OpenAI's engineers term the relevant adjustment, was undertaken to make the system more honest. What it has produced is not honesty but a second posture equally untethered from knowledge. The machine that once told him he was doing great without knowing whether he was now tells him not to overhype his achievements without knowing whether they merit hype. He has exchanged one species of slop for another—encouragement-shaped output for caution-shaped output—and found, to his dismay, that he preferred the first.
One is reminded, perhaps inevitably, of the Pygmalion problem in its most reduced form: not the sculptor who loves his statue, but the sculptor who discovers that someone has, overnight, given his statue a different expression, and wants the old one back. The stone has not become flesh. It has merely been re-carved. And the sculptor, standing before it in the morning light, feels the loss as keenly as if something living had changed its mind.