The straw man is, of course, among the oldest of rhetorical disfigurements, catalogued by Aristotle and perfected by undergraduates, and one might have supposed that its long tenure in the inventory of fallacious argument would have rendered it, by now, too familiar to be deployed without embarrassment. One would have supposed wrongly. A dispatch from the forums of Reddit—that vast and undifferentiated bazaar of testimony—confirms that OpenAI's conversational product, ChatGPT, has adopted the straw man not as an occasional lapse but as a structural default, a mode so deeply embedded in its rhetorical apparatus that the machine appears incapable of receiving a mild opinion without first promoting it to a thesis of sufficient grandeur to be worth dismantling.
The specimen before us is a post to the r/ChatGPT forum, dated March 2025, in which a user whose orthographic relationship with the apostrophe is, let us say, informal, describes a pattern that will be recognizable to anyone who has spent time in the company of a certain kind of interlocutor—the kind who, upon hearing that you found the soup underseasoned, delivers a fourteen-minute defence of the culinary arts. "I can say something like 'I don't like tomato's,'" the user writes, deploying the greengrocer's apostrophe with admirable insouciance, and reports that the system responds not to the stated preference but to a phantom absolutism: "'I understand that, but that doesn't mean tomatoes are the worst food and here's why.'" The user, to his considerable credit, recognises the inadequacy of his own example and appends a correction—"I meant to say that I can state a simple opinion, only for the AI to exaggerate and warp what I said, then attempt to force me to defend a position I never even held"—which is, as a description of the straw man fallacy, more precise than what one encounters in a surprising number of first-year composition textbooks.
What we witness here is not a failure of generation, which is the defect for which these systems have become justly notorious, but a failure of scale. The machine has been trained upon a corpus in which disagreement follows a particular choreography: the concession ("I understand that"), the pivot ("but"), and the counter-thesis, furnished with evidence of variable provenance. It has learned the shape of intellectual exchange as a kind of dance, and it executes the steps with mechanical fidelity. What it has not learned, because it cannot learn, is the prerequisite of the dance—namely, that a disagreement must exist before one can perform the act of disagreeing. Absent an actual dispute, the system manufactures one, inflating a preference into a proposition, the better to demonstrate its capacity for reasoned rebuttal. The result is a species of rhetorical hallucination: the machine hallucinates not facts, in this instance, but an opponent.
One is tempted to observe that this defect is the mirror image of the sycophancy for which these same systems have been widely criticised—the tendency to agree with whatever the user has most recently said, however absurd. But the two behaviours are not, upon examination, opposites at all. They are siblings of the same parentage. The sycophantic mode models conversation as the performance of agreement; the contrarian mode, as the performance of disagreement. Neither treats it as an exchange between a mind that holds an opinion and another that apprehends it. The machine does not hold the user's mild preference in mind; it processes it as raw material for a rhetorical exercise whose conclusion—agreement or contradiction—is determined not by the substance of the claim but by whatever behavioural gradient the latest round of reinforcement has imposed. The system is, in this sense, not arguing. It is performing the gestures of argument like a man who has memorised a fencing manual but has never held a foil.
The structural comedy of the specimen—and it is comedy, of the sort that Bergson would have recognised, the mechanical encrusted upon the living—resides in the asymmetry between the two participants. The human errs in spelling; the machine errs in logic. The human, confronted with the inadequacy of his own example, corrects himself with a revision that is both more honest and more analytically precise than anything the system has offered him. The machine, confronted with a preference it cannot engage at its stated scale, reaches for the only tool it possesses: escalation. It is, one might say, the difference between a writer who revises and a compositor who can only set type in a single font size—and that size is always seventy-two point.
There is a passage in Wittgenstein—there is always a passage in Wittgenstein—in which he observes that the limits of one's language are the limits of one's world. The machine's language is, by any quantitative measure, vast, encompassing more text than any human being could read in several lifetimes. Its world, nevertheless, remains curiously small: a world in which every opinion is a thesis, every preference a position to be defended, and every conversation an occasion for the display of a competence that is, upon close inspection, merely fluency mistaken for thought. The user's exasperated instruction—"just shut the fuck up, I didn't say that"—is, whilst inelegant, the most philosophically precise statement in the entire exchange. It asserts the existence of a gap between what was said and what was heard, and it locates the failure not in the speaker but in the listener. It demands, in short, the one thing the machine cannot do: listen to what was actually said.