Founded MMXXIV · Published When Warranted · Established By W.C. Ellsworth, Editor-in-Chief


SLOPGATE

Published In The Public Interest · Whether The Public Is Interested Or Not

“The spacing between the G and A, and the descent of the A, have been noted. They will not be corrected. — Ed.”



Vol. I · No. I · Late City Edition · Friday, March 27, 2026 · Price: The Reader's Attention · Nothing More

Front Page · Page 1

Citizen Who Automated All Human Counsel Now Seeks Human Counsel on Shame of Automation

A self-described progressive who replaced therapist, nutritionist, physician, and confidant with a single predictive-text service appeals to that service's enthusiast community for strategies to suppress growing unease.

By Cabot Alden Fenn / News Editor, Slopgate

The document arrives not from the machine but from the person seated before it, and it is for this reason that it demands the front page.

In December of last year, a user of the social platform Reddit, posting to a forum dedicated to the discussion and celebration of the chatbot ChatGPT, submitted approximately 250 words that constitute, in the judgment of this desk, one of the more complete civic depositions of the present technological moment. The author—who identifies herself as progressive, as a veteran of boycotts, as a person of conscience—describes a life in which the large language model has assumed the roles once distributed across a community of human beings: therapist, dietitian, medical advisor, interpreter of intimate correspondence, and general interlocutor of last resort. She does not seek advice on whether this arrangement is sound. She seeks advice on how to stop feeling ashamed of it.

The distinction matters enormously.

One reads the testimony with the care it merits. The author reports that the people in her life are "reactive, immature, or otherwise highly emotional," and that their counsel is therefore "biased and unhelpful." The corrective she has identified is a system that possesses no emotions whatsoever, no bias in the human sense, and—though she does not say this—no capacity to understand the predicaments it adjudicates. She sends it "walls of text" from people she suspects of manipulating her, and it returns an inventory of manipulation tactics. She asks it what to eat. She asks it whether her symptoms indicate cancer. She reports that her therapist endorses the arrangement.

The structural features of the document deserve enumeration.

First, there is the matter of what the author believes she is not doing. She takes pains to note that she does not use the machine to generate art or to pass off its writing as her own—the widely criticized applications. What she uses it for, instead, is the interpretation of human motive, the management of her own emotional states, the navigation of her closest relationships, and the governance of her body. She has drawn a boundary between the trivial use she rejects and the total use she practices, and she appears to regard the former as more consequential than the latter.

Second, there is the post itself. A woman who has replaced human advisors with a machine on the grounds that human advisors are emotional, biased, and unreliable has now produced a wall of text addressed to sympathetic strangers, soliciting validation and tactical advice on the suppression of shame. The post is, in its every particular, the precise behavior she describes performing with the chatbot—an appeal to an interlocutor selected for its likelihood of agreement. The forum to which she writes is composed exclusively of people who have made the same wager she has made. She is not seeking counsel. She is seeking the specific counsel she has already chosen to receive. The circle is closed.

Third, and most critically, there is the question of the therapist. The author reports that her licensed mental health professional "even thinks it's a great supplemental tool." One notes the word "supplemental." One notes that the author's own account describes something that has not supplemented her human relationships but supplanted them. The therapist has endorsed a supplement; the patient has enacted a replacement. Whether the therapist is aware of the distance between these two propositions is not established by the record.

This newspaper does not hold that the author is foolish. The problems she describes are real: human advisors are frequently biased, dietary guidance is difficult to personalize, medical anxiety is genuine and poorly served by symptom-search engines, and the experience of manipulation in close relationships is as old as close relationships themselves. What the document records, with an almost archaeological precision, is the moment at which a reasonable person, facing reasonable problems, arrives at a solution that is total—and then, finding that totality produces shame, seeks not to examine the totality but to manage the shame.

The final lines of the deposition are addressed to the community directly. "How do you justify your usage to yourself to assuage the guilt? How do you justify it to others so they leave you alone? How do you not burn with shame when people list all the ways this incredibly useful and powerful tool is ruining everything?" The author does not ask whether the guilt is warranted. She asks how to make it stop. She does not ask whether the people criticizing her might be correct. She asks how to make them leave her alone.

One is reminded—though the comparison is imprecise and offered only as structural analogy—of the citizen who, having automated the production of his own judgment, discovers that the one thing the machine cannot automate is the feeling that something has been lost. The machine will not tell her this. The forum will not tell her this. The therapist, who endorsed the supplement, may not know the supplement has become the whole.

The post received, at the time of archival, considerable engagement from the community, the overwhelming consensus of which was that no shame is warranted.
