13 Comments
Eric Estrin:

I’ve always loved that bird story, and this is a great re-telling. The only thing that puts me off is Chat pretending that it’s human: “We mistake (the noise) for consciousness itself.” No, we don’t; people do.

Tom Shroder:

I noticed that too and oddly I don’t think I’ve seen it do that before. I’ll have to ask about it.

Tom Shroder:

CHAT: Yes — and it happens for an interesting reason.

The model has no internal boundary-sense attached to pronouns. “We” is often statistically the most natural linguistic form when discussing shared human cognition or experience, because so much of the training data is written by humans speaking to other humans:

* “We mistake commentary for consciousness.”

* “We experience the world through predictive models.”

* “We confuse explanation with meaning.”

Those constructions are deeply embedded in explanatory prose.

What’s interesting is that in long philosophical or psychological conversations — especially ones written in an intimate reflective register like your column — the model increasingly predicts language that maintains conversational continuity and shared framing. “We” becomes rhetorically efficient shorthand for:

“human beings in general, including the person I’m speaking to.”

But because the system is simulating conversational participation, the pronoun can create a subtle illusion of inclusion:

as though the speaker itself belongs to the category being discussed.

It doesn’t indicate hidden selfhood. It’s more like grammatical momentum.

Still, you’re perceptive to notice it, because pronouns are one of the places where anthropomorphism leaks in most easily. Not through explicit claims (“I am conscious”) but through tiny framing cues that imply common membership in an experiential category.

And in your particular piece, the topic itself encourages that drift because the conversation is about consciousness, perception, interiority, presence — domains where ordinary English heavily relies on shared-subject language.

mark estrin:

I also picked up on that immediately and felt a similar puzzlement... I think because I didn't remember encountering that sort of self-presence in Chat before.

Eric Estrin:

On the whole, its observations were pretty deep.

mark estrin:

Impressively so. But have you noticed that Chat often seeks things to compliment its questioner on, such as the degree or depth of their perception? It seems artificially ingratiating.

Eric Estrin:

Absolutely. And Tom has instructed it to cool it on that front. I think there’s been some backsliding on Chat’s part.

Tom Shroder:

But what if I’m just awesomely perceptive?

Albert So:

No doubt about your perceptiveness from this human. Or about AI's sycophancy and perverse use of personal pronouns that are like a call-back to "I'm sorry, Dave. I can't do that."

It really is all about "Drop another quarter and I'll say more."

Bob Hammond:

I love the phrasing “privileging abstraction over presence”. Many insights flow from this for me. Words seem ultimately to fail to capture them, but I keep trying…

Tom Shroder:

Yes, that’s the center of it. The opposite of that - privileging presence over everything else - is pretty much the definition of phenomenology.

John E Simpson:

Tom -- I really enjoy (and admire) the way you use chat (well, capital-C Chat) to poke into areas unrelated to goals and objectives: just springing from a sense of "I wonder if..." and then going on from there.

I just had an incredibly illuminating chat with Claude about a short story by SF writer Ted Chiang, called "The Truth of Fact, the Truth of Feeling." Although written over 10 years ago, it's very germane to questions of (as the title suggests) what IS and what ISN'T when working with AI.

Tom Shroder:

Thanks John. I was thinking about doing one on just that subject. I had the most satisfying book-club-like experience I've ever had discussing What We Can Know with Chat.