Introduction
A single word on a screen can feel surprisingly heavy.
“No.” can land very differently than “Not today.”
Even when the meaning is the same, the experience isn’t.
This Doodle Loop compares human-to-human, text-only communication (message boards, email, chat, text messages) with human-to-AI, text-only communication, and explores why the differences matter not just for interpersonal relationships but also for writing, online discourse, and business communication.
It expands on the hallucination concept from my book, The Doodle Principle—How AI Becomes Your Partner in Curiosity. In the book, I describe how humans “hallucinate” in ways similar to AI—not by inventing facts, but by filling in missing context.
The key difference is that humans usually leave tells: tone, pauses, hedging, softening language, facial expression, and timing. AI generally does not—unless we deliberately design it to.
That absence becomes important when all we have is text on a screen—and when words have to do all the work.
Observation
While reading and responding to Reddit or Substack posts, chatting with colleagues on Teams, or interacting with AI, the signal is often identical: words on a screen.
And yet the structure of the language alone often shapes how it lands.
For example, “No.” feels different than “Not today, I have other plans.”
Even when you know the response came from AI, the emotional reaction can change depending on what you expected. A direct response can feel abrupt or harsh when you were anticipating something buffered or explanatory.
Humans also communicate uncertainty and confidence through text in ways that closely resemble AI:
“I think this might be true…”
“This is true.”
“This is true because…”
In many cases, the form of the language does as much work as the content itself. Knowing who—or what—is on the other end can influence how a message is perceived, but language still triggers the same interpretive systems in the people who read it.
Wonder
This got me thinking about how many channels humans normally rely on to communicate, and how much gets stripped away in text-only interactions.
Text is just one channel.
There’s also voice, which adds pitch, pacing, emphasis, and silence.
Facial expression.
Posture.
Proximity.
Touch.
We use different combinations of these depending on context and relationship.
Context matters enormously.
Words in a novel are received differently than words in a legal agreement.
Words on a sign are processed differently than words in a text message.
When we read text that is directed at us, the psychological stakes change.
So what’s happening cognitively when all of that collapses into a few words on a screen?
Why do some people agonize over every word in a six-word text or a brief email to their boss?
Why do the same words land differently when they come from:
A close friend or trusted colleague
Someone we don’t know well
A stranger
Or an AI system
And why does the experience blur when we momentarily forget—or aren’t fully sure—whether the other side is human?
AI Exploration
I had an in-depth conversation with AI about the psychology of text-only communication. Here’s the synthesized view.
Human communication evolved first as embodied interaction. Spoken language developed alongside facial expression, gesture, posture, and proximity—signals that helped regulate safety, intent, belonging, and trust. Writing came much later and removed most of those cues.
When we read text that is directed at us, our brains instinctively look for agency.
Agency, here, is the sense that there is a mind on the other end with intent, independence, and the capacity to choose, respond, or be influenced.
With humans, agency is real even when unseen. With AI, agency is simulated—but language still triggers the same interpretive systems in the people who read it.
The less familiar we are with the “mind” on the other end, the more work language has to do to deliver the message. Different people have different tolerances for brevity, directness, and ambiguity, but in general, when familiarity is low, word choice carries more emotional weight.
When familiarity is high, ambiguity is buffered by shared history.
Personally, I notice that variations in tone or style matter far more when I know the sender’s usual patterns. A short response from someone I know well is easy to contextualize. The same response from a stranger—or an AI—feels more loaded.
Text, in other words, has to carry the weight of all the missing cues: tone, expression, timing, posture, and the opportunity for immediate clarification.
AI occupies a unique psychological space. When we know it’s AI, we may consciously discount agency—but our nervous system still reacts to the language itself. When we forget, or when the distinction feels fuzzy, we default to the same social interpretation systems we use with unfamiliar humans online.
Understanding
My curiosity about AI and written communication has helped me better understand why word choice matters so much.
I now understand why I sometimes react to simple words on a screen the way I do. And why people sometimes misinterpret what I say—not because of the words I typed, but because those words broke a pattern I wasn’t aware of.
Text-only communication removes many of the cues humans rely on to interpret intent. What remains is language carrying more responsibility than it was ever designed to carry on its own.
AI didn’t create this problem. It revealed it.
By noticing how we react to words on a screen—whether they come from a person or a machine—we learn something useful about how human communication actually works.
The ideas and concepts in this article are the author’s own. AI assisted with ideation and editing.

