You’re sitting in a doctor’s office. Something hurts — somewhere around here, no, more like over here, kind of a pulling pain, sometimes sharp, mostly in the mornings, though it happened last night too. The doctor nods, jots something down in your chart. You leave. A week later you read the report: “Patient complains of periodic pain in the epigastric region with radiation to the right hypochondrium.” You didn’t say any of those words. But the doctor translated you — from human language into medical language. And something got lost in that translation.
Now picture something different. Before your visit, you spent twenty minutes with ChatGPT, described your symptoms, got a list of clarifying questions, structured your complaints, and showed up with a document the doctor read in one minute. The doctor asked three targeted questions instead of twenty vague ones. The appointment took fifteen minutes instead of forty. The diagnosis — more accurate.
Between you and another person, there has always been a translator — your own language. AI simply makes that translation visible.
Language is a bug, not a feature
Natural language is lossy compression. You think in complex multidimensional sensations, emotions, images — and what comes out is a linear sequence of words. It’s like trying to describe a Monet painting over the phone: something gets through, but most of it is lost.
And this isn’t abstract philosophy. It’s an everyday catastrophe. A manager asks to “move faster” — the developer hears “cut the tests.” A wife says “I don’t need anything” — the husband takes those words at face value. A doctor says “results are within normal range” — the patient leaves happy, not knowing that “within normal range” can mean “borderline condition requiring monitoring.”
The third agent is already in the room
Here’s what’s interesting: AI as a communication intermediary isn’t a hypothesis. It’s already happening — we just don’t call it that.
When you google your symptoms before seeing a doctor, you’re using a search engine as a translator from everyday language to medical terminology. When you use Grammarly for a business email, that’s AI translating your thoughts into the corporate dialect. When GPT helps you draft a complaint to your bank, it’s translating your anger into legally sound text.
In 2020, researchers at Stanford and Cornell (Hancock, Naaman, and Levy) coined the term AI-MC — AI-mediated communication. Not chatting with a bot, but a conversation between people where AI acts as an invisible intermediary: editing, supplementing, translating, and adapting messages before they reach the recipient. They warned it would change the very nature of human communication. Six years later — they were right.
The difference between 2020 and 2026 is scale. Back then, AI-MC was an academic term. Now people won’t send an important email without running it through Claude or GPT. An HR director I know told me that at her company, 80% of managers use AI to “repackage” feedback before delivering it to employees. One person’s feedback, edited by a machine, read by another person. Three agents — one conversation.
What the AI layer adds to a conversation
The point isn’t that AI “improves grammar.” That’s trivial. The point is context — the kind people can’t put into words.
Without the AI layer: thought → compressed into words → text without context → decoding (with errors).

With the AI layer: thought → compressed into words + emotions, culture, history, context → enriched message.
An AI intermediary can add to your message what you couldn’t or didn’t want to articulate yourself:
- Emotional context. “He wrote ‘ok,’ but his tone has been getting shorter over the past three days — he’s probably irritated, not agreeing.”
- Cultural differences. A Japanese colleague wrote “that’s an interesting idea” — AI flags that in Japanese business culture, this often means a polite refusal.
- Interaction history. “You discussed this issue three months ago and didn’t reach agreement — here’s a summary of those arguments.”
- Register translation. A patient says “I feel bad,” AI translates for the doctor: a structured complaint with a 1-to-10 scale, duration, triggers, and accompanying symptoms (sketched in code right after this list).
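To make the register-translation idea concrete, here is a minimal sketch of what that intake step might look like. Everything in it is hypothetical: the schema, the field names, and the prompt are illustrations rather than any vendor’s actual API, and the LLM call itself is elided.

```python
from dataclasses import dataclass, field

@dataclass
class StructuredComplaint:
    """Hypothetical schema an AI layer might hand the doctor."""
    chief_complaint: str           # the patient's own words, kept verbatim
    severity_1_to_10: int          # filled in via a clarifying question
    duration: str                  # e.g. "about a week, mostly mornings"
    triggers: list[str] = field(default_factory=list)
    accompanying_symptoms: list[str] = field(default_factory=list)

def translate_register(raw_message: str) -> StructuredComplaint:
    """Sketch of the everyday-language -> clinical-register step.

    A real implementation would send a prompt like the one below to an
    LLM and validate the reply against the schema; here the output is
    hand-filled so the structure stays visible.
    """
    prompt = (
        "You are a pre-visit intake assistant. Restate the patient's "
        "complaint as structured clinical data. Ask follow-up questions "
        "for severity (1-10), duration, triggers, and accompanying "
        f"symptoms if they are missing.\nPatient: {raw_message}"
    )
    # ... LLM call elided; an illustrative result:
    return StructuredComplaint(
        chief_complaint=raw_message,
        severity_1_to_10=6,
        duration="about a week, mostly mornings, once at night",
        triggers=["worse after meals"],
        accompanying_symptoms=["pulling pain, occasionally sharp"],
    )

print(translate_register("I feel bad, something pulls on my right side"))
```

The structure, not the model, is the point: the doctor reads five labeled fields instead of reverse-engineering twenty minutes of “somewhere around here.”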
Medicine — the first proving ground
Medicine became the natural first territory for the AI layer because the cost of miscommunication here is measured literally in lives.
In 2025, a Mayo Clinic study showed that patients who used an AI assistant to prepare for their appointment asked 60% more clinically relevant questions and understood their treatment plan 35% better. Doctors in the experimental group noted that “for the first time in their careers, patients came prepared.”
But medicine is just the beginning. Next — everywhere communication matters. And it matters everywhere.
The dark side of the translator
This is where things get truly interesting. Because every intermediary has its own interests. Or, worse, the interests of whoever built it.
When AI rewrites your message to sound “more professional,” who decides what “professional” means? When AI smooths over a conflict in a chat thread, is it helping you — or preventing a conflict that needed to happen? When AI “adapts” your message for the recipient’s culture, is it translating — or censoring?
Here are specific scenarios that are already raising alarms:
Loss of authenticity. If both sides of a conversation run their messages through AI, who’s actually talking to whom? Two people — or two optimizing algorithms that have converged on mutually agreeable phrasing? The exchange turns into a ping-pong match between models, with humans reduced to formal approvers of text.
Emotional sterilization. AI defaults to smoothing things out. Making things politer, softer, “more constructive.” But sometimes anger is information. Bluntness is a signal. A trembling voice is data that needs to be transmitted. The AI layer risks turning all human communication into one flat, safe, dead tone.
Power asymmetry. Whoever controls the AI layer controls the conversation. A corporation can configure AI to make employee complaints sound “less harsh.” A government can filter dissent at the drafting stage. This isn’t censorship — it’s pre-censorship, and it’s invisible.
The better AI translates between people, the less people learn to understand each other on their own. Autocorrect killed spelling. GPS killed navigation. AI communication risks killing empathy — the ability to decode someone else’s words yourself, with all the mistakes and guesses that make communication human.
So is this the future or a dystopia?
Both. Like everything that truly matters.
Cars killed thousands of people — and saved millions of hours of life. Antibiotics created superbugs — and defeated the plague. The AI communication layer will make conversations more efficient — and possibly less human.
The key question isn’t “will this happen” — it’s on what terms. Will the AI intermediary be transparent (both participants see what was changed)? Will it be controllable (the user decides what to filter)? Will it be neutral (no hidden agenda from the developer)?
Right now the answer to all three is “no.” And this needs to change before the AI layer becomes the standard. Because after that — it’ll be too late.
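What would “transparent” mean in practice? One minimal sketch, using nothing beyond Python’s standard difflib: the intermediary attaches a change report to every mediated message, so both sides see exactly what was rewritten. The message texts here are invented.

```python
import difflib

def change_report(original: str, rewritten: str) -> str:
    """Unified diff between what you wrote and what the AI layer sent.

    A transparent intermediary would attach a report like this to every
    mediated message instead of rewriting silently.
    """
    diff = difflib.unified_diff(
        original.splitlines(),
        rewritten.splitlines(),
        fromfile="what you wrote",
        tofile="what was sent",
        lineterm="",
    )
    return "\n".join(diff)

print(change_report(
    "This deadline is absurd and I won't hit it.",
    "I have concerns about the feasibility of this deadline.",
))
```

Transparency is the easy one: it’s a few lines of code plus the will to ship them. Controllability and neutrality are harder, because they are policy decisions, not functions.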
By 2030, at least 60% of business correspondence at Fortune 500 companies will pass through an AI intermediary — a layer built into email clients, messengers, and CRM systems that automatically adapts tone, adds context, translates between professional registers, and smooths over potential conflicts.
At least one major platform (Microsoft, Google, or Salesforce) will release a product explicitly positioned as an “AI Communication Layer” — not a writing assistant, but specifically an intermediary between sender and receiver, operating in real time on both sides of the conversation. This will spark a new wave of debate about communication authenticity and at least one major scandal involving an AI layer that distorted a message’s meaning with serious consequences.