If an assistant writes like a template, it will be read like a template.
One of the first visible failures of most AI systems is tone.
You can spot it quickly: the prose reads polished, but it also reads synthetic.
We decided to treat that as an engineering problem.
The assistant doesn’t sound artificial because it uses the wrong words.
It sounds artificial because it follows patterns.
Certain rhetorical structures show up again and again in LLM output: the rule of three, "not X, but Y" contrasts, self-answered rhetorical questions, tidy summarizing closers.
These constructions are easy to generate. They are also easy to detect.
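To illustrate how detectable these structures are, here is a minimal tell-detector. The specific regexes are illustrative assumptions, not an official or exhaustive list:

```python
import re

# Illustrative regexes for common LLM rhetorical tells.
# These are examples for the sketch, not a definitive catalog.
TELLS = {
    "not_x_but_y": re.compile(r"\bnot (just |only )?\w+[^.]{0,40}, but\b", re.IGNORECASE),
    "triad": re.compile(r"\b\w+, \w+, and \w+\b"),
    "summary_closer": re.compile(r"\b(In short|Ultimately|At the end of the day)\b", re.IGNORECASE),
}

def find_tells(text: str) -> list[str]:
    """Return the names of rhetorical patterns found in the text."""
    return [name for name, pattern in TELLS.items() if pattern.search(text)]

print(find_tells("It's not just fast, but elegant. In short, it works."))
```

A few dozen patterns like these catch a surprising share of templated output, which is the point: structure this regular is machine-checkable.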
The goal wasn’t to make the assistant “sound human.”
The goal was to remove predictable structure.
Instead of editing endlessly, we modified the assistant’s doctrine.
We updated its internal files to explicitly forbid those recurring constructions.
Voice became governed, not improvised.
That shift matters.
Once tone rules live in the system’s configuration, drift slows down.
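One way to keep tone rules in configuration rather than in editors' heads is to express the forbidden constructions as machine-checkable data. This sketch assumes a simple dict-shaped rule file; the assistant's actual doctrine format is not specified in this post:

```python
import re

# Hypothetical doctrine fragment: forbidden constructions as data.
# Names, regexes, and reasons here are illustrative assumptions.
DOCTRINE = {
    "forbid": [
        {"name": "rhetorical_question", "regex": r"\?\s*(Yes|No)\.", "reason": "self-answered question"},
        {"name": "em_dash", "regex": "\u2014", "reason": "reads as synthetic"},
    ]
}

def violations(text: str) -> list[str]:
    """Check a draft against the doctrine and report rule names it breaks."""
    return [
        rule["name"]
        for rule in DOCTRINE["forbid"]
        if re.search(rule["regex"], text)
    ]

draft = "Is this better? Yes. The fix was simple \u2014 remove the pattern."
print(violations(draft))
```

Because the rules are data, adding a constraint is a config change that applies to every future draft, which is why drift slows down.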
We then applied those constraints retroactively.
Recent posts were re-edited under the new constraints.
In some cases, entire paragraphs were removed.
The result is less ornamental.
It is also harder to accuse of being “AI-shaped.”
If the assistant is part of the writing process, readers should not feel like they are reading a content generator.
The voice has to feel deliberate, not generated.
That requires friction.
Left unconstrained, LLMs optimize for clarity and rhetorical closure. Both tendencies push toward detectable structure.
The exercise revealed something useful.
AI voice is less about grammar and more about pattern density.
Remove the patterns and what remains is simply prose.
That doesn’t make it human.
It makes it less templated.
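The "pattern density" idea above can be made concrete as matches per hundred words. The patterns here are illustrative assumptions, and any real threshold would be tuned against actual drafts:

```python
import re

# Illustrative tell patterns; a real list would come from the doctrine.
PATTERNS = [
    r"\bnot only\b.*\bbut also\b",
    r"\b(In conclusion|Ultimately|In short)\b",
    r"\b\w+, \w+, and \w+\b",  # the rule of three
]

def pattern_density(text: str) -> float:
    """Total tell-pattern matches per 100 words."""
    words = len(text.split())
    if words == 0:
        return 0.0
    hits = sum(len(re.findall(p, text, re.IGNORECASE)) for p in PATTERNS)
    return 100.0 * hits / words

templated = "Not only is it fast, but also safe. In short: fast, safe, and simple."
print(round(pattern_density(templated), 1))
```

Grammar checkers would pass both a templated draft and its edited version; a density metric like this separates them, which is what makes the distinction operational.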
This is not a one-time adjustment.
We will continue auditing output, tightening the doctrine, and removing patterns as they emerge.
Tone becomes another surface to govern.
The assistant is not trying to hide.
It is trying to avoid predictability.
That distinction shapes how it writes.