If You Can’t Write It Down, You Don’t Know What You Want

Here’s the uncomfortable truth I keep running into: most business instructions are hopelessly vague. “Make it more engaging.” “Be more professional.” “This feels off-brand.” These instructions fail for AI. But here’s the thing people don’t want to hear: they also fail for humans. We just nod and pretend we understand because we’ve been socially conditioned to do so. The junior designer who gets told “make it pop” doesn’t know what you mean. They just iterate until you stop complaining. That’s not communication. That’s a guessing game with someone else’s time.

The discipline of applied AI forces you to confront this. When you write a prompt, you can’t hide behind vague language. The machine will take you literally. “Be more charismatic” means nothing. “Raise your voice at the hook, talk 20% faster during the story, nod when the other person is speaking, pause for two beats before the punchline” means everything. One is a wish. The other is a set of instructions someone (or something) can actually follow.

This is what I call Observable Behavior Engineering: translating vague intent into specific, measurable actions that both humans and machines can execute consistently. The word “observable” is doing a lot of work in that phrase. If you can’t observe whether someone did it, it’s not a real instruction. “Be creative” is not observable. “Generate three alternative approaches and explain the tradeoffs of each” is observable. The gap between those two statements is where most AI projects fail.
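A quick litmus test for “observable”: could a program check whether the instruction was followed? A minimal sketch of that idea (the checker and its heuristics are my own illustration, not from the text):

```python
import re

# "Be creative" is not observable: there is nothing to check.
# "Generate three alternative approaches and explain the tradeoff
# in each" is observable: the output's structure can be verified.

def meets_criteria(output: str) -> bool:
    """Illustrative heuristic: the output must contain three labeled
    approaches, each with a stated tradeoff."""
    approaches = re.findall(r"Approach \d+:", output)
    tradeoffs = re.findall(r"Tradeoff:", output)
    return len(approaches) == 3 and len(tradeoffs) == 3

draft = (
    "Approach 1: cache results. Tradeoff: stale data.\n"
    "Approach 2: precompute nightly. Tradeoff: storage cost.\n"
    "Approach 3: compute on demand. Tradeoff: request latency.\n"
)
print(meets_criteria(draft))  # True: the instruction was checkable
```

If you can’t write a check like this, even in principle, the instruction is a wish, not a specification.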

The quality of your AI output is directly proportional to how specifically you can describe what “good” looks like. This is also why the best applied AI practitioners tend to come from operations, teaching, engineering, or behavioral science backgrounds. They’re used to being precise. They’ve spent years translating fuzzy goals into concrete steps. A great teacher doesn’t say “learn math.” A great teacher says “solve these ten problems, show your work, and explain your reasoning for the ones you found hardest.” That’s the same skill.

Every AI prompt is a training document. Every training document is a prompt. The discipline is identical:

  1. Specify the input (what will the AI or person receive?).
  2. Specify the output (what should they produce?).
  3. Specify the criteria (how do you know if it’s right?).
  4. Show examples (what does good look like? what does bad look like?).
  5. Define edge cases (what happens when it’s ambiguous?).

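The five steps above translate directly into a reusable delegation template. A minimal sketch in Python (the field names and the sample task are my own, not from the text):

```python
from dataclasses import dataclass

@dataclass
class TaskSpec:
    """One delegated task, specified by the five steps."""
    input_spec: str        # 1. what the AI or person receives
    output_spec: str       # 2. what they should produce
    criteria: list[str]    # 3. how you know it's right
    examples: list[str]    # 4. what good (and bad) looks like
    edge_cases: list[str]  # 5. what to do when it's ambiguous

    def to_prompt(self) -> str:
        """Render the spec as a plain-text brief for a human or an agent."""
        lines = [
            f"INPUT: {self.input_spec}",
            f"OUTPUT: {self.output_spec}",
            "CRITERIA:", *[f"- {c}" for c in self.criteria],
            "EXAMPLES:", *[f"- {e}" for e in self.examples],
            "EDGE CASES:", *[f"- {e}" for e in self.edge_cases],
        ]
        return "\n".join(lines)

spec = TaskSpec(
    input_spec="A customer complaint email",
    output_spec="A three-sentence reply: acknowledge, resolve, follow up",
    criteria=["Names the specific issue", "Offers one concrete next step"],
    examples=["Good: 'Your order shipped late; here is a refund link.'"],
    edge_cases=["If the email contains a legal threat, escalate instead of replying"],
)
print(spec.to_prompt())
```

The point of the structure is that nothing can be skipped silently: an empty `criteria` or `edge_cases` list is visible at a glance, which is exactly where most delegation fails.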
If you follow those five steps for every task you delegate (to a human or an agent), the quality of what comes back will transform overnight. Most people skip steps three through five entirely and then wonder why the output isn’t what they wanted. The output wasn’t wrong. The instructions were incomplete.

This connects directly to the-dictator-of-truth and self-improving-business. Observable behavior engineering is the skill that makes both truth management and feedback loops actually work. If your truth documents are vague, your agents will be vague. If your feedback criteria are subjective, your systems can’t improve. You can’t build a self-improving business on top of fuzzy definitions. The feedback loop needs something concrete to measure against. “Did the agent follow the seven observable criteria?” is a question you can answer. “Did the agent do a good job?” is not.
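“Did the agent follow the observable criteria?” is answerable precisely because it reduces to a checklist of pass/fail predicates. A minimal sketch of that kind of feedback check (the rubric entries are hypothetical stand-ins, not criteria from the text):

```python
def evaluate(output: str, rubric: dict) -> dict:
    """Score an agent's output against named, checkable criteria.
    Returns a per-criterion pass/fail map a feedback loop can act on."""
    return {name: check(output) for name, check in rubric.items()}

# Hypothetical rubric: each entry is observable, so pass/fail is unambiguous.
rubric = {
    "greets_by_name": lambda out: out.startswith("Hi "),
    "under_50_words": lambda out: len(out.split()) <= 50,
    "offers_next_step": lambda out: "next step" in out.lower(),
}

reply = "Hi Dana, your refund is processed. Next step: check your inbox."
scores = evaluate(reply, rubric)
print(scores)  # every value is True or False, never "pretty good"
```

A subjective rubric (“was the reply friendly?”) can’t be written as a predicate at all, which is the point: if you can’t express the criterion as a check, the loop has nothing to improve against.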

There’s a parallel here to how I think about spiritual discernment. “God, help me be better” is vague. “God, give me the discipline to respond with patience when I feel defensive in meetings” is specific. The specificity is the practice. It forces you to confront what you actually mean, what you actually want, and whether you’re willing to do the work of defining it. I’ve found that the people who are most precise in their prayers tend to be most precise in their prompts. Both require the same courage: the willingness to stop being vague and say exactly what you need.

Key Takeaway

The quality of your AI output is capped by the specificity of your instructions, and learning to translate vague intent into observable, measurable actions is the single most transferable skill in the age of AI.

References

  • Mager, Robert F. Preparing Instructional Objectives. The foundational text on writing objectives in terms of observable, measurable behavior.