Everyone is talking about what AI can do. Almost nobody is talking about what it should do in a specific context. That gap is where taste lives, and taste is the last defensible skill.
I deploy AI systems for a living. Not demo-grade prototypes but production systems that real people use every day to make real decisions. And what I've learned is that the technology is the easy part. The hard part is knowing when to use it, how much of it to use, and when to leave the human in the loop.
The demo trap
There's a phenomenon I see constantly: someone sees a demo of an AI tool, gets excited, and wants to deploy it everywhere. Summarize every email. Generate every report. Automate every decision. And the technology can do all of those things: poorly at first, and then well enough to be dangerous.
"Well enough to be dangerous" is where most AI deployments live. The output looks right. The numbers seem plausible. The summary captures the main points. But there's a subtle wrongness that only becomes visible when someone with domain knowledge reads it carefully. And the whole promise of AI is that nobody has to read carefully anymore.
That's the trap. The efficiency gain is real, but it comes with an invisible cost: reduced scrutiny of the output. And reduced scrutiny is fine for low-stakes tasks but catastrophic for high-stakes ones.
Where taste comes in
Taste, in this context, means knowing the difference. Knowing which tasks can tolerate a 90% accuracy rate and which ones need 100%. Knowing where AI should draft and a human should edit versus where AI should execute autonomously. Knowing when the AI's confidence score is hiding uncertainty rather than reflecting precision.
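That last point is checkable. One way to see whether confidence scores reflect precision is a simple calibration table: bucket past predictions by stated confidence and compare each bucket to its observed accuracy. This is an illustrative sketch, not from the original; the function name and input shape are assumptions.

```python
from collections import defaultdict

def calibration_table(predictions: list[tuple[float, bool]]) -> dict[float, float]:
    """Group (confidence, was_correct) pairs into 0.1-wide buckets and
    report observed accuracy per bucket. A model whose 0.9 bucket is
    right only 70% of the time has confidence hiding uncertainty."""
    buckets: dict[float, list[bool]] = defaultdict(list)
    for confidence, correct in predictions:
        buckets[round(confidence, 1)].append(correct)
    # Observed accuracy per confidence bucket, sorted by confidence.
    return {c: sum(hits) / len(hits) for c, hits in sorted(buckets.items())}
```

If the bucket keys and their accuracies diverge, the score is a stylistic flourish, not a measurement.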
When I built the production cost calculator, I made a deliberate decision: the AI generates the estimate, but it also shows its reasoning. The user can see which dimensions it extracted from the drawing, which materials it assumed, which cost tables it referenced. Not because the AI is wrong (it's usually right) but because the human needs to stay in the loop enough to catch the cases where it isn't.
That transparency costs something. It makes the interface more complex. It slows the user down slightly. A tasteless implementation would hide all that and just show the number. Faster, cleaner, and eventually, inevitably, wrong in a way nobody catches until it becomes expensive.
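The pattern above amounts to a design rule: the estimate is not a bare number but a value that carries its own evidence. A minimal sketch of that idea, with hypothetical field names (the original calculator's actual schema is not shown here):

```python
from dataclasses import dataclass, field

@dataclass
class CostEstimate:
    """A cost figure that carries the reasoning behind it."""
    total: float
    extracted_dimensions: dict[str, float] = field(default_factory=dict)  # what the model read off the drawing
    assumed_materials: list[str] = field(default_factory=list)            # assumptions a reviewer can veto
    cost_table_refs: list[str] = field(default_factory=list)              # which price tables were consulted

    def render(self) -> str:
        """Show the number alongside its reasoning, not instead of it."""
        lines = [f"Estimated cost: {self.total:,.2f}"]
        lines += [f"  dimension {k} = {v}" for k, v in self.extracted_dimensions.items()]
        lines += [f"  assumed material: {m}" for m in self.assumed_materials]
        lines += [f"  cost table: {t}" for t in self.cost_table_refs]
        return "\n".join(lines)
```

A "tasteless" version would keep only `total` and drop everything a domain expert could challenge.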
The taste gradient
I think about AI deployment on a gradient:
- Full automation: the AI handles everything. Good for notifications, categorization, routing.
- AI-first with human review: the AI drafts, a human approves. Good for reports, summaries, recommendations.
- Human-first with AI assist: the human leads, the AI provides data and options. Good for strategy, hiring, financial decisions.
- Human only: some decisions shouldn't involve AI at all. Good for anything where the process of deciding is as important as the decision itself.
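The gradient above can be sketched as a tiny decision heuristic. The factor names (`reversible`, `high_stakes`, `process_matters`) are my assumptions, not a formula from the text; real placement is the judgment call the next paragraph describes.

```python
from enum import Enum

class Oversight(Enum):
    FULL_AUTOMATION = "AI handles everything"
    AI_FIRST = "AI drafts, a human approves"
    HUMAN_FIRST = "human leads, AI assists"
    HUMAN_ONLY = "no AI involvement"

def place_on_gradient(reversible: bool, high_stakes: bool, process_matters: bool) -> Oversight:
    """Hypothetical first-pass heuristic for where a task sits on the gradient."""
    if process_matters:          # deciding together IS the point (e.g. hiring debriefs)
        return Oversight.HUMAN_ONLY
    if high_stakes:              # errors are expensive or irreversible in impact
        return Oversight.HUMAN_FIRST
    if not reversible:           # cheap to get wrong, but hard to undo
        return Oversight.AI_FIRST
    return Oversight.FULL_AUTOMATION   # low stakes, easily undone: routing, tagging
```

A sketch like this is a conversation starter, not a substitute for the domain knowledge the placement actually requires.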
Knowing where a given task falls on this gradient isn't a technical problem. It's a judgment call. It requires understanding the domain, the stakes, the organizational culture, and the specific humans involved. That's taste.
The defensible skill
AI will keep getting better at the mechanics: faster inference, better accuracy, more capable models. But it won't develop taste. Taste requires context that lives outside the training data. It requires understanding what a specific organization values, what risks it can tolerate, and what its people are actually ready for.
The people who will thrive in an AI-saturated world aren't the ones who can build the most sophisticated models. They're the ones who can look at a business problem, look at the available AI capabilities, and make the right call about how much machine and how much human to put into the solution.
That's the job. Not building AI. Building the right amount of AI, in the right place, with the right guardrails. Everything else is engineering. This part is taste.