The Sculptor's Error
Years ago in an archive, I learned a lesson I now see everywhere. I was looking for a photo and couldn't find the right one. I kept telling the archivist what I wanted to see. It was a total failure. I only found the perfect image when I started telling him what I didn't want to see: no portraits, nothing after 1945, no one smiling at the camera.
By setting boundaries, by saying "no", I got to the right "yes."
This is the sculptor's principle. A sculptor reveals the statue by chipping away the stone they don’t need. And it's the single biggest thing we’re getting wrong in how we teach, learn, and work with artificial intelligence. We are so focused on describing the statue we want, we’ve forgotten that real clarity comes from removing what we don’t want.
The Problem with Our Prompts
You’ve felt this yourself. You ask an AI tool for an image of "professionals in an office," and you get back a perfectly bland, generic stock photo. You ask it to write about a topic, and it gives you a correct but boring summary.
The machine isn't failing. Your instructions are.
AI tools work with averages. Give one a vague, positive-only command and it delivers the most statistically average, clichéd result possible, because that's the safest bet.
The path to a better, more interesting result is through negative constraints. Instead of just asking for what you want, you need to define the boundaries of what you don’t want.
"Show me a team in an office, but make the lighting moody, not bright and flat."
"Write about the French Revolution, but don't mention the royal family."
This is how experts think. An expert designer knows which clichés to avoid. A great teacher knows which common misconceptions to clear up first. This skill, the art of intelligent exclusion, is a cornerstone of critical thinking. And we are letting it get dull.
What We Are Outsourcing
The real danger here is subtle. It’s not about getting bad answers from AI. It's about slowly forgetting how to ask good questions.
When a student asks an AI to "write an essay," they outsource the entire process of inquiry: the struggle, the research, the dead ends, and the discovery. They get a finished product, but they sacrifice the understanding that only comes from the work itself. We are training a generation to ask for the final answer while skipping the intellectual labor that builds real knowledge.
This creates a dangerous dependency. We risk becoming excellent at getting plausible-sounding answers from a machine, but incompetent at formulating the unique, challenging questions that drive human ingenuity forward. We are trading the skill of thinking for the convenience of an answer.
A Way Forward
So what do we do? We have to teach this skill deliberately.
In schools, in the workplace, and in our own personal use, the focus must shift from just "prompting" to "constraining." We need to treat interacting with these tools like a Socratic dialogue, not a vending machine. Ask a question, get an answer, then refine the next question with sharp, critical limits.
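As a rough sketch of that rhythm, assuming the same OpenAI SDK as above, here is a hypothetical loop in which the human's only job is to read each answer and name what the next one must exclude. The helper name and prompts are mine:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def ask(question: str, exclusions: list[str]) -> str:
    """Ask one question, chipping away everything in `exclusions`."""
    limits = " ".join(f"Do not discuss {e}." for e in exclusions)
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": f"{question} {limits}".strip()}],
    )
    return response.choices[0].message.content

exclusions: list[str] = []
question = "What drove the French Revolution?"
for _ in range(3):
    print(ask(question, exclusions))
    # The human work lives here: spot the cliché, name it, exclude it.
    exclusions.append(input("What should the next answer leave out? "))
```

The loop itself is trivial; the thinking happens at the input() line, where you decide what to say "no" to next.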
The goal is to use AI as a sparring partner that sharpens our thinking, not as a servant that does our thinking for us. This requires us to value the process of getting to the answer as much as the answer itself. It means embracing the difficult work of exclusion, of saying "no," and of chipping away at the stone.
The human mind is not a database for storing answers. It is an engine for inquiry.
Let's make sure we are building a world that fuels that engine, not one that lets it idle.
Phil