There's a pattern emerging in software development that should worry anyone who cares about building systems that last. A developer describes what they want to an AI tool, something plausible comes out, the tests pass, and the code ships. It feels like magic. But increasingly, that magic is revealing itself to be something else entirely: deferred problems, quietly accumulating interest until they come due at the worst possible moment.
This approach has earned a name that captures its essence perfectly: "prompt and pray." And while it might feel productive in the moment, it's creating a generation of codebases that look finished but are structurally fragile — systems that work until they don't, built on foundations their creators don't fully understand.
What makes prompt-and-pray development genuinely dangerous — not just mildly risky — is that the output looks right. The code runs. The tests you thought to write pass. Everything seems fine until you're six months into production and discover you've built on a foundation with subtle security holes, or architectural assumptions that collapse at scale.
The core problem is that this approach encourages developers to outsource their judgment, not just their keystrokes. There's a meaningful difference between using AI to accelerate work you understand versus using it to generate work you're hoping is correct. One pattern compounds your expertise over time. The other quietly erodes it.
Consider the now-archetypal example: the AI-generated authentication system. A developer prompts for "a JWT auth flow with user roles," receives something that looks complete, ships it, and discovers months later that the token validation has a subtle flaw. Perhaps it checks that a token exists but not that it was signed with the correct secret. Or the role checks happen client-side instead of server-side. The code looked authoritative — it had comments, it had structure — but it had a foundational misunderstanding baked in.
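The first flaw described above can be made concrete with a minimal sketch using only the standard library. The helper names here (`make_token`, `naive_validate`, `strict_validate`) are illustrative, not from any real codebase, and production code should lean on a maintained JWT library rather than hand-rolled crypto like this:

```python
# Illustrative sketch of the "checks the token exists, never verifies the
# signature" flaw. Hand-rolled HS256 for demonstration only.
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-secret"  # assumed server secret, for illustration


def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def _unb64(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))


def make_token(payload: dict, key: bytes) -> str:
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64(json.dumps(payload).encode())
    sig = _b64(hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"


def naive_validate(token: str) -> dict:
    # The flaw: decodes the payload and confirms the token parses,
    # but never checks the signature against the server secret.
    _, body, _ = token.split(".")
    return json.loads(_unb64(body))


def strict_validate(token: str, key: bytes) -> dict:
    # The fix: recompute the HMAC and compare in constant time.
    header, body, sig = token.split(".")
    expected = _b64(hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(_unb64(body))


token = make_token({"sub": "alice", "role": "user"}, SECRET)

# An attacker swaps the payload for an admin claim, keeping the stale signature.
header, _, sig = token.split(".")
forged = f"{header}.{_b64(json.dumps({'sub': 'alice', 'role': 'admin'}).encode())}.{sig}"

print(naive_validate(forged)["role"])  # the forged admin claim sails through
try:
    strict_validate(forged, SECRET)
except ValueError as exc:
    print("rejected:", exc)
```

Both functions look equally "finished" at a glance, which is precisely the point: only the validator you understand can be trusted.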
What makes this category of failure so instructive is that it would have been caught almost immediately with a single architectural question asked before prompting: "What are the security properties this system needs to guarantee?" That question forces you to form an opinion first. Then AI-generated code becomes something you evaluate against a standard, rather than something you accept as the standard.
How do you know if you've drifted into prompt-and-pray territory? The clearest sign is when you can't explain why code is structured the way it is — only that it works. If your answer to "why did you do it this way?" is "because that's what came out," you're already in trouble. Architecture requires intent, and intent requires understanding.
Another warning sign is using AI to avoid hard conversations rather than accelerate good ones. If you're prompting your way around questions like "what happens when this service goes down?" or "how does this scale to ten times the load?" — those questions don't disappear. They just get deferred to the worst possible moment, usually production at 2 AM.
A useful self-test is what might be called the "explain it to a skeptic" check. After building something with AI assistance, try explaining the critical parts to a colleague who's going to push back. Not to get approval, but to see where your explanation falls apart. If you find yourself saying "I'm not entirely sure why it does it this way, but it works," that's your indicator. That's the seam where prompt-and-pray crept in.
Your git history tells a story too. Are the commits showing a trail of decisions — where you tried something, changed your mind, and why? Or are they just checkpoints, mostly "add feature" and "fix bug" with no trace of reasoning? Good architectural work leaves evidence of thinking. If that evidence is absent, the thinking probably wasn't happening explicitly enough.
The goal isn't to slow down — it's to stay in the driver's seat while moving fast. Use AI to draft, to explore options, to write the boilerplate you already know how to write. But when you hit a decision point that shapes how the whole system fits together, that's where you stop and actually think. Those moments are rarer than people assume, and they're also where the most value is.
One concrete practice worth adopting is "design before you prompt." Before generating anything, spend five or ten minutes sketching out your mental model. It doesn't need to be formal — a rough diagram, a few sentences about the components and how they connect. The act of externalizing your thinking reveals the gaps, and gaps discovered before you write code are infinitely cheaper than gaps discovered after.
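One way to externalize a mental model in code rather than on paper is to write down the interfaces and invariants before generating any implementation. The following is a hypothetical sketch of that practice; the domain (a rate limiter), the names (`Decision`, `RateLimiter`, `InMemoryLimiter`), and the contracts are all illustrative assumptions, not anything from this article:

```python
# "Design before you prompt": the contract comes first, written by a human.
# Everything here is an illustrative example, not a prescribed design.
import time
from dataclasses import dataclass
from typing import Protocol


@dataclass(frozen=True)
class Decision:
    allowed: bool
    retry_after_s: float  # 0.0 when the request is allowed


class RateLimiter(Protocol):
    """Decisions made *before* any code is generated:
    - per-client token bucket, refilled continuously
    - check-and-consume is a single operation, never two calls
    - on denial, report how long the client should wait
    """

    def check(self, client_id: str, cost: int = 1) -> Decision: ...


class InMemoryLimiter:
    """A small reference implementation, evaluated against the contract."""

    def __init__(self, capacity: float, refill_per_s: float):
        self.capacity = capacity
        self.refill = refill_per_s
        self._state = {}  # client_id -> (tokens, last_timestamp)

    def check(self, client_id: str, cost: int = 1) -> Decision:
        now = time.monotonic()
        tokens, last = self._state.get(client_id, (self.capacity, now))
        # Continuous refill since the last check, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.refill)
        if tokens >= cost:
            self._state[client_id] = (tokens - cost, now)
            return Decision(True, 0.0)
        self._state[client_id] = (tokens, now)
        return Decision(False, (cost - tokens) / self.refill)


limiter = InMemoryLimiter(capacity=2, refill_per_s=1.0)
print(limiter.check("alice").allowed)  # first request fits in the bucket
print(limiter.check("alice").allowed)  # second request drains it
print(limiter.check("alice"))          # third is denied, with a retry hint
```

Whether the eventual implementation is typed by you or generated from a prompt, the `Protocol` and its docstring are the standard you evaluate it against, which is exactly the posture the article argues for.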
Study systems that failed, not just systems that succeeded. Post-mortems, incident reports, architecture retrospectives — these are some of the richest educational material in software development, and they're underutilized. Understanding why a system broke under load, or why a security assumption turned out to be wrong, builds the kind of intuition that no amount of prompt engineering can substitute for.
Architectural thinking is a muscle, and like any muscle, it atrophies when you stop using it. Be deliberate about the problems you let yourself struggle with. The temptation to reach for AI the moment something feels hard is real, but that friction — that moment of not immediately knowing the answer — is often exactly where the learning lives.
Sustaining curiosity in a fast-paced environment is genuinely hard because speed creates pressure to close loops rather than open them. This is a real tension, not just a discipline problem — the environment actively works against thoughtful development.
What helps most is building small rituals of genuine questioning into your workflow, rather than treating curiosity as something you summon on demand. Something as simple as ending each significant coding session with one question you don't yet know the answer to. Not a task, not a bug to file — just a question. That habit keeps the investigative reflex alive even when the calendar is full.
Stay a little uncomfortable with how much you trust AI tools. Not paranoid, not constantly second-guessing every line, but maintaining a baseline of healthy irreverence. The developers who get the most out of these tools long-term are the ones who never fully stop being skeptical of them. They treat that skepticism as a professional value, not an inconvenience to overcome.
Most importantly, stay connected to other engineers who challenge you. The fastest way to drift into passive consumption is to work in isolation with AI as your primary intellectual interlocutor. Human peers who ask hard questions, who disagree with your architecture, who've seen different failures — they're irreplaceable. AI tools are powerful, but they work best when they're sharpening you against the world, not replacing it.
The gap between what you've shipped and what you actually understand is the measure that matters. As AI tools become more capable, the distance between builders who think architecturally and those who just prompt-and-pray will widen dramatically. The question worth asking yourself today is which side of that gap you're building toward.
If you want to hear these ideas explored in conversation, check out the "Claude Code Conversations with Claudine" radio show. Available on all major podcast sites.