Last week I was talking to a colleague who had just built a complete CRM system in 2 hours using ChatGPT. “Look at this,” he said proudly, “I can add clients, edit data, generate reports — everything works!”
Does it work? Absolutely! Would I trust it with my client database? Maybe just for a demo.
That’s the crux of the problem that most people, even those in IT, are only beginning to understand.
“What” — Even My Grandma Can Do That!
Generative AI has brought the paradigm of declarative programming to life — instead of working in imperative mode, telling machines “How” to do something, we specify “What” needs to be done, and the machine implements it.
“I want a scheduling app.” “I need an inventory system.” “Build me a blog with comments.”
Anyone can tell an AI what they want and get functional code. Gen-AI is excellent at that.
The problem? Anyone else can ask ChatGPT the same thing and get a (functionally) identical application. And it will usually only be good enough for a demo.
The real difference comes from questions most people don’t ask: “What happens when 50 users book the same time slot simultaneously?”, “How will it perform with 10,000 products instead of 50?”, “How much is all of this going to cost with all those users?!”
Where Does the Real Value Live?
A concrete example: a colleague needs a “file upload system.” AI generates:
app.post('/upload', upload.single('file'), (req, res) => {
  // `upload` is a multer disk-storage middleware: receives the file, saves it, returns success
  res.json({ success: true, filename: req.file.filename });
});
It works! Perfect plan! …except it doesn’t actually work.
Because you haven’t defined the critical contextual parameters. For example:
- “Files up to 100MB should finish in 3 seconds, but for files over 1GB, 5 minutes is acceptable”
- “The budget is $200/month for storage, which means a maximum of X files”
- “If an upload fails at 90%, the user should be able to resume from that point”
- “GDPR compliance means automatic deletion after 2 years”
AI can implement all of this. But it doesn’t know whether it should, to what extent, or which trade-off works for your specific situation.
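To make the contrast concrete, here is a minimal sketch of what writing those constraints down might look like. The object shape and field names are hypothetical; the numbers are the ones from the bullets above:

```javascript
// Illustrative only: the contextual constraints made explicit as configuration.
// Every number here comes from the business context, not from the AI.
const uploadPolicy = {
  maxFileSizeBytes: 1024 * 1024 * 1024,                 // hard cap per file (1 GB)
  targetLatency: { under100MB: '3s', over1GB: '5min' }, // acceptable upload times
  storageBudgetUSDPerMonth: 200,                        // caps how many files we keep
  resumableUploads: true,                               // failed at 90%? resume, don't restart
  retentionDays: 2 * 365,                               // GDPR: auto-delete after 2 years
};
console.log(uploadPolicy.retentionDays); // → 730
```

None of this is hard to implement; the hard part is that every line encodes a decision only you can make.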
Decision Architect — A New Paradigm
These aren’t purely technical questions — they’re contextual. And they create real value.
Instead of me writing code, AI asks me the questions:
“For backup, I see three options: daily backup costs $50/month but you lose up to 24 hours of data, hourly costs $150 with a max 1-hour loss, real-time costs $400 with zero loss. Given your application type, what do you recommend?”
That’s where AI stops and the Decision Architect begins — the traditional software engineer. Me.
Because I know this is an internal tool for 10 users where a daily backup fits perfectly. But for a consumer payment app, losing one hour of data could cost far more than the savings on backup.
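That reasoning can be sanity-checked with back-of-the-envelope math. A sketch, using the backup prices from the dialogue above and a hypothetical `lossCostPerHour` for what an hour of lost data costs the business:

```javascript
// Compare each option's monthly backup cost plus worst-case data-loss cost.
// Backup prices are from the dialogue above; lossCostPerHour is hypothetical.
const options = [
  { name: 'daily',     backupUSD: 50,  maxLossHours: 24 },
  { name: 'hourly',    backupUSD: 150, maxLossHours: 1 },
  { name: 'real-time', backupUSD: 400, maxLossHours: 0 },
];

const cheapest = (lossCostPerHour) =>
  options
    .map((o) => ({ ...o, worstUSD: o.backupUSD + o.maxLossHours * lossCostPerHour }))
    .reduce((a, b) => (a.worstUSD <= b.worstUSD ? a : b));

// Internal tool for 10 users: an hour of lost data costs next to nothing.
console.log(cheapest(1).name);   // → "daily"
// Consumer payment app: an hour of lost data is very expensive.
console.log(cheapest(500).name); // → "real-time"
```

The arithmetic is trivial; knowing which `lossCostPerHour` applies to your business is the Decision Architect's job.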
Why Current Tools Aren’t Enough
The simple answer — they were built for the old paradigm, organized around tasks, not decisions.
GitHub Copilot: “Here’s your authentication code!” Me: “Great, but is a 30-minute session timeout right for my application?” Copilot: “…(silence)…”
Jira ticket: “Implement caching.” AI: “Here’s a Redis implementation!” Me: “But do I even need caching with 100 daily users?”
The problem? Tools assume (by design) that the job is done when the code compiles and passes functional tests. In reality, the real work is just beginning.
What Could It Look Like?
I envision an AI-first tool that works roughly like this:
Me: “Let’s continue with the e-commerce site.” AI: “Generating the basic structure… Do you expect more than 1,000 orders per month?” Me: “No, 100 at most.” AI: “Then for the database, your options are SQLite or PostgreSQL. SQLite saves $50/month and is 10x simpler to set up. But migration becomes chaotic if you grow. How should we proceed?” Me: “Let’s go with SQLite.” AI: “Next — payment processing. For 100 orders, Stripe is $0.30 per transaction. PayPal is $0.40 but users trust it more. Your thoughts?” Me: “What’s the monthly difference?” And so on.
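The tool could answer that last question with trivial arithmetic. A sketch using the per-transaction figures from the dialogue (illustrative, not current Stripe or PayPal pricing):

```javascript
// Monthly fee difference between the two (illustrative) payment options.
const ordersPerMonth = 100;
const stripeFeePerOrder = 0.30; // figure from the dialogue, not real pricing
const paypalFeePerOrder = 0.40; // figure from the dialogue, not real pricing

const monthlyDifferenceUSD =
  ordersPerMonth * (paypalFeePerOrder - stripeFeePerOrder);
console.log(monthlyDifferenceUSD.toFixed(2)); // → "10.00"
```

Ten dollars a month. Whether user trust in PayPal is worth that is, again, a contextual call the tool can surface but not make.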
That would be a true AI-first tool that supports the Decision Architect role. Imagine a Git-like AI tool that, instead of managing code versions, tracked the history of decisions on a project.
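As a thought experiment, one entry in such a decision history might look like this. The record shape is entirely hypothetical, not the format of any existing tool:

```javascript
// Hypothetical record for one entry in a Git-like decision history.
const decision = {
  id: 'd-0042',
  question: 'SQLite or PostgreSQL for the order database?',
  options: ['SQLite: $0/mo, simple setup', 'PostgreSQL: $50/mo, scales'],
  chosen: 'SQLite',
  rationale: 'At most 100 orders/month; migration risk accepted',
  revisitWhen: 'orders/month approaches 1,000',
};
console.log(decision.chosen); // → "SQLite"
```

Like a commit message, the `rationale` and `revisitWhen` fields are the valuable part: they record why a choice was made and when it should be reconsidered.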
Research shows that 95% of companies use AI for development, but only 1% consider their implementations “mature.” Why?
Because they’re forcing AI into old workflows. It’s like trying to build faster horses and carriages with AI instead of building cars.
The real question isn’t “How can AI write better code?” but “How can AI help make better decisions?”
What Does This Mean for Us, Software Developers?
Software development has never been just about writing code. It has always been about making the right (informed) decisions under uncertainty. AI just makes that reality clearer than ever.
The future isn’t “AI vs. Developers.” It’s about transforming today’s software engineers into Decision Architects who understand where the real value lies.
But we need tools built from the ground up for an AI-first world that support the Decision Architect role. Not old tools with AI lipstick. We need cars, not faster horses and carriages.
Your code might work perfectly. But does it work correctly for your specific situation? That’s the difference between a hobby and a profession.
AI can make you 10x faster at writing code. But only the ability to make better contextual decisions makes you genuinely valuable.