
From Parenting AI to Architecting Intent

Published
7 min read
Full-stack developer with 13+ years of experience, including work as an architect building solutions and products for automation. Solution-oriented, hands-on technical utility player with more than four years of experience each in the e-commerce and finance domains. Experienced in driving business automation and marketing through technology. Strong open-source advocate. Has built products with PHP, Python, AWS, and Angular.

A few months ago I wrote that AI needs a parent, not a replacement. That instinct was right. But instinct without structure is just anxiety with good intentions.

What I've been reading since then gave me the structure. And it has a name: AI-Driven Development Life Cycle (AI-DLC).

Parenting Was the Instinct. AI-DLC Is the Framework.

In that earlier piece, I described four things a good AI parent does: observation, intervention at ambiguity, approval as a feature, and gut-level judgment.

AI-DLC operationalises every single one of them.

What I called "intervention at ambiguity" — AI-DLC calls Question-Driven Clarification. The AI stops when it hits ambiguity and presents structured options. Your answer becomes a traceable artifact in requirement-verification-questions.md. The instinct becomes an audit trail.
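To make the artifact idea concrete, here is a hypothetical entry such a file might contain. The format, the question, and every name below are invented for illustration; real AI-DLC tooling defines its own schema.

```markdown
<!-- hypothetical entry in requirement-verification-questions.md -->
## Q3: Session storage for the cart service
AI asked: Should cart state live in Redis (fast, volatile) or Postgres (durable, slower)?
Options presented: (a) Redis with TTL, (b) Postgres, (c) hybrid write-through.
Human decision: (a) Redis with a 24-hour TTL. Carts are reconstructible; durability is not required.
```

The point is not the format. The point is that a question, the options, and the human's answer survive as a reviewable record instead of evaporating in a chat window.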

What I called "approval as a feature" — AI-DLC calls human-owned checkpoints. The workflow does not move forward until you explicitly approve. Not as a bottleneck. As architecture.

What I called "gut-level judgment" — AI-DLC is honest about this one. The methodology acknowledges AI does not understand organizational trade-offs. It does not know where downtime is acceptable and where it is not. It does not know the cost of a badly worded error message. That context lives in you. The framework is designed around that reality, not against it.

The Part I Got Wrong

In my earlier piece, I framed parenting AI as a personal discipline. Something you develop through experience and instinct.

That's necessary but not sufficient — especially when you're building with a team.

One developer parenting their AI well means nothing if the person next to them is letting the agent run unchecked. AI-DLC solves this at the team level. Shared steering files. Shared artifact standards. Shared review practices. The parenting becomes institutional, not individual.

The .md File Is the New ADR

I used to maintain ADR documents for every significant architectural decision. Tool choices, trade-offs, system design calls — all carefully documented. Which database, why we chose this queue over that one, what we considered and rejected.

It felt rigorous. Looking back, it was also fragile. ADRs lived in Confluence or Google Docs, disconnected from the code they described. Six months later, a new team member would find a decision document that no longer matched reality. Nobody knew when it drifted. Nobody caught it because the doc and the code lived in different worlds.

AI-DLC changes this fundamentally.

Every decision now lives as a structured .md file inside the project itself. requirement-verification-questions.md captures every ambiguity and how you resolved it. design.md captures architectural choices. stories.md captures user intent. execution-plan.md captures what was planned and approved.

Think about the knowledge base evolution:

  • Jira era — decisions buried in ticket comments, lost when tickets closed

  • Google Docs / Confluence era — better structure, but disconnected from code

  • AI-DLC era — decisions live inside the codebase, generated as a byproduct of building, linked directly to the code they produced

The .md file is the new ADR. But unlike the ADR I used to write manually, this one is generated through the actual decision-making process — not reconstructed after the fact from memory.

But Wait — Should Everything Live in the Codebase?

This is the question I kept asking myself while reading The AI-DLC Handbook. The artifact system is powerful. But putting everything inside the codebase creates its own risks. Here are my honest thoughts on each:

Access control problem

In Jira and Confluence, a product manager or a stakeholder could read decision history without repo access. With .md files in a private repository, non-technical stakeholders are locked out of the very decisions that affect them.

My take: this is solvable with tooling — read-only repo access, auto-generated decision summaries published elsewhere. But right now it's an open problem most teams haven't solved yet.

How do you give stakeholder visibility into AI-DLC artifacts without giving them full codebase access?

Repo bloat over time

A product that runs for three years accumulates hundreds of decision files. Discoverability becomes a real problem. Which design.md is current? Which requirements are superseded?

My take: Git history helps, but it's not enough. We need either a convention for archiving stale artifacts or tooling that surfaces the current state clearly.

What does artifact lifecycle management look like in a long-running AI-DLC project?

Stale context risk

If the AI loads an old design.md without knowing it's outdated, it makes decisions based on superseded context. At least in Jira, tickets had status — open, closed, won't fix. .md files have no native status.

My take: This is the most underappreciated risk in AI-DLC right now. The methodology's strength — persistent context — becomes a liability if that context goes stale silently.

How does AI-DLC handle context invalidation when architectural decisions change mid-project?
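One way a team could answer this today is a convention AI-DLC itself doesn't prescribe: give every artifact a front-matter `status:` field and fail the build when an unmarked or superseded artifact would be loaded as context. The field name, the front-matter layout, and the "active/superseded" vocabulary below are all assumptions for the sketch, not part of the AI-DLC spec.

```python
from pathlib import Path


def artifact_status(path: Path) -> str:
    """Read a `status:` line from a simple `---`-delimited front-matter
    block at the top of an .md artifact. Anything without one is 'unknown'."""
    lines = path.read_text(encoding="utf-8").splitlines()
    if not lines or lines[0].strip() != "---":
        return "unknown"          # no front-matter block at all
    for line in lines[1:]:
        if line.strip() == "---":  # end of front-matter
            break
        if line.startswith("status:"):
            return line.split(":", 1)[1].strip()
    return "unknown"


def stale_artifacts(root: Path) -> list[Path]:
    """Flag every artifact that is not explicitly marked active, so stale
    context is caught before an agent loads it."""
    return [p for p in sorted(root.rglob("*.md"))
            if artifact_status(p) != "active"]
```

A pre-commit hook or CI step running `stale_artifacts` turns "nobody knew when it drifted" into a failing check.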

Compliance in regulated industries

Git commits can be rewritten. For industries like healthcare or fintech where audit trails must be tamper-evident, this is a real compliance concern.

My take: AI-DLC's traceability is genuinely stronger than anything I've seen in Jira-based workflows. But the tamper-evidence problem needs a solution before regulated teams can fully adopt it.

Can AI-DLC's artifact system meet compliance requirements in regulated industries without additional tooling?
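One plausible direction, sketched under my own assumptions rather than anything in AI-DLC: hash-chain each approved artifact and publish the chain's hashes to an append-only log outside the repo, where a rewritten git history can't reach them. The function names and chaining scheme are illustrative.

```python
import hashlib


def chain_entry(prev_hash: str, artifact_name: str, content: bytes) -> str:
    """Each ledger entry commits to the previous one, so editing any past
    artifact changes every hash after it (tamper-evidence, not prevention)."""
    h = hashlib.sha256()
    h.update(prev_hash.encode("utf-8"))
    h.update(artifact_name.encode("utf-8"))
    h.update(content)
    return h.hexdigest()


def build_ledger(artifacts: list[tuple[str, bytes]]) -> list[str]:
    """Hash-chain a sequence of (name, content) artifacts in approval order."""
    ledger, prev = [], "genesis"
    for name, content in artifacts:
        prev = chain_entry(prev, name, content)
        ledger.append(prev)
    return ledger
```

An auditor who holds the published hashes can re-derive the ledger from the repo and detect any rewritten artifact, because every hash from the tampered entry onward stops matching.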

These are not criticisms of AI-DLC. They are the natural questions any architect should ask before adopting a new methodology at scale. And honestly, sitting with these questions is what made me appreciate The AI-DLC Handbook more.

Why You Should Read The AI-DLC Handbook

I finished The AI-DLC Handbook by Bhuvaneswari Subramani last week. Here's what made it worth reading:

It doesn't just explain AI-DLC — it documents what actually happened when a team ran it on a real project. What broke. What they fixed. What surprised them.

The chapter on brownfield projects alone is worth the read. Most AI development content assumes greenfield — clean slate, no legacy, no hidden dependencies. The handbook goes deep into the messy reality of modernising systems that already exist and must keep running.

The "Mob Elaboration" concept stood out to me specifically as an architect. It's a structured ritual for surfacing the undocumented rules that live in legacy systems — the kind of tribal knowledge that an AI cannot know because it was never written down. Before AI-DLC, that knowledge either lived in someone's head or got discovered painfully in production.

Read it not as a how-to guide. Read it as a field report from someone who actually ran the experiment.

The Real Shift

We aren't just parenting AI anymore. We are architecting intent. The parent metaphor was right for where we were — figuring out how to stay in control as AI got faster and more capable. The next phase isn't about staying in control. It's about designing the system so control is structurally guaranteed — not dependent on one engineer's discipline on a given day.

The .md file replacing the ADR (Architecture Decision Record) is a small signal of a larger shift. Documentation is no longer a tax you pay after building. It's a byproduct of building correctly.

AI provides options. You provide context. The framework ensures neither replaces the other.

Ready to stop parenting by instinct and start building by design?

→ My earlier take on why AI needs a parent:

https://x.com/AvinashDalvi_/status/2029809813782503928

→ AI-DLC white paper by Raja SP (AWS):

https://aws.amazon.com/blogs/devops/ai-driven-development-life-cycle/

→ Open-source workflow:

https://github.com/awslabs/aidlc-workflows

→ The AI-DLC Handbook by Bhuvaneswari Subramani:

https://aidlcbook.com/
