A Structured Methodology for Disciplined AI Software Development
Collaborating with Large Language Models (LLMs) on software development can be highly effective, but it often introduces problems such as code bloat, architectural drift, and context dilution. The Disciplined AI Software Development methodology by Jay Baleine offers a structured approach to mitigating these issues through systematic constraints and validation checkpoints imposed on the AI-human interaction.
The core problem it identifies is that AI systems operate on a simple “Question → Answer” pattern. When given broad, multi-faceted requests, they tend to produce code that lacks structure, contains repetition, and drifts from the intended architecture over time.
The Four Stages of the Methodology
The approach is broken down into four distinct stages, designed to build upon each other with clear validation points.
- AI Configuration: The process begins by providing the AI with custom instructions (e.g., AI-PREFERENCES.XML) that establish behavioural constraints. These include instructing the AI to flag any uncertainty rather than “hallucinate” solutions when it lacks sufficient information (an illustrative sketch of such a constraints file appears after this list).
- Collaborative Planning: The developer and AI work together to define the project’s scope, components, dependencies, and phases. This structured planning session results in a development plan with clear modular boundaries and measurable checkpoints.
- Systematic Implementation: Development proceeds phase by phase, with each request focused on a single, specific component. A key constraint is a strict file size limit (e.g., ≤150 lines) to maintain small context windows, enforce focused implementation, and simplify debugging (a sketch of a mechanical check for this limit also follows the list).
- Data-Driven Iteration: A benchmarking suite is built first (Phase 0). Performance data from this suite is fed back to the AI, ensuring that optimisation decisions are based on empirical measurements rather than assumptions (see the benchmarking sketch below).
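To make the configuration stage concrete, here is a minimal, hypothetical sketch of the kind of behavioural constraints such a preferences file might declare, together with Python code that parses them for injection into a prompt. The element names and constraint IDs are illustrative assumptions, not the schema of the repository’s actual AI-PREFERENCES.XML.

```python
import xml.etree.ElementTree as ET

# Hypothetical preferences file -- the element names and IDs are illustrative,
# not the repository's actual AI-PREFERENCES.XML schema.
PREFERENCES = """
<preferences>
    <constraint id="flag-uncertainty">
        Flag any uncertainty explicitly instead of guessing at a solution.
    </constraint>
    <constraint id="single-component">
        Implement exactly one component per request.
    </constraint>
    <constraint id="file-size-limit" max-lines="150">
        Keep every source file at or under the stated line limit.
    </constraint>
</preferences>
"""

def load_constraints(xml_text: str) -> list[dict]:
    """Parse behavioural constraints so they can be injected into a prompt."""
    root = ET.fromstring(xml_text)
    return [
        {"text": c.text.strip(), **c.attrib}
        for c in root.findall("constraint")
    ]

if __name__ == "__main__":
    for constraint in load_constraints(PREFERENCES):
        print(f"- [{constraint['id']}] {constraint['text']}")
```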
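The file size limit from the implementation stage also lends itself to mechanical enforcement. The sketch below is one possible check, assuming a plain line count over Python files; the threshold, glob pattern, and CI exit-code convention are assumptions rather than part of the methodology’s tooling.

```python
import sys
from pathlib import Path

MAX_LINES = 150  # the methodology's example ceiling; adjust per project

def oversized_files(root: str = ".", pattern: str = "*.py") -> list[tuple[Path, int]]:
    """Return (path, line_count) for every source file over the limit."""
    offenders = []
    for path in sorted(Path(root).rglob(pattern)):
        count = len(path.read_text(encoding="utf-8", errors="ignore").splitlines())
        if count > MAX_LINES:
            offenders.append((path, count))
    return offenders

if __name__ == "__main__":
    violations = oversized_files()
    for path, count in violations:
        print(f"{path}: {count} lines (limit {MAX_LINES})")
    sys.exit(1 if violations else 0)  # non-zero exit fails a CI check
```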
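For the data-driven iteration stage, the point is that raw measurements, not impressions, flow back into the conversation. A minimal sketch of such a feedback loop follows; the benchmark harness, workload, and JSON summary format are illustrative assumptions, not the repository’s Phase 0 suite.

```python
import json
import statistics
import time

def benchmark(func, *args, runs: int = 20) -> dict:
    """Time repeated calls and report summary statistics in milliseconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        func(*args)
        timings.append((time.perf_counter() - start) * 1000)
    return {
        "name": func.__name__,
        "runs": runs,
        "median_ms": round(statistics.median(timings), 3),
        "p95_ms": round(sorted(timings)[int(runs * 0.95) - 1], 3),
    }

def parse_records(lines):
    """Stand-in workload; replace with the component under test."""
    return [line.split(",") for line in lines]

if __name__ == "__main__":
    sample = [f"id{i},value{i}" for i in range(10_000)]
    # The JSON summary is what gets pasted back into the conversation,
    # so optimisation decisions rest on measurements, not guesses.
    print(json.dumps(benchmark(parse_records, sample), indent=2))
```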
Architectural Guardrails for AI Collaboration
The principles outlined in this methodology can be viewed as tactical architectural guardrails, similar in spirit to the high-level principles found in frameworks like TOGAF.[1]
Where TOGAF provides a strategic framework for enterprise architecture, these rules provide a micro-framework for AI code generation. Constraints like the 150-line file limit and single-component implementation are not arbitrary; they enforce modularity and prevent the AI from making sprawling, unmanaged changes. This approach transforms the AI from an unpredictable partner into a disciplined builder that adheres to a pre-defined architectural vision.
The repository includes practical tools to support this workflow, such as example configuration files and a Python script (project_extract.py) for generating structured snapshots of the codebase to share with the AI.
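The script itself is not reproduced here; the sketch below only illustrates the general idea of a snapshot generator that concatenates source files under headers (and flags files over the line limit), with assumed function names and defaults.

```python
from pathlib import Path

def extract_snapshot(root: str = ".", pattern: str = "*.py", limit: int = 150) -> str:
    """Concatenate source files into one annotated snapshot for the AI."""
    sections = []
    for path in sorted(Path(root).rglob(pattern)):
        lines = path.read_text(encoding="utf-8", errors="ignore").splitlines()
        marker = "OK" if len(lines) <= limit else f"OVER LIMIT ({len(lines)} lines)"
        sections.append(f"=== {path} [{marker}] ===\n" + "\n".join(lines))
    return "\n\n".join(sections)

if __name__ == "__main__":
    Path("snapshot.txt").write_text(extract_snapshot(), encoding="utf-8")
```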
By enforcing a systematic process, the developer’s role shifts from constantly debugging inconsistent output to strategically planning the work and validating the results. It is a practical framework for turning an AI into a more reliable and predictable development partner.
[1] The Open Group Architecture Framework (TOGAF) is a framework for enterprise architecture that provides an approach for designing, planning, implementing, and governing an enterprise information technology architecture.