
Most AI Code is Garbage. Here's How Mine Isn't.


I Spent $450 in 3 Weeks Building 100k Lines of Code (And Didn't Want to Burn It Down)

Note: All the exact prompts and templates I used are included at the bottom of this article, plus a link to get even more prompts.

Most developers spend months building their application, only to realize at the end they want to burn everything down and start over again. Tech debt, they call it. I haven't met a single developer who hasn't felt the urge to rewrite everything from scratch.

In the age of AI, this pain hits faster and harder. You can generate massive amounts of code in days, not months - and the siren call to rewrite arrives just as quickly, in weeks or even days.

But here's the thing - I just spent the past 5 weeks shipping a project with over 100k lines of backend code across more than 10 backend services, and I haven't felt the call to rewrite it. Not once.

The total cost? $450 in AI credits over 3 weeks of intense development.

The result? A production-ready backend that I'm actually proud of.

How I Did It: The 4-Document Framework

Here's exactly what made this possible: 4 documents that act as guardrails for your AI.

  1. Coding Guidelines - Every technology, pattern, and standard your project uses

  2. Database Structure - Complete schema design before you write any code

  3. Master Todo List - End-to-end breakdown of every feature and API

  4. Development Progress Log - Setup steps, decisions, and learnings

Plus a two-stage prompt strategy (plan-then-execute) that prevents code chaos.

This isn't theory. This is the exact process I used to generate maintainable AI code at scale without wanting to burn it down.

But first, let me show you exactly why this framework is necessary...

Why Most AI Code Becomes Garbage (And How to Avoid It)

Here's the brutal truth: LLMs don't go off the rails because they're broken. They go off the rails because you don't build them any rails.

You treat your AI agent like an off-road, all-terrain vehicle - no rails in sight - then wonder why it goes off them. You give it a blank canvas and expect a masterpiece.

Think about it this way - if you hired a talented but inexperienced developer, would you just say "build me an app" and walk away? Hell no. You'd give them:

  • Coding standards

  • Architecture guidelines

  • Project requirements

  • Regular check-ins

But somehow with AI, we think we can skip all that and just... prompt our way to success.

The solution isn't better prompts. It's better infrastructure.

You need to build the roads before you start driving.

The 4-Document Framework That Changes Everything

I spent about a week creating these four documents before writing a single line of application code. Best week I ever invested.

These aren't just documents - they're the rails that keep your AI on track. Every chat I open in my IDE includes these four docs as context.

Document 1: Coding Guidelines

This document covers every technology you intend to use in your project. Not just a list of technologies - the actual best practices, code snippets, common pitfalls, and coding style choices for each technology.

Here's what mine included:

  • Setup and architectural conventions

  • Folder and file structure standards

  • ESLint configuration and rules (Airbnb TypeScript standards)

  • Prettier configuration for code formatting

  • Naming conventions for variables, methods, classes

  • Recommended patterns for controllers, services, repositories, DTOs (see the sketch after this list)

  • Testing standards with Jest

  • CI/CD pipeline setup guidelines
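
To make "recommended patterns" concrete, here's the flavor of entry my guidelines document contained. This is a minimal sketch with hypothetical names, assuming NestJS (mentioned in the prompt template at the bottom) with class-validator for DTO validation:

```typescript
// Sketch of the controller/service/DTO pattern from the guidelines doc.
// Hypothetical feature; the point is the shape, not the specifics.
import { Body, Controller, Injectable, Post } from '@nestjs/common';
import { IsEmail, IsString, MinLength } from 'class-validator';

// DTOs carry validation rules and use the "Dto" suffix
export class CreateUserDto {
  @IsEmail()
  email: string;

  @IsString()
  @MinLength(8)
  password: string;
}

// Business logic lives in services, never in controllers
@Injectable()
export class UsersService {
  create(dto: CreateUserDto) {
    // ...persistence is delegated to a repository (omitted in this sketch)
    return { id: 'generated-id', email: dto.email };
  }
}

// Controllers stay thin: parse, validate, delegate
@Controller('users')
export class UsersController {
  constructor(private readonly usersService: UsersService) {}

  @Post()
  create(@Body() dto: CreateUserDto) {
    return this.usersService.create(dto);
  }
}
```

The snippet itself doesn't matter much. What matters is that the AI sees the same pattern in every chat and stops inventing its own.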

You can generate this document using ChatGPT with research mode on. I used a detailed prompt that asks for comprehensive guidelines covering setup conventions, coding standards, tooling integration, testing standards, and CI/CD practices. (See the exact prompt at the bottom of this article.)

How to use this document:

  • Give it to your Cursor agent to generate rulesets for your project (creates rules in .cursor/rules - example below)

  • Include it as context with every request you make

I did both. The second option will increase your bill, but the results are worth it.
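
For reference, a rule generated this way lands as a small file under .cursor/rules. The exact format depends on your Cursor version, but at the time of writing a rule file looks roughly like this (hypothetical content, trimmed down):

```
---
description: Backend coding standards for controllers and services
globs: ["src/**/*.ts"]
alwaysApply: false
---

- Follow Airbnb TypeScript style; let Prettier handle formatting.
- Keep controllers thin: validation in DTOs, business logic in services.
- Repositories are the only layer that talks to the database directly.
```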

Document 2: Database Structure

You need a strong database design for the AI to build off. No shortcuts here.

Use an LLM to create this structure: give it your application scope and ask it to generate the database design. Then review the result against your requirements and make sure it can handle every feature you plan to build.

I use a 4-phase prompt approach: entity identification, table structure definition, constraints and indexes, and finally DBML export for visualization. (Complete prompts are at the bottom.)

At the end, you should have a db.dbml file (DBML is a schema format supported by database visualization tools like dbdiagram.io and dbdocs.io).

This becomes your single source of truth. Every API, every feature, every data operation references this structure.
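
In practice, that means every generated entity should trace back to the schema. As a sketch (hypothetical users table, assuming TypeORM from the stack listed in the prompt template below), a table in db.dbml maps one-to-one onto an entity:

```typescript
// user.entity.ts - mirrors the `users` table defined in db.dbml
import {
  Column,
  CreateDateColumn,
  Entity,
  PrimaryGeneratedColumn,
} from 'typeorm';

@Entity('users')
export class User {
  @PrimaryGeneratedColumn('uuid')
  id: string;

  // The unique constraint comes from the schema doc, not an ad-hoc decision
  @Column({ unique: true })
  email: string;

  @Column({ name: 'password_hash' })
  passwordHash: string;

  @CreateDateColumn({ name: 'created_at' })
  createdAt: Date;
}
```

If the agent proposes a column that isn't in the schema document, that's a prompt to update the document first, not the code.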

Document 3: Master Todo List

This is an end-to-end list of every task you need to finish to build your application.

It doesn't have to be just a todo list. I created an API todo list covering every API my frontend needs in order to function - it outlined the entire application scope. (See the example entry below.)

You can reference content from the database structure in this document to ensure everything aligns.

I use another 4-phase approach here: feature area breakdown, API endpoint definition, implementation task creation, and task organization with prioritization. (Detailed prompts at the bottom.)

Pro tip: Keep this document updated as you complete tasks. It becomes a progress tracker and helps prevent scope creep.
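
For illustration, an entry in my API todo list looked something like this (hypothetical endpoints; the real list came out of the 4-phase prompts at the bottom):

```markdown
## Feature: Authentication

- [x] POST /auth/register - create account (writes to users table)
      Tasks: DTO + validation, service logic, unit tests, API docs
- [ ] POST /auth/login - verify credentials, issue token
      Tasks: credential check, token service, integration test
```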

Document 4: Development Progress Log

This contains the steps you took to set up your project, the file structure, the build pipeline, and any other crucial information.

If you used an agent to set up your project, just ask it to create this document for you.

The prompt covers setup and foundation, implementation decisions, build and deployment processes, and learnings from issues encountered. (Full prompt template at the bottom.)

The Magic: These 4 documents get added to every chat I open in my IDE. Yes, the context might be large, but Cursor will "significantly condense it to fit the context."

As you develop new features and finish tasks in your todo list, make sure you ask the agent to update all your docs (todo list, development progress).
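
The ask can be as simple as this (a hypothetical example of my usual wording): "We just finished the password-reset endpoints. Update the master todo list and the development progress log to reflect this, including any setup steps or gotchas from this chat."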

Plan-Then-Execute: The Two-Stage Prompt Strategy That Actually Works

Thinking models have come a long way, but thinking alone isn't enough. I use a two-stage prompt approach for every feature or task:

Stage 1: Plan
Stage 2: Execute

The advantage of this two-stage approach is that you review the plan, not the code. Once you've approved the plan, post-execution review is just checking that the generated code matches it - which is much easier than reviewing code cold.

This also grounds the agent to only execute on the current plan, preventing it from going off the rails.

Here's how it works in practice:

  1. Planning Stage: "I need to build user authentication. Create a detailed plan for implementing this feature, including all the files that need to be created/modified, the database changes required, and the API endpoints needed."

  2. Review: I review the plan, make adjustments, approve it.

  3. Execution Stage: "Execute the plan we just created. Implement the user authentication feature exactly as outlined in the plan."

This simple change transformed my development process. No more surprise architectural decisions buried in generated code.
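
For a sense of what gets approved, a plan for a feature like this is usually a page or so of markdown, shaped roughly like the abridged, hypothetical excerpt below:

```markdown
## Plan: User Authentication

1. Database: add refresh_tokens table (migration; update db.dbml)
2. Files: auth.module.ts, auth.controller.ts, auth.service.ts,
   dto/register.dto.ts, dto/login.dto.ts
3. Endpoints: POST /auth/register, POST /auth/login, POST /auth/refresh
4. Tests: unit tests for AuthService, e2e test for the login flow
5. Out of scope: OAuth providers, MFA (tracked separately in the todo list)
```

The "out of scope" line does a surprising amount of work - it's what keeps the agent from wandering.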

What Actually Happens When You Do This (The Good and The Ugly)

Let me be honest about what really happens when you implement this framework.

The Good

Code Quality: The generated code actually follows your standards. No more random variable names or inconsistent patterns.

Maintainability: When you come back to code after a week, you can actually understand it because it follows your documented patterns.

Speed: Once the framework is set up, feature development is blazingly fast. The AI has clear rails to run on.

Confidence: You stop second-guessing every piece of generated code because you know it was built to your specifications.

The Ugly

Documentation Drift: Even if you're updating docs after every chat, they will always drift from the actual code. I set aside a couple of hours every few days to review the docs and sync them up with the code.

I use a 4-phase documentation sync process: git diff analysis, gap analysis, critical updates, and validation. (Complete sync prompts at the bottom.)

Context Window Costs: Including these documents in every chat increases your bill. But honestly, it's worth every penny for the quality improvement.

Setup Time: That initial week of document creation feels slow when you just want to start coding. But it pays dividends later.

Maintenance Overhead: You need to actually update these documents as your project evolves. Skip this and you're back to chaos.

From Solo Coder to AI Team Manager

Here's the mindset shift that changes everything: You're no longer a developer. You're a manager of AI developers.

And like any good manager, you need to solve the productivity challenges of your team.

The Waiting Game Problem

Nothing kills developer productivity like waiting for your AI agent to finish executing. I've found two approaches to handle this:

The Zen Approach

Develop the near-impossible skill of watching paint dry. Don't feed your brain with YouTube shorts, Twitter scrolling, or blog reading. Just stare at the content being generated and review code when you have enough to review.

It's harder than it sounds. But it works.

The Hydra Approach

Work on multiple tasks at once. But since growing extra heads isn't an option, you need the oldest cybernetic augmentation known to humanity: pen and paper.

Dump all the context needed for a task onto paper. This helps with context switching and lets you get more done. When the agent is working on one task, you switch to another.

You're going from the mindset of an individual contributor to that of someone managing a team of semi-proficient interns.

The Timeline Reality

We're screwed here. I haven't found a working solution for estimating timelines in the AI age. All I know is setting timelines is hard and gets exponentially harder when you throw AI into the mix.

Here's something funny: when I use my two-step plan-execute approach, the LLM sometimes adds timelines to the end of the plan, ranging from a couple of weeks to a couple of months. In practice, it usually takes the LLM about 30-60 minutes to execute most tasks.

There's a joke about middle management killing productivity somewhere in there.

The Bottom Line

If you want to get good at using AI for coding, learn from the community. I took inspiration from random comments on the r/cursor subreddit and different blog articles on Hacker News. (Shout out to Harper Reed and his "My LLM codegen workflow atm" blog, where I picked up the two-stage plan-execute idea.)

The framework works. The 4-document approach creates the rails your AI needs to stay on track. The two-stage prompting keeps features focused and reviewable.

As LLMs get cheaper and better, this stuff gets easier. Right now, Claude 4.0 is my go-to model for most tasks. I use o3 when I need to debug really nasty bugs.

Tool calling is going to be crucial for coding tasks in the future. I'm also looking forward to text diffusion models getting good.

Stop treating AI like magic. Start treating it like the powerful but inexperienced team member it is. Give it structure, give it guidance, and watch it build something you're actually proud of.

Follow for more articles like this. I have a few more AI/LLM related pieces in the pipeline.


All the Prompts and Templates

Here are all the exact prompts I used in this article. For even more advanced prompts and templates, check out my complete collection: Get Advanced AI Coding Prompts (Free)

Coding Guidelines Prompt

**Prompt Template for Generating Backend Code Guidelines**

---

Can you help me create a comprehensive and detailed document outlining backend code guidelines and best practices?

**Purpose**:
This document will serve as a foundational reference for our development team, ensuring consistency, maintainability, and quality in our codebase.

**Project Context**:
We are building a [complex/simple/moderate] backend system using the following technology stack:

- [List technology stack items here as an array, e.g., NestJS, TypeORM, Swagger UI, AWS S3, AWS Secrets Manager, RabbitMQ, etc.]

Our repository structure is [monorepo/multi-repo], consisting of:

- One primary API service
- [Number] small microservices performing tasks such as background jobs, workers, or scheduled cron jobs

**Sections to Cover in the Guidelines:**

1. **Setup and Architectural Conventions**

   - Folder and file structure
   - Module organization and dependency management
   - Best practices for scalability and maintainability

2. **Coding Standards and Style**

   - ESLint configuration and rules (based on Airbnb TypeScript standards)
   - Prettier configuration for code formatting
   - Naming conventions (variables, methods, classes, etc.)
   - Recommended patterns (controllers, services, repositories, DTOs)

3. **Tooling Guidelines**

   - General best practices for tooling integration
   - For each entry in the tech stack, add detailed guidelines and best practices

4. **Testing Standards**

   - Jest setup and best practices
   - Guidelines for unit testing, integration testing, and end-to-end testing
   - Recommended test file structure and naming conventions

5. **Static Code Analysis and CI/CD**
   - SonarQube integration for static code analysis
   - Basic CI/CD pipeline setup (Coolify, GitHub Actions)
   - Recommended stages and quality gates for code integration

**Deliverable**:
Provide the document formatted in clear Markdown with distinct sections, making it easily reusable and adaptable for future technology scopes or additional tooling.

Database Design Prompts (4 phases in same chat)

## **Phase 1: Entity Identification**

Analyze my application scope and identify all core entities with their basic relationships.

Don't dive into detailed table structures yet - just identify the main data objects and how they relate to each other.

**Application Scope**: [Paste your complete application requirements, user stories, feature lists, etc.]

---

## **Phase 2: Table Structure**

Now using the entities we just identified, create detailed table structures for each entity.

For each entity, define:

- All necessary columns with appropriate data types
- Primary keys and foreign keys
- Required vs optional fields
- Basic column constraints

---

## **Phase 3: Constraints and Indexes**

Using the table structures we just created, add advanced constraints, indexes, and relationships.

Add:

- Validation rules and check constraints
- Performance indexes for common query patterns
- Junction tables for many-to-many relationships
- Unique constraints and composite keys

---

## **Phase 4: DBML Export**

Convert our complete database schema into .dbml format for visualization tools.

Output the final schema as a properly formatted .dbml file that can be used with database visualization tools like dbdocs.io or dbdiagram.io.

Todo List Generation Prompts (4 phases in same chat)

## **Phase 1: Feature Area Breakdown**

Break down my application scope into major feature areas and modules.

**Application Scope**: [Paste your complete requirements, user stories, feature specifications]
**Database Schema**: [Reference your .dbml file or entity descriptions]

Group related functionality together (e.g., authentication, user management, core business features, admin panel, reporting, etc.).

---

## **Phase 2: API Endpoint Definition**

For each feature area we just identified, define all required API endpoints.

For each feature area, list all API endpoints with:

- HTTP method and route
- Purpose and functionality
- Request parameters and body structure
- Response data structure
- Authentication requirements

---

## **Phase 3: Implementation Task Creation**

Convert each API endpoint we defined into detailed implementation tasks.

For each API endpoint, create tasks for:

- Database migrations/schema changes needed
- Service layer business logic
- Controller implementation
- Request validation and error handling
- Unit and integration testing
- Documentation

---

## **Phase 4: Task Organization and Prioritization**

Organize all the implementation tasks we created by dependencies and priority.

Organize tasks into:

- Setup and infrastructure (must be done first)
- Core dependencies (what blocks what)
- MVP features (essential for launch)
- Enhancement features (nice-to-have)
- Testing and deployment tasks

Output: Prioritized development roadmap with clear dependencies.

Development Progress Documentation Prompt

**Development Progress Documentation Prompt**

---

Create a comprehensive development progress log that captures everything about how this project was set up and built.

**Purpose**: This document will serve as a knowledge base for future reference, troubleshooting, and onboarding.

**Current Project State**: [Describe what you've built so far, current file structure, tools used, key decisions made]

Document the following:

**Setup and Foundation**:

- Commands used to initialize the project
- Package installations and dependency choices made
- Configuration files created and their purposes
- Development environment setup steps taken

**Implementation Decisions**:

- How you implemented the coding guidelines in practice
- Deviations from the original guidelines and why
- Architecture patterns chosen and reasoning
- Database setup and migration commands used

**Build and Deployment**:

- Build scripts and commands that actually work
- Environment variables and secrets management approach
- Deployment process and hosting setup steps
- Any CI/CD configuration implemented

**Learnings and Issues**:

- Common issues encountered and solutions found
- Gotchas and things that didn't work as expected
- Performance considerations discovered
- Future improvement areas identified

Output: A practical knowledge base that someone could use to understand and replicate your setup.

Documentation Review and Sync Prompts (4 phases in same chat)

## **Phase 1: Git Diff Analysis**

Analyze what's changed in the codebase since the last documentation update.

**Last Documentation Update**: [Date when docs were last updated, e.g., "2024-01-15" or commit hash]

First, run `git log --oneline --since="[date]"` to see all commits since the last doc update, then run `git diff [commit-hash]` to get the actual changes.

Analyze the diff and identify:

- New files or directories added
- Modified API routes or endpoints
- Database schema changes (migrations, models)
- New dependencies in package.json
- Configuration changes
- Removed or renamed files

---

## **Phase 2: Documentation Gap Analysis**

Based on the git diff analysis, identify what documentation needs updating.

Cross-reference the code changes with our current documentation to find:

- New features built but not documented
- Changed implementations not reflected in docs
- Removed/deprecated features still in documentation
- New setup steps or configuration changes needed
- Dependencies or tools that need documentation

---

## **Phase 3: Critical Updates**

Update the most critical and incorrect documentation sections first.

**Priority Areas**: [Specify which docs are most critical - setup, API, database, etc.]

Focus on:

- API documentation that no longer matches actual endpoints
- Setup instructions broken by dependency changes
- Database schema docs that don't match current migrations
- Configuration examples that reference removed files
- Commands that no longer work due to recent changes

---

## **Phase 4: Enhancement and Validation**

Enhance the updated documentation and validate against current codebase.

Final verification:

- Test all documented commands actually work
- Verify API examples match current endpoint structure
- Check that setup instructions lead to working environment
- Update progress log with recent developments
- Add any new troubleshooting insights discovered

Output: Documentation synced with current codebase state.
David Lee · 5mo ago

Great writeup. Have you made any changes to this protocol since you wrote the article?

And may I ask how you decided on Cursor versus alternatives? How would one implement a similar workflow with, say, VSCode + the GitHub Copilot plugin?