The Build III: Scope and Knowing When To Ship
Even a solo project benefits from project management discipline. The practices that seem like "overhead" for a personal project are actually what let you ship.
[Editor’s note: This is the third post in a quartet that explains the process behind building the CMS for this website using AI tools.]
This project had no stakeholders, no budget, and no deadlines. If something couldn’t get built for whatever reason, I was the only one who had to live with it. And yet this project still required sound project management discipline—perhaps more so than a project with external accountability. Without someone else asking “is this in scope?” or “when will this ship?”, I had to ask those questions myself.
Knowing what to build and knowing what not to build was the most important skill in this project. Scope creep is always a risk, but it is especially dangerous when implementation feels frictionless. The practices that seem like unnecessary overhead for a personal project—branch discipline, explicit scope decisions, documentation, ship criteria—were the tools that led to this project’s success.
One Feature, One Branch
Five practices helped keep me grounded here. First, I maintained good branch discipline: each new feature I worked on had its own branch in GitHub. If something went wrong in the implementation of that feature, I could abandon everything and start again. It’s tempting to consolidate work into one branch, but the cost of bundling shows up when something breaks: you can’t isolate the failure, and unrelated work gets discarded along with the broken feature.
Things did break during the build, and my previous experience with AI coding tools had prepared me for the ways they can go wrong. The only exception I made to this rule was documentation updates, because those made no functional change to the code and carried little risk. For everything else, the discipline paid off: when something broke, I could isolate it and start again without losing work.
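The per-feature branch workflow described above can be sketched in a few git commands. This is a minimal illustration using a throwaway repository; the branch and file names are my own invented examples, not the ones from the actual project.

```shell
set -e

# Set up a throwaway repository to demonstrate in (illustrative only).
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email "you@example.com"
git config user.name "You"
git commit -q --allow-empty -m "initial commit"

# One feature, one branch: all work for the feature lives here.
git switch -q -c feature/photography-page
echo "work in progress" > photography.md
git add photography.md
git commit -q -m "Start photography page"

# The implementation went wrong: abandon the branch entirely.
# main is untouched, and no unrelated work is lost.
git switch -q main
git branch -D feature/photography-page
```

If the feature had worked out instead, the last two commands would be a merge into `main` followed by a regular branch deletion.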
Ship, Shelve, Scrap
I had to think hard about what belonged in a shippable product and what “success” meant. This was a deliberate exercise, and one that counters the “quick wins” mindset AI coding assistants encourage by making new features so easy to add. One feature I deferred at launch was photography handling. If you visit the site now, there’s a minimal photography page. Once I decide what I want that page to show, and how that informs the post schema, I’ll implement something. I also wanted the main dashboard of the CMS to be a single pane of glass for site analytics from Plausible Analytics, my analytics provider. It would be nice to have, but it wasn’t essential for launch.
One feature I should have deferred was PWA (progressive web app) support. Sure, it makes adding the site to my computer taskbar or iPad home screen as an application easier, but I don’t get the full benefits of PWAs, like offline access. It was nice to have, but it cost AI usage and time I won’t get back. There were also features I worked on but ended up abandoning. I tried to have the system read the tags and categories across all posts and automatically suggest them as I type in the category picker, but I could not get it to work. After about an hour of trying, I decided to abandon the feature. Since I don’t use categories for anything in particular on my site right now, typing them in carefully is good enough.
Good judgment means knowing when to abandon a feature altogether, not just defer it for later.
Document, document, document!
The biggest breakthrough in this project was having GitHub Copilot and Claude Code keep their own instructions updated. Their instruction files include this directive:
## Documentation Requirements
**Any changes to the operation of the application must be documented in this file (.github/copilot-instructions.md).**
The biggest problem with any project is keeping the documentation current. This one directive in Copilot’s instruction file ensured that anything new we added was properly recorded, so future work wouldn’t be dragged backward by stale instructions or outdated patterns. At the time of this writing, the Copilot documentation is over 1,000 lines of Markdown, and I did not write a single word of it. It’s not documentation written after the fact; it’s institutional memory captured as the project evolved. This is the learning-organization idea in practice: a project becomes more resilient when it records what it knows as it goes.
You’re Done (at least for now)
Personal projects have no external deadlines. That sounds like a good thing, but it’s more dangerous than you’d think: “oh, I’ll know when it’s ready” is a comforting lie. Going into this project, I had a clear set of criteria for what would constitute success, both on the Payload CMS side and on the Jekyll site backend. Once those core benchmarks were met, I had something successful. There are things I’m still working on, of course, but they’re peripheral to the core purpose of this CMS: getting content online.
Some bugs also surfaced in production, as they always will. When I was writing the first post in this quartet (the first post written entirely in the CMS, never existing as a Markdown file to be pasted into a repository), some issues with webhooks appeared. I couldn’t have predicted them, and you can’t test your way to perfect. That’s why a separate staging/pre-production branch matters: it’s where you can test things before they go live. So ship and iterate; real use will reveal what artificial testing cannot. The goal is to have the infrastructure in place to fix issues as they surface.
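The staging-before-production flow can be sketched as a simple branch promotion. This is a minimal illustration in a throwaway repository; the branch names, file name, and commit messages are invented for the example, and in a real setup each branch would trigger its own deployment.

```shell
set -e

# Set up a throwaway repository with a staging branch (illustrative only).
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email "you@example.com"
git config user.name "You"
git commit -q --allow-empty -m "initial commit"
git branch staging

# Land a fix on staging first, where it can be exercised before going live.
git switch -q staging
echo "retry on failure" > webhook-config.yml
git add webhook-config.yml
git commit -q -m "Fix webhook handling"

# Once staging checks out, promote the same commits to production.
git switch -q main
git merge -q staging
```

The merge brings the exact commits that were tested on staging into production, rather than re-implementing the fix on `main`.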
A Note on Technical Debt
I am aware that this code has technical debt in it. For a personal project, I’m OK with that risk. There is a difference between knowing what technical debt you’re carrying and not knowing that at all.
The good news is that AI systems are reasonably well suited to finding and remediating technical debt along the way. I’ve just started a process of having Copilot find the technical debt in my repository and propose remediations. Deciding which fixes are worthwhile, and what risk each one carries, is where I put my own knowledge into practice. The same tools that helped build quickly can also help maintain and improve.
This is not permission to accumulate technical debt carelessly! It is permission to ship things that serve their purpose, with confidence that you can address those issues later as part of a continuous improvement framework.
Project management lessons proved valuable throughout this process. Personal projects are a low-risk, high-reward way to exercise skills you might not normally get to apply in your day-to-day job. While building a CMS for my website is hardly something I would put on my résumé/CV, the habits and techniques I used here will inform my work and make any project—software or not—more likely to ship.
In this entire project, however, the biggest lesson I learned was not about code or project management, but about pacing myself and staying sane. That’s for Part 4 on Friday.