The Build I: CMS, meet AI and directed work
AI-assisted development is not about letting the AI take the wheel. The value comes from your ability to direct it effectively, and to recognize when it is heading in the wrong direction.
[Editor’s note: This is the first post in a quartet that explains the process behind building the CMS for this website using AI tools.]
Earlier this year, when I moved my blog from WordPress to Jekyll, publishing content gained a new point of friction. WordPress’s integrated WYSIWYG editor was replaced by text-based Markdown files scattered across folders in the Jekyll site repository. The publishing system worked, but it was unwieldy on the desktop in the best of times, and creating or editing content on mobile was next to impossible.
To remove that friction, I knew I needed to deploy a content management system (CMS). This wasn’t going to work out of the box; it would take serious configuration and push me further than I had gone before. I settled on Payload CMS as the base, and used GitHub Copilot and Claude Code, two different AI coding tools, to help get me there. From start to finish, the project took about 10 days: roughly half proving the concepts, the other half building it out.
The thing to remember is that AI-assisted development, at least when it works well, does not mean ceding control to the AI systems. The value is in your ability to direct them effectively and to recognize when things may be going sideways. The best analogy I have is manager and direct report: I was not writing code; I was directing work, reviewing output, and intervening when the work headed in the wrong direction.
Some may dismiss this entire endeavor as “vibe coding.” I do not see it that way. I did not say, “GitHub Copilot, write me a CMS for my Jekyll site,” and accept whatever it returned sight unseen. I knew where the project needed to end up, which meant I could recognize when Copilot was veering off course. I broke the work into scoped tasks with clear deliverables and documentation references. After the work came back, I reviewed it, caught problems, and redirected. That is the full loop: direction, delegation, review, and correction. That is not vibes; that is management.
Very early in this project, I sensed that Copilot was relying on what its training data knew rather than on what the current documentation said. Payload CMS v3 has a great module for handling scheduled posts, called the jobs queue, but I noticed Copilot building something completely bespoke and ignoring the v3 patterns. Copilot was doing what it always does: writing code confidently from its training data, even when that data is outdated or not appropriate for the task at hand. When we tested Copilot’s bespoke solution, we found that it (a) automatically published draft posts that had been set for future publishing, and (b) published scheduled posts every minute, rather than at the time you wanted.
Had I not done my homework and understood Payload CMS v3’s patterns, I would have gone along with Copilot’s recommendation to build something bespoke. My guess is that v3, being new, hadn’t made it into the models powering GitHub Copilot. Training data cutoffs are a real thing with AI systems, and the systems are not always transparent about their own knowledge gaps. Because I knew the v3 patterns existed, I could intervene, point Copilot at the current Payload CMS v3 documentation, and get us back on our way. After ingesting the v3 documentation, Copilot understood its task and implemented the v3 patterns successfully. The rest of the process was relatively smooth.
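I won’t be walking through my actual code in this series, but for the curious, the v3 pattern looks roughly like the sketch below. Treat it as illustrative only: the collection name, database adapter, and cron interval are stand-ins, not my real configuration.

```typescript
// payload.config.ts: a minimal sketch of Payload CMS v3 scheduled
// publishing, based on the v3 documentation; not my actual config.
import { buildConfig } from 'payload'
import { mongooseAdapter } from '@payloadcms/db-mongodb' // illustrative adapter choice

export default buildConfig({
  secret: process.env.PAYLOAD_SECRET || '',
  db: mongooseAdapter({ url: process.env.DATABASE_URI || '' }),
  collections: [
    {
      slug: 'posts', // hypothetical collection name
      versions: {
        drafts: {
          // v3's built-in scheduled publishing, backed by the jobs queue;
          // drafts stay drafts until their publish time arrives
          schedulePublish: true,
        },
      },
      fields: [{ name: 'title', type: 'text', required: true }],
    },
  ],
  jobs: {
    // Process queued jobs (including scheduled publishes) on a cron
    // schedule you control, rather than a hard-coded every-minute check
    autoRun: [{ cron: '*/5 * * * *', queue: 'default', limit: 10 }],
  },
})
```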
That’s worth repeating: The project was successful because I was checking Copilot’s work and could intervene. It involved knowing what the system could do, giving Copilot clear instructions, and having Copilot write its own context file along the way in a positive feedback loop. After 10 days of coding, that context file is nearly 850 lines of Markdown, and it is still being revised as code is added or edited. It also involved pasting any errors from the terminal console back into the Copilot window so it could troubleshoot its own work.
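To give a flavor of what that context file holds, here is an invented excerpt in the same spirit; the real file is far longer and specific to my environment:

```markdown
## Stack
- Payload CMS v3: use v3 patterns only, never v2 examples from training data
- Jekyll site: posts live as Markdown files in the site repository

## Hard-won rules
- Scheduled publishing goes through the built-in jobs queue, not a bespoke timer
- When a terminal error is pasted in, diagnose it before proposing new code
```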
Bigger still, the project drew on knowledge from previous work, even if none of it was on this scale. I already had experience with some of the deployment tools, including GitHub Actions, the integration tooling, and the VPN product in my environment. That advance knowledge let me guide the development so it worked in the ways and patterns I needed it to.
This is not the glamorous (or dystopian) future that AI boosters (or doomers) predict, with AI doing everything and putting us all out of work. AI did the things I told it to do under general supervision, while I retained the creativity and directorial oversight of the development tasks. I had to ask the questions; I had to know when to step in; I had to know when to point the system at the correct documentation. If I had just said “go” to GitHub Copilot, this would not have worked at all.
This was by far the biggest software project I have done, and it stretched my learning considerably. Even though AI tools wrote the code, it was still hard work: I had to think about how the pieces needed to fit together, how my environment would play with the software, and other considerations the AI system could not have known about.
Along the way, I learned a lot, not just about getting the most out of AI tools, but about software development practices, project management, and staying healthy while creating and shipping a project. If you’re hoping for a tutorial on building a CMS for your own Jekyll site, or on prompting your AI system to spit out what you want in one or two takes, I regret to say you’ll be disappointed. The essays over the next two weeks share a practitioner’s experience; they do not dive into the code line by line or walk through my process step by step.
This series is an honest accounting of the project: what went well, what didn’t go so well, and what I learned about the process, and about myself, along the way.