
A workflow to ship better designed software with LLMs

I've been experimenting with different approaches for shipping my designs with the help of LLMs. I've found a process that is working really well.


The most frustrating part of designing software with LLMs is how shit they are with the details. If you've tried using them as part of your workflow, you know what I mean.

Initially, you're able to generate a basic UI layout instantly. What would take days to do manually in Figma is done while you're replying to a Slack message. That progress feels really exciting. It gives you the sense that you'll be able to be more creative and productive in your work.

But as soon as you start to iterate, instead of excitement you feel frustration and disappointment. The model makes changes you didn't ask for, tells you issues have been fixed when they haven't, and can't revert to a previous step. You go from feeling excited to feeling like you've wasted a lot of precious time.

A workflow that's working for me

For the last 18 months or so, I've been experimenting with different approaches for shipping my designs with the help of LLMs. My goals have been two-fold: ship software that can be used by anyone, and find a workflow that actually helps me ship that software to my liking.

I've experimented with a lot of models and tools, but it's the process that has mattered most. I've finally found a process that is working really well. It's nothing novel, but I haven't seen it talked about much within the design world, so here I am.

Rather than focusing on better prompts, I've focused on using a foundational practice in software development: forking (or branching).

In software development, a good practice is to separate working code from the code you're working on. Called forking or branching, this practice is akin to updating the name of your Figma file as you reach different milestones. But with forking or branching, tools like GitHub make it super easy to go back to a previous, stable version of your code if your updates aren't working as expected.
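
If you've never touched version control, here's a minimal sketch of what that flow looks like on the command line (the branch name is made up, and the GitHub web UI does the same thing with buttons):

```bash
# Start a branch for new work; main stays untouched and deployable
git checkout -b redesign-settings-page

# Make small changes and commit them on the branch
git add .
git commit -m "Update settings page layout"

# Happy with it? Merge it back into main.
git checkout main
git merge redesign-settings-page

# Not happy? Delete the branch and main is still stable.
# git branch -D redesign-settings-page
```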

My biggest breakthrough came when I began "forking" my LLM chat threads in much the same way. When I create a new branch in the codebase, I create a new chat thread at the same time. I keep them in sync, always.

For each update, I iterate intentionally, making relatively small changes. Once the updated app is stable, I deploy the working code to production, create a new code branch, and start a new chat thread.

My LLM of choice is Claude, and the tools I'm using are v0 for the initial UI builds, then GitHub as the home of the source code, Vercel for deployments, and Supabase for the backend.

Here's my approach

When starting a new app from scratch, I've found v0 to be a great tool to generate the initial UI layouts. Out of the box, it comes with the popular UI framework shadcn, and it's very familiar with Tailwind as well.

  1. I create a new Claude chat and instruct Claude to play the role of my co-founder and software architect. I let Claude know I'm the product designer and we're working together in the development, build, and deploy process. I'm honest about my knowledge and capabilities, and that I'm relying on their judgement as an expert software architect.
  2. Before sharing the requirement specs or document, I create a Claude project and a v0 project with the same name. Using the same name allows me to track progress along the way.
  3. I give Claude the product requirements (sometimes it's just an overview) and ask the model to review the specs, analyze what's there, and ask any follow-up questions if needed.
  4. Once Claude and I are clear with each other on what we're doing, I tell Claude I'll be starting with v0 and to give me the instructions to set up the initial codebase. This process goes back and forth: Claude gives me instructions for what to do in v0, I tell Claude what's happening in v0, and Claude gives me further instructions.
  5. Once I have a basic UI layout in v0, I move that code over to GitHub. Why? Because GitHub was made to manage code branches, while v0, MagicPatterns, Figma Make, etc. are not.
  6. Now, anytime I make updates to the app, I'm no longer vibe coding with v0, but just coding. Claude continues to share instructions based on feedback and I update the code directly in GitHub. This is how stable software actually gets built, rather than just vibe coded.
  7. Every time I fork in GitHub, I "fork" in Claude. What does forking in Claude look like? It's just a custom prompt instructing Claude to create an overview of what happened in that chat and a checklist of priorities to carry into the next chat (there's a sketch of this prompt after the list). I instruct Claude to export this overview as a README.md file. I also instruct Claude to export the entire log of prompts, responses, and artifacts from the current chat thread as a separate file called DATE_CHATLOG.md.
  8. If/when I'm ready to design updates to the app, I create a new branch in GitHub, create a new Claude chat, import the README.md and DATE_CHATLOG.md files, and instruct the model to review each, analyze them, and offer next-step suggestions as my co-founder.
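
The forking prompt itself isn't precious, and I tweak it from project to project. As a rough sketch, it looks something like this (swap DATE for the current date):

```
We're about to fork this project. Before we do:

1. Write an overview of what we accomplished in this chat, plus a
   checklist of priorities for the next chat. Export it as README.md.

2. Export the full log of prompts, responses, and artifacts from this
   chat as a separate file named DATE_CHATLOG.md.
```

The first message of the next chat is basically the reverse: attach both files and ask Claude to review them, analyze them, and suggest next steps as my co-founder.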

Vibe coding ≠ shipping code

Working this way has been a breakthrough for me. The most popular vibe coding tools don't tell you how important forking is, and they don't make it easy to do. I imagine that when you spend your time and energy vibe coding, you want more than a pretty prototype. You want to see your ideas ship.

By forking, I'm making progress with stable builds, reverting when I need to, and most importantly, making the kind of progress I expect to make. I still get frustrated, but when that happens I can just fork and move on.


✳️
You can use Claude Code in place of Claude + v0, but I've found that Claude Code burns through a lot of tokens quickly, and because it combines the roles of software architect and developer, you can't really fork chat threads in the way I described. Without a lot of experience developing and shipping code, it's easy to get stuck in an error loop working this way.
