Pete Hodgson

Software Delivery Consultant

Vibe-coding 101

March 10, 2025

After a few hours of experimenting with Claude Code, I’m starting to take Agentic Coding/Vibe Coding seriously.

Based on my experimentation, the “full vibe” hype - where an AI builds software entirely unsupervised - is pretty ridiculous. But these agentic systems ARE extremely capable, and even as they stand today I believe they can give every developer a significant productivity boost when wielded appropriately.

So, let’s talk about:

Your new AI pair-programmer

My mental model for these agentic coding assistants is to treat them like a weirdly knowledgeable new hire.

They have a ridiculously broad amount of knowledge - all the languages, all the frameworks, all parts of the stack - but they have somewhat questionable taste. Some of their design decisions are reminiscent of a super-productive junior engineer who’s eager to please, mixed with a senior engineer who tends to over-engineer things. They also have a tendency to leap to an incorrect conclusion and then continue to double down on that bad path - they’re very bad at taking a beat and reconsidering their approach.

This means that you’ll be much more successful with agentic coding when you position yourself to give a lot of feedback and course-correction on their decisions. Think of it as a pairing session with an extremely productive new hire who’s very smart but has a bit of a tendency to go off the rails.

Tips

For “intern-level tasks”, go Full Vibe Mode

These agentic systems are ideal for coding chores where the solution space is well-constrained: removing an old feature flag from a codebase, fixing a broken unit test, or any tedious programming where an existing pattern is already established and you’re just doing more of the same.

In this context, try going “full vibe” and having the tool do the whole thing. This is the equivalent of throwing the task over the wall to a new intern and asking them to send you the PR once they’re done. They might not get it 100% correct, but they’ll probably get it pretty much done.

An example prompt:

The new_checkout_flow feature flag has now been fully launched - please remove it from our codebase.

After it’s removed, look for any dead code that can now be deleted, and opportunities for code simplification. Once you’re done, create a commit.

Any questions before we get started?

Note that you won’t need to tell your AI partner where the flag lives; it can start off by grepping through the codebase to find usage of that flag, just like a human engineer.
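As a concrete sketch of what that chore looks like (the types, flag helper, and flow functions here are all hypothetical, invented for illustration), removing a fully-launched flag mostly means collapsing a conditional down to its enabled branch:

```typescript
// Hypothetical checkout code - not from any real codebase.
type Cart = { items: string[] };

function newCheckoutFlow(cart: Cart): string {
  return `checked out ${cart.items.length} item(s) via the new flow`;
}

// Before the flag was fully launched, checkout() branched on it:
//
//   if (featureFlags.isEnabled("new_checkout_flow")) {
//     return newCheckoutFlow(cart);
//   }
//   return legacyCheckoutFlow(cart);
//
// After launch, the conditional collapses to the enabled branch, and
// legacyCheckoutFlow (plus anything only it called) becomes dead code:
function checkout(cart: Cart): string {
  return newCheckoutFlow(cart);
}
```

The interesting part of the review is the second-order cleanup - the dead code that the flag removal exposes - which is exactly what the example prompt asks the AI to look for.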

Once your AI partner is done, carefully review the code, clean up any issues, then move on to the next task.

Point to Helpful Context

Remember that your AI partner has no prior knowledge of your codebase or of your team’s particular design philosophy or coding conventions. It’s very capable of searching through the codebase to learn things, but it isn’t always going to know all the places to look, and just like a new hire it can’t distinguish between good vs bad examples in your codebase.

At the start of a pairing session, point your AI partner at resources that will help it - technical design docs, relevant parts of the codebase. When relevant, let it know which patterns and libraries it should be using or avoiding.

If your AI tooling isn’t able to read your internal wiki (yet!), just print the relevant wiki pages to HTML and reference that.

An example prompt:

I’m working on implementing the TDD described here: ~/tmp/async-tasks-tdd.html. I’ve already implemented some of it - check out the backend/async_tasks directory.

I’d like you to help me implement the next step - invoking tasks at their scheduled time. We always use jodatime to manage datetime math. … …

Maintain an AI Onboarding Doc

Every time you start a new coding session it’s as if you are pairing with a brand new hire who’s never seen your codebase. Your AI needs an onboarding doc, and most agentic coding tools support this out of the box - Claude Code has CLAUDE.md, Cursor has Rules for AI.

It’s particularly helpful to document the principles, conventions and “elements of style” that your team has established over time.

A section from a hypothetical CLAUDE.md

...
- We are big fans of otel, and always use spans and events rather than logging.
- We used to use moment for datetime math, and it's still in some areas of the code, but we now prefer date-fns. We never use native Date()!
- We prefer explaining variables and small well-named methods over explanatory comments.
- We try to only assert one thing in any given unit test.
- We avoid using mocks where possible.

- To run the linter: `./go lint-all`
- To run a specific test: `./go test path/to/file`
...
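To make one of those style rules concrete, here’s a hypothetical before/after for the “explaining variables” convention (the User type and the purchase rules are invented for illustration):

```typescript
// Before: a dense condition that leans on an explanatory comment.
//
//   // adult, US-based, account in good standing
//   if (user.age >= 18 && user.country === "US" && !user.suspended) { ... }
//
// After: explaining variables make the comment redundant.
type User = { age: number; country: string; suspended: boolean };

function canPurchase(user: User): boolean {
  const isAdult = user.age >= 18;
  const isInSupportedRegion = user.country === "US";
  const isInGoodStanding = !user.suspended;
  return isAdult && isInSupportedRegion && isInGoodStanding;
}
```

Capturing a rule like this in your onboarding doc - ideally with a tiny example - gives the AI something concrete to imitate.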

Pro tip: you can ask the AI to update this doc as you go!

Prompt for Questions

AI coding assistants don’t tend to ask clarifying questions; they have an overly strong bias for action and will generally just start writing code.

You can counteract this by explicitly ending your prompt with something like “What extra information do you need? What questions or concerns do you have before we get started?”

Example prompt:

I need to create a report on the number of accounts created each week this year, broken down by bucketed account value. Don’t forget, you’ll find the current database schema in schema_dump.sql.

Do you have any queries you want me to run first so you can understand the shape of the data? Any other questions or concerns before we start?

The AI will often respond with some thoughtful questions that fill in gaps in the context you’ve provided. Worst case, the AI will feel compelled to ask mediocre questions which you can just ignore!

Defensive Code Review

Never forget that the AI’s overriding goal is to write code that looks correct. Most of the time it will achieve that goal by writing code that is actually correct, but when it doesn’t the code will still look correct. Because of this, you have to be way more vigilant when reviewing code from an AI.
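As a hypothetical example of “looks correct”, here’s the kind of clean, plausible helper an AI will happily produce - with an edge case a skimming reviewer can easily miss:

```typescript
// Reads cleanly and works for the happy path...
function mean(xs: number[]): number {
  return xs.reduce((total, x) => total + x, 0) / xs.length;
}

// ...but mean([]) silently returns NaN (0 / 0), which can propagate
// deep into downstream calculations before anything visibly breaks.
```

Nothing about this code looks wrong at a glance - which is precisely why AI-written code deserves a more suspicious review than you’d give a trusted teammate.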

You are also way more likely to be reviewing code that doesn’t stick to your codebase’s conventions or standard practices - this code is written by a metaphorical brand new hire, don’t forget! If you don’t stick to your guns on correcting the AI’s code, your codebase will slowly drift towards the gray mean of every codebase on the internet.

Break non-trivial problems into a Chain-of-Vibes, with you in the loop

I would not suggest Full Vibe Mode for anything larger than the “intern-level” tasks I described above. But that doesn’t mean you can’t use agentic coding. Instead, break the work into a series of steps, ensuring that each step has a pause point where you can review the AI’s work so far and course-correct as needed.

Think of this as giving a meaty task to a junior engineer on your team. You wouldn’t just throw the whole thing to the engineer and then check back in two weeks (would you?!). You’d check in with the engineer every day or so to make sure they’re on the right track (right?!).

Plenty more to come

I’ll dig into specific Chain-of-Vibes tips in my next post, and also talk about some specific weaknesses that agentic assistants have today, and how to work around them.

I’m convinced that working in this way will be a required skillset in the very near future. If you haven’t done so already, you should definitely start experimenting with Claude Code, Cursor Agent, or one of the other agentic systems. Pick a simple intern-level task right now, try out these tips, and let me know how it’s going!