For what it’s worth, I’m not a Software Engineer. In fact, my academic background is in finance.
However, I’ve been an amateur computer hobbyist from a very young age, so I’ve always had some exposure to programming. Professionally, I moved into the data engineering space over the years, building ETL for large databases, so my wheelhouse is really Python, SQL, and what I would call the modern data stack.

So, to B9’s point, I 100% am not going to pretend I looked at Claude’s change to VirtualT. I had no desire to learn C++ this week to satisfy my curiosity about LLM agents’ ability to write BASIC, although I’m sure it’s something I should take the time to sit down and familiarize myself with.

But I think my personal experience highlights the value I have seen professionally as well. LLMs generate code that can validate hypotheses rapidly. They can help build a PoC for an idea in hours instead of days, without the need for agile teams, project managers, and sprint planning. After an idea is proven, you can throw away all the LLM code if it’s crap. It honestly doesn’t matter.

For me, I didn’t set out to fix VirtualT’s socket server, and I don’t want to be responsible for the fix :D But in less than an hour, that issue was in my rear-view mirror and I was able to progress to the thing I actually cared about: getting an LLM to interact with a BASIC program in real time, without a human in the middle. Watching GPT not just type LOAD “STOCKS.BA” and RUN, but then proceed to play the game, thoroughly test all three variations, and debug any issues, was exactly what I was envisioning.

So yes, I have been “vibe coding”. I don’t really look at the code. The VS Code extension I built is written in Node, and I don’t know a thing about JS. Granted, I do ask Codex/Claude questions about the underlying architecture and design. I treat the LLMs like I treat employees: I ask them questions about the decisions they made, and I ask them to justify those decisions. Sometimes I notice that GPT or Claude is doing things that don’t make sense to me.
For example, implementing the same “logic” in multiple places rather than refactoring existing code to be more generic. I noticed that my tokenizer and de-tokenizer were essentially using two different sources of truth for the token table. Both copies of the table were correct, but I nonetheless made GPT refactor to use one table and simply invert the mapping for the other direction.

Another example: I found that a specific case was working correctly in my Line Renumber feature but incorrectly in my Packer feature. I had to ask GPT, basically, “Well, what the heck happened? Shouldn’t we be re-using the same program flow analysis for both?” Turns out… that was not the case. So I asked it to refactor, we went over a few proposals, and I had it implement the one that seemed most logical from a design perspective. But I still don’t know a lick of JavaScript.

So for me, “vibe coding” ends up being more like “managing” an LLM agent. I tell it what to do, and I challenge its work and its recommendations. I don’t just blindly go along with everything it suggests. But if I am getting deep into the code it’s writing, it means something has gone terribly awry.

-George

On Tue, Dec 30, 2025 at 5:55 PM B 9 <[email protected]> wrote:

> On Sat, Dec 27, 2025 at 11:36 AM John R. Hogerhuis <[email protected]> wrote:
>
>> [...] But in this case George is a programmer by my reckoning and the AI
>> produced a reasonable (makes sense) change request which resolved a
>> problem.
>>
>> Plus there is the other side to bug fixes inflicted on programmers: bugs
>> inflicted on users. Users just want the program to work. If there is a
>> new way they can contribute a little labor in code generation and
>> testing, it seems likely to help more than hurt. Particularly users who
>> are actually programmers with enough taste to know that a change is a
>> waste of time.
>
> I agree that AI could be very helpful when the patch reporter is a
> programmer with taste. However, wasn't this an experiment in VIBE CODING?
> Perhaps I misunderstood George's intent, but when he said he was going to
> be "vibe coding", I took that to mean he was going to try to *not* be a
> programmer.
>
> “Vibe coding” was coined less than a year ago by Andrej Karpathy, a
> co-founder of OpenAI, who tweeted, "There's a new kind of coding I call
> 'vibe coding', where you fully give in to the vibes, embrace
> exponentials, and forget that the code even exists". Since then he's gone
> on to explain it more fully. One useful rubric is, “It's not vibe coding
> if you look at the code.” There's a good article describing the
> difference between AI-assisted coding and vibe coding here:
> https://simonwillison.net/2025/Mar/19/vibe-coding/
>
>> Looking on the bright side, I think these developments may actually
>> finally help deliver on the open source promise: the idea of everyone
>> being empowered by having the code, and the benefit to all of
>> programmers contributing changes back.
>
> You may very well be right. I already see LLM-assisted programming
> opening a door for people. I believe we are in a transition stage similar
> to when high-level languages like BASIC came out. It certainly gives more
> people power over their computers, and I think it will be especially
> valuable for people who want to learn to code or understand someone
> else's code. As for "vibe coding", I'm skeptical it will ever live up to
> the hype, but I like that non-programmers are feeling empowered, and I
> especially appreciate that, with pure "vibe coding", no human will ever
> have to read the AI-generated code. Finding errors in AI-generated code
> can be a pain since, right or wrong, generative AIs always present
> statistically plausible output.
>
> All this may be moot for simple bugfixes, though.
>> As programmers integrate these tools into their own workflow, there will
>> be far fewer easy fixes for users to contribute, because the programmer
>> with his tool chain will already have found and applied them.
>
> That was precisely what I had been wondering. Will developers actually
> find vibe-coded patches worthwhile, given the effort it takes to
> understand them and integrate them cleanly into their code? For now, it
> seems the answer is "yes".
>
> —b9
>
> P.S. Brian: I totally understand your reaction. When I saw the long paste
> from Claude, I was reminded of how so much AI slop is already wasting my
> time. I'm glad Ken pointed out that he did ask for it.
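As an aside, the token-table refactor George describes earlier, one keyword-to-token table acting as the single source of truth, with the de-tokenizer's table derived by inverting it, can be sketched in Python roughly like this. The keywords and token byte values below are made up for illustration; they are not the actual Model 100 BASIC tokens:

```python
# One table is the single source of truth: keyword -> token byte.
# (Hypothetical values, not the real Model 100 BASIC token table.)
KEYWORD_TO_TOKEN = {
    "PRINT": 0x80,
    "GOTO": 0x81,
    "IF": 0x82,
}

# The reverse table is derived by inverting the mapping, so the
# tokenizer and de-tokenizer can never drift out of sync.
TOKEN_TO_KEYWORD = {tok: kw for kw, tok in KEYWORD_TO_TOKEN.items()}

def tokenize(words):
    """Replace known keywords with their token bytes; pass others through."""
    return [KEYWORD_TO_TOKEN.get(w, w) for w in words]

def detokenize(items):
    """Replace token bytes with their keywords; pass others through."""
    return [TOKEN_TO_KEYWORD.get(i, i) for i in items]
```

With a single table, a round trip through `tokenize` and `detokenize` is correct by construction, which is the point of the refactor: two hand-maintained copies of the table can both be "correct" today and silently diverge tomorrow.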
