Hi all, I have to admit I have basically no experience outside of the Python data ecosystem that I typically work in.
I’m not formally trained in the discipline of computer science, but it’s always been a hobby of mine, and I enjoy tinkering with old machines. I’ve been using VS Code to do my edits to Text Sweeper over the years, and then tested in Virtual T. So in addition to the deployment scripts I vibe coded into existence this morning, I spent most of the afternoon working on a VS Code extension for M100 BASIC:

https://marketplace.visualstudio.com/items?itemName=Grimakis.trs80-basic

Feel free to install and check it out. It’s pretty simple at this point, but it already seems somewhat useful: it has typeahead and info hovers, and it can catch some Type Mismatch errors just from the syntax. Please let me know if you have any questions.

On Sun, Dec 7, 2025 at 1:47 PM Joshua O'Keefe <[email protected]> wrote:

> > On Dec 7, 2025, at 7:28 AM, George M. Rimakis <[email protected]> wrote:
> >
> > I hope everyone is well. As new AI-Agent coding tools and models are released and improved, I've started to adopt them in my professional life and to some extent personally as well. I've tried out Claude Code and OpenAI's GPT Codex, with a handful of models on both sides.
>
> > On Dec 7, 2025, at 9:35 AM, Scott McDonnell <[email protected]> wrote:
> >
> > Use the AI tools just like you do traditional programming. Break it into functions, modules, and prototypes. This gives it one thing to do at a time. You describe the inputs, the outputs, and the expected processing.
>
> I've mentioned before I have some level of professional experience on the back end of LLMs. This keeps me pretty wary of the pitfalls of using them thoughtlessly, but they're great tools within their limitations. The best way I've been able to use these things in a software development context is to treat them as junior developers: assume they know very little (they do!), are worse at the job than you are, and are prone to jumping the gun (they are!).
> Sometimes the smaller, more focused open-weight models of the kind you can run at home are easier to keep on track. There's the additional advantage of never "running out of tokens," which can be helpful considering how much back-and-forth winds up being necessary in the inference context over time. Context sizes get expensive fast with the big providers: you're chucking the entire conversation in for processing each time you reply.
>
> It's never occurred to me to do LLM-assisted development on old-school systems, and I'm really interested in seeing how that goes for folks. I might try my hand at some point. I don't have the 8080/85 chops to have a device assist without imposing a bunch of debugging on me, particularly on a platform I doubt any LLMs have a ton of training on, but BASIC shouldn't be an issue at all.
>
> Very cool project. Please share as much of the experience as you're comfortable sharing!
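P.S. For anyone curious what "catching Type Mismatch from syntax alone" can look like: in M100 BASIC a string variable name ends in "$", so an assignment like `10 A$ = 42` is wrong on its face, no execution needed. Here's a rough TypeScript sketch of that kind of check. To be clear, this is a simplified, hypothetical illustration (the function name and regex are mine, not the extension's actual code, which handles more cases):

```typescript
// Hypothetical sketch: flag one class of M100 BASIC Type Mismatch purely
// from syntax. A variable whose name ends in "$" holds a string, so a bare
// numeric literal on the right-hand side of its assignment is a mismatch.
function findStringMismatch(line: string): string | null {
  // Matches e.g. `10 A$ = 42` or `A$=3.14`: a string variable assigned a
  // numeric literal at the end of the line. Deliberately narrow; a real
  // checker would parse expressions, not pattern-match.
  const m = line.match(/([A-Z][A-Z0-9]*\$)\s*=\s*(\d+(\.\d+)?)\s*$/i);
  return m ? `Type Mismatch: numeric value ${m[2]} assigned to ${m[1]}` : null;
}

// findStringMismatch('10 A$ = 42')   -> flags a mismatch
// findStringMismatch('10 A$ = "HI"') -> null (string into string is fine)
// findStringMismatch('10 A = 42')    -> null (numeric into numeric is fine)
```

In the extension this sort of result would feed a VS Code diagnostic so the squiggle shows up as you type.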
