Hey George,
At one point VirtualT had a *working* socket interface that would let
you do almost exactly what you are trying to do ... namely create an
agentic tool interface workflow to VirtualT.
In fact with your first messages in this thread I even fired up VirtualT
to try it! But it seems very slightly broken. I'm sure I could fix it
in short order if I had 2 working arms, but alas I slept on my left arm
wrong 4 nights ago and can barely move it because my shoulder hurts so
much. I am typing this email very slowly using only 2 fingers and a thumb.
The socket interface (when it works) lets you:
1. Load programs from the host to the emulation
2. Cold boot
3. Emulate key presses (i.e. arrow to your app and "press" Enter)
4. Dump the character contents of the LCD
5. Monitor individual changes made to the LCD
6. Other.
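For anyone curious what driving that interface might look like, here is a minimal client sketch. The default port and the command strings are assumptions for illustration only, not VirtualT's actual protocol; check VirtualT's own documentation once the interface is fixed.

```python
import socket

# Hypothetical client for the VirtualT socket interface.  The port
# number and the command strings shown in the comments ("key enter",
# "lcd_dump") are assumptions for illustration -- consult VirtualT's
# documentation for the real protocol.

def send_command(cmd, host="localhost", port=21000):
    """Send one newline-terminated command and return the text reply."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(cmd.encode("ascii") + b"\n")
        return sock.recv(4096).decode("ascii", errors="replace")

# Example (hypothetical commands):
#   send_command("key enter")          # emulate a key press
#   screen = send_command("lcd_dump")  # dump the LCD character contents
```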
I have a Dr. appointment tomorrow to get my arm looked at. Based on how
I feel, *MAYBE* I can get a chance to look at what borked the socket
interface (it *almost* works).
Ken
On 12/22/25 11:47 AM, George M. Rimakis wrote:
Hi All,
So after this discussion, I've been back and forth between work
related projects and fun ones, ironically both relating to using
AI-coding agents.
I decided to use ChatGPT and GPT-Codex to make a game from scratch for
the M100. Honestly, it's just a prototype at the moment, because I
haven't figured out how to exactly make it fun yet. I'll explain what
I did.
1. I used ChatGPT to brainstorm on winter-themed games that could
possibly run on the M100. We settled on a game where you have to
shovel your driveway. It's essentially a tile-moving game with
stacking mechanics / templated obstacle overlays.
2. I used GPT-Codex to write the entire thing in BASIC first. I
created a Codex-Skill which it can invoke as needed to add relevant
information to its own context for whatever task it's working on at
the time. I included various documentation I converted to markdown.
* 8085 Op-Code Manual from Intel
* BASIC Keywords for M100 - and their descriptions on how to use them
* General approaches for integrating BASIC with Assembly
* Sim8085 (an Intel 8085 CPU simulator written in Python), which
Codex can use to assemble its ML subroutines and evaluate them
* Documentation from the Bitchin100 Wiki (System Memory Map,
Variable Memory Layout for BASIC, etc)
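The simulator step is the interesting one: being able to execute a subroutine in Python gives the agent a feedback loop without touching the emulator. The toy below shows the flavor of that idea. It is a from-scratch sketch supporting only a handful of opcodes, and is not Sim8085's actual interface.

```python
# Toy 8085 interpreter sketch: just enough mnemonics (MVI, ADI, ADD,
# MOV, RET) to sanity-check a trivial routine.  This is illustrative
# only -- the real Sim8085 tool mentioned above is assumed to be far
# more complete.

def run_8085(program):
    """Execute a tiny subset of 8085 mnemonics; return the registers."""
    regs = {"A": 0, "B": 0, "C": 0}
    for line in program:
        op, *args = line.replace(",", " ").split()
        if op == "MVI":          # MVI r, imm  -> load immediate
            regs[args[0]] = int(args[1]) & 0xFF
        elif op == "ADI":        # ADI imm     -> A += imm (8-bit wrap)
            regs["A"] = (regs["A"] + int(args[0])) & 0xFF
        elif op == "ADD":        # ADD r       -> A += r
            regs["A"] = (regs["A"] + regs[args[0]]) & 0xFF
        elif op == "MOV":        # MOV rd, rs  -> register copy
            regs[args[0]] = regs[args[1]]
        elif op == "RET":        # end of subroutine
            break
    return regs

# Example: add 5 and 7, leaving the result in A.
regs = run_8085(["MVI B, 5", "MVI A, 7", "ADD B", "RET"])
print(regs["A"])  # 12
```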
3. Most of the issues come from the fact that Codex cannot directly
run BASIC. Thus it does things like creating test programs in BASIC
that print debug info onto the screen, then asking me to read it back
to it when running in VirtualT. If there were a way to expose VirtualT
to Codex directly, and allow it to "see" the screen prints and provide
keyboard input, it would be able to iterate on things by itself.
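That iterate-by-itself loop is simple in outline: send keystrokes, read the screen, decide what to do next. A sketch, where read_screen and send_keys are stand-ins for whatever VirtualT would actually expose (their names are assumptions, not a real API):

```python
# Sketch of an agent feedback loop against an emulator.  read_screen()
# returns the current screen text and send_keys() injects keystrokes;
# both are hypothetical stand-ins for a real VirtualT interface.

def run_until(read_screen, send_keys, marker, max_steps=100):
    """Drive the emulator until `marker` shows up on the screen."""
    send_keys("RUN\r")                 # start the test program
    for _ in range(max_steps):
        screen = read_screen()
        if marker in screen:
            return screen              # debug output is now visible
    raise TimeoutError("marker never appeared on screen")

# Toy usage with fake I/O standing in for the emulator:
frames = iter(["LOADING...", "DEBUG: SCORE=42"])
print(run_until(lambda: next(frames), lambda k: None, "DEBUG:"))
# prints "DEBUG: SCORE=42"
```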
I think many would be surprised at how well it performs given the
limitations. A $20 a month GPT subscription goes a long way with
Codex. It used up around 80% of my weekly token limit making this game.
It was able to rewrite some of the slow initial-generation loops in ML
and integrate them. Likewise, I was able to understand the ALTLCD print
routine I used in Text Sweeper and incorporate it into this game, as
well as populate ALTLCD.
When I started to get concerned about the amount of RAM the game uses,
GPT was able to suggest bit-packing together two separate arrays into
one, and then modified the ML subroutines and BASIC to handle the
adjusted structure.
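Bit-packing two small-valued arrays into one is a classic RAM saver. A sketch of the idea in Python (the real version would live in the game's BASIC/ML, and the field widths here are assumptions, purely for illustration): if each tile's two values both fit in a nibble, one byte holds the pair.

```python
# Sketch of bit-packing two parallel arrays into one byte array.
# Assume (for illustration only) each tile has a snow depth 0-15 and
# an obstacle id 0-15: both fit in one nibble, so one byte holds both.

def pack(depth, obstacle):
    """Pack parallel 0-15 arrays into one bytearray (depth in low nibble)."""
    return bytearray((o << 4) | d for d, o in zip(depth, obstacle))

def unpack(packed):
    """Recover the two arrays from the packed form."""
    depth = [b & 0x0F for b in packed]
    obstacle = [b >> 4 for b in packed]
    return depth, obstacle

tiles = pack([3, 0, 15], [1, 2, 0])
print(unpack(tiles))  # ([3, 0, 15], [1, 2, 0])
```

The ML subroutines then only need a mask and a shift to get at either field, which is cheap on the 8085.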
Overall the experience was positive. I would say that with High-Thinking
enabled, complex problems took a long time to solve (5-10 minutes), but
Codex was able to iterate and test, iterate and test, until it was happy
with the outcome. It got to the point where I went downstairs, did the
dishes, made a cup of coffee, and came back upstairs to review the work.
Feel free to try it out from the release here, already minified and
tokenized:
https://github.com/Grimakis/Snowed-In/releases/tag/v0.1.0
If you want to see the readable source:
https://github.com/Grimakis/Snowed-In/blob/master-protected/SNOW.DO
https://github.com/Grimakis/Snowed-In/tree/master-protected/assembly
-George
On Fri, Dec 12, 2025 at 4:13 AM B 9 <[email protected]> wrote:
On Sun, Dec 7, 2025 at 11:37 PM Joshua O'Keefe
<[email protected]> wrote:
In any case, if you'd like to get in touch I'm happy to help.
One of my hobbies a few years ago was helping folks with
normal, small-scale home systems be able to do LLM inference
with readily available tools. For everyone else: I'll let
folks know if anything list-relevant comes out of poking some
of my code generation models. I'm... not especially hopeful.
The exercise is fun, though.
I'm very curious how this exercise will turn out. It is an
interesting challenge as proper programming back in the day often
involved discovering clever, problem-specific optimizations to
save space and time. Our Model T's are compute engines from that
more civilized age when people handcrafted elegant assembly code
that was precise, succinct, often beautiful, but almost never
reusable. Generative AI works on statistics and such idiosyncratic
patterns may be difficult to discern. Even worse, the code was
rarely well commented, and so the mapping from natural language to
implementation may be especially difficult for an LLM.
While training on BASIC (or C) would be easier, it is the Model
T's machine language which is the most tedious to code and where
an LLM would (theoretically) be the most helpful. I agree with
John that the lack of a large assembly language corpus may be a
significant barrier. (There's the ROM disassembly with commentary
<https://github.com/eriktier/RomT102Disassembly/blob/master/t102rom.asm>,
but what else?)
Please do keep us posted with any developments.
—b9