On Sun, Dec 7, 2025 at 11:37 PM Joshua O'Keefe <[email protected]>
wrote:

> In any case, if you'd like to get in touch I'm happy to help. One of my
> hobbies a few years ago was helping folks with normal, small-scale home
> systems be able to do LLM inference with readily available tools. For
> everyone else: I'll let folks know if anything list-relevant comes out of
> poking some of my code generation models. I'm... not especially hopeful.
> The exercise is fun, though.


I'm very curious how this exercise will turn out. It's an interesting
challenge: proper programming back in the day often involved discovering
clever, problem-specific optimizations to save space and time. Our Model
Ts are compute engines from that more civilized age, when people
handcrafted elegant assembly code that was precise, succinct, often
beautiful, but almost never reusable. Generative AI works on statistics,
and such idiosyncratic patterns may be difficult to discern. Even worse,
the code was rarely well commented, so the mapping from natural language
to implementation may be especially difficult for an LLM.
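
To illustrate the sort of idiom I mean, here are two classic 8085 (the
Model T's CPU) size tricks. This is my own toy example, not taken from
any Model T source:

    MVI  A,00H   ; clear A the "obvious" way: 2 bytes, 7 T-states
    XRA  A       ; the idiom everyone actually used: 1 byte, 4 T-states,
                 ; and it clears the carry flag as a bonus
    ORA  A       ; test A for zero: 1 byte, vs. CPI 00H at 2 bytes

None of that intent is visible in the mnemonics alone, and without
comments a statistical model has little to go on.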

While training on BASIC (or C) would be easier, it is the Model T's machine
language that is the most tedious to code and where an LLM would
(theoretically) be the most helpful. I agree with John that the lack of a
large assembly-language corpus may be a significant barrier. (There's the ROM
disassembly with commentary
<https://github.com/eriktier/RomT102Disassembly/blob/master/t102rom.asm>,
but what else?)

Please do keep us posted with any developments.

—b9
