Hey there, folks!

A while ago, I wrote a BASIC tokenizer in C and was happy enough with the
results. It let me write a program that was too large to store both the .DO
and .BA files in memory at the same time. I would tokenize the .DO file on
a remote host and download just the resulting .BA file to my Tandy 200.

Unfortunately, it had a major flaw: it could only run on UNIX-like systems
(BSD, GNU/Linux, macOS).

Well, I am now hesitatingly proud to announce that I (may) have fixed it so
it works on Microsoft Windows as well. I don’t actually have a Windows box,
so if people could try it out and stress test it with the weirdest Model T
BASIC programs they can think of, I would greatly appreciate it.

Main page: https://github.com/hackerb9/tokenize
Download: https://github.com/hackerb9/tokenize/releases/latest

To convert an ASCII BASIC document, FOO.DO, to a tokenized BASIC program,
FOO.BA, run:

tokenize FOO.DO

or to save space by deleting all non-essential REM statements and
“crunching” white space, try:

tokenize -c FOO.DO
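
To give a rough idea of what crunching does (this is an illustrative
sketch, not actual output from the program), a listing like

    10 REM Greet the user
    20 PRINT "HELLO, WORLD"
    30 GOTO 20

might come out with the standalone REM line dropped and extra white
space squeezed out, along the lines of

    20 PRINT"HELLO, WORLD"
    30 GOTO20

Text inside quotes is left alone, of course; only the non-essential
spacing and comments go.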

Thanks, everyone, for any help you can give and any bugs you can uncover!

—b9

P.S. There are other, more advanced options for weirdos who want to do
things like create valid programs that could never be tokenized on a Model
T. (E.g., GOTO 10
<https://github.com/hackerb9/tokenize/blob/v2.0.0/degenerate/GOTO10.DO>).
