Nifty! I had been thinking about how to encode NULLs in a single byte since they are so much more common, but hadn't known about the yEnc offset. Histogram analysis of the most easily available batch <https://github.com/hackerb9/co2do/tree/histogram/histogram/kurtdekker/co> of .CO files suggests that 42 is not an optimal offset. I like the idea of a bespoke offset! Here's a program that can determine the best offset for a single .CO file or for a whole directory of them: histco.py <https://github.com/hackerb9/co2do/blob/histogram/histogram/histco.py>:
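The gist of it is a brute-force search over all 256 possible rotations. The sketch below is a simplification, not histco.py's actual logic: it assumes bytes landing in the printable-ASCII range cost one byte to encode and everything else needs an escape (two bytes), which is only an illustrative cost model.

```python
from collections import Counter

# Assumed single-byte-encodable values (printable ASCII); the real
# encoder's "safe" set may differ -- this is just for illustration.
SAFE = set(range(0x20, 0x7F))

def best_rotation(data: bytes) -> int:
    """Return the offset 0..255 that leaves the fewest escaped bytes."""
    hist = Counter(data)

    def escaped(rot: int) -> int:
        # Count bytes whose rotated value falls outside the safe set
        # and would therefore need a two-byte escape sequence.
        return sum(n for b, n in hist.items() if (b + rot) % 256 not in SAFE)

    # Pick the rotation that minimizes the number of escapes.
    return min(range(256), key=escaped)
```

For machine code full of NULLs, even this toy model picks a nonzero offset, since rotating 0x00 into the printable range saves an escape per NULL.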
$ ./histogram/histco.py testfiles/ALTERN.CO
Unrotated: 4758 bytes. Can save 956 bytes (20.09%)
Rotation +143 => 3802 bytes.
Rotation of +42 would save 871 bytes (18.31%)

$ cd histogram/kurtdekker/co/
$ ../../histco.py
Unrotated: 111237 bytes. Can save 18353 bytes (16.50%)
Rotation +136 => 92884 bytes.
Rotation of +42 would save 6696 bytes (6.02%)

—b9

P.S. A possible lesson for us on rolling our own encoding: while searching for sample .CO files to test my histogram program, I found a ZIP file of Kurt Dekker's games on Bitchin 100 <https://bitchin100.com/m100-oss/archive.html>. Kurt actually released those in his own DEC format <https://github.com/hackerb9/co2do/blob/histogram/histogram/kurtdekker/util/FTU.TXT>. I downloaded the link labeled "everything in one BIG zip", but it did not include any .CO files, so I rolled my own dec2co.sh <https://github.com/hackerb9/co2do/blob/histogram/histogram/kurtdekker/dec2co.sh> program. Later, when I found that bitchin100 *did* have the .CO files, merely misfiled, I was rather surprised to see that three of them did not exactly match mine. It seems there's a bug in the tool Kurt released (FTU.BAS <https://github.com/hackerb9/co2do/blob/histogram/histogram/kurtdekker/util/FTU.BAS>) which occasionally causes it to emit bytes beyond the length specified in the .CO file header.

On Mon, Mar 9, 2026 at 3:24 PM Brian K. White <[email protected]> wrote:
> On 3/9/26 14:03, Brian K. White wrote:
> > Wow, I haven't tested this enough to push it up to github yet (I haven't
> > even tried loading the result on a 100 yet to make sure it actually
> > decodes correctly) but I think I just reduced the output .DO size from
> > 5305 to 4378 just by applying a static offset to all bytes before
> > encoding.
> >
> > Almost a whole 1K out of 5 just from that!
>
> Ran ok. The decode time stayed the same, but the transfer time went down
> and the ram used went down of course.
>
> --
> bkw
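P.P.S. The kind of length check that catches the FTU.BAS bug can be sketched as follows. This assumes the usual Model 100 .CO layout (a 6-byte header of load address, length, and entry address, all little-endian unsigned shorts, followed by exactly `length` bytes of machine code); it is a quick sanity check, not a full validator.

```python
import struct

def co_excess(data: bytes) -> int:
    """How many bytes a .CO image has beyond what its header declares.

    Assumes a 6-byte header: load address, length, entry address,
    each a little-endian unsigned short. A well-formed file returns 0;
    the FTU.BAS bug would show up as a positive number.
    """
    load, length, entry = struct.unpack("<HHH", data[:6])
    return len(data) - 6 - length
```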
