On Wed, Oct 20, 2021 at 3:07 PM Jeffrey Birt <[email protected]> wrote:

> Renaming the files is not the issue.
>

If, when you place the files on the backpack, you name ASCII-formatted files
with a .DO extension and tokenized files with a .BA extension (regardless of
what they were named when you got them from the Internet), you will avoid
all of these problems.

This is what I mean when I say that renaming the files can correct the
issue.

If the goal is simply to make files load faster, then converting them en
masse to tokenized format is a reasonable goal. A command-line tokenizer
like Mike's or Bob's is the easiest way, and it will work for nearly all
files. The only 100% reliable way is to use the tokenizer of the Model T
itself, which you could do by scripting VirtualT via its debug socket.


> The issue is that many text files are incorrectly named as .BA. As you
> know if you try to load a text file labeled as .BA the M100 will try to
> tokenize it on the fly, and it is not fast enough to keep up.
>

My habit, when I take a BASIC file from the Internet, is to try to open it
in a text editor. If it has a .B* extension but I can read it as plain
text, I rename it to .DO. If it looks like binary gobbledygook and has a
.B* extension, then I assume it's tokenized BASIC.

It's not a matter of speed; it's a matter of how the information is
encoded. If you have .DO-encoded information mislabeled as .BA, the Model T
expects it to be tokenized already. It will not try to tokenize it. It will
treat it as already tokenized and simply poke it straight into RAM, which
corrupts the RAM file system.
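That eyeball check can be scripted. Here is a rough sketch of the idea:
tokenized Model 100 BASIC stores keywords as bytes with the high bit set,
while an ASCII listing should be almost entirely printable text plus
CR/LF/TAB. The 99% threshold is my own guess at where "readable" ends and
"gobbledygook" begins, not anything official:

```python
def sniff_extension(path):
    """Guess .DO (ASCII) vs .BA (tokenized) for a Model 100 BASIC file.

    Heuristic only: tokenized BASIC stores keywords as bytes >= 0x80,
    while an ASCII listing is printable text plus CR/LF/TAB (and maybe
    a trailing EOF marker, 0x1A).
    """
    data = open(path, "rb").read()
    if not data:
        return ".DO"
    # Count bytes that look like plain text: TAB, LF, CR, EOF, printable ASCII.
    printable = sum(1 for b in data if b in (9, 10, 13, 26) or 32 <= b < 127)
    # Essentially all printable -> ASCII (.DO); otherwise tokenized (.BA).
    return ".DO" if printable / len(data) > 0.99 else ".BA"
```

It won't be perfect on pathological files, but it matches the habit above:
readable text gets .DO, anything with token bytes gets .BA.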

For example, LaddieAlpha looks at each file and determines whether it is
formatted as plain text or tokenized BASIC, and when, say, TS-DOS requests
the directory, it presents the filename with the corrected extension on the
fly. That's one way to handle it.
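A sketch of that on-the-fly idea (LaddieAlpha's actual logic may well
differ; the "any high-bit byte means tokenized" test is my simplification):

```python
import os

def corrected_listing(directory):
    """List files with extensions matching their actual content,
    without renaming anything on disk."""
    entries = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if not os.path.isfile(path):
            continue
        data = open(path, "rb").read()
        # High-bit bytes suggest tokenized BASIC; otherwise treat as ASCII.
        looks_tokenized = any(b >= 0x80 for b in data)
        stem = os.path.splitext(name)[0]
        entries.append(stem + (".BA" if looks_tokenized else ".DO"))
    return entries
```

The server presents the corrected names to the Model T, so the client never
sees a .DO mislabeled as .BA, and the files on the host stay untouched.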

-- John.
