John R. Hogerhuis wrote:
> I suggest for data entry just use TEXT. You can use tabs or commas as field
> separators, CRLF as record separator and then import using tab separated
> CSV on your desktop. Very little overhead this way, maximizing use of your
> T's RAM.

Thanks for the advice. That was the first version I made, but it had a
few problems in my case:
- Not all the data I wanted fit in the 32KB.
- Loading it into a BASIC program to run data reports took about one
minute (and it also limited the data volume, since the data had to
fit twice: once in the file and once in BASIC's RAM).

I ended up using CO files to store my data in binary. It's very
interesting for integers, since they take only two bytes each instead
of 1-5 bytes plus 1 byte for the separator.
I also stored my strings in a separate CO file, so the main data file
has fixed-length records. This allowed me to implement simple
random-access file support.
Most important, I no longer load the files into BASIC's RAM; instead
I parse the directory structure to find the file's address in storage
RAM and PEEK/POKE directly into it.
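
To give an idea, the lookup boils down to something like this (a
simplified sketch, not my actual code; the directory constants come
from the published Model 100 memory map and are worth double-checking,
and the file name, record length, and record number are made-up
examples):

10 ' Sketch: find a RAM file, then PEEK a record directly
20 ' Assumed map: directory at 63842 (F962H), 11-byte entries
30 ' (flag byte, 2-byte start address, 8-byte padded name),
40 ' with flag 255 assumed to mark the end of the directory
50 F$="DATA  CO":A=63842 ' 6-char name + 2-char extension
60 IF PEEK(A)=255 THEN PRINT"NOT FOUND":END
70 N$="":FOR I=3 TO 10:N$=N$+CHR$(PEEK(A+I)):NEXT
80 IF N$<>F$ THEN A=A+11:GOTO 60
90 B=PEEK(A+1)+256*PEEK(A+2) ' file's start address in RAM
100 L=8:R=42 ' record length in bytes, record number (0-based)
110 V=PEEK(B+R*L)+256*PEEK(B+R*L+1) ' 2-byte little-endian int
120 PRINT"RECORD";R;"FIELD 1 =";V

Once the start address is known, fixed-length records turn every
access into a multiply and two PEEKs, which is why it's so much
faster than reading the file through BASIC.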

> If you want to get really fancy you could make a BASIC program that
> would process the DO file and format search results on screen.

I actually re-implemented the "DO file forms" from Lucid-Database, as
I found it a very clever way to do this.
I cheated a little with "formula fields" by basically using them as
hook entry points for BASIC subroutines (so it's more a DB API for
BASIC than a standalone DB engine), but it does the job.
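
The dispatch itself is trivial; the idea is roughly this (a toy
sketch with made-up hook numbers and subroutines, not my actual
code):

10 ' The form stores a hook number H for a formula field;
20 ' the engine just dispatches to a BASIC subroutine
30 H=2 ' hook number parsed from the form (example value)
40 ON H GOSUB 1000,2000,3000
50 PRINT F$ ' each hook leaves its result in F$
60 END
1000 F$="RESULT OF HOOK 1":RETURN
2000 F$="RESULT OF HOOK 2":RETURN
3000 F$="RESULT OF HOOK 3":RETURN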

At the moment I have 809 records, with 15430 bytes of data plus 1682
bytes of index, and the "loading" takes about one second (the slowest
part is loading the form.do, which uses BASIC's file functions).
Finding a record by record number is too quick to be measured, and
searching for a record by string takes between 0 and 4 seconds (I'll
probably try rewriting this part in assembly as soon as I've read a
good book about it).
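
The search is conceptually just a linear scan over the string file,
something like this simplified toy (not the real code; S, W, and K$
are made-up values):

10 ' Scan each record's string field and test it with INSTR
20 S=58000 ' string file start address (example)
30 W=16 ' fixed field width in bytes (example)
40 NR=809:K$="SMITH"
50 FOR R=0 TO NR-1
60 T$="":FOR I=0 TO W-1:T$=T$+CHR$(PEEK(S+R*W+I)):NEXT
70 IF INSTR(T$,K$)>0 THEN PRINT"MATCH IN RECORD";R
80 NEXT R

Most of that time probably goes into building T$ character by
character (and the garbage collection it causes), which is exactly
what assembly should eliminate.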

This is a fun project so far, and since it's really a pet project
with no deadline, I tried to do as much as possible using only the
M100, instead of pre-processing the data files on the PC, just for
the fun of it :o)
Because of this, when I computed the index files used for the
alphabetical search, it took the M100 about 7 hours (it's a good
thing I had an AC power adapter).
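
In case anyone wonders where the hours go: a naive insertion sort in
interpreted BASIC already needs hundreds of thousands of string
comparisons for 809 records. A toy version (not my actual code; S
and W are the made-up constants from the search sketch above):

10 ' Build a table X() of record numbers in key order
20 S=58000:W=16:NR=809:DIM X(NR-1)
30 FOR R=0 TO NR-1
40 K$="":FOR I=0 TO W-1:K$=K$+CHR$(PEEK(S+R*W+I)):NEXT
50 J=R ' shift bigger keys right, then drop R into place
60 IF J=0 THEN 100
70 P=X(J-1):T$="":FOR I=0 TO W-1:T$=T$+CHR$(PEEK(S+P*W+I)):NEXT
80 IF T$<=K$ THEN 100
90 X(J)=X(J-1):J=J-1:GOTO 60
100 X(J)=R
110 NEXT R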

It's not totally finished yet, but if anyone is interested in how I
did this, I'd be happy to share the code and the things I've learned
in the process.

Eric

PS: Before starting this, I also tried T-Base, but it wasn't as nice
to use as Lucid, and entering/editing data was very slow.
