MGC,

I have no idea why you're so angry. Anyway, there are so many things I could say that I'll just keep it brief and to the point.

1) Regarding your statement:

This thing won't scale. I'd like to see it when you have the 4.5 million records my database contains,
and that is still tiny for all intents and purposes.

For the type of application I'm building, it doesn't need to scale. At most I'll have 25K records. Even with the 100K-record database I've used for testing, it simply flies.

I'm not building anything for the outside world. Just something that serves me well.

2) Regarding:

Absolutely NO ONE suggested moving anything out of "SQLite-land".

What!?!? You mentioned it yourself two days ago:

Stuff it into a sorted flat file.
That would be faster and simpler.

3) Regarding your statement:

As to your 'real good reason' for doing it this way, I'd bet cash money it's crap and based on nothing more than 'Because that's the way I decided to do it, and I'm smart'.

Talk about making things up... you're a funny guy :-)

I'm storing variable-length data, and each record has a very different set of attributes: some may have 1 attribute, others tens of them, perhaps even a hundred. Using a column per attribute is not a good idea. A few days ago I asked about this and Dr. Hipp replied:

The more tables you have, the slower the first query will run
and the more memory SQLite will use.  For long-running applications
where the startup time is not a significant factor, 100s or
1000s of tables is fine.  For a CGI script that starts itself
up anew several times per second, then you should try to keep
the number of tables below 100, I think.  Fewer than that if
you can. You should also try to keep down the number of tables
in low-memory embedded applications, in order to save on memory
usage.  Each table takes a few hundred bytes of memory, depending
on the number of columns and features.

Having two columns (one for the key, the other for the data itself) seems like a good balance between speed and ease of use. I don't care that it doesn't scale, because the intended deployment is 25K records at most, as I said earlier. Even data sets 4x that size work fine.
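
To make that concrete, here is a minimal sketch of the layout I mean (the table and column names below are just placeholders, not my actual schema):

  -- Two-column layout: a key plus a single packed data column.
  CREATE TABLE records (
      key  TEXT PRIMARY KEY,  -- lookup key
      data TEXT               -- variable-length packed attributes
  );

  -- A lookup by key goes straight through the primary-key index:
  SELECT data FROM records WHERE key = 'some-key';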

4) Regarding:

There is absolutely no reason this could not be properly designed and STILL fit any possible need for that ugly packed record.

So you know it all, eh? And you call me arrogant? :-)

I'm very happy with the solution: it's speedy and simple. As for the original question I posted, I'm also glad to report that LIKE and GLOB both work fine.
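
For anyone curious, the pattern queries look roughly like this (same placeholder names as in the sketch above):

  -- In SQLite, LIKE is case-insensitive for ASCII by default,
  -- while GLOB is case-sensitive and uses Unix-style wildcards.
  SELECT key, data FROM records WHERE key LIKE 'abc%';
  SELECT key, data FROM records WHERE key GLOB 'abc*';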

Cheers,

-- Tito
