Oh, go ahead, be nasty :-) I'm not in favor of the design, either,
but that's what our client wants. Getting confirmation that it's a
poor design helps, too.
We have previously delivered tools that use CSV files instead of a
database. Once you pay the penalty of reading the entire file and
creating an index of the offsets of each record in the file, the
performance is good.
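The offset-index approach described above can be sketched roughly like this (a minimal illustration, not our actual tool; it assumes one record per line, i.e. no quoted fields containing embedded newlines):

```python
import csv

def build_offset_index(path):
    """Scan the file once, recording the byte offset of each record."""
    offsets = []
    with open(path, "rb") as f:
        while True:
            pos = f.tell()
            if not f.readline():
                break
            offsets.append(pos)
    return offsets

def read_record(path, offsets, n):
    """Seek directly to record n and parse just that one line."""
    with open(path, "rb") as f:
        f.seek(offsets[n])
        line = f.readline().decode("utf-8")
    return next(csv.reader([line]))
```

After the one-time scan, fetching any record is a single seek plus one line parse, which is why random access stays fast even for large files.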
Now that I say that, I realize I could do something similar with SQLite:
create a temp table that is nothing but rowids of my main table,
sorted in the order I want. I could then do the select ... limit x,y;
on the temp table and use the results to look up and cache a page of
data.
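A rough sketch of that temp-table idea (table and column names are made up for illustration): the temp table holds only presorted rowids, so LIMIT/OFFSET scans a narrow table, and each full row is then fetched by rowid, which SQLite resolves directly.

```python
import sqlite3

PAGE_SIZE = 3  # small for the demo; 20K in the real app

def build_page_index(conn):
    # Temp table of nothing but rowids, sorted in display order.
    conn.execute(
        "CREATE TEMP TABLE page_index AS "
        "SELECT rowid AS rid FROM main_table ORDER BY name")

def fetch_page(conn, page):
    # One page of rowids from the narrow temp table...
    rids = [r[0] for r in conn.execute(
        "SELECT rid FROM page_index LIMIT ? OFFSET ?",
        (PAGE_SIZE, page * PAGE_SIZE))]
    # ...then direct rowid lookups, preserving the sorted order.
    return [conn.execute(
        "SELECT * FROM main_table WHERE rowid = ?", (rid,)).fetchone()
        for rid in rids]
```

The per-rowid lookups keep the page in the temp table's sort order, which a plain `WHERE rowid IN (...)` would not guarantee.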
Thanks,
Aaron
On Aug 29, 2005, at 9:59 AM, Jay Sprenkle wrote:
On 8/29/05, Aaron Burghardt <[EMAIL PROTECTED]> wrote:
Hi All,
I'm trying to efficiently display a large table to users. I have a
test database with 2.7M records that is 1.6 GB in size (though over
half of that is 6 indexes). My best idea so far has been to display
one "page" of the database at time, where a page is some arbitrary
number (20K for now). I'm using the LIMIT clause to get a particular
page of results. Some sample query results are below:
Uh, not to be nasty, but this isn't a good idea. Nobody can read a list
of 2.7 million entries. Implement a search and show them the results.
It's easier for you, and for them. If you have to do this, then LIMIT
is the correct method.
In general, my programming rule of thumb is: "If it's really difficult,
you're probably doing it the wrong way."