That's interesting Max, I'll look at that.
There are two main string fields: one averages about 15 characters
with a max of about 100; the other averages about 150 characters with
a max of about 250 (although occasionally up to 500 or so).
It may be possible to
On Wed, Nov 24, 2010 at 1:13 AM, Paul Sanderson <
sandersonforens...@gmail.com> wrote:
> I have a table with over 1 million rows and 20+ columns all of which
> are indexed, I reasonably regularly recreate the table with new data
> and find that the indexing process takes about 30 minutes. Are
Stuck with 32-bit Windows set up as the lowest common denominator, so
in-memory is not feasible - I have seen some of my DBs in excess of
2GB - users might object even if I could nick that much memory.
___
sqlite-users mailing list
sqlite-users@sqlite.org
> Simon, I'll try that and see what difference it makes
That, or build a :memory: DB, populate it, build the indices, and only
then back it up to disk using the backup API. That requires you to
have enough memory available, but should be really fast if you do.
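A minimal sketch of that approach in Python, using sqlite3's `Connection.backup()` wrapper around the SQLite backup API (the table name, columns, and output filename here are made up for illustration):

```python
import sqlite3

# Build and index the database entirely in RAM, then copy the finished
# result to disk in one pass with the backup API.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE t (a INTEGER PRIMARY KEY, b TEXT)")
mem.executemany("INSERT INTO t (a, b) VALUES (?, ?)",
                ((i, f"row {i}") for i in range(10000)))
mem.execute("CREATE INDEX idx_t_b ON t (b)")  # index built in RAM
mem.commit()

disk = sqlite3.connect("built.db")
mem.backup(disk)      # copies every page, indexes included
disk.close()
mem.close()
```

The on-disk file then contains the fully built table and index without any of the random-access I/O that index creation against a disk file would incur.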
Thanks All
Simon, I'll try that and see what difference itmakes
On 24 Nov 2010, at 2:18pm, Paul Sanderson wrote:
> Unfortunately all of the columns are represented in a grid that users
> can sort on any column. Although they are less likely to sort on
> certain columns, if they do choose to do so then an excessive delay
> is not really acceptable.
You're doing it wrong; you don't need both indexes on the same field:
DROP TABLE IF EXISTS tb;
CREATE TEMP TABLE tb (
a INTEGER,
b TEXT,
CONSTRAINT idx_tb00 PRIMARY KEY (a));
CREATE INDEX idx_tb01 ON tb (b);
INSERT INTO tb (a, b) VALUES (1, '1');
INSERT INTO tb (a, b) VALUES (2, '2');
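A quick way to check the point above: SQLite can scan a single index in either direction, so one ascending index serves both `ORDER BY b ASC` and `ORDER BY b DESC`. This sketch (schema assumed, mirroring the example) inspects the query plan for both directions:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tb (a INTEGER PRIMARY KEY, b TEXT)")
con.execute("CREATE INDEX idx_tb01 ON tb (b)")
con.executemany("INSERT INTO tb (a, b) VALUES (?, ?)",
                [(1, '1'), (2, '2')])

for order in ("ASC", "DESC"):
    # EXPLAIN QUERY PLAN reports which index, if any, satisfies the sort.
    plan = con.execute(
        f"EXPLAIN QUERY PLAN SELECT * FROM tb ORDER BY b {order}"
    ).fetchall()
    print(order, plan)   # both directions use idx_tb01, no temp B-tree
con.close()
```

Neither plan needs a separate descending index or a temporary sort B-tree.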
Unfortunately all of the columns are represented in a grid that users
can sort on any column. Although they are less likely to sort on
certain columns, if they do choose to do so then an excessive delay is
not really acceptable.
Currently I create ascending and descending indexes for
On Tue, Nov 23, 2010 at 10:13:34PM +, Paul Sanderson scratched on the wall:
> I have a table with over 1 million rows and 20+ columns all of which
> are indexed,
Unless you have an extremely diverse set of queries, that seems
excessive. A compound index is unlikely to need so many
For all those columns, the FTS3 extension may be more useful.
2010/11/24 Paul Sanderson
> I have a table with over 1 million rows and 20+ columns all of which
> are indexed, I reasonably regularly recreate the table with new data
> and find that the indexing process takes
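For reference, an FTS3 virtual table indexes all of its text columns in one full-text index, which can stand in for many per-column indexes when the queries are keyword searches. A minimal sketch (column names invented; requires an SQLite build with FTS3 compiled in, as the Python-bundled one typically is):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE docs USING fts3(name, description)")
con.execute("INSERT INTO docs (name, description) VALUES (?, ?)",
            ("report.doc", "quarterly forensic disk image summary"))

# MATCH against the table name searches every FTS column at once.
hits = con.execute(
    "SELECT name FROM docs WHERE docs MATCH 'forensic'").fetchall()
print(hits)   # [('report.doc',)]
con.close()
```

The trade-off is that FTS matching is token-based search, not ordered comparison, so it helps with "find rows containing X" queries rather than with sorting the grid.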
On 23 Nov 2010, at 10:13pm, Paul Sanderson wrote:
> I have a table with over 1 million rows and 20+ columns all of which
> are indexed
Do you really need them all indexed? You might be able to pick a few columns
that people would never use as search criteria or for sorting results. Don't
I have a table with over 1 million rows and 20+ columns all of which
are indexed, I reasonably regularly recreate the table with new data
and find that the indexing process takes about 30 minutes. Are there
any settings/tweaks that I can use to reduce the time required to
create the index?
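One common recipe for this kind of rebuild: relax durability while loading (safe here only because the table is recreated from source data anyway), do the bulk insert first, and create the indexes afterwards so each is built in one sorted pass instead of row-by-row. A sketch with assumed table/file names:

```python
import sqlite3

con = sqlite3.connect("big.db")
# Unsafe on crash, but fine for a rebuild that can simply be rerun.
con.execute("PRAGMA synchronous = OFF")
con.execute("PRAGMA journal_mode = MEMORY")
con.execute("PRAGMA cache_size = -200000")   # ~200 MB of page cache

con.execute("DROP TABLE IF EXISTS t")
con.execute("CREATE TABLE t (a INTEGER PRIMARY KEY, b TEXT)")
# Bulk insert inside one transaction, with no secondary indexes yet.
con.executemany("INSERT INTO t (a, b) VALUES (?, ?)",
                ((i, str(i)) for i in range(100000)))
con.commit()

# Index creation after the load is a single sort-and-build per index.
con.execute("CREATE INDEX idx_t_b ON t (b)")
con.commit()
con.close()
```

The pragma values are illustrative; how much they help depends on the machine, and `synchronous`/`journal_mode` should be restored to safe settings before normal use.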
The