[sqlite] Access violation at address 6090B662 in module 'sqlite3.dll'. Read of address DE8D6B84

2011-04-06 Thread Vander Clock Stephane
Hello,

In a heavily multithreaded environment we receive this error (about once a
month, so not very often):
Access violation at address 6090B662 in module 'sqlite3.dll'. Read of
address DE8D6B84

Any ideas?


Re: [sqlite] how to put the database in cache without waiting the system do it by himself ?

2011-01-14 Thread Vander Clock Stephane
It's not enough :(

On 1/14/2011 1:48 AM, Jean-Christophe Deschamps wrote:
>> When I first launch the application, queries can initially take a few
>> seconds each, but after some time (10-20 minutes) they take only a
>> few ms!
>>
>> So I guess this is because Windows has cached the database file in memory?
>>
>> So how can I speed this up and make Windows cache the database file in
>> memory sooner?
> Read the file!
>
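A minimal sketch of Jean-Christophe's "read the file" advice: pull the whole
database file through the OS cache once at startup so later reads hit memory.
The file name and chunk size are illustrative assumptions, not from the thread.

#include <stdio.h>
#include <stdlib.h>

/* Warm the OS file cache by sequentially reading the database file once.
 * SQLite is not involved; the point is only to fault the pages into the
 * Windows cache before the first queries run. */
static void warm_cache(const char *path)
{
    FILE *f = fopen(path, "rb");
    char *buf;
    if (f == NULL) return;
    buf = malloc(1 << 20);              /* read in 1 MB chunks */
    if (buf != NULL) {
        while (fread(buf, 1, 1 << 20, f) > 0)
            ;                           /* discard: reading is enough */
        free(buf);
    }
    fclose(f);
}

int main(void)
{
    warm_cache("pictures.db");          /* hypothetical database file */
    return 0;
}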


[sqlite] how to put the database in cache without waiting the system do it by himself ?

2011-01-13 Thread Vander Clock Stephane
Hello,

When I first launch the application, queries can initially take a few
seconds each, but after some time (10-20 minutes) they take only a few ms!

So I guess this is because Windows has cached the database file in memory?

So how can I speed this up and make Windows cache the database file in
memory sooner?

Thank you in advance,
stephane


[sqlite] huge performance decrease after deleting/creating a table !

2011-01-12 Thread Vander Clock Stephane
Hello,

I want to rename a column in a table, but the only way to do that is to
rebuild the table. The table has around 15 000 000 records.

So I do it like this:

ALTER TABLE PICTURE_HASH_ID RENAME TO PICTURE_HASH_ID_OLD;
DROP INDEX PICTURE_HASH_ID_PIC_IDX;
CREATE TABLE PICTURE_HASH_ID(
   HASH_ID INTEGER PRIMARY KEY ASC,
   SERVER_ID INTEGER,
   VOLUME_ID INTEGER,
   NEEDLE_ID UNSIGNED INTEGER,
   NEEDLE_KEY UNSIGNED BIG INT
);
CREATE UNIQUE INDEX PICTURE_HASH_ID_PIC_IDX on PICTURE_HASH_ID 
(SERVER_ID ASC, VOLUME_ID ASC, NEEDLE_ID ASC, NEEDLE_KEY ASC);

INSERT INTO PICTURE_HASH_ID(HASH_ID, SERVER_ID, VOLUME_ID, NEEDLE_ID, 
NEEDLE_KEY) SELECT HASH_ID, SERVER_ID, VOLUME_ID, PICTURE_ID, 
PICTURE_KEY FROM PICTURE_HASH_ID_OLD;

DROP TABLE PICTURE_HASH_ID_OLD;
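
For reference, a sketch of the same rebuild driven from C. The single
wrapping transaction, the index being created after the bulk INSERT, and the
final ANALYZE are assumptions on my part (the post does not say whether any
were used); error handling is reduced to a helper.

#include <stdio.h>
#include <sqlite3.h>

/* Run one statement and report any error. */
static int run(sqlite3 *db, const char *sql)
{
    char *err = NULL;
    int rc = sqlite3_exec(db, sql, NULL, NULL, &err);
    if (rc != SQLITE_OK) {
        fprintf(stderr, "%s: %s\n", sql, err ? err : "?");
        sqlite3_free(err);
    }
    return rc;
}

/* Rebuild PICTURE_HASH_ID with the new column names, atomically. */
static void rebuild(sqlite3 *db)
{
    run(db, "BEGIN");
    run(db, "ALTER TABLE PICTURE_HASH_ID RENAME TO PICTURE_HASH_ID_OLD");
    run(db, "DROP INDEX PICTURE_HASH_ID_PIC_IDX");
    run(db, "CREATE TABLE PICTURE_HASH_ID("
            " HASH_ID INTEGER PRIMARY KEY ASC,"
            " SERVER_ID INTEGER, VOLUME_ID INTEGER,"
            " NEEDLE_ID UNSIGNED INTEGER, NEEDLE_KEY UNSIGNED BIG INT)");
    run(db, "INSERT INTO PICTURE_HASH_ID"
            " SELECT HASH_ID, SERVER_ID, VOLUME_ID, PICTURE_ID, PICTURE_KEY"
            " FROM PICTURE_HASH_ID_OLD");
    /* building the index after the bulk copy is usually faster */
    run(db, "CREATE UNIQUE INDEX PICTURE_HASH_ID_PIC_IDX ON PICTURE_HASH_ID"
            " (SERVER_ID, VOLUME_ID, NEEDLE_ID, NEEDLE_KEY)");
    run(db, "DROP TABLE PICTURE_HASH_ID_OLD");
    run(db, "COMMIT");
    run(db, "ANALYZE");   /* refresh planner statistics after the rebuild */
}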


But now the speed of the insert/select is completely crazy:
before it took 100 ms max, and now it takes 30 seconds!


This is the database definition:

PRAGMA page_size = 1024;
PRAGMA encoding = "UTF-8";
PRAGMA legacy_file_format = 0;
PRAGMA auto_vacuum = NONE;

CREATE VIRTUAL TABLE PICTURE_HASH_1 USING rtree(
   HASH_ID INTEGER PRIMARY KEY ASC,
   x1_y1_min UNSIGNED TINYINT, x1_y1_max UNSIGNED TINYINT,
   x1_y2_min UNSIGNED TINYINT, x1_y2_max UNSIGNED TINYINT,
   x1_y3_min UNSIGNED TINYINT, x1_y3_max UNSIGNED TINYINT,
   x1_y4_min UNSIGNED TINYINT, x1_y4_max UNSIGNED TINYINT,
   x1_y5_min UNSIGNED TINYINT, x1_y5_max UNSIGNED TINYINT
);

CREATE VIRTUAL TABLE PICTURE_HASH_2 USING rtree(
   HASH_ID INTEGER PRIMARY KEY ASC,
   x2_y1_min UNSIGNED TINYINT, x2_y1_max UNSIGNED TINYINT,
   x2_y2_min UNSIGNED TINYINT, x2_y2_max UNSIGNED TINYINT,
   x2_y3_min UNSIGNED TINYINT, x2_y3_max UNSIGNED TINYINT,
   x2_y4_min UNSIGNED TINYINT, x2_y4_max UNSIGNED TINYINT,
   x2_y5_min UNSIGNED TINYINT, x2_y5_max UNSIGNED TINYINT
);

CREATE VIRTUAL TABLE PICTURE_HASH_3 USING rtree(
   HASH_ID INTEGER PRIMARY KEY ASC,
   x3_y1_min UNSIGNED TINYINT, x3_y1_max UNSIGNED TINYINT,
   x3_y2_min UNSIGNED TINYINT, x3_y2_max UNSIGNED TINYINT,
   x3_y3_min UNSIGNED TINYINT, x3_y3_max UNSIGNED TINYINT,
   x3_y4_min UNSIGNED TINYINT, x3_y4_max UNSIGNED TINYINT,
   x3_y5_min UNSIGNED TINYINT, x3_y5_max UNSIGNED TINYINT
);

CREATE VIRTUAL TABLE PICTURE_HASH_4 USING rtree(
   HASH_ID INTEGER PRIMARY KEY ASC,
   x4_y1_min UNSIGNED TINYINT, x4_y1_max UNSIGNED TINYINT,
   x4_y2_min UNSIGNED TINYINT, x4_y2_max UNSIGNED TINYINT,
   x4_y3_min UNSIGNED TINYINT, x4_y3_max UNSIGNED TINYINT,
   x4_y4_min UNSIGNED TINYINT, x4_y4_max UNSIGNED TINYINT,
   x4_y5_min UNSIGNED TINYINT, x4_y5_max UNSIGNED TINYINT
);

CREATE VIRTUAL TABLE PICTURE_HASH_5 USING rtree(
   HASH_ID INTEGER PRIMARY KEY ASC,
   x5_y1_min UNSIGNED TINYINT, x5_y1_max UNSIGNED TINYINT,
   x5_y2_min UNSIGNED TINYINT, x5_y2_max UNSIGNED TINYINT,
   x5_y3_min UNSIGNED TINYINT, x5_y3_max UNSIGNED TINYINT,
   x5_y4_min UNSIGNED TINYINT, x5_y4_max UNSIGNED TINYINT,
   x5_y5_min UNSIGNED TINYINT, x5_y5_max UNSIGNED TINYINT
);

CREATE TABLE PICTURE_HASH_ID(
   HASH_ID INTEGER PRIMARY KEY ASC,
   SERVER_ID INTEGER,
   VOLUME_ID INTEGER,
   NEEDLE_ID UNSIGNED INTEGER,
   NEEDLE_KEY UNSIGNED BIG INT
);
CREATE UNIQUE INDEX PICTURE_HASH_ID_PIC_IDX on PICTURE_HASH_ID 
(SERVER_ID ASC, VOLUME_ID ASC, NEEDLE_ID ASC, NEEDLE_KEY ASC);


And I run only this kind of SQL:


Select
P1.SERVER_ID,
P1.VOLUME_ID,
P1.NEEDLE_ID,
P1.NEEDLE_KEY
from
PICTURE_HASH_1 H1
JOIN PICTURE_HASH_2 H2 ON H2.HASH_ID=H1.HASH_ID
JOIN PICTURE_HASH_3 H3 ON H3.HASH_ID=H1.HASH_ID
JOIN PICTURE_HASH_4 H4 ON H4.HASH_ID=H1.HASH_ID
JOIN PICTURE_HASH_5 H5 ON H5.HASH_ID=H1.HASH_ID
JOIN PICTURE_HASH_ID P1 ON P1.HASH_ID=H1.HASH_ID
where
H1.x1_y1_min <= xxx and
H1.x1_y1_max >= yyy and
...
H5.x5_y5_min <= www and
H5.x5_y5_max >= zzz

followed by some inserts:

insert into PICTURE_HASH_1 ..
insert into PICTURE_HASH_2 ..
insert into PICTURE_HASH_3 ..
insert into PICTURE_HASH_4 ..
insert into PICTURE_HASH_5 ..
insert into PICTURE_HASH_ID ..


Before, it returned in 100 ms; now it takes 30 seconds :(

What am I doing wrong, and how can I correct it?
I'm on Windows 2008 R2 64-bit.

Thank you in advance,
stéphane


Re: [sqlite] how to speed up this ?

2010-12-24 Thread Vander Clock Stephane

> Select
>H1.ID
> from
>HASH1 H1
> where
>x1_y1 BETWEEN min11 AND max11 AND
>x1_y2 BETWEEN min12 AND max12 AND
>x1_y3 BETWEEN min13 AND max13 AND
>x1_y4 BETWEEN min14 AND max14 AND
>x1_y5 BETWEEN min15 AND max15;
>
>

No, that doesn't work at all! Without an R-tree index the speed is a
total disaster (several minutes).


Re: [sqlite] how to speed up this ?

2010-12-24 Thread Vander Clock Stephane
I did it... but it changed nothing :(

On 12/24/2010 3:47 PM, Simon Slavin wrote:
> On 24 Dec 2010, at 8:17am, Vander Clock Stephane wrote:
>
>> I have a key like this
>>
>> 123-098-230-120-111, where (123), (098), (230), (120), (111) are what I
>> call nodes
>> Nodes are integers between 0 and 255 (bytes)
> You mean from 000 to 254.  255 breaks your system because you are using 
> modulus 255.
>
>> and I need to find "similar" keys.
>> A similar key is one where each node is at most +/- 10 above or below
>> the corresponding node
>>
>> so
>>
>> 123-098-230-120-111
>> is similar to
>> 120-097-235-118-110
>>
>> but not similar to
>> 180-197-215-018-010
>>
>> it's for our graphics software
> Precalculate five sets of minimum and maximum bounds:
>
> min11 = max((<#randomnumber>   % 255)-10,0)
> max11 = min((<#randomnumber>   % 255)+10,255)
>
> Then you can just seize the ten values you need from the table and use them 
> to make up your SELECT command.
>
> Select
>H1.ID
> from
>HASH1 H1
> where
>x1_y1 BETWEEN min11 AND max11 AND
>x1_y2 BETWEEN min12 AND max12 AND
>x1_y3 BETWEEN min13 AND max13 AND
>x1_y4 BETWEEN min14 AND max14 AND
>x1_y5 BETWEEN min15 AND max15;
>
> This will be faster than trying to get SQLite to do the calculations for 
> every row of your 50 000 000 row table.
>
> It will work much faster with an index on x1_y1.
> It may or may not work faster with an index on 
> (x1_y1,x1_y2,x1_y3,x1_y4,x1_y5).  Try it.
>
> Simon.
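A minimal sketch of Simon's approach with the ten bounds computed in the host
language and passed as bound parameters. The table and column names follow the
thread; the function name and the clamping helpers are illustrative.

#include <stdlib.h>
#include <sqlite3.h>

/* Clamp v-10 / v+10 into [0, 255], as in Simon's min/max formulas. */
static int lo(int v) { return v - 10 < 0   ? 0   : v - 10; }
static int hi(int v) { return v + 10 > 255 ? 255 : v + 10; }

static void find_similar(sqlite3 *db, const int node[5])
{
    static const char *sql =
        "SELECT ID FROM HASH1 WHERE "
        " x1_y1 BETWEEN ?1 AND ?2 AND x1_y2 BETWEEN ?3 AND ?4 AND"
        " x1_y3 BETWEEN ?5 AND ?6 AND x1_y4 BETWEEN ?7 AND ?8 AND"
        " x1_y5 BETWEEN ?9 AND ?10";
    sqlite3_stmt *st = NULL;
    int i;
    if (sqlite3_prepare_v2(db, sql, -1, &st, NULL) != SQLITE_OK) return;
    for (i = 0; i < 5; i++) {
        sqlite3_bind_int(st, 2*i + 1, lo(node[i]));  /* lower bound */
        sqlite3_bind_int(st, 2*i + 2, hi(node[i]));  /* upper bound */
    }
    while (sqlite3_step(st) == SQLITE_ROW) {
        /* each row is the ID of a candidate similar key */
        sqlite3_int64 id = sqlite3_column_int64(st, 0);
        (void)id;
    }
    sqlite3_finalize(st);
}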


Re: [sqlite] how to speed up this ?

2010-12-24 Thread Vander Clock Stephane
Can you give me the name of a good SSD you would advise me to buy?
I've decided to give it a try!

Thanks again
stéphane


On 12/24/2010 12:24 AM, John Drescher wrote:
> On Thu, Dec 23, 2010 at 4:06 PM, Vander Clock Stephane
> <svandercl...@yahoo.fr>  wrote:
>> That's very, very expensive :( How much do you think?
>>
> $500 to $600 US.
>
> John


Re: [sqlite] how to speed up this ?

2010-12-24 Thread Vander Clock Stephane

> Can you describe what you're trying to do with that command ?

Of course!

I have a key like this

123-098-230-120-111, where (123), (098), (230), (120), (111) are what I
call nodes.
Nodes are integers between 0 and 255 (bytes).

and I need to find "similar" keys.
A similar key is one where each node is at most +/- 10 above or below
the corresponding node.

so

123-098-230-120-111
is similar to
120-097-235-118-110

but not similar to
180-197-215-018-010

It's for our graphics software.

So I add the key to an R-tree table like this:

insert into rtreetable
Node1_min = 123-10,
Node1_max = 123+10,
Node2_min = 098-10,
Node2_max = 098+10,
etc..

and afterwards, to detect similar keys, I simply do

select from rtreetable
where
Node1_min <= 120 and
Node1_max >= 120 and
Node2_min <= 098 and
Node2_max >= 098 and
etc...


Is that OK?

Thank you in advance!
stéphane

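A self-contained sketch of that scheme as I read it: one five-dimensional
R-tree, the stored entry widened to [v-10, v+10] per node, and a
point-containment probe (min <= value AND max >= value, matching the query
direction used elsewhere in these threads). All names and the sample keys
are illustrative.

#include <stdio.h>
#include <sqlite3.h>

/* Store [v-10, v+10] per node of one key, then probe with the node
 * values of a second key; a row matches iff every probe value falls
 * inside the stored interval. */
static void demo(sqlite3 *db)
{
    sqlite3_stmt *st;

    sqlite3_exec(db,
        "CREATE VIRTUAL TABLE rtreetable USING rtree(id,"
        " n1min, n1max, n2min, n2max, n3min, n3max,"
        " n4min, n4max, n5min, n5max)", NULL, NULL, NULL);

    /* key 123-098-230-120-111, widened by +/- 10 per node */
    sqlite3_exec(db,
        "INSERT INTO rtreetable VALUES(1,"
        " 113,133, 88,108, 220,240, 110,130, 101,121)", NULL, NULL, NULL);

    /* probe with the similar key 120-097-235-118-110 */
    if (sqlite3_prepare_v2(db,
            "SELECT id FROM rtreetable WHERE"
            " n1min <= 120 AND n1max >= 120 AND"
            " n2min <=  97 AND n2max >=  97 AND"
            " n3min <= 235 AND n3max >= 235 AND"
            " n4min <= 118 AND n4max >= 118 AND"
            " n5min <= 110 AND n5max >= 110",
            -1, &st, NULL) != SQLITE_OK)
        return;
    while (sqlite3_step(st) == SQLITE_ROW)
        printf("similar key id: %lld\n",
               (long long)sqlite3_column_int64(st, 0));
    sqlite3_finalize(st);
}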


Re: [sqlite] how to speed up this ?

2010-12-23 Thread Vander Clock Stephane
Hmmm,

how can a spatial index help me more than SQLite's R-tree index?

I just need to do:

Select
 H1.ID
from
 HASH1 H1
where
 x1_y1_min >= x and
 x1_y1_max <= y and
 x1_y2_min >= z and
 x1_y2_max <= w and
 x1_y3_min >= a and
 x1_y3_max <= b and
 x1_y4_min >= c and
 x1_y4_max <= d and
 x1_y5_min >= e and
 x1_y5_max <= f;

Thanks in advance,
stéphane



On 12/24/2010 12:40 AM, stormtrooper wrote:
> I'd be surprised if you run out of memory, updating a field, even with a
> 50,000,000 row table.
>
> Also, if you are working with real geospatial data such as Imagery or vector
> data, there are many applications that may be suited for these calculations.
>
> Spatialite is a SQLite extension that has spatial indexes (rtree) and spatial
> functions such as buffer, intersect, etc.
>
>
>
> Vander Clock Stephane wrote:
>> I don't know, but I'm quite sure not, because the cost of updating every
>> row in the Hash table would be much, much (much) more expensive...
>> and this solution is also absolutely not multithreaded :(
>>
>> On 12/23/2010 11:46 PM, stormtrooper wrote:
>>> would it run faster if you add two columns to the Hash table - randmax
>>> and
>>> randmin
>>>
>>> update Hash set randmax = max((<#randomnumber>   % 255)-10,0)
>>> update Hash set randmin = min((<#randomnumber>   % 255)+10,255)
>>>
>>> CREATE INDEX HASH_RMIN_IDX ON HASH (RANDMIN);
>>> CREATE INDEX HASH_RMAX_IDX ON HASH (RANDMAX);
>>>
>>> Select
>>>  H1.ID
>>> from
>>>  HASH1 H1
>>> where
>>>  x1_y1_min>= randmax and
>>>  x1_y1_max<= randmin and
>>>  x1_y2_min>= randmax and
>>>  x1_y2_max<= randmin and
>>>  x1_y3_min>= randmax and
>>>  x1_y3_max<= randmin and
>>>  x1_y4_min>= randmax and
>>>  x1_y4_max<= randmin and
>>>  x1_y5_min>= randmax and
>>>  x1_y5_max<= randmin;


Re: [sqlite] how to speed up this ?

2010-12-23 Thread Vander Clock Stephane
That's very, very expensive :( How much do you think?



On 12/23/2010 11:55 PM, John Drescher wrote:
>> I'm afraid so... but what will it be with 50 000 000 rows? I don't
>> have 100 gigabytes of memory :(
> I would get a 256GB SSD.
>
> John


Re: [sqlite] how to speed up this ?

2010-12-23 Thread Vander Clock Stephane
I don't know, but I'm quite sure not, because the cost of updating every
row in the Hash table would be much, much (much) more expensive...
and this solution is also absolutely not multithreaded :(

On 12/23/2010 11:46 PM, stormtrooper wrote:
> would it run faster if you add two columns to the Hash table - randmax and
> randmin
>
> update Hash set randmax = max((<#randomnumber>  % 255)-10,0)
> update Hash set randmin = min((<#randomnumber>  % 255)+10,255)
>
> CREATE INDEX HASH_RMIN_IDX ON HASH (RANDMIN);
> CREATE INDEX HASH_RMAX_IDX ON HASH (RANDMAX);
>
> Select
> H1.ID
> from
> HASH1 H1
> where
> x1_y1_min >= randmax and
> x1_y1_max <= randmin and
> x1_y2_min >= randmax and
> x1_y2_max <= randmin and
> x1_y3_min >= randmax and
> x1_y3_max <= randmin and
> x1_y4_min >= randmax and
> x1_y4_max <= randmin and
> x1_y5_min >= randmax and
> x1_y5_max <= randmin;


Re: [sqlite] how to speed up this ?

2010-12-23 Thread Vander Clock Stephane

> Right.  So you have a database with 2 000 000 rows that is 1.8GB
> So your first 1 000 000 rows takes up about 1GB.
> And your test case with just 1 000 000 rows in runs really fast.
>
> So what is happening is that most of the first 1 000 000 rows fits in memory. 
>  Once the database gets bigger than that the application has to keep fetching 
> information from disk all the time, and that's far slower.
>

I'm afraid so... but what will it be with 50 000 000 rows? I don't
have 100 gigabytes of memory :(
With 50 000 000 rows and a B-tree index, a query like WHERE fieldA = yyy
returns immediately! But why not also
with an R-tree index?

Thank you in advance,
stéphane

>> how to speed up this query :
>>
>> Select
>>    H1.ID
>> from
>>    HASH1 H1
>> where
>>    x1_y1_min >= max((<#randomnumber> % 255)-10,0) and
>>    x1_y1_max <= min((<#randomnumber> % 255)+10,255) and
>>    x1_y2_min >= max((<#randomnumber> % 255)-10,0) and
>>    x1_y2_max <= min((<#randomnumber> % 255)+10,255) and
>>    x1_y3_min >= max((<#randomnumber> % 255)-10,0) and
>>    x1_y3_max <= min((<#randomnumber> % 255)+10,255) and
>>    x1_y4_min >= max((<#randomnumber> % 255)-10,0) and
>>    x1_y4_max <= min((<#randomnumber> % 255)+10,255) and
>>    x1_y5_min >= max((<#randomnumber> % 255)-10,0) and
>>    x1_y5_max <= min((<#randomnumber> % 255)+10,255);
> If there is only one random number involved in all that, precalculate all the 
> max and mins:
>
> ll = max((<#randomnumber>  % 255)-10,0)
> mm = min((<#randomnumber>  % 255)+10,255)

in my tests, all the random numbers are different...


Re: [sqlite] how to speed up this ?

2010-12-23 Thread Vander Clock Stephane
Windows 2008 R2 with 8GB of memory.

But actually I'm running the test on a beta server with only 1 GB of
memory and Win2003...

The database with 2 000 000 rows is 1.8 GB.

Thanks in advance!
stéphane

On 12/23/2010 10:52 PM, Simon Slavin wrote:
> On 23 Dec 2010, at 7:36pm, Vander Clock Stephane wrote:
>
>> When there are fewer than 1 000 000 rows, it returns in under 10 ms;
>> with more than 1 000 000 rows, it returns in around 350 ms :(
>> and I need more than 50 000 000 rows :( :(
> How much memory do you have in that computer ?  What operating system are you 
> running ?
>
> Simon.


Re: [sqlite] First(s) select are very slow

2010-12-20 Thread Vander Clock Stephane
So what is the difference between the sqlite3 cache and the OS cache?

On 12/20/2010 9:49 PM, Doug wrote:
> Possibly because the OS file cache doesn't have any of your database file 
> loaded into memory yet (cold start), so those first selects are experiencing 
> I/O hits.  After a number of them have been done, some/much of the file might 
> be in OS file cache memory, and selects are then hitting memory instead of 
> the much slower disk.
>
> Doug
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org 
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Vander Clock Stephane
> Sent: Monday, December 20, 2010 8:43 AM
> To: General Discussion of SQLite Database
> Subject: [sqlite] First(s) select are very slow
>
> hello,
>
> Why, on a big table (+5 000 000 rows with an R-tree index), is the select
> very slow to return (2-5 seconds) at the beginning, but after 1000 selects
> it starts to return immediately (10 ms)??
>
> Thanks in advance
> stéphane


[sqlite] First(s) select are very slow

2010-12-20 Thread Vander Clock Stephane
Hello,

Why, on a big table (+5 000 000 rows with an R-tree index), is the select
very slow to return (2-5 seconds) at the beginning, but after 1000 selects
it starts to return immediately (10 ms)??

Thanks in advance,
stéphane


Re: [sqlite] How to optimize this simple select query ?

2010-12-13 Thread Vander Clock Stephane
Hello,

But I don't understand how an R*-tree based table can help me here.

Can you explain?

Thank you in advance,
stephane

On 12/9/2010 8:24 PM, Gabríel A. Pétursson wrote:
> It seems to me that you may want to consider defining your table as a
> virtual R* tree based table.
>
> See http://www.sqlite.org/rtree.html
>
> On Wed, 2010-12-08 at 19:56 +0300, Vander Clock Stephane wrote:
>> Hello,
>>
>> on the table :
>>
>> CREATE TABLE HASH(
>> ID INTEGER PRIMARY KEY ASC,
>> x1_y1 INTEGER,
>> x1_y2 INTEGER,
>> x1_y3 INTEGER,
>> x1_y4 INTEGER,
>> x1_y5 INTEGER,
>> x2_y1 INTEGER,
>> x2_y2 INTEGER,
>> x2_y3 INTEGER,
>> x2_y4 INTEGER,
>> x2_y5 INTEGER,
>> x3_y1 INTEGER,
>> x3_y2 INTEGER,
>> x3_y3 INTEGER,
>> x3_y4 INTEGER,
>> x3_y5 INTEGER,
>> x4_y1 INTEGER,
>> x4_y2 INTEGER,
>> x4_y3 INTEGER,
>> x4_y4 INTEGER,
>> x4_y5 INTEGER,
>> x5_y1 INTEGER,
>> x5_y2 INTEGER,
>> x5_y3 INTEGER,
>> x5_y4 INTEGER,
>> x5_y5 INTEGER
>> );
>> CREATE INDEX HASH_X1_Y1_IDX ON HASH (X1_Y1);
>> CREATE INDEX HASH_X2_Y2_IDX ON HASH (X2_Y2);
>> CREATE INDEX HASH_X3_Y3_IDX ON HASH (X3_Y3);
>> CREATE INDEX HASH_X4_Y4_IDX ON HASH (X4_Y4);
>> CREATE INDEX HASH_X5_Y5_IDX ON HASH (X5_Y5);
>>
>>
>> with millions of rows, how to optimize such query :
>>
>> Select
>> ID
>> from
>> HASH
>> where
>> x1_y1 >= <#randomnumber1> and
>> x1_y1 <= <#randomnumber1> + 20 and
>> x1_y2 >= <#randomnumber4> and
>> x1_y2 <= <#randomnumber4> + 20 and
>> x1_y3 >= <#randomnumber7> and
>> x1_y3 <= <#randomnumber7> + 20 and
>> x1_y4 >= <#randomnumber10> and
>> x1_y4 <= <#randomnumber10> + 20 and
>> x1_y5 >= <#randomnumber13> and
>> x1_y5 <= <#randomnumber13> + 20 and
>> x2_y1 >= <#randomnumber16> and
>> x2_y1 <= <#randomnumber16> + 20 and
>> x2_y2 >= <#randomnumber19> and
>> x2_y2 <= <#randomnumber19> + 20 and
>> x2_y3 >= <#randomnumber22> and
>> x2_y3 <= <#randomnumber22> + 20 and
>> x2_y4 >= <#randomnumber25> and
>> x2_y4 <= <#randomnumber25> + 20 and
>> x2_y5 >= <#randomnumber28> and
>> x2_y5 <= <#randomnumber28> + 20 and
>> x3_y1 >= <#randomnumber31> and
>> x3_y1 <= <#randomnumber31> + 20 and
>> x3_y2 >= <#randomnumber34> and
>> x3_y2 <= <#randomnumber34> + 20 and
>> x3_y3 >= <#randomnumber37> and
>> x3_y3 <= <#randomnumber37> + 20 and
>> x3_y4 >= <#randomnumber40> and
>> x3_y4 <= <#randomnumber40> + 20 and
>> x3_y5 >= <#randomnumber43> and
>> x3_y5 <= <#randomnumber43> + 20 and
>> x4_y1 >= <#randomnumber46> and
>> x4_y1 <= <#randomnumber46> + 20 and
>> x4_y2 >= <#randomnumber49> and
>> x4_y2 <= <#randomnumber49> + 20 and
>> x4_y3 >= <#randomnumber52> and
>> x4_y3 <= <#randomnumber52> + 20 and
>> x4_y4 >= <#randomnumber55> and
>> x4_y4 <= <#randomnumber55> + 20 and
>> x4_y5 >= <#randomnumber58> and
>> x4_y5 <= <#randomnumber58> + 20 and
>> x5_y1 >= <#randomnumber61> and
>> x5_y1 <= <#randomnumber61> + 20 and
>> x5_y2 >= <#randomnumber64> and
>> x5_y2 <= <#randomnumber64> + 20 and
>> x5_y3 >= <#randomnumber67> and
>> x5_y3 <= <#randomnumber67> + 20 and
>> x5_y4 >= <#randomnumber70> and
>> x5_y4 <= <#randomnumber70> + 20 and
>> x5_y5 >= <#randomnumber73> and
>> x5_y5 <= <#randomnumber73> + 20;
>>
>> because it takes a very, very long time (hourS) to return :(
>> On other DBMSs (like Firebird), with the same amount of data,
>> it returns immediately...
>>
>> Thanks by advance
>> stéphane


Re: [sqlite] Increase the datafile file size to limit the file fragmentation

2010-12-11 Thread Vander Clock Stephane
I know this trick, but it takes a little longer than simply increasing the
database file size manually.
My tests show that it works; I'm just curious why we cannot do it that way?

Thanks again,
stéphane

On 12/11/2010 3:44 AM, Max Vlasov wrote:
> On Sat, Dec 11, 2010 at 1:52 AM, Vander Clock Stephane<
> svandercl...@yahoo.fr>  wrote:
>
>> Yes, I agree, except that fragmentation of the database file also causes
>> fragmentation of some other files... which is logical when more than
>> 2 files each grow a little at a time! And those other files absolutely
>> must not be fragmented!
>>
>>
> Ok then, if you feel you need this, there's a trick for db expanding. Create
> something big, like a table filled with random or uniform data and then
> delete it, you will get a database file with plenty of free space.
>
> Max
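
A sketch of Max's trick in C, under the assumption that a table padded with
zeroblob() rows is the "something big" being created and deleted; the table
name and sizes are illustrative.

#include <sqlite3.h>

/* Grow the database file by inserting large zero-filled blobs, then drop
 * the padding table.  With auto_vacuum off, the freed pages remain in the
 * file as reusable free space, so the file keeps its expanded size. */
static void preallocate(sqlite3 *db, int megabytes)
{
    int i;
    sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
    sqlite3_exec(db, "CREATE TABLE pad(x BLOB)", NULL, NULL, NULL);
    for (i = 0; i < megabytes; i++)
        sqlite3_exec(db, "INSERT INTO pad VALUES(zeroblob(1048576))",
                     NULL, NULL, NULL);
    sqlite3_exec(db, "DROP TABLE pad", NULL, NULL, NULL);
    sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
}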


Re: [sqlite] Increase the datafile file size to limit the file fragmentation

2010-12-10 Thread Vander Clock Stephane
Yes, I agree, except that fragmentation of the database file also causes
fragmentation of some other files... which is logical when more than
2 files each grow a little at a time! And those other files absolutely
must not be fragmented!

On 12/11/2010 12:07 AM, Max Vlasov wrote:
> On Fri, Dec 10, 2010 at 11:20 PM, Vander Clock Stephane<
> svandercl...@yahoo.fr>  wrote:
>
>> Hello,
>>
>> to limit the file fragmentation i want to increase the size of the
>> database file (with the windows API function).
>>
>> Can i do this ?
>>
>>
> Vander, my tests show that the internal sqlite fragmentation affects more
> than the file system fragmentation. This is because when data placed
> naturally, the os predicts the data flow together with the client using the
> data, for internally fragmented db it's different, read this discussion
> http://www.mail-archive.com/sqlite-users@sqlite.org/msg56083.html (I gave a
> link to my post since the root message doesn't contain the full discussion
> for some reason). So, VACUUM from time to time is better overall than OS
> defrag (imho)
>
> Max Vlasov


Re: [sqlite] How to optimize this simple select query ?

2010-12-10 Thread Vander Clock Stephane
Thanks for the long explanation, Pavel!


On 12/10/2010 11:50 PM, Pavel Ivanov wrote:
>> it's sad that this simple select is not possible under sqlite3 :(
> This query is not so simple as you think, it requires pretty
> complicated advanced techniques to be executed differently than SQLite
> executes it. And even using those techniques you are not guaranteed to
> have good performance - it depends on data distribution in your table.
> To get an idea of how complicated technique should be try to think of
> your query in terms of phone book. So you have 6 phone books with the
> same data (million phones in total). 1st book has all data unsorted,
> 2nd has all data sorted by 1st letter (no tie-breaking sorting), 3rd -
> all data sorted by 2nd letter (no tie-breaking sorting) and so on. Now
> you want to find phones of all people whose name has first letter
> between d and i, second letter between t and v, third letter between f
> and k and so on. How would you search for these phones? It's pretty
> complicated stuff and wasn't included in SQLite because it's Lite.
>
>
> Pavel
>
> On Fri, Dec 10, 2010 at 3:18 PM, Vander Clock Stephane
> <svandercl...@yahoo.fr>  wrote:
>> It doesn't work :(
>>
>> It's sad that this simple select is not possible under sqlite3 :(
>>
>>
>>
>> On 12/10/2010 6:11 PM, Jim Morris wrote:
>>> Did you try a compound index?
>>>


Re: [sqlite] Increase the datafile file size to limit the file fragmentation

2010-12-10 Thread Vander Clock Stephane
But I tried it, and it seems to work?

I simply increased the size of the database file manually, and it seems to
work???
Can you confirm that it's not possible? Because here it works.

> Wait until your data file is large enough, then use any OS tool to
> defragment it :)

That would be a big mistake, because the Windows defragmenter is not very
strong; for example, it simply skips all files > 64 MB!


[sqlite] Increase the datafile file size to limit the file fragmentation

2010-12-10 Thread Vander Clock Stephane
Hello,

To limit file fragmentation, I want to increase the size of the database
file (with the Windows API function).

Can I do this?

Thank you in advance,
stéphane


Re: [sqlite] How to optimize this simple select query ?

2010-12-10 Thread Vander Clock Stephane
It doesn't work :(

It's sad that this simple select is not possible under sqlite3 :(



On 12/10/2010 6:11 PM, Jim Morris wrote:
> Did you try a compound index?
>


Re: [sqlite] How to optimize this simple select query ?

2010-12-10 Thread Vander Clock Stephane

> I usually seem to be wrong when I try to help here, but I keep trying.
>
> My guess is that SQLite uses only one index per query (or per table per
> query or something like that), and so has to do a whole bunch of full
> table scans (or at least full scans of the remaining rows).
>

Huum, if that's true it's a big disaster :(

> Maybe you need a query like:
>
>
> select id from (select * from hash where
>
> x1_y1 BETWEEN #randomnumber1 AND (#randomnumber1 + 20)) where x1_y2 BETWEEN
> #randomnumber4 AND (#randomnumber4 + 20)
>
>
> nested many more levels deep. This might allow using all the indices.

I will try it, thanks.


Re: [sqlite] [Bulk] How to optimize this simple select query ?

2010-12-10 Thread Vander Clock Stephane
OK, I will try.

On 12/9/2010 9:33 PM, Richard Hipp wrote:
> On Thu, Dec 9, 2010 at 11:27 AM, Vander Clock Stephane<
> svandercl...@yahoo.fr>  wrote:
>
>> no one have an idea how to do such query ??
>>
> You can try building with SQLITE_ENABLE_STAT2 and then running ANALYZE.  Or,
> the query you show is ideally suited for the R-Tree module.
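Sketched concretely: SQLITE_ENABLE_STAT2 is a compile-time option (e.g.
-DSQLITE_ENABLE_STAT2 when building the amalgamation), and the statistics
are then collected with a plain ANALYZE; the helper name is illustrative.

#include <sqlite3.h>

/* After building SQLite with -DSQLITE_ENABLE_STAT2, one ANALYZE gives
 * the planner histogram statistics for choosing among the indexes. */
static void refresh_stats(sqlite3 *db)
{
    sqlite3_exec(db, "ANALYZE", NULL, NULL, NULL);
}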


Re: [sqlite] [Bulk] How to optimize this simple select query ?

2010-12-10 Thread Vander Clock Stephane

> I can't solve your problem but I have observations.  I don't see how any SGDB 
> (or RDBS as we call them) could do this quickly without lots of indexes.
>

But they do :( Firebird, for example...

> Your long SELECT command is something I would probably do in my programming 
> language instead of SQLite.  Just read every record and do the 'ifs' in my 
> language.  I think this would actually be as fast or faster than having 
> SQLite do it.
>

Yes, but we're talking about millionS of rows :(


> But there are ways to make SQLite do it quickly.  Either way you have to read 
> every record, so there's no way to save time there.  Another way to do it 
> would be to add a column to your HASH table.  Let's call it 'tally'.  You 
> would start by doing
>
> UPDATE HASH set tally = 0;
>
> Then you would do 25 UPDATE commands like
>
> UPDATE HASH SET tally = tally + 1 WHERE x1_y1 BETWEEN #randomnumber1 AND 
> (#randomnumber1 + 20);
> [...]
> UPDATE HASH SET tally = tally + 1 WHERE x3_y2 BETWEEN #randomnumber34 AND 
> (#randomnumber34 + 20);
> [...]
> UPDATE HASH SET tally = tally + 1 WHERE x5_y5 BETWEEN #randomnumber73 AND 
> (#randomnumber73 + 20);
>
> Then to find the records which obeyed all 25 limits you'd do
>
> SELECT id FROM HASH WHERE tally = 25;
>

Yes, but that's not really a "multithread" solution :(

> If you put 26 indexes on the table (one on each x_y and one on tally) this 
> should execute really quickly. It might be faster still if you make another 
> table just for the 'tally' column.

But the number of rows returned using just the 5 indexes is already
zero :( So how can adding more indexes help?


Re: [sqlite] [Bulk] How to optimize this simple select query ?

2010-12-10 Thread Vander Clock Stephane

> If you have another situation,
> with the same amount of data,
> which returns immediately,
> then either the situation is not the same,
> or you are making an error.
>

Same situation (same amount of data), but on another DBMS like Firebird,
the result returns immediately. On sqlite3 it takes hours :(

> maybe creating indexes on more than one field
> can speed up the query,
> but I cannot tell you on which fields,
> because I don't know the data
>

I think that 5 indexes are quite enough, because the number of rows
returned using just the indexed columns is zero.


Re: [sqlite] [Bulk] How to optimize this simple select query ?

2010-12-09 Thread Vander Clock Stephane
Does no one have an idea how to do such a query??

thanks
stéphane

On 12/8/2010 7:56 PM, Vander Clock Stephane wrote:
> Hello,
>
> on the table :
>
> CREATE TABLE HASH(
> ID INTEGER PRIMARY KEY ASC,
> x1_y1 INTEGER,
> x1_y2 INTEGER,
> x1_y3 INTEGER,
> x1_y4 INTEGER,
> x1_y5 INTEGER,
> x2_y1 INTEGER,
> x2_y2 INTEGER,
> x2_y3 INTEGER,
> x2_y4 INTEGER,
> x2_y5 INTEGER,
> x3_y1 INTEGER,
> x3_y2 INTEGER,
> x3_y3 INTEGER,
> x3_y4 INTEGER,
> x3_y5 INTEGER,
> x4_y1 INTEGER,
> x4_y2 INTEGER,
> x4_y3 INTEGER,
> x4_y4 INTEGER,
> x4_y5 INTEGER,
> x5_y1 INTEGER,
> x5_y2 INTEGER,
> x5_y3 INTEGER,
> x5_y4 INTEGER,
> x5_y5 INTEGER
> );
> CREATE INDEX HASH_X1_Y1_IDX ON HASH (X1_Y1);
> CREATE INDEX HASH_X2_Y2_IDX ON HASH (X2_Y2);
> CREATE INDEX HASH_X3_Y3_IDX ON HASH (X3_Y3);
> CREATE INDEX HASH_X4_Y4_IDX ON HASH (X4_Y4);
> CREATE INDEX HASH_X5_Y5_IDX ON HASH (X5_Y5);
>
>
> with millions of rows, how to optimize such query :
>
> Select
> ID
> from
> HASH
> where
> x1_y1 >= <#randomnumber1> and
> x1_y1 <= <#randomnumber1> + 20 and
> x1_y2 >= <#randomnumber4> and
> x1_y2 <= <#randomnumber4> + 20 and
> x1_y3 >= <#randomnumber7> and
> x1_y3 <= <#randomnumber7> + 20 and
> x1_y4 >= <#randomnumber10> and
> x1_y4 <= <#randomnumber10> + 20 and
> x1_y5 >= <#randomnumber13> and
> x1_y5 <= <#randomnumber13> + 20 and
> x2_y1 >= <#randomnumber16> and
> x2_y1 <= <#randomnumber16> + 20 and
> x2_y2 >= <#randomnumber19> and
> x2_y2 <= <#randomnumber19> + 20 and
> x2_y3 >= <#randomnumber22> and
> x2_y3 <= <#randomnumber22> + 20 and
> x2_y4 >= <#randomnumber25> and
> x2_y4 <= <#randomnumber25> + 20 and
> x2_y5 >= <#randomnumber28> and
> x2_y5 <= <#randomnumber28> + 20 and
> x3_y1 >= <#randomnumber31> and
> x3_y1 <= <#randomnumber31> + 20 and
> x3_y2 >= <#randomnumber34> and
> x3_y2 <= <#randomnumber34> + 20 and
> x3_y3 >= <#randomnumber37> and
> x3_y3 <= <#randomnumber37> + 20 and
> x3_y4 >= <#randomnumber40> and
> x3_y4 <= <#randomnumber40> + 20 and
> x3_y5 >= <#randomnumber43> and
> x3_y5 <= <#randomnumber43> + 20 and
> x4_y1 >= <#randomnumber46> and
> x4_y1 <= <#randomnumber46> + 20 and
> x4_y2 >= <#randomnumber49> and
> x4_y2 <= <#randomnumber49> + 20 and
> x4_y3 >= <#randomnumber52> and
> x4_y3 <= <#randomnumber52> + 20 and
> x4_y4 >= <#randomnumber55> and
> x4_y4 <= <#randomnumber55> + 20 and
> x4_y5 >= <#randomnumber58> and
> x4_y5 <= <#randomnumber58> + 20 and
> x5_y1 >= <#randomnumber61> and
> x5_y1 <= <#randomnumber61> + 20 and
> x5_y2 >= <#randomnumber64> and
> x5_y2 <= <#randomnumber64> + 20 and
> x5_y3 >= <#randomnumber67> and
> x5_y3 <= <#randomnumber67> + 20 and
> x5_y4 >= <#randomnumber70> and
> x5_y4 <= <#randomnumber70> + 20 and
> x5_y5 >= <#randomnumber73> and
> x5_y5 <= <#randomnumber73> + 20;
>
> because it takes a very, very long time (hourS) to return :(
> On other DBMSs (like Firebird), with the same amount of data,
> it returns immediately...
>
> Thanks by advance
> stéphane


[sqlite] How to optimize this simple select query ?

2010-12-08 Thread Vander Clock Stephane
Hello,

on the table :

CREATE TABLE HASH(
   ID INTEGER PRIMARY KEY ASC,
   x1_y1 INTEGER,
   x1_y2 INTEGER,
   x1_y3 INTEGER,
   x1_y4 INTEGER,
   x1_y5 INTEGER,
   x2_y1 INTEGER,
   x2_y2 INTEGER,
   x2_y3 INTEGER,
   x2_y4 INTEGER,
   x2_y5 INTEGER,
   x3_y1 INTEGER,
   x3_y2 INTEGER,
   x3_y3 INTEGER,
   x3_y4 INTEGER,
   x3_y5 INTEGER,
   x4_y1 INTEGER,
   x4_y2 INTEGER,
   x4_y3 INTEGER,
   x4_y4 INTEGER,
   x4_y5 INTEGER,
   x5_y1 INTEGER,
   x5_y2 INTEGER,
   x5_y3 INTEGER,
   x5_y4 INTEGER,
   x5_y5 INTEGER
);
CREATE INDEX HASH_X1_Y1_IDX ON HASH (X1_Y1);
CREATE INDEX HASH_X2_Y2_IDX ON HASH (X2_Y2);
CREATE INDEX HASH_X3_Y3_IDX ON HASH (X3_Y3);
CREATE INDEX HASH_X4_Y4_IDX ON HASH (X4_Y4);
CREATE INDEX HASH_X5_Y5_IDX ON HASH (X5_Y5);


With millions of rows, how do I optimize such a query:

Select
   ID
from
   HASH
where
   x1_y1 >= <#randomnumber1> and
   x1_y1 <= <#randomnumber1> + 20 and
   x1_y2 >= <#randomnumber4> and
   x1_y2 <= <#randomnumber4> + 20 and
   x1_y3 >= <#randomnumber7> and
   x1_y3 <= <#randomnumber7> + 20 and
   x1_y4 >= <#randomnumber10> and
   x1_y4 <= <#randomnumber10> + 20 and
   x1_y5 >= <#randomnumber13> and
   x1_y5 <= <#randomnumber13> + 20 and
   x2_y1 >= <#randomnumber16> and
   x2_y1 <= <#randomnumber16> + 20 and
   x2_y2 >= <#randomnumber19> and
   x2_y2 <= <#randomnumber19> + 20 and
   x2_y3 >= <#randomnumber22> and
   x2_y3 <= <#randomnumber22> + 20 and
   x2_y4 >= <#randomnumber25> and
   x2_y4 <= <#randomnumber25> + 20 and
   x2_y5 >= <#randomnumber28> and
   x2_y5 <= <#randomnumber28> + 20 and
   x3_y1 >= <#randomnumber31> and
   x3_y1 <= <#randomnumber31> + 20 and
   x3_y2 >= <#randomnumber34> and
   x3_y2 <= <#randomnumber34> + 20 and
   x3_y3 >= <#randomnumber37> and
   x3_y3 <= <#randomnumber37> + 20 and
   x3_y4 >= <#randomnumber40> and
   x3_y4 <= <#randomnumber40> + 20 and
   x3_y5 >= <#randomnumber43> and
   x3_y5 <= <#randomnumber43> + 20 and
   x4_y1 >= <#randomnumber46> and
   x4_y1 <= <#randomnumber46> + 20 and
   x4_y2 >= <#randomnumber49> and
   x4_y2 <= <#randomnumber49> + 20 and
   x4_y3 >= <#randomnumber52> and
   x4_y3 <= <#randomnumber52> + 20 and
   x4_y4 >= <#randomnumber55> and
   x4_y4 <= <#randomnumber55> + 20 and
   x4_y5 >= <#randomnumber58> and
   x4_y5 <= <#randomnumber58> + 20 and
   x5_y1 >= <#randomnumber61> and
   x5_y1 <= <#randomnumber61> + 20 and
   x5_y2 >= <#randomnumber64> and
   x5_y2 <= <#randomnumber64> + 20 and
   x5_y3 >= <#randomnumber67> and
   x5_y3 <= <#randomnumber67> + 20 and
   x5_y4 >= <#randomnumber70> and
   x5_y4 <= <#randomnumber70> + 20 and
   x5_y5 >= <#randomnumber73> and
   x5_y5 <= <#randomnumber73> + 20;

because it takes a very, very long time (hourS) to return :(
On other DBMSs (like Firebird), with the same amount of data,
it returns immediately...

Thanks in advance,
stéphane


Re: [sqlite] how to know the list of pragma that are database related or connection related

2010-12-07 Thread Vander Clock Stephane

>> Some pragmas are set per connection,
>> some per database (and all the connections to this database), and
>> some by the engine (all databases and all connections)
> Could you give an example of this last category? I don't see by what possible 
> mechanism could a PRAGMA issued in one process connected to one database 
> file, affect a separate process connected to a different database file. 
> Einstein's spooky action at a distance?

For example page_size, encoding, auto_vacuum, legacy_file_format, or also:

From the doc:
PRAGMA secure_delete;
When there are attached databases and no database is specified in the
pragma, all databases have their secure-delete setting altered.
The secure-delete setting for newly attached databases is the setting of
the main database at the time the ATTACH command is evaluated.

So issuing this pragma will change the behavior of all the databases and,
by extension, all the connections...


>> so how do I distinguish the pragmas that must be called on every connection
>> from those just set one time after the DLL initialization?
> I'm not aware of any pragmas that could be set once and somehow magically 
> take effect on all subsequent connections. How would you even set such a 
> pragma? You need a connection to run PRAGMA statements.

Like page_size, encoding, auto_vacuum, legacy_file_format, secure_delete,
for example? I can only say this because the doc mentions them; most of
the time the doc says nothing :( For example, what about cache_size when
we use shared-cache mode?

>> for example, do I need to call the pragma read_uncommitted on every
>> connection?
> Yes.

OK, that is clear :)

So can you confirm that what I do is good:

At initialization:
sqlite3_config(SQLITE_CONFIG_MULTITHREAD);
sqlite3_initialize();
sqlite3_enable_shared_cache(1);

Then, before creating any database, I do:

PRAGMA page_size = 512;
PRAGMA encoding = "UTF-8";
PRAGMA legacy_file_format = 0;
PRAGMA auto_vacuum = NONE;

Then, for "EVERY" new connection to the database(s), I do:

PRAGMA cache_size = 2000;
PRAGMA count_changes = 0;
PRAGMA journal_mode = MEMORY;
PRAGMA journal_size_limit = -1;
PRAGMA locking_mode = NORMAL;
PRAGMA read_uncommitted = 1;
PRAGMA secure_delete = 0;
PRAGMA synchronous = OFF;
PRAGMA temp_store = MEMORY;


Is this a good way to do it?

Thank you in advance,
stephane
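
A sketch of that initialization flow in C, assuming the pragma lists above:
the global calls happen once, and the per-connection pragmas run on every
handle right after it is opened. The helper, flag choices, and file name are
illustrative.

#include <sqlite3.h>

/* Per-connection pragmas, applied to every handle after opening. */
static const char *kConnPragmas =
    "PRAGMA cache_size = 2000;"
    "PRAGMA journal_mode = MEMORY;"
    "PRAGMA read_uncommitted = 1;"
    "PRAGMA synchronous = OFF;"
    "PRAGMA temp_store = MEMORY;";

static sqlite3 *open_connection(const char *path)
{
    sqlite3 *db = NULL;
    int flags = SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE |
                SQLITE_OPEN_SHAREDCACHE;
    if (sqlite3_open_v2(path, &db, flags, NULL) != SQLITE_OK) {
        sqlite3_close(db);      /* close even on failure, per the docs */
        return NULL;
    }
    sqlite3_exec(db, kConnPragmas, NULL, NULL, NULL);
    return db;
}

int main(void)
{
    sqlite3 *db;
    sqlite3_config(SQLITE_CONFIG_MULTITHREAD);  /* once, before initialize */
    sqlite3_initialize();
    sqlite3_enable_shared_cache(1);
    db = open_connection("app.db");             /* hypothetical file name */
    if (db) sqlite3_close(db);
    sqlite3_shutdown();
    return 0;
}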


[sqlite] What the concequence to use a page size of 512 bytes on a hard drive with a cluster size of 64kb ?

2010-12-06 Thread Vander Clock Stephane
Hello,

What are the consequences of using a page size of 512 bytes on a
hard drive with a cluster size of 64 KB?

Thanks in advance,
stéphane


[sqlite] How to choose the good page size

2010-11-22 Thread Vander Clock Stephane
Hello,

Our database has around 50 million rows.
The database has only one table with singleton name(int) = value(int) pairs.

The database will be stored on a hard drive formatted with 64 KB clusters
(but we can format it with a smaller cluster size if necessary).

The database must use minimal memory (we already have an in-memory index;
the database is only used for occasionally accessing records not already
in memory).

1/ If I use a page size of 64 KB (the same size as the cluster), does that
mean that if sqlite3 by default caches 2000 pages, it will by default use
2000 * 64 KB = 128 MB of memory, right?

2/ What are the consequences of choosing a page size of 512 bytes instead
of the 64 KB cluster size? Also, every row in our table uses at most
100 bytes!

Thanks in advance for your help!
stephane





Re: [sqlite] how to set pragma page size before the database has not yet created ?

2010-11-22 Thread Vander Clock Stephane
Thanks!

On 11/22/2010 4:23 PM, Igor Tandetnik wrote:
> Vander Clock Stephane<svandercl...@yahoo.fr>  wrote:
>> i want to use the pragma command to setup the page size. however they say :
>> "The page size may only be set if the database has not yet been created."
>>
>> but how to do because as soon as i do sqlite3_open_v2 then the database
>> is created ?
> No, the database is created when the first data manipulation statement runs 
> (usually, CREATE TABLE). Precisely so that you could issue some pragmas first.
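
A minimal sketch of the sequence Igor describes: open the handle, issue the
pragma, and only then run the first CREATE TABLE, which is when the database
is actually created. The file and table names are illustrative.

#include <sqlite3.h>

/* page_size must be set before the database exists; per the thread,
 * opening the handle does not yet create it - the first CREATE TABLE does. */
static sqlite3 *create_db(const char *path)
{
    sqlite3 *db = NULL;
    if (sqlite3_open_v2(path, &db,
            SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, NULL) != SQLITE_OK) {
        sqlite3_close(db);
        return NULL;
    }
    sqlite3_exec(db, "PRAGMA page_size = 512", NULL, NULL, NULL);
    sqlite3_exec(db, "CREATE TABLE kv(name INTEGER PRIMARY KEY,"
                     " value INTEGER)", NULL, NULL, NULL);
    return db;
}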


[sqlite] how to set pragma page size before the database has not yet created ?

2010-11-22 Thread Vander Clock Stephane
Hello,

I want to use the pragma command to set the page size. However, the docs say:
"The page size may only be set if the database has not yet been created."

But how do I do that, given that as soon as I call sqlite3_open_v2 the
database is created?
And to execute the pragma I need a connection to the database?


Also, regarding page size: the default cluster size on our hard drive is
64 KB (the maximum), but I think that's too much for sqlite, as we will
use it only to store, in one table, singleton index(int) = value(int)
pairs (but a very large number of rows, around 50 million). Memory
consumption must also be minimal!
What page size do you think I should use?

Thanks in advance,
stéphane





[sqlite] Error in the Doc

2010-11-21 Thread Vander Clock Stephane
Hello,

Just to let you know that the doc probably means to say:

If the 3rd parameter to sqlite3_open_v2() is not one of the combinations 
shown above or *NOT* one of the combinations
shown above combined with the SQLITE_OPEN_NOMUTEX, 
SQLITE_OPEN_FULLMUTEX, SQLITE_OPEN_SHAREDCACHE and/or
SQLITE_OPEN_PRIVATECACHE flags, then the behavior is undefined.

It's missing the word "*NOT*" :)

http://www.sqlite.org/c3ref/open.html