[sqlite] sqlite3_interrupt

2019-01-18 Thread Michele Pradella
Hi all, I was looking at sqlite3_interrupt to make my application
close faster, without waiting for long-running DB operations. I read in
the documentation that it should not be a problem to call it during an
INSERT, UPDATE or DELETE: if a transaction is running, it is
automatically rolled back. 

Is there any extra care I should take before using the interrupt
function? Do you have any experience using it during read/write
operations on the DB? 
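
For context, something like this is the pattern I have in mind (a minimal
sketch; db is the connection a worker thread is using, and the function
name is mine):

#include "sqlite3.h"

/* called from the shutdown path while another thread may be inside a
   long-running sqlite3_step() on the same connection */
static void request_stop(sqlite3 *db) {
    sqlite3_interrupt(db);  /* pending steps return SQLITE_INTERRUPT */
}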

regards


Re: [sqlite] unique values from a subset of data based on two fields

2018-06-29 Thread Michele Pradella

SELECT DISTINCT name, id, status FROM names WHERE status = 1
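
That covers only the status = 1 rows; for the leftover status = 0 names,
one way (a sketch against Paul's schema below, picking the lowest id per
name) is:

SELECT id, status, name FROM names WHERE status = 1
UNION ALL
SELECT MIN(id), status, name
  FROM names
 WHERE status = 0
   AND name NOT IN (SELECT name FROM names WHERE status = 1)
 GROUP BY name;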





On 29/06/2018 17:50, Paul Sanderson wrote:

I have a table

Create table names (id int, status int, name text)

1, 1, 'paul'
2, 1, 'helen'
3, 0, 'steve'
4, 0, 'steve'
5, 0, 'pete'
6, 0, 'paul'

I want a query that returns all of the records with status = 1, plus unique
records (based on name) where status = 0 and the name is not in the
status = 1 list.



So from the above I would want to see

1, 1, paul
2, 1, helen
3, 0, steve (or 4, 0, steve)
5, 0, pete


I could do something like

Select * from names where status = 1 or name not in (select name from names
where status = 1)



But this gets both rows for steve, e.g.

1, 1, paul
2, 1, helen
3, 0, steve
4, 0, steve
5, 0, pete

While I am not bothered about which of the two steves I get back, I must
have all occurrences of names with status = 1.

I am probably missing something obvious.

Paul
www.sandersonforensics.com
SQLite Forensics Book <https://www.amazon.co.uk/dp/ASIN/1980293074>


[sqlite] Transactions

2017-02-05 Thread Michele Pradella
Hi all, I have a question about transactions and SQLite: 

Do you think transactions are useful only when you have a sequence of
statements that depend on each other and you need a way to roll back
all of them if something goes wrong? Or can you use transactions even
with independent statements, for performance reasons? And if yes, do you
think there is a trade-off in the number of queries per transaction? 

I'm thinking about 1000 INSERT queries to execute: does a transaction
work better because the work of committing is paid once at COMMIT rather
than once per insert?
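
(For what it's worth, batching independent INSERTs does pay off, because
each standalone statement runs in its own implicit transaction with its own
commit. A sketch of the batched form, assuming a hypothetical table t(x)
and an open handle db; error checking omitted:)

sqlite3_stmt *stmt;
sqlite3_exec(db, "BEGIN", 0, 0, 0);
sqlite3_prepare_v2(db, "INSERT INTO t(x) VALUES (?1)", -1, &stmt, 0);
for (int i = 0; i < 1000; i++) {
    sqlite3_bind_int(stmt, 1, i);
    sqlite3_step(stmt);              /* SQLITE_DONE on success */
    sqlite3_reset(stmt);
}
sqlite3_finalize(stmt);
sqlite3_exec(db, "COMMIT", 0, 0, 0); /* one durable commit for all 1000 rows */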


Re: [sqlite] Network file system

2016-08-21 Thread Michele Pradella
OK, so speaking about network file systems: 

is one process with two threads that read/write the DB file safe or not? 

Do you think building with SQLITE_THREADSAFE [1] in multi-thread or
serialized mode can avoid the network problem in this use case? 
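
(For reference, the threading mode can also be selected at startup rather
than at compile time; a sketch, assuming it runs before any other SQLite
call:)

#include "sqlite3.h"

/* serialized mode: the library serializes access, so two threads may even
   share one connection; this addresses thread safety only, not the network
   file system locking problem */
sqlite3_config(SQLITE_CONFIG_SERIALIZED);
sqlite3_initialize();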

On 2016-08-05 14:52, Keith Medcalf wrote:

> iSCSI is not a shared network file system.  
> 
> iSCSI is used to mount a local block device using "SCSI over an IP 
> Network Cable".  Without the i, the iSCSI protocol is plain SCSI over a SCSI 
> Parallel Cable, or sometimes (if you call it SAS) SCSI over a Serial Cable.  
> You can also have SCSI over Fibre Channel, SCSI over Token Ring, or probably 
> even SCSI over Avian Carriers.
> 
> The issues with Network File Systems are with Network File Systems, not with 
> the method used to connect the physical block device to the local computer. 
> 
>> -----Original Message-----
>> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
>> On Behalf Of Michele Pradella
>> Sent: Friday, 5 August, 2016 00:31
>> To: sqlite-users@mailinglists.sqlite.org
>> Subject: [sqlite] Network file system
>> 
>> Hi all, I read the documentation about using SQLite on a network file
>> system, so I know it is not a good environment because of file locking
>> problems. Anyway, do you think I can have the same problem if I'm sure
>> that only my process tries to write or read the database? So I have just
>> one process using a network DB (for example over iSCSI), with multiple
>> connections on the same database but just one process. Do you think this
>> can result in corruption? Do you have any experience with this?

 

Links:
--
[1] https://www.sqlite.org/compile.html#threadsafe


Re: [sqlite] Network file system

2016-08-06 Thread Michele Pradella
OK, understood... so the question is: if I have just one process with two
threads reading/writing a network DB, is it safe if the threads write at
the same time? Can building with the correct SQLITE_THREADSAFE [1] value
help handle this?


Links:
--
[1] https://www.sqlite.org/compile.html#threadsafe


Re: [sqlite] Network file system

2016-08-05 Thread Michele Pradella

On 05/08/2016 09:47, Simon Slavin wrote:

> On 5 Aug 2016, at 7:30am, Michele Pradella <michele.prade...@selea.com> wrote:
>
>> Hi all, I read the documentation about using SQLite on a network file
>> system, so I know it is not a good environment because of file locking
>> problems. Anyway, do you think I can have the same problem if I'm sure
>> that only my process tries to write or read the database?
>
> If you have single-access (just one application/process/thread using the
> SQLite API) then you're okay.
>
> Unless you have a hardware fault, of course.
>
> Simon.

So if you have just one application with two threads that access the same
DB, is it safe?


[sqlite] Network file system

2016-08-05 Thread Michele Pradella
Hi all, I read the documentation about using SQLite on a network file 
system, so I know it is not a good environment because of file locking 
problems. Anyway, do you think I can have the same problem if I'm sure that 
only my process tries to write or read the database? So I have just one 
process using a network DB (for example over iSCSI), with multiple 
connections on the same database but just one process. Do you think this 
can result in corruption? Do you have any experience with this?



[sqlite] Good way for CEIL, or is there a better way

2016-05-09 Thread Michele Pradella

> 2016-05-09 13:40 GMT+02:00 Michele Pradella :
>
>> I need to have a CEIL function in SQLite. This is the way I implemented it:
>>> WITH percentage AS (
>>>   SELECT date
>>>   ,   100.0 * rank / outOf AS percentage
>>>   ,  CAST(100.0 * rank / outOf AS int) AS castedPercentage
>>>   FROM ranking
>>> )
>>> SELECT date
>>> ,  (CASE WHEN percentage = castedPercentage
>>>  THEN castedPercentage
>>>  ELSE castedPercentage + 1
>>>  END) AS percentage
>>> FROM percentage
>>>
>>> Is this a good way, or is there a better way?
>>>
>>> Probably you can create your own function
> But I want it to be possible for "everyone" to use the application. People
> need to implement my function then. Or am I wrong about that?
Just add the CEIL function with sqlite3_create_function where you need it 
in your code. I do not understand what you mean by "everyone": if your 
application contains the definition of the CEIL function, everyone using 
your application will have the function.


[sqlite] Good way for CEIL, or is there a better way

2016-05-09 Thread Michele Pradella
> I need to have a CEIL function in SQLite. This is the way I implemented it:
> WITH percentage AS (
>  SELECT date
>  ,   100.0 * rank / outOf AS percentage
>  ,  CAST(100.0 * rank / outOf AS int) AS castedPercentage
>  FROM ranking
> )
> SELECT date
> ,  (CASE WHEN percentage = castedPercentage
> THEN castedPercentage
> ELSE castedPercentage + 1
> END) AS percentage
> FROM percentage
>
> Is this a good way, or is there a better way?
>
Probably  you can create your own function

#include <math.h>
#include "sqlite3.h"

void sqlite3_ceilFunc(sqlite3_context* context, int argc, 
sqlite3_value** values) {
    /* one numeric argument in, its ceiling out */
    sqlite3_result_double(context, ceil(sqlite3_value_double(values[0])));
}

sqlite3* pDB; /* your DB session */
sqlite3_create_function(pDB, "CEIL", 1, SQLITE_UTF8, NULL /* user data */, 
sqlite3_ceilFunc, NULL, NULL);
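
Once registered on the connection, the CTE from the question collapses to
something like:

SELECT date, CEIL(100.0 * rank / outOf) AS percentage FROM ranking;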


[sqlite] In-Memory database PRAGMA read_uncommitted

2016-04-24 Thread Michele Pradella
On 2016-04-23 11:05, R Smith wrote:

> On 2016/04/23 10:20 AM, Michele Pradella wrote: 
> 
>> I have an in-memory DB that is written and read from connections of the
>> same process. All good with shared cache, but I found that table locks
>> occur more often on an in-memory DB than on a disk DB, probably because
>> in memory we can't use WAL.
>> 
>> Anyway, I found the PRAGMA read_uncommitted, which from the documentation
>> seems a way to read without the problem of table locks. The question is
>> about this sentence: "This can lead to inconsistent query results if
>> another database connection modifies a table while it is being read".
>> Does "inconsistent" mean just "out of date"? Or can there be some other
>> type of inconsistent data?
> 
> It means that you can read a record set, using such a shared cache 
> connection, while a sibling connection (with whom you are sharing) is 
> altering the data, resulting in the possibility that the record set will be 
> inconsistent with both the pre-change and the post-change DB states. To draw 
> a picture, imagine the following scenario:
> 
> Create connections C1 and C2 which shares the cache and at least C2 uses 
> pragma read_uncomitted.
> 
> The following table "t" exists so that:
> ID | Val
> ---|----
> 1 | 10
> 2 | 10
> 3 | 10
> 
> Connection C1 starts updating the DB with:
> UPDATE t SET Val = 20;
> 
> Close after that same moment, C2 starts reading (uncommitted, i.e. 
> non-serialized) the DB with:
> SELECT * FROM t;
> 
> But reading is faster than writing, so the result set might look like this 
> perhaps:
> ID | Val
> ---|----
> 1 | 20
> 2 | 20
> 3 | 10
> 
> which is not consistent with either the DB state before C1's write or the 
> state after C1 committed.
> 
> So no, "inconsistent" doesn't "just" mean outdated, it truly means 
> non-consistent. This may or may not be a problem for your scenario.
> 
> Perhaps the timeout setting is of more value to you? I do not have experience 
> of in-memory DBs that get used to the point where table locks become 
> intrusive - but perhaps someone else here has solved the problem and can 
> shed some light.
> 
> Cheers,
> Ryan

OK, understood, thank you. I'll have a look at the timeout setting just
to check it, but in my environment this kind of "inconsistency" is not
a problem. 

Anyway, I think a shared cache on an in-memory DB gives you the ability to
make SQLite really fast in SELECT statements; a very good feature.
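
For reference, the setup being discussed looks roughly like this (a sketch;
"memdb" is an arbitrary name, and URI filenames need the SQLITE_OPEN_URI
flag):

sqlite3 *writer, *reader;
int flags = SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_URI;

/* two connections of the same process sharing one in-memory cache */
sqlite3_open_v2("file:memdb?mode=memory&cache=shared", &writer, flags, NULL);
sqlite3_open_v2("file:memdb?mode=memory&cache=shared", &reader, flags, NULL);

/* dirty reads on the reader, as discussed above */
sqlite3_exec(reader, "PRAGMA read_uncommitted=ON;", 0, 0, 0);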




[sqlite] In-Memory database PRAGMA read_uncommitted

2016-04-23 Thread Michele Pradella
I have an in-memory DB that is written and read from connections of the
same process. All good with shared cache, but I found that table locks
occur more often on an in-memory DB than on a disk DB, probably because in
memory we can't use WAL. 

Anyway, I found the PRAGMA read_uncommitted, which from the documentation
seems a way to read without the problem of table locks. The question is
about this sentence: "This can lead to inconsistent query results if another
database connection modifies a table while it is being read".
Does "inconsistent" mean just "out of date"? Or can there be some other type
of inconsistent data?




[sqlite] In-Memory estimate size

2016-03-20 Thread Michele Pradella
On 2016-03-18 17:18, Richard Hipp wrote:

> On 3/18/16, Michele Pradella  wrote: 
> 
>> Which is the best way to estimate the size of an in-memory DB?
>> Now I'm using (PRAGMA page_size)*(PRAGMA page_count) to have an idea of
>> the size in RAM of the DB. It's quite OK, but it obviously gives a value
>> less than the real one (probably because indexes and other stuff are
>> missing from the count).
>> For example, a DB that takes about 5GB of RAM can come out as 4GB with
>> the operation above.
> 
> The sqlite3_memory_used() interface is not giving the answer you want?

I'll try; I do not see any related PRAGMA, so I think I have to use the
low-level C primitive. 
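
The primitive in question is library-wide, not per connection:

sqlite3_int64 used = sqlite3_memory_used();        /* bytes held right now */
sqlite3_int64 peak = sqlite3_memory_highwater(0);  /* high-water mark; pass
                                                      1 to reset the mark */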




[sqlite] In-Memory estimate size

2016-03-18 Thread Michele Pradella
Which is the best way to estimate the size of an in-memory DB?
Now I'm using (PRAGMA page_size)*(PRAGMA page_count) to have an idea of 
the size in RAM of the DB. It's quite OK, but it obviously gives a value 
less than the real one (probably because indexes and other stuff are missing 
from the count).
For example, a DB that takes about 5GB of RAM can come out as 4GB with the 
operation above.
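
For reference, the estimate being used:

PRAGMA page_size;    -- bytes per page
PRAGMA page_count;   -- pages in the database image, index pages included
-- estimate = page_size * page_count; the shortfall vs. real RAM use is more
-- likely allocator overhead, the page cache and prepared statements, which
-- sqlite3_memory_used() (suggested in the reply above) does capture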


[sqlite] In-Memory DB cache_size

2016-03-17 Thread Michele Pradella
I checked the default cache_size of an in-memory DB and it's 2000.
Do you think for that kind of DB I can set cache_size to 0, like the default 
for TEMP DBs? Or do you think it's better to leave it at 2000?
I'm just wondering whether it makes sense to have a RAM cache for an 
in-memory DB.

[sqlite] Multiple Column index

2016-02-22 Thread Michele Pradella

> On 22 Feb 2016, at 9:07am, Michele Pradella  
> wrote:
>
>> Already done this check. My last question was about reducing the number of 
>> indexes on the table, avoiding killing a "quite unnecessary" index that, if 
>> used, does a better job than the other.
>> Generally speaking, I think that if you use an index on (field1) and an 
>> index on (field1,field2), the work that SQLite has to do to create and use 
>> each index is different. So from the point of view of a SELECT statement, if 
>> the WHERE clause includes only field1, is it the same for SQLite to use the 
>> index on (field1) compared with using the index on (field1,field2)?
> Oh, right.
>
> If you have one index on (field1) and an index on (field1,field2), the index 
> on (field1) is completely pointless and can be deleted.  SQLite will use the 
> other one, and it will get the correct result by using it.
>
> So as I wrote in another thread two days ago ...
>
> "If you have a long index, and shorter index which is the start of it, you 
> don't need the shorter index."
OK thanks, that's what I needed to know.
Regards
>
> Simon.
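
A quick way to see the rule in action (a sketch with hypothetical names;
plan output paraphrased):

CREATE TABLE test (field1 INT, field2 INT);
CREATE INDEX idx_ab ON test(field1, field2);
-- an index on (field1) alone would be a prefix of idx_ab: redundant
EXPLAIN QUERY PLAN SELECT * FROM test WHERE field1 = 1;
-- expected, roughly: SEARCH TABLE test USING COVERING INDEX idx_ab (field1=?)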



[sqlite] Multiple Column index

2016-02-22 Thread Michele Pradella
>>> Your indexes are badly designed.
>>>
>>> You require the following two indexes:
>>> CREATE INDEX indexAB ON test(DateTime,CarPlate);
>>> CREATE INDEX indexBA ON test(CarPlate,DateTime);
>>>
>>> The indexes:
 CREATE INDEX indexA ON test(DateTime);
 CREATE INDEX indexB ON test(CarPlate);
>>> serve no useful purpose and should be dropped.
>> So if I have a query like this
>> "SELECT * FROM table_name WHERE DateTime<VALUE_MAX AND DateTime>VALUE_MIN"
>> do you think I can use indexA or indexAB and the results in terms of 
>> performance are the same? Is there some "more work" that the query has to 
>> do if we use indexAB instead of indexA?
> Here is your original SELECT:
>
> SELECT * FROM table WHERE (field1 LIKE 'TEXT%')AND(field2>=X)AND(field2<=Y)
>
> The question of which index is best depends on how 'chunky' field1 and field2 
> are.  Perhaps half the values in field1 are LIKE 'TEXT%'.  In that case 
> filtering on field1 first does not help much in narrowing down your 
> selection, and SQLite would do better filtering on field2 first.
>
> The way to find this out is to do what I wrote in my previous post.
>
> Create your table.
> Put data in your table like the data you'll be using in real life.
> Define the two indexes Dr Hipp listed above.
> Run the SQL command ANALYZE.
> Use EXPLAIN QUERY PLAN on your SELECT command and see which index SQLite 
> chose to use.
Already done this check. My last question was about reducing the number 
of indexes on the table, avoiding killing a "quite unnecessary" index that, 
if used, does a better job than the other.
Generally speaking, I think that if you use an index on (field1) and an 
index on (field1,field2), the work that SQLite has to do to create and 
use each index is different. So from the point of view of a SELECT statement, 
if the WHERE clause includes only field1, is it the same for SQLite to 
use the index on (field1) compared with using the index on (field1,field2)?
>
> Once you're used to how it works you won't have to do this any more, you will 
> just naturally know what index to make.
>
> Simon.



[sqlite] Multiple Column index

2016-02-22 Thread Michele Pradella

> Your indexes are badly designed.
>
> You require the following two indexes:
> CREATE INDEX indexAB ON test(DateTime,CarPlate);
> CREATE INDEX indexBA ON test(CarPlate,DateTime);
>
> The indexes:
>> CREATE INDEX indexA ON test(DateTime);
>> CREATE INDEX indexB ON test(CarPlate);
> serve no useful purpose and should be dropped.
So if I have a query like this
"SELECT * FROM table_name WHERE DateTime<VALUE_MAX AND DateTime>VALUE_MIN"
do you think I can use indexA or indexAB and the results in terms of 
performance are the same? Is there some "more work" that the query has to 
do if we use indexAB instead of indexA?
>
>> -----Original Message-----
>> From: sqlite-users-bounces at mailinglists.sqlite.org [mailto:sqlite-users-
>> bounces at mailinglists.sqlite.org] On Behalf Of Michele Pradella
>> Sent: Friday, 19 February, 2016 01:07
>> To: sqlite-users at mailinglists.sqlite.org
>> Subject: [sqlite] Multiple Column index
>>
>> Hi all, I have a question about using index with multiple column.
>> Take this test case
>> CREATE TABLE test (DateTime BIGINT,CarPlate VARCHAR(255));
>> CREATE INDEX indexA ON test(DateTime);
>> CREATE INDEX indexB ON test(CarPlate);
>> CREATE INDEX indexAB ON test(DateTime,CarPlate);
>>
>> now if you do
>> [1] -> EXPLAIN QUERY PLAN SELECT * FROM test WHERE (DateTime=0) AND
>> (CarPlate='AA000BB')
>>
>> you obtain:
>> 0|0|0|SEARCH TABLE test USING COVERING INDEX indexAB (DateTime=? AND
>> CarPlate=?)
>> So it's good.
>>
>> if you do
>> EXPLAIN QUERY PLAN SELECT * FROM test WHERE
>> (DateTime>1)AND(DateTime<100) AND (CarPlate = 'AA000BB');
>> 0|0|0|SEARCH TABLE test USING INDEX indexB (CarPlate=?)
>> So only indexB is used, not the index for DateTime, and if you force
>> it to indexAB:
>> 0|0|0|SEARCH TABLE test USING COVERING INDEX indexAB (DateTime>? AND
>> DateTime<?)
>> so it is used only for DateTime.
>>
>> Do you think there is a way to use indexAB (or to use both indexes,
>> combining two SELECTs) to cover both conditions (DateTime>X AND
>> DateTime<Y AND CarPlate=?), or is indexAB fully used only for [1]-style
>> queries?
>> ___
>> sqlite-users mailing list
>> sqlite-users at mailinglists.sqlite.org
>> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
>



[sqlite] Multiple Column index

2016-02-19 Thread Michele Pradella

> (please don't top-post)
>
> Michele Pradella wrote:
>> so there's no way to use that kind of double column index on a select like
>> explain query plan select * from test where (CarPlate LIKE 'AA000%') AND 
>> (DateTime>1);
>> because at least one field has to be used with the = operator, correct?
> No, the number of "initial columns" that use "=" might be zero.
>
> This query can use either an index that has DateTime as the first column,
> or an index that has CarPlate as the first column (if the column has text
> affinity, and if the index uses a case-insensitive collation):
Ok, but the question is: can a statement (SELECT * FROM table WHERE 
(field1 LIKE 'TEXT%')AND(field2>=X)AND(field2<=Y)) use the index created 
on the multicolumn (field1,field2)? Or is it used only if field1 is 
constrained with IN, = or IS?
Which is the best index to use here to get the best result in terms of 
query speed?
>
> CREATE INDEX indexA ON test(DateTime);
> CREATE INDEX indexB_for_LIKE ON test(CarPlate COLLATE NOCASE);
>
> To be able to use a normal (case-sensitive) index, you'd need to do
> a case-sensitive search: (CarPlate GLOB 'AA000*').
>
>
> Regards,
> Clemens
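
Putting the two answers together, a sketch of what the planner can do with
that statement (hypothetical table; plan paraphrased):

CREATE TABLE t (field1 TEXT COLLATE NOCASE, field2 INT);
CREATE INDEX idx12 ON t(field1, field2);
EXPLAIN QUERY PLAN
SELECT * FROM t
 WHERE field1 LIKE 'TEXT%' AND field2 >= 10 AND field2 <= 20;
-- the LIKE prefix becomes a range on field1 (field1 >= 'TEXT' AND
-- field1 < 'TEXU'), i.e. an inequality, so only the first column of idx12
-- narrows the search; the field2 bounds are then checked row by row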



[sqlite] Multiple Column index

2016-02-19 Thread Michele Pradella
> because at least one field has to be used with the = operator, correct?
No, it can be one of =, IN or IS, but not the LIKE operator.

On 19/02/2016 09:41, Michele Pradella wrote:
> Ok understood, so there's no way to use that kind of double-column index 
> on a select like
> explain query plan select * from test where (CarPlate LIKE 'AA000%') AND 
> (DateTime>1);
> because at least one field has to be used with the = operator, correct?



[sqlite] Multiple Column index

2016-02-19 Thread Michele Pradella
Ok understood, so there's no way to use that kind of double-column index 
on a select like
explain query plan select * from test where (CarPlate LIKE 'AA000%') AND 
(DateTime>1);
because at least one field has to be used with the = operator, correct?

On 19/02/2016 09:35, Clemens Ladisch wrote:
> Michele Pradella wrote:
>> CREATE TABLE test (DateTime BIGINT,CarPlate VARCHAR(255));
>> CREATE INDEX indexA ON test(DateTime);
>> CREATE INDEX indexB ON test(CarPlate);
>> CREATE INDEX indexAB ON test(DateTime,CarPlate);
>>
>> EXPLAIN QUERY PLAN SELECT * FROM test WHERE (DateTime>1)AND(DateTime<100) 
>> AND (CarPlate = 'AA000BB');
>> 0|0|0|SEARCH TABLE test USING INDEX indexB (CarPlate=?)
>> So only indexB is used, not the index for DateTime, and if you force it 
>> to indexAB:
>> 0|0|0|SEARCH TABLE test USING COVERING INDEX indexAB (DateTime>? AND 
>> DateTime<?) so it is used only for DateTime.
>>
>> Do you think there is a way to use indexAB to cover both conditions?
> The documentation <http://www.sqlite.org/optoverview.html#where_clause>
> says:
> | The initial columns of the index must be used with the = or IN or IS
> | operators. The right-most column that is used can employ inequalities.
>
> So the column with the inequalities (DateTime) must be the right-most
> one in the index:
>
>CREATE INDEX indexBA ON test(CarPlate, DateTime);
>
>
> Regards,
> Clemens
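
With that column order the equality column comes first and the range column
last, so the whole WHERE clause is consumed by the index (a sketch; plan
output paraphrased):

CREATE INDEX indexBA ON test(CarPlate, DateTime);
EXPLAIN QUERY PLAN
SELECT * FROM test
 WHERE CarPlate = 'AA000BB' AND DateTime > 1 AND DateTime < 100;
-- expected, roughly:
-- SEARCH TABLE test USING COVERING INDEX indexBA
--   (CarPlate=? AND DateTime>? AND DateTime<?)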



[sqlite] Multiple Column index

2016-02-19 Thread Michele Pradella
Hi all, I have a question about using index with multiple column.
Take this test case
CREATE TABLE test (DateTime BIGINT,CarPlate VARCHAR(255));
CREATE INDEX indexA ON test(DateTime);
CREATE INDEX indexB ON test(CarPlate);
CREATE INDEX indexAB ON test(DateTime,CarPlate);

now if you do
[1] -> EXPLAIN QUERY PLAN SELECT * FROM test WHERE (DateTime=0) AND 
(CarPlate='AA000BB')

you obtain:
0|0|0|SEARCH TABLE test USING COVERING INDEX indexAB (DateTime=? AND 
CarPlate=?)
So it's good.

if you do
EXPLAIN QUERY PLAN SELECT * FROM test WHERE 
(DateTime>1)AND(DateTime<100) AND (CarPlate = 'AA000BB');
0|0|0|SEARCH TABLE test USING INDEX indexB (CarPlate=?)
So only indexB is used, not the index for DateTime, and if you force 
it to indexAB:
0|0|0|SEARCH TABLE test USING COVERING INDEX indexAB (DateTime>? AND 
DateTime<?)
so it is used only for DateTime.

Do you think there is a way to use indexAB (or to use both indexes, 
combining two SELECTs) to cover both conditions (DateTime>X AND 
DateTime<Y AND CarPlate=?), or is indexAB fully used only for [1]-style 
queries?

[sqlite] query Benchmark

2016-02-15 Thread Michele Pradella
Anyway, thank you Simon, the point of view is clear now.

On 15/02/2016 10:51, Simon Slavin wrote:
> On 15 Feb 2016, at 9:42am, Michele Pradella  
> wrote:
>
>> CREATE TABLE car_plates (Id INTEGER PRIMARY KEY AUTOINCREMENT,FileName 
>> VARCHAR(255),Plate VARCHAR(255));
>> CREATE INDEX car_plates_plate on car_plates(Plate);
>> PRAGMA case_sensitive_like=ON;
>>
>> explain query plan SELECT * FROM car_plates INDEXED BY car_plates_plate 
>> WHERE (Plate LIKE '*552*'); //OK
>> explain query plan SELECT * FROM car_plates INDEXED BY car_plates_plate 
>> WHERE (Plate LIKE '__552*'); //Error: no query solution
>> explain query plan SELECT * FROM car_plates INDEXED BY car_plates_plate 
>> WHERE (Plate LIKE '_*552*'); //Error: no query solution
>> explain query plan SELECT * FROM car_plates INDEXED BY car_plates_plate 
>> WHERE (Plate LIKE '*_552*'); /OK
>>
>> I think is a strange behavior, but I'm going to read documentation.
> An underline character _ in the LIKE field means "any character".  There is 
> no way for SQLite to use an index to find a string that starts with any 
> character.  The index provides no help at all.  The query will work perfectly 
> if you don't insist on using that index.
>
> By the way, I'm guessing from the above that you think the asterisk * means 
> "any string of characters".  It doesn't.  You should use the percent sign % 
> for that.
>
> Simon.



[sqlite] query Benchmark

2016-02-15 Thread Michele Pradella
Sorry, you are right, the test I did was with % not with *; just a 
cut-and-paste error.
Anyway, it's tricky, because I have to tell SQLite which index to use for a 
fast LIKE search, but I must not force the index when the pattern starts 
with % or _, because otherwise the query is not executed at all. I'll 
handle it...

On 15/02/2016 10:51, Simon Slavin wrote:

> An underline character _ in the LIKE field means "any character".  There is 
> no way for SQLite to use an index to find a string that starts with any 
> character.  The index provides no help at all.



[sqlite] query Benchmark

2016-02-15 Thread Michele Pradella
It's probably documented, so I'm going to read carefully; anyway, if you do this

CREATE TABLE car_plates (Id INTEGER PRIMARY KEY AUTOINCREMENT,FileName 
VARCHAR(255),Plate VARCHAR(255));
CREATE INDEX car_plates_plate on car_plates(Plate);
PRAGMA case_sensitive_like=ON;

explain query plan SELECT * FROM car_plates INDEXED BY car_plates_plate  
WHERE (Plate LIKE '*552*'); //OK
explain query plan SELECT * FROM car_plates INDEXED BY car_plates_plate  
WHERE (Plate LIKE '__552*'); //Error: no query solution
explain query plan SELECT * FROM car_plates INDEXED BY car_plates_plate  
WHERE (Plate LIKE '_*552*'); //Error: no query solution
explain query plan SELECT * FROM car_plates INDEXED BY car_plates_plate  
WHERE (Plate LIKE '*_552*'); //OK

I think this is strange behavior, but I'm going to read the documentation.

PS. I think another strange behavior is that if you use the default COLLATION 
for the column and the default case_sensitive_like, you always get the 
slowest result in LIKE queries.

On 13/02/2016 11:43, R Smith wrote:
>
>
> On 2016/02/12 6:38 PM, Michele Pradella wrote:
>> Already solved with UNION of SELECT
>>
>
> It isn't solved, it is circumvented by trial and error without 
> understanding why.
>
> I'm sure that works ok for you in this case, but the point is if you 
> do study those documents a bit more, you may grasp the reason too, and 
> perhaps make much better/faster queries in future without spending 
> hours on trial and error. Save yourself a lot of time, we think.
>
>
>
>> On 12/02/2016 17:03, Richard Hipp wrote:
>>> ... so that it is easier to understand.  Perhaps a table that shows how the
>>> various combinations of COLLATE and PRAGMA case_sensitive_like work
>>> together.  I dunno.  I'll take the action item to revisit the
>>> documentation and try to improve it.
>
> @Dr.Hipp: Would it be possible / difficult to perhaps improve the like 
> algorithm and pragma case-insensitive_like to take 3 values, such that 
> 0 = OFF, 1 = ON and 2 = AS PER COLUMN DEF?
>
> I realize the like function maybe doesn't know which column, nor is 
> the like operation guaranteed to happen on a column. Maybe there is 
> another way? Perhaps only from inside the query planner when making 
> plan decisions? (Currently it compares the ci-ness during planning to 
> decide on best index to use, if I'm not mistaken).
>
> What I have done is simply added a custom LIKE function that takes a 
> second parameter for case sensitivity to avoid pragma-juggling - but 
> by this, the query planner is none the wiser
>
> I find myself sometimes needing the above, though not often enough to 
> warrant requesting the feature, but it seems now maybe others have the 
> need too.
>
> Thanks,
> Ryan
>



[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
Already solved with a UNION of SELECTs

On 12/02/2016 17:03, Richard Hipp wrote:
> On 2/12/16, Michele Pradella  wrote:
>> Why this query
>> PRAGMA case_sensitive_like=ON;EXPLAIN QUERY PLAN SELECT * FROM
>> car_plates INDEXED BY car_plates_plate
>> WHERE ((CarPlateType!=-1)AND((Plate LIKE 'AA00O%')OR(Plate LIKE
>> 'AA0O0%')))OR((CarPlateType==-1)AND((Plate LIKE '~A00O%')))
>>
>> give me "Error: no query solution"
>>
>> if I remove the second expression ((CarPlateType==-1)AND((Plate LIKE
>> '~A00O%'))) it works
>>
> All of your questions are answered in the documentation, in the links
> provided in previous replies.  The fact that you are still asking
> questions tells me that I need to work on improving the documentation
> so that it is easier to understand.  Perhaps a table that shows the
> various combinations of COLLATE and PRAGMA case_sensitive_like work
> together.  I dunno.  I'll take the action item to revisit the
> documentation and try to improve it.
>
> Meanwhile, please go reread the documentation yourself, slowly and
> carefully, and see if this doesn't all suddenly make sense.



[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
Splitting the query into two SELECTs with UNION lets me use the 
car_plates_plate index without problems... very strange... but I found a 
workaround
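
The exact rewrite isn't shown in the thread; a sketch of the kind of UNION
split described, using the predicates from the failing query quoted below:

SELECT * FROM car_plates INDEXED BY car_plates_plate
 WHERE CarPlateType != -1 AND Plate LIKE 'AA00O%'
UNION
SELECT * FROM car_plates INDEXED BY car_plates_plate
 WHERE CarPlateType != -1 AND Plate LIKE 'AA0O0%'
UNION
SELECT * FROM car_plates INDEXED BY car_plates_plate
 WHERE CarPlateType = -1 AND Plate LIKE '~A00O%';
-- with PRAGMA case_sensitive_like=ON, each branch is a plain prefix range
-- on the Plate index, which the planner can satisfy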

On 12/02/2016 16:43, Michele Pradella wrote:
> Why this query
> PRAGMA case_sensitive_like=ON;EXPLAIN QUERY PLAN SELECT * FROM 
> car_plates INDEXED BY car_plates_plate
> WHERE ((CarPlateType!=-1)AND((Plate LIKE 'AA00O%')OR(Plate LIKE 
> 'AA0O0%')))OR((CarPlateType==-1)AND((Plate LIKE '~A00O%')))
>
> gives me "Error: no query solution"
>
> if I remove the second expression ((CarPlateType==-1)AND((Plate LIKE 
> '~A00O%'))) it works

[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
Why this query
PRAGMA case_sensitive_like=ON;EXPLAIN QUERY PLAN SELECT * FROM 
car_plates INDEXED BY car_plates_plate
WHERE ((CarPlateType!=-1)AND((Plate LIKE 'AA00O%')OR(Plate LIKE 
'AA0O0%')))OR((CarPlateType==-1)AND((Plate LIKE '~A00O%')))

give me "Error: no query solution"

if I remove the second expression ((CarPlateType==-1)AND((Plate LIKE 
'~A00O%'))) it works




[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
the strange thing is that if you create

CREATE TABLE car_plates (Id INTEGER PRIMARY KEY AUTOINCREMENT,DateTime 
BIGINT,StringKey VARCHAR(255) UNIQUE,Plate VARCHAR(255),Type 
BIGINT,CameraName VARCHAR(255),CameraIP VARCHAR(255),CameraMAC 
VARCHAR(255),FileName VARCHAR(255),Country VARCHAR(255),Confidence 
VARCHAR(255),Reason VARCHAR(255),CarPlateType BIGINT,VehicleType 
BIGINT,GPS VARCHAR(255));
CREATE INDEX car_plates_datetime ON car_plates(DateTime);
CREATE INDEX car_plates_plate ON car_plates(Plate);

and you do
PRAGMA case_sensitive_like=ON;
EXPLAIN QUERY PLAN SELECT * FROM car_plates WHERE ((Plate LIKE 
'EX011%')) AND (DateTime>=14550588)AND(DateTime<=14552315);

you will see car_plates_datetime used, not car_plates_plate; but if 
you force the index (car_plates_plate) it will use the correct 
index:
PRAGMA case_sensitive_like=ON;
EXPLAIN QUERY PLAN SELECT * FROM car_plates INDEXED BY 
car_plates_plate WHERE ((Plate LIKE 'EX011%')) AND 
(DateTime>=14550588)AND(DateTime<=14552315)

with PRAGMA case_sensitive_like=OFF; you obviously obtain an error

On 12/02/2016 13:20, Richard Hipp wrote:
> On 2/12/16, Michele Pradella  wrote:
>> table:
>> CREATE TABLE IF NOT EXISTS car_plates (Id INTEGER PRIMARY KEY
>> AUTOINCREMENT,DateTime BIGINT,StringKey VARCHAR(255) UNIQUE,Plate
>> VARCHAR(255),Type BIGINT,CameraName VARCHAR(255),CameraIP
>> VARCHAR(255),CameraMAC VARCHAR(255),FileName VARCHAR(255),Country
>> VARCHAR(255),Confidence VARCHAR(255),Reason VARCHAR(255),CarPlateType
>> BIGINT,VehicleType BIGINT,GPS VARCHAR(255))
>>
>> index:
>> CREATE INDEX IF NOT EXISTS car_plates_plate ON car_plates(Plate)
>>
> Test case:
>
> /* Your original schema */
> CREATE TABLE IF NOT EXISTS car_plates (Id INTEGER PRIMARY KEY
> AUTOINCREMENT,DateTime BIGINT,StringKey VARCHAR(255) UNIQUE,Plate
> VARCHAR(255),Type BIGINT,CameraName VARCHAR(255),CameraIP
> VARCHAR(255),CameraMAC VARCHAR(255),FileName VARCHAR(255),Country
> VARCHAR(255),Confidence VARCHAR(255),Reason VARCHAR(255),CarPlateType
> BIGINT,VehicleType BIGINT,GPS VARCHAR(255));
> CREATE INDEX IF NOT EXISTS car_plates_plate ON car_plates(Plate);
> /* Sample query with PRAGMA case_sensitive_like=OFF (the default) */
> EXPLAIN QUERY PLAN
> SELECT 
> DateTime,FileName,Plate,Type,CameraName,Id,Country,Reason,CarPlateType,VehicleType,GPS
> FROM car_plates WHERE ((Plate LIKE 'EX011%')) AND (Type IN
> (1,2,3,6,7,8)) AND (DateTime>=14550588) AND
> (DateTime<=14552315) ORDER BY DateTime DESC LIMIT 2;
> .print -
> /* Sample Query with PRAGMA case_sensitive_like=ON */
> PRAGMA case_sensitive_like=ON;
> EXPLAIN QUERY PLAN
> SELECT 
> DateTime,FileName,Plate,Type,CameraName,Id,Country,Reason,CarPlateType,VehicleType,GPS
> FROM car_plates WHERE ((Plate LIKE 'EX011%')) AND (Type IN
> (1,2,3,6,7,8)) AND (DateTime>=14550588) AND
> (DateTime<=14552315) ORDER BY DateTime DESC LIMIT 2;
>
> Run the above in the shell.  See that the second query uses the index.
>
> Or, leave case_sensitive_like turned off (the default) but change the
> table definition to include:
>
> ... Plate VARCHAR(255) COLLATE nocase, ...
>
> If you add the "COLLATE nocase" to the example above, you will see
> that the index is used in the default setting, but not when PRAGMA
> case_sensitive_like=ON.
>
>



[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
No... I do not understand. I tried dumping the file and re-importing, but 
the result is always the same, even if I use PRAGMA case_sensitive_like=ON;
the query
PRAGMA case_sensitive_like=ON;
EXPLAIN QUERY PLAN
SELECT 
DateTime,FileName,Plate,Type,CameraName,Id,Country,Reason,CarPlateType,VehicleType,GPS
FROM car_plates WHERE ((Plate LIKE 'EX011%')) AND (Type IN
(1,2,3,6,7,8)) AND (DateTime>=14550588) AND
(DateTime<=14552315) ORDER BY DateTime DESC LIMIT 2;

the index selected is the one on DateTime:
0|0|0|SEARCH TABLE car_plates USING INDEX car_plates_datetime 
(DateTime>? AND DateTime<?)



[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
OK, assume case_sensitive_like=OFF (the default); according to point 6 of 
the LIKE optimization:
http://www.sqlite.org/optoverview.html
it should be the same to have my table definition and

CREATE INDEX IF NOT EXISTS car_plates_plate ON car_plates(Plate COLLATE NOCASE);

Correct? Because that way LIKE does not use the index.


Anyway, about the test case you propose: the first EXPLAIN does not use the 
index, and the second (after PRAGMA case_sensitive_like=ON;) correctly uses 
the index.
I tried the same on a DB I created months ago with the same structure... but 
it doesn't work... probably the DB was created with an earlier SQLite version, 
and I do not know if this can cause the use of the wrong index.

So, speaking about performance, which is better: PRAGMA case_sensitive_like=ON; 
or PRAGMA case_sensitive_like=OFF;?



>



[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
table:
CREATE TABLE IF NOT EXISTS car_plates (Id INTEGER PRIMARY KEY 
AUTOINCREMENT,DateTime BIGINT,StringKey VARCHAR(255) UNIQUE,Plate 
VARCHAR(255),Type BIGINT,CameraName VARCHAR(255),CameraIP 
VARCHAR(255),CameraMAC VARCHAR(255),FileName VARCHAR(255),Country 
VARCHAR(255),Confidence VARCHAR(255),Reason VARCHAR(255),CarPlateType 
BIGINT,VehicleType BIGINT,GPS VARCHAR(255))

index:
CREATE INDEX IF NOT EXISTS car_plates_plate ON car_plates(Plate)


query sample:
SELECT 
DateTime,FileName,Plate,Type,CameraName,Id,Country,Reason,CarPlateType,VehicleType,GPS
 
FROM car_plates WHERE ((Plate LIKE 'EX011%')) AND (Type IN 
(1,2,3,6,7,8)) AND (DateTime>=14550588) AND 
(DateTime<=14552315) ORDER BY DateTime DESC LIMIT 2;

Anyway I tried even with:
SELECT 
DateTime,FileName,Plate,Type,CameraName,Id,Country,Reason,CarPlateType,VehicleType,GPS
FROM car_plates WHERE ((Plate LIKE 'EX011%'))

SELECT 
DateTime,FileName,Plate,Type,CameraName,Id,Country,Reason,CarPlateType,VehicleType,GPS
FROM car_plates INDEXED BY car_plates_plate WHERE ((Plate LIKE 'EX011%'))

and the second one gives me an error.

On 12/02/2016 13.06, Richard Hipp wrote:
> On 2/12/16, Michele Pradella  wrote:
>> I try using COLLATE NOCASE index with PRAGMA case_sensitive_like=OFF;
>> and COLLATE BINARY index with PRAGMA case_sensitive_like=ON;
>> but I have always the query with field LIKE 'AA%' that can't use index
>> on field "No Query solution" is reported by shell if you try to force
>> index. And with explain query plan the index on field is not used.
>> Do you think I'm doing something wrong?
>>
> Yes I do.
>
> Please post your schema and your query and we will have a look.



[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
I tried using a COLLATE NOCASE index with PRAGMA case_sensitive_like=OFF;
and a COLLATE BINARY index with PRAGMA case_sensitive_like=ON;
but the query with field LIKE 'AA%' still can't use the index on the 
field: "No Query solution" is reported by the shell if you try to force the 
index, and with EXPLAIN QUERY PLAN the index on the field is not used.
Do you think I'm doing something wrong?

On 12/02/2016 11.23, Simon Slavin wrote:
> On 12 Feb 2016, at 10:14am, Michele Pradella  
> wrote:
>
>> I can make the query:
>> SELECT field FROM car_plates WHERE (field>='EX011A')AND(field<='EX011Z');
>> now I can use the index the query is faster.
> SQLite makes this optimization for you.  See section 4.0 of
>
> <https://www.sqlite.org/optoverview.html>
>
> Simon.



[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
Moreover, if I use field LIKE 'AA%' together with INDEXED BY index_on_field,
the shell returns "Error: no query solution",
so it seems sqlite with the LIKE operator can't use the index on the field.

On 12/02/2016 11.39, Michele Pradella wrote:
> Ok understood...anyway trying with sqlite shell it's seams that (filed 
> LIKE 'AA%') is slower than (field>='AAA' AND field<='AAZ')
> do you think there's a way I can check if the optimization is working?
>
> On 12/02/2016 11.23, Simon Slavin wrote:
>> On 12 Feb 2016, at 10:14am, Michele Pradella 
>>  wrote:
>>
>>> I can make the query:
>>> SELECT field FROM car_plates WHERE 
>>> (field>='EX011A')AND(field<='EX011Z');
>>> now I can use the index the query is faster.
>> SQLite makes this optimization for you.  See section 4.0 of
>>
>> <https://www.sqlite.org/optoverview.html>
>>
>> Simon.



[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
Ok, understood...anyway, trying with the sqlite shell, it seems that (field 
LIKE 'AA%') is slower than (field>='AAA' AND field<='AAZ').
Do you think there's a way I can check if the optimization is working?

On 12/02/2016 11.23, Simon Slavin wrote:
> On 12 Feb 2016, at 10:14am, Michele Pradella  
> wrote:
>
>> I can make the query:
>> SELECT field FROM car_plates WHERE (field>='EX011A')AND(field<='EX011Z');
>> now I can use the index the query is faster.
> SQLite makes this optimization for you.  See section 4.0 of
>
> <https://www.sqlite.org/optoverview.html>
>
> Simon.



[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
Just to understand: if I have the query with LIKE, simplified:
SELECT field FROM car_plates WHERE (field LIKE 'EX011%');
I can't use the index on field.
Now suppose you know the text values field can contain; I can make the query:
SELECT field FROM car_plates WHERE (field>='EX011A')AND(field<='EX011Z');
and now I can use the index, and the query is faster. Do you think the results 
should be the same (consider a field that can have only 2 letters after 
EX011)?

Generalizing the behavior: is (field LIKE 'EX011%') the same as 
(field>='EX011*lowestASCII*')AND(field<='EX011*greatestASCII*')?
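For reference, the rewrite the LIKE optimization actually performs is close to 
that, but half-open: the upper bound is the prefix with its last character 
incremented. A sketch of the equivalent range form:

SELECT field FROM car_plates
WHERE field >= 'EX011' AND field < 'EX012';
-- 'EX012' is 'EX011' with the last character bumped by one, so every
-- string starting with 'EX011' falls inside the half-open range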

On 12/02/2016 10.28, Michele Pradella wrote:
> I'm already using my software. The scope of test my query with sqlite 
> shell is to have a test environment without other interaction, just to 
> test witch is the fast version of my query to use in my software.
> Another question is, if I have a query like this:
> SELECT *field* FROM car_plates WHERE (*field* LIKE 'EX011%') AND 
> (DateTime>=14550588) AND (DateTime<=14552315) ORDER BY 
> DateTime;
>
> If I try to force the use of an index on *field *but I think sqlite 
> can't use it, is it right?
>
> On 12/02/2016 10.23, Simon Slavin wrote:
>> On 12 Feb 2016, at 8:47am, Michele Pradella 
>>  wrote:
>>
>>> Hi all, is there a way to make a benchmark of queries to check which 
>>> version is faster? I'm using sqlite shell, the question is about on 
>>> how to make repetitive tests in the same conditions (for example I 
>>> need to totally disable cache to avoid different results the second 
>>> time query is executed).
>> Using the ".timer ON" command in the command-line shell, as you 
>> write, is the best way to do timing.  But there are many levels of 
>> cache in your computer and there's no way to disable them all.  You 
>> will definitely get timings influenced by interactions.
>>
>> It looks like you are trying to make commands run as fast as 
>> possible.  Unless you are writing an academic paper on theory, this 
>> is probably the wrong thing to do.  Write your software and see if it 
>> runs fast enough.  Only if it runs too slowly do you need to start 
>> worrying about speed.
>>
>> We can advise on good SQL commands and good indexes for you if you do 
>> need to increase your speed.
>>
>> Simon.



[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
I'm already using my software. The point of testing my query with the sqlite 
shell is to have a test environment without other interactions, just to 
find out which is the fastest version of my query to use in my software.
Another question: if I have a query like this:
SELECT *field* FROM car_plates WHERE (*field* LIKE 'EX011%') AND 
(DateTime>=14550588) AND (DateTime<=14552315) ORDER BY 
DateTime;

and I try to force the use of an index on *field*, I think sqlite 
can't use it. Is that right?

On 12/02/2016 10.23, Simon Slavin wrote:
> On 12 Feb 2016, at 8:47am, Michele Pradella  
> wrote:
>
>> Hi all, is there a way to make a benchmark of queries to check which version 
>> is faster? I'm using sqlite shell, the question is about on how to make 
>> repetitive tests in the same conditions (for example I need to totally 
>> disable cache to avoid different results the second time query is executed).
> Using the ".timer ON" command in the command-line shell, as you write, is the 
> best way to do timing.  But there are many levels of cache in your computer 
> and there's no way to disable them all.  You will definitely get timings 
> influenced by interactions.
>
> It looks like you are trying to make commands run as fast as possible.  
> Unless you are writing an academic paper on theory, this is probably the 
> wrong thing to do.  Write your software and see if it runs fast enough.  Only 
> if it runs too slowly do you need to start worrying about speed.
>
> We can advise on good SQL commands and good indexes for you if you do need to 
> increase your speed.
>
> Simon.



[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
Hi all, is there a way to make a benchmark of queries to check which 
version is faster? I'm using the sqlite shell; the question is about how 
to make repeatable tests under the same conditions (for example, I need to 
totally disable caching to avoid getting different results the second time a 
query is executed).
Let me explain better: I have the same query re-arranged in two different 
forms and want to check which is faster, executed on the same DB. How can I 
measure the speed of those queries while avoiding interaction between the 
two (for example, through the cache)?
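For what it's worth, the approach suggested downthread, as a shell sketch: 
.timer prints the elapsed time after each statement, and restarting the shell 
between variants drops SQLite's page cache (the OS file cache will still 
interfere):

.timer on
SELECT count(*) FROM car_plates WHERE Plate LIKE 'EX011%';
-- Run Time: real ... user ... sys ...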



[sqlite] best way to have a constraint over 2 fields

2015-07-17 Thread Michele Pradella
Hi all, I have a question about the best way to have a constraint on a table.
Think of a DB with 2 fields in the table, ColA and ColB, where you don't 
want multiple records with the same (ColA, ColB) couple.
So we can use
CONSTRAINT ColA_ColB UNIQUE (ColA,ColB) in the CREATE TABLE
or
CREATE UNIQUE INDEX uq_ColA_ColB ON table(ColA, ColB)

Which one do you think is better in terms of performance? Keep in mind that 
the table has millions of records and SELECT is the most frequent operation.
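For what it's worth, both forms end up as the same two-column unique b-tree; 
the table constraint just creates its index implicitly. A sketch to compare 
(tables t1 and t2 are made up here):

CREATE TABLE t1(ColA, ColB, CONSTRAINT ColA_ColB UNIQUE(ColA, ColB));
CREATE TABLE t2(ColA, ColB);
CREATE UNIQUE INDEX uq_ColA_ColB ON t2(ColA, ColB);
-- t1 got an automatic index named like sqlite_autoindex_t1_1:
SELECT name, tbl_name FROM sqlite_master WHERE type='index';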



Re: [sqlite] AUTOINCREMENT BIGINT

2014-11-07 Thread Michele Pradella

Ok, understood, thanks

On 07/11/2014 14.40, Richard Hipp wrote:

On Fri, Nov 7, 2014 at 8:26 AM, Michele Pradella <michele.prade...@selea.com> 
wrote:
Is there a way to use AUTOINCREMENT with BIGINT? What's the reason for
this check?


No.  Furthermore, AUTOINCREMENT probably does not do what you think it
does.  Please read the details at https://www.sqlite.org/autoinc.html





[sqlite] AUTOINCREMENT BIGINT

2014-11-07 Thread Michele Pradella
Hi all, I have a question about the data type BIGINT: from the docs 
(http://www.sqlite.org/datatype3.html) I understand that INTEGER and 
BIGINT result in the same affinity (INTEGER), so the datatypes are the same; 
is that correct?
Unfortunately, if I create a table with a field "Id BIGINT PRIMARY KEY 
AUTOINCREMENT" I get the error "AUTOINCREMENT is only allowed on an 
INTEGER PRIMARY KEY"

from these lines:
#ifndef SQLITE_OMIT_AUTOINCREMENT
sqlite3ErrorMsg(pParse, "AUTOINCREMENT is only allowed on an "
   "INTEGER PRIMARY KEY");
#endif

Is there a way to use AUTOINCREMENT with BIGINT? What's the reason for 
this check?
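For reference: the affinity is indeed the same, but only the exact type name 
INTEGER makes the column an alias of the rowid, and AUTOINCREMENT is tied to 
the rowid. A shell sketch:

CREATE TABLE a(Id BIGINT  PRIMARY KEY AUTOINCREMENT);
-- Error: AUTOINCREMENT is only allowed on an INTEGER PRIMARY KEY
CREATE TABLE b(Id INTEGER PRIMARY KEY AUTOINCREMENT);
-- ok: Id is the rowid alias, still a 64-bit integer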




[sqlite] compile 3.8.4 error

2014-03-10 Thread Michele Pradella

Compiling on Windows 32bit with VS2010,
line 73595:
static const int iLn = __LINE__+4;
gives an error: "error C2099: initializer is not a constant"


Re: [sqlite] memory used by sqlite library

2011-02-25 Thread Michele Pradella
thank you for the advice; I also found this:
http://www.sqlite.org/malloc.html#memstatus
which describes the possibility of setting a memory usage limit
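There is also a pragma wrapper for the advisory heap limit, usable straight 
from the shell; a sketch (the 64 MB figure is just an example):

PRAGMA soft_heap_limit;            -- query the current limit (0 = none)
PRAGMA soft_heap_limit = 67108864; -- ask SQLite to keep heap usage under ~64 MB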

On 25/02/2011 14.24, Marco Bambini wrote:
> Michele take a look at the sqlite3_status function:
> http://www.sqlite.org/c3ref/status.html
> and
> http://www.sqlite.org/c3ref/c_status_malloc_count.html
>
> --
> Marco Bambini
> http://www.sqlabs.com
>
>
>
>
>
>
> On Feb 25, 2011, at 2:17 PM, Michele Pradella wrote:
>
>> Do you know if is there a way to ask to the sqlite library the amount of
>> memory that is using?
>> It could be useful when I have to take a look to the memory used in my
>> application. So I can distinguish between memory allocated by my
>> application itself and allocated by sqlite library.
>> It's possible to set up a maximum amount of memory that the library can use?
>>




[sqlite] memory used by sqlite library

2011-02-25 Thread Michele Pradella
Do you know if there is a way to ask the sqlite library the amount of 
memory it is using?
It could be useful when I have to take a look at the memory used in my 
application, so I can distinguish between memory allocated by my 
application itself and memory allocated by the sqlite library.
Is it possible to set up a maximum amount of memory that the library can use?



Re: [sqlite] Using stored Regular Expressions to match given String

2011-01-12 Thread Michele Pradella
It's possible to use REGEXP in a SELECT statement like this:
SELECT * from table_name WHERE Path REGEXP 'regular expression'

Before you can use REGEXP you have to call
sqlite3_create_function(db, "regexp", 2,
    SQLITE_ANY, (void*)pAppPointer, sqlite3_RegExpFunction, 0, 0)
to tell SQLite to map the REGEXP keyword to your function,

and then you have to implement the function:
static void sqlite3_RegExpFunction(sqlite3_context* context, int argc,
                                   sqlite3_value** values){
    CAppPointer* pAppPointer = (CAppPointer*)sqlite3_user_data(context);
    const char* reg  = (const char*)sqlite3_value_text(values[0]); /* pattern */
    const char* text = (const char*)sqlite3_value_text(values[1]); /* input   */

    if ( (argc != 2) || (reg == 0) || (text == 0) ){
        sqlite3_result_error(context, "SQL function regexp() called "
                             "with invalid arguments.\n", -1);
        return;
    }

    /* evaluate the pattern with your regex engine of choice here;
       pAppPointer can carry whatever state the engine needs */
    if ( 0 /* text does not match reg */ ){
        sqlite3_result_int(context, 0);   /* no match */
        return;
    }
    sqlite3_result_int(context, 1);       /* match */
}
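Bruno's stored-pattern case (quoted below) then just needs the operands 
swapped, since X REGEXP Y is evaluated as regexp(Y, X) with Y as the pattern. 
A sketch (the literal '/main/page' is an invented input string):

SELECT * FROM table_name WHERE '/main/page' REGEXP URI;
-- URI holds the stored pattern; the literal is the text being matched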


On 12/01/2011 12.40, Bruno Augusto wrote:
> Hi,
>
> I hope I'm doing the right thing. I never used Mailing Lists before.
>
> So, I know I need a user function to use the REGEXP operator. But most of
> the implementations I'd found (in PHP, I have to say) requires TWO
> parameters, the Regular Expression and the string to match.
>
> I created an SQLITE database where the Regular Expressions is already
> stored, and I would like to send a raw string to retrieve the correspondent
> result.
>
> E.g.:
>
> In the database's column 'URI', I have the following values: /(.*?) and
> /main(.*?)
>
> I would like to send a query similar to: SELECT * FROM `table` WHERE `URI`
> REGEXP( '/main/' )
>
> And the record to be returned would be, in the example case, the second,
> which have the text "main".
>
> Is that possible? Maybe adding an UDF?
>




Re: [sqlite] FTS Question

2010-11-23 Thread Michele Pradella
you have to use

select * from contacts where contacts LIKE '%t-online.de%'
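If staying with MATCH is preferred, quoting the hyphenated term as a phrase 
should also work, since a bare '-' prefix is what triggers the exclusion 
syntax; a sketch, assuming the default simple tokenizer:

SELECT * FROM contacts WHERE contacts MATCH '"t-online.de"';
-- the quoted phrase tokenizes to: t online de
-- so m...@t-online.de matches and nothing is excluded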



On 23/11/2010 14.53, ady wrote:
> Hello!
>
> I am trying this query
>
> select * from contacts where contacts match ('*t-online.de*')
>
> How could i modify this query to return say m...@t-online.de ?
>
> the - is an exclusion
>
> Thanks in advance!
>
> Ady




[sqlite] INSERT OR UPDATE

2010-11-10 Thread Michele Pradella
Hi all, I have to INSERT a row in a DB, but first I have to check if the 
Key I'm inserting already exists.
Now I'm doing a "SELECT count..." first to check if the key exists, and 
then I INSERT or UPDATE the record.
Do you know if there's a better or faster way to do that?
Perhaps with an ON CONFLICT resolution algorithm of INSERT or UPDATE?
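A sketch of the conflict-clause route, assuming the key column is declared 
UNIQUE or PRIMARY KEY (table t and its columns are made up here). Note that 
INSERT OR REPLACE deletes the old row first, so columns you don't re-supply 
are lost, while the ON CONFLICT ... DO UPDATE upsert added later in SQLite 
3.24 updates the row in place:

-- single round trip instead of SELECT + INSERT/UPDATE
INSERT OR REPLACE INTO t(key, val) VALUES('k', 'v');

-- true upsert, SQLite >= 3.24:
INSERT INTO t(key, val) VALUES('k', 'v')
ON CONFLICT(key) DO UPDATE SET val = excluded.val;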


Re: [sqlite] Speed up DELETE of a lot of records

2010-10-12 Thread Michele Pradella
  Thank you for the advice, I'll try it in my application.
regards
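Pavel's generational scheme below could look roughly like this from the shell, 
assuming one database file per time slice (the file name, abbreviated schema 
and values are invented):

ATTACH DATABASE 'plates_slice_42.db' AS cur;
CREATE TABLE IF NOT EXISTS cur.car_plates(DateTime INTEGER, Plate TEXT);
INSERT INTO cur.car_plates VALUES(1286900000, 'EX011AB');
-- expiring a whole slice is then just a DETACH plus deleting the
-- old file at the OS level, with no DELETE statement at all
DETACH DATABASE cur;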

On 12/10/2010 17.17, Pavel Ivanov wrote:
> Michele,
>
> Here is another thought for you to consider. Apparently your
> application consistently generates some records, each record is marked
> with a timestamp of its creation and after some time you have to
> garbage-collect all records that are at least at a certain amount of
> time in the past. You can do this with generational gc: split your
> full database in parts according to record creation time and then your
> garbage collection will consist of deletion of database file from file
> system without deleting any records. E.g. let's say you start with one
> part, you write your records in it for 15 minutes. Then you create new
> part and write all new records in it. Old part just sits there for
> querying. After 15 minutes you create new part and so on. Then when
> time comes and all records in the first part must be deleted you just
> delete that database file and that's it - no interruption in access
> for newly created records. Of course this implementation complicates
> querying of existing records especially when you need some statistics
> pretty often (as opposed to querying individual records) but for some
> access patterns it can give you a huge performance improvement (it
> actually did for my application).
>
>
> Pavel
>
> On Mon, Oct 11, 2010 at 11:13 AM, Jay A. Kreibich<j...@kreibi.ch>  wrote:
>> On Mon, Oct 11, 2010 at 02:08:54PM +0200, Michele Pradella scratched on the 
>> wall:
>>>Ok so the main idea it's always the same: split the DELETE to make the
>>> operation on less records, but do it more often.
>>   Another thought occurs to me...   If your insert rates are fairly
>>   consistent (e.g. the number of records per minute is consistent, or
>>   at least predictable) then you can just use the table as a circular
>>   buffer.  That is, rather than inserting new data and deleting the
>>   old data, simply overwrite the older records with the new data.
>>   Rather than inserting new records, simply find the oldest record and
>>   update it with the new data.
>>
>>   If the records have similar static length content (e.g. no variable
>>   length text strings) this should be fairly fast, and will eliminate
>>   the need to to delete the whole records.  The UPDATE might be a tad
>>   slower than a unencumbered INSERT, but it is still likely to be
>>   faster than an INSERT plus a later DELETE.
>>
>>   Be careful, however, as even integers are variable-length records in
>>   SQLite (if their magnitude is different).  It might be best to use
>>   fixed size strings, even for the numeric values.  The UPDATE will be
>>   much faster if the new data fits into the same "slot" as the old
>>   record, and it can be updated in-place.
>>
>>   You could setup the inserts to find the oldest time and update that
>>   records (which should be fairly quick if there is an index on your
>>   timestamp column) or you could just manually create however many
>>   NULL entries you need and explicitly update incremental records.
>>   When the application starts up, just find the oldest date and
>>   continue.
>>
>>   This all depends on being able to predict the number of records
>>   required to meet you storage needs, however.  I suppose you could
>>   allow the database to expand as needed (that is, get the oldest date,
>>   and if it is not outside your window, INSERT rather than UPDATE).
>>   There are lots of ways to do this, the specifics depend on your
>>   needs.
>>
>>
>>-j
>>
>> --
>> Jay A. Kreibich<  J A Y  @  K R E I B I.C H>
>>
>> "Intelligence is like underwear: it is important that you have it,
>>   but showing it to the wrong people has the tendency to make them
>>   feel uncomfortable." -- Angela Johnson




Re: [sqlite] Speed up DELETE of a lot of records

2010-10-11 Thread Michele Pradella
Ok, so the main idea is always the same: split the DELETE so each 
operation touches fewer records, but do it more often.

On 11/10/2010 13.33, Black, Michael (IS) wrote:
> I said this before but never saw a response...
>
> Just put your delete inside a select loop so it will always be interruptable.
>
> I assume you are talking about writing your own code in C or such?
>
> So rather than
> DELETE FROM TABLE WHERE datetime<  oldtime;
>
> Do this;
> SELECT datetime from TABLE where datetime<  oldtime;
> BEGIN
> for each record
> DELETE FROM TABLE where datetime=thistime;
> COMMIT
>
> I don't know how distinct your datetime values are...this delete could delete 
> multiple records at once which would be OK.
>
> One other possibility is to break out your delete statements like this:
>
> DELETE FROM TABLE WHERE datetime<  oldtime AND ((datetime % 100)=0)
> DELETE FROM TABLE WHERE datetime<  oldtime AND ((datetime % 100)=1)
> DELETE FROM TABLE WHERE datetime<  oldtime AND ((datetime % 100)=2)
> DELETE FROM TABLE WHERE datetime<  oldtime AND ((datetime % 100)=3)
> ...
>
> So each delete would be approx 100 times faster and would allow interruption 
> inbetween deletes.
>
>
>
> Michael D. Black
> Senior Scientist
> Advanced Analytics Directorate
> Northrop Grumman Information Systems
>
>
> 
>
> From: sqlite-users-boun...@sqlite.org on behalf of Michele Pradella
> Sent: Mon 10/11/2010 4:56 AM
> To: General Discussion of SQLite Database
> Subject: EXTERNAL:Re: [sqlite] Speed up DELETE of a lot of records
>
>
>
>
>I know that in this use case UPDATE is lither than DELETE, but if I
> make a DELETE at 4am I can have the DB locked for a lot of time at 4am:
> I'm only shift the problem.
> It's not a problem of interface efficiency for user experience: the goal
> is to make the system always reactive without slow down all the other DB
> operation.
>
> Il 11/10/2010 11.46, Simon Slavin ha scritto:
>> On 11 Oct 2010, at 10:26am, Michele Pradella wrote:
>>
>>>Soft delete could increase the SELECT speed because you have to check
>>> always for the "deleted" column.
>>> Moreover the DB will grow up without limit if no one physically delete
>>> the records: anyway UPDATE of a lot of records could be expensive too.
>>> I think the only way is to DELETE more frequently less records.
>> I recommend that you try it.  At least in the command-line application if 
>> not in your own code.  Deleting a record is very expensive in terms of time 
>> and resources.  Changing a field from a 1 to a 0 is far cheaper and faster, 
>> even if it's in an index or two.
>>
>> You can have a regular task that runs at 4am that deletes all the records 
>> with a 0 in.  That bit is easy.
>>
>> Simon.
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>>
>




Re: [sqlite] Speed up DELETE of a lot of records

2010-10-11 Thread Michele Pradella
It's what I'm doing: I'm deleting a small number of records at a time, 
every 15 minutes.

On 11/10/2010 12.40, Simon Slavin wrote:
> On 11 Oct 2010, at 10:56am, Michele Pradella wrote:
>
>>   I know that in this use case UPDATE is lither than DELETE, but if I
>> make a DELETE at 4am I can have the DB locked for a lot of time at 4am:
>> I'm only shift the problem.
>> It's not a problem of interface efficiency for user experience: the goal
>> is to make the system always reactive without slow down all the other DB
>> operation.
> Then delete the records immediately.  Don't wait for a batch to build up.  
> Delete some every minute.
>
> Simon.




Re: [sqlite] Speed up DELETE of a lot of records

2010-10-11 Thread Michele Pradella
I know that in this use case UPDATE is lighter than DELETE, but if I 
run the DELETE at 4am I can have the DB locked for a long time at 4am: 
I'm only shifting the problem.
It's not a problem of interface efficiency for the user experience: the goal 
is to keep the system always reactive, without slowing down all the other DB 
operations.

On 11/10/2010 11.46, Simon Slavin wrote:
> On 11 Oct 2010, at 10:26am, Michele Pradella wrote:
>
>>   Soft delete could increase the SELECT speed because you have to check
>> always for the "deleted" column.
>> Moreover the DB will grow up without limit if no one physically delete
>> the records: anyway UPDATE of a lot of records could be expensive too.
>> I think the only way is to DELETE more frequently less records.
> I recommend that you try it.  At least in the command-line application if not 
> in your own code.  Deleting a record is very expensive in terms of time and 
> resources.  Changing a field from a 1 to a 0 is far cheaper and faster, even 
> if it's in an index or two.
>
> You can have a regular task that runs at 4am that deletes all the records 
> with a 0 in.  That bit is easy.
>
> Simon.




Re: [sqlite] Speed up DELETE of a lot of records

2010-10-11 Thread Michele Pradella
Soft delete could slow down SELECTs, because you always have to check 
the "deleted" column.
Moreover, the DB will grow without limit if no one physically deletes 
the records; and anyway, an UPDATE of a lot of records could be expensive too.
I think the only way is to DELETE fewer records more frequently.
If someone has a smarter idea, I'm listening ;)
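For reference, the soft-delete shape being discussed, as a sketch (assumes a 
spare integer column named "deleted"; the cutoff literal is a placeholder):

UPDATE car_plates SET deleted = 1 WHERE DateTime < 1286900000;  -- "delete"
SELECT * FROM car_plates WHERE deleted = 0;                     -- normal reads
DELETE FROM car_plates WHERE deleted = 1;                       -- purge at 4am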

On 08/10/2010 22.54, Nicolas Williams wrote:
> On Fri, Oct 08, 2010 at 05:49:18PM +0100, Simon Slavin wrote:
>> On 8 Oct 2010, at 5:48pm, Stephan Wehner wrote:
>>> On Fri, Oct 8, 2010 at 7:14 AM, Michele Pradella
>>> <michele.prade...@selea.com>  wrote:
>>>>   "science fiction?" was a rhetorically question. I'm only wondering
>>>> about what is the best and fastest way to DELETE a lot of records from
>>>> huge DB. I know and understand physical limit of data moving: anyway for
>>>> now I'm trying to split the BIG DELETE in some smaller DELETE to spread
>>>> the time used. It's the only way I can figure out at the moment.
>>> Is a soft-delete faster? Then you could add a slow-moving delete
>>> (mentioned earlier by Aldes Rossi, for example)
>>> for the soft-deleted records.
>> Soft-delete ?  Is that having another column which is a '1' if the
>> record is supposed to exist and a '0' if it's supposed to be deleted ?
> Use NULL to indicate deleted-ness.  But note that you may still have to
> update lots of rows and indexes.  Indeed, a true delete might not need
> much more I/O (whereas a VACUUM after the DELETE would).




Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
  "science fiction?" was a rhetorically question. I'm only wondering 
about what is the best and fastest way to DELETE a lot of records from 
huge DB. I know and understand physical limit of data moving: anyway for 
now I'm trying to split the BIG DELETE in some smaller DELETE to spread 
the time used. It's the only way I can figure out at the moment.

On 08/10/2010 15.55, Jay A. Kreibich wrote:
> On Fri, Oct 08, 2010 at 09:09:09AM +0200, Michele Pradella scratched on the 
> wall:
>>I was thinking this too, but I take this for last chance: my hope is I
>> can delete 5 millions of records in few seconds, science fiction? :)
>Science fiction of the worst B-grade sort.
>
>Think about the numbers.  You're talking about updating a significant
>chunk of a multi-gigabyte file.  The WAL file tells you the changes
>amount to ~600MB of writes.  That's a whole CDs worth of data.  These
>days that might not be much for storage, but it is still a lot of
>data to move around.  Even if your storage system has a continuous,
>sustained write ability of 20MB/sec, that's a half minute.  How fast
>can your disk copy 600MB worth of data?
>
>But you're not just writing.  You're doing a lot of reads from all
>over the file in an attempt to figure out what to modify and write.
>Both the reads and the writes (the integration, at least) are
>scattered and small, so you're not going to get anywhere near the
>sustained performance levels.  10x slower would be extremely good.
>
>Or think of it in more physical numbers... If you're using a single
>vanilla disk, it likely spins at 7200 RPMs.  If it takes five minutes
>to update 5,000,000 records, that's an average of almost 140 records
>per disk revolution.  That's pretty good, considering everything else
>that is going on!
>
>
>
>The only possible way to manipulate that much data in a "few seconds"
>is to load up on RAM, get a real operating system, and throw the
>whole database into memory.  Or spend many, many, many thousands of
>dollars on a very wide disk array with a very large battery-backed
>cache and a huge pipe between your host and the array.
>
>    Big storage is cheap.  Fast storage is not.  Don't confuse the two.
>
> -j
>
>




Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
I tried 32 and it takes the same time. Moreover, the RAM used is about 500MB.
Even with "PRAGMA synchronous = OFF;" the operation time is the same.
I think the only way for now is to run more DELETEs with fewer elements each, 
but it's not the best way.


On 08/10/2010 10.41, Marcus Grimm wrote:
> Michele Pradella wrote:
>>ok I'll try with 3.7.3
>> DELETE is a little bit faster, and the -wal is reintegrated when I close
>> the connection.
>> Changing cache_size (I try 1) DELETE takes same time to complete.
> 1 doesn't sound very big; I used to define it to e.g. 32 instead when
> working with a GB-size DB (mainly for index creation and "PRAGMA 
> integrity_check;")
>
> Maybe playing with temporarily setting PRAGMA synchronous = OFF; would be 
> useful, although I think the wal mode should already avoid too much syncing.
>
> Marcus
>
>
>> Was my fault, because to close the shell connection I used Ctrl+C but
>> this leave the -wal file. If I close with .quit the -wal file is
>> reintegrated.
>> I thought that Ctrl+C is like a ".quit " but it's not.
>> Anyway if I close the DB connection with Ctrl+C and than reopen
>> connection and close it with .quit the -wal file is not reintegrated.
>>
>> On 08/10/2010 9.56, Michele Pradella wrote:
>>> I'll try to increase cache size, and I'll try operation on my Db with
>>> the 3.7.3 anyway I already ported the Fix of the WAL issue from recent
>>> snapshot. I'll try and let you know
>>>
>>> On 08/10/2010 9.52, Marcus Grimm wrote:
>>>> Michele Pradella wrote:
>>>>>  As I explain in previews email, I think that recreating the index is
>>>>> the slowest operation I can do on my DB.
>>>>> Anyway in my first email I ask another question about -wal file
>>>>> Tryin to DELETE the (5 millions) records with the shell SQLITE interface
>>>>> I can see the -wal grow up till 600MB. I can not understand why the -wal
>>>>> is no more reintegrated, and even when I close the connection (closing
>>>>> the shell) -wal file still exist.
>>>>> Trying for example the statement "create table new as select * from
>>>>> current where condition = keep;" the -wal file grow up till 1,5GB and
>>>>> than the same, after closing shell the -wal remain.
>>>>>
>>>>> Moreover the operation above "create new" terminate with a "Error:
>>>>> disk I/O error"
>>>>> The hard disk I use has a lot of free space and it's SATA2 hard disk, so
>>>>> is internal
>>>> You may try with the new 3.7.3 version, the 3.7.2 doesn't operate
>>>> very well on win32 when doing huge transactions in wal mode.
>>>>
>>>> 2nd, when running sqlite with a gigabyte sized database it is useful
>>>> to heavily increase the cache size, not sure if that helps for delete
>>>> statements though, but it does in particular when creating indices.
>>>>
>>>> Marcus
>>>>
>>>>> On 07/10/2010 20.38, Petite Abeille wrote:
>>>>>> On Oct 7, 2010, at 6:05 PM, Michele Pradella wrote:
>>>>>>
>>>>>>> I have a DB of about 3GB: the DB has about 23 millions of records. [..]
>>>>>>> the statement is trying to delete about 5 millions records and it takes
>>>>>>> about 4-5minutes.
>>>>>>> Is there a way to try to speed up the DELETE?
>>>>>> Considering that you want to delete about a quarter of the records, 
>>>>>> perhaps it would be more efficient to recreate that table altogether, no?
>>>>>>
>>>>>> Pseudocode:
>>>>>>
>>>>>> create table new as select * from current where condition = keep;
>>>>>> create index on new;
>>>>>> drop table current;
>>>>>> alter table rename new to current;
>>>>>>




Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
DELETE on the primary key instead of the DateTime index takes the same time

On 08/10/2010 10.30, Michele Pradella wrote:
>ok I'll try with 3.7.3
> DELETE is a little bit faster, and the -wal is reintegrated when I close
> the connection.
> Changing cache_size (I try 1) DELETE takes same time to complete.
> Was my fault, because to close the shell connection I used Ctrl+C but
> this leave the -wal file. If I close with .quit the -wal file is
> reintegrated.
> I thought that Ctrl+C is like a ".quit " but it's not.
> Anyway if I close the DB connection with Ctrl+C and than reopen
> connection and close it with .quit the -wal file is not reintegrated.
>
> On 08/10/2010 9.56, Michele Pradella wrote:
>> I'll try to increase cache size, and I'll try operation on my Db with
>> the 3.7.3 anyway I already ported the Fix of the WAL issue from recent
>> snapshot. I'll try and let you know
>>
>> On 08/10/2010 9.52, Marcus Grimm wrote:
>>> Michele Pradella wrote:
>>>>  As I explain in previews email, I think that recreating the index is
>>>> the slowest operation I can do on my DB.
>>>> Anyway in my first email I ask another question about -wal file
>>>> Tryin to DELETE the (5 millions) records with the shell SQLITE interface
>>>> I can see the -wal grow up till 600MB. I can not understand why the -wal
>>>> is no more reintegrated, and even when I close the connection (closing
>>>> the shell) -wal file still exist.
>>>> Trying for example the statement "create table new as select * from
>>>> current where condition = keep;" the -wal file grow up till 1,5GB and
>>>> than the same, after closing shell the -wal remain.
>>>>
>>>> Moreover the operation above "create new" terminate with a "Error:
>>>> disk I/O error"
>>>> The hard disk I use has a lot of free space and it's SATA2 hard disk, so
>>>> is internal
>>> You may try with the new 3.7.3 version, the 3.7.2 doesn't operate
>>> very well on win32 when doing huge transactions in wal mode.
>>>
>>> 2nd, when running sqlite with a gigabyte sized database it is useful
>>> to heavily increase the cache size, not sure if that helps for delete
>>> statements though, but it does in particular when creating indices.
>>>
>>> Marcus
>>>
>>>> On 07/10/2010 20.38, Petite Abeille wrote:
>>>>> On Oct 7, 2010, at 6:05 PM, Michele Pradella wrote:
>>>>>
>>>>>> I have a DB of about 3GB: the DB has about 23 millions of records. [..]
>>>>>> the statement is trying to delete about 5 millions records and it takes
>>>>>> about 4-5minutes.
>>>>>> Is there a way to try to speed up the DELETE?
>>>>> Considering that you want to delete about a quarter of the records, 
>>>>> perhaps it would be more efficient to recreate that table altogether, no?
>>>>>
>>>>> Pseudocode:
>>>>>
>>>>> create table new as select * from current where condition = keep;
>>>>> create index on new;
>>>>> drop table current;
>>>>> alter table rename new to current;
>>>>>
>




Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
Ok, I'll try with 3.7.3.
DELETE is a little bit faster, and the -wal is reintegrated when I close 
the connection.
Changing cache_size (I tried 1) DELETE takes the same time to complete.
It was my fault: to close the shell connection I used Ctrl+C, but 
this leaves the -wal file behind. If I close with .quit the -wal file is 
reintegrated.
I thought that Ctrl+C was like a ".quit", but it's not.
Anyway, if I close the DB connection with Ctrl+C and then reopen the 
connection and close it with .quit, the -wal file is not reintegrated.
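A checkpoint can also be forced by hand instead of relying on the last close; 
a sketch (the plain form was available in this era, the TRUNCATE argument only 
came much later):

PRAGMA wal_checkpoint;            -- copy WAL content back into the main DB
-- newer versions: PRAGMA wal_checkpoint(TRUNCATE); also empties the -wal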

On 08/10/2010 9.56, Michele Pradella wrote:
>I'll try to increase cache size, and I'll try operation on my Db with
> the 3.7.3 anyway I already ported the Fix of the WAL issue from recent
> snapshot. I'll try and let you know
>
> On 08/10/2010 9.52, Marcus Grimm wrote:
>> Michele Pradella wrote:
>>> As I explain in previews email, I think that recreating the index is
>>> the slowest operation I can do on my DB.
>>> Anyway in my first email I ask another question about -wal file
>>> Tryin to DELETE the (5 millions) records with the shell SQLITE interface
>>> I can see the -wal grow up till 600MB. I can not understand why the -wal
>>> is no more reintegrated, and even when I close the connection (closing
>>> the shell) -wal file still exist.
>>> Trying for example the statement "create table new as select * from
>>> current where condition = keep;" the -wal file grow up till 1,5GB and
>>> than the same, after closing shell the -wal remain.
>>>
>>> Moreover the operation above "create new" terminate with a "Error:
>>> disk I/O error"
>>> The hard disk I use has a lot of free space and it's SATA2 hard disk, so
>>> is internal
>> You may try with the new 3.7.3 version, the 3.7.2 doesn't operate
>> very well on win32 when doing huge transactions in wal mode.
>>
>> 2nd, when running sqlite with a gigabyte sized database it is useful
>> to heavily increase the cache size, not sure if that helps for delete
>> statements though, but it does in particular when creating indices.
>>
>> Marcus
>>
>>> On 07/10/2010 20.38, Petite Abeille wrote:
>>>> On Oct 7, 2010, at 6:05 PM, Michele Pradella wrote:
>>>>
>>>>> I have a DB of about 3GB: the DB has about 23 millions of records. [..]
>>>>> the statement is trying to delete about 5 millions records and it takes
>>>>> about 4-5minutes.
>>>>> Is there a way to try to speed up the DELETE?
>>>> Considering that you want to delete about a quarter of the records, 
>>>> perhaps it would be more efficient to recreate that table altogether, no?
>>>>
>>>> Pseudocode:
>>>>
>>>> create table new as select * from current where condition = keep;
>>>> create index on new;
>>>> drop table current;
>>>> alter table rename new to current;
>>>>
>




Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
I'll try to increase the cache size, and I'll try the operations on my DB with 
3.7.3; anyway, I already ported the fix for the WAL issue from a recent 
snapshot. I'll try and let you know.

On 08/10/2010 9.52, Marcus Grimm wrote:
> Michele Pradella wrote:
>>As I explain in previews email, I think that recreating the index is
>> the slowest operation I can do on my DB.
>> Anyway in my first email I ask another question about -wal file
>> Tryin to DELETE the (5 millions) records with the shell SQLITE interface
>> I can see the -wal grow up till 600MB. I can not understand why the -wal
>> is no more reintegrated, and even when I close the connection (closing
>> the shell) -wal file still exist.
>> Trying for example the statement "create table new as select * from
>> current where condition = keep;" the -wal file grow up till 1,5GB and
>> than the same, after closing shell the -wal remain.
>>
>> Moreover the operation above "create new" terminate with a "Error:
>> disk I/O error"
>> The hard disk I use has a lot of free space and it's SATA2 hard disk, so
>> is internal
> You may try with the new 3.7.3 version, the 3.7.2 doesn't operate
> very well on win32 when doing huge transactions in wal mode.
>
> 2nd, when running sqlite with a gigabyte sized database it is useful
> to heavily increase the cache size, not sure if that helps for delete
> statements though, but it does in particular when creating indices.
>
> Marcus
>
>> On 07/10/2010 20.38, Petite Abeille wrote:
>>> On Oct 7, 2010, at 6:05 PM, Michele Pradella wrote:
>>>
>>>> I have a DB of about 3GB: the DB has about 23 millions of records. [..]
>>>> the statement is trying to delete about 5 millions records and it takes
>>>> about 4-5minutes.
>>>> Is there a way to try to speed up the DELETE?
>>> Considering that you want to delete about a quarter of the records, perhaps 
>>> it would be more efficient to recreate that table altogether, no?
>>>
>>> Pseudocode:
>>>
>>> create table new as select * from current where condition = keep;
>>> create index on new;
>>> drop table current;
>>> alter table rename new to current;
>>>




Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
I was thinking this too, but I keep it as a last resort: my hope is that I 
can delete 5 millions of records in a few seconds. Science fiction? :)

On 08/10/2010 9.00, Aldes Rossi wrote:
> On 10/08/2010 08:30 AM, Michele Pradella wrote:
>> I don't know if could be faster to do more Delete of less records, or
>> perhaps making a VIEW and than deleting all the records matching the
>> VIEW. I'm thinking about this to find the fastest solution, because the
>> problem is that when sqlite is deleting the records obviously I can not
>> access the DB for insert new records and all the operations have to wait
>> for delete complete...and 4-5minutes is too much time to wait.
> Maybe splitting this operation into many DELETEs of fewer records lengthens
> the total time, but it results in shorter service interruptions, and may
> in fact be more acceptable.
>
> Aldes Rossi
>




Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
As I explained in a previous email, I think that recreating the index is 
the slowest operation I can do on my DB.
Anyway, in my first email I asked another question, about the -wal file.
Trying to DELETE the (5 millions) records with the sqlite shell interface, 
I can see the -wal grow up to 600MB. I cannot understand why the -wal 
is no longer reintegrated; even when I close the connection (closing 
the shell), the -wal file still exists.
Trying for example the statement "create table new as select * from 
current where condition = keep;" the -wal file grows up to 1.5GB and 
then the same happens: after closing the shell the -wal remains.

Moreover, the "create new" operation above terminates with "Error: 
disk I/O error".
The hard disk I use has a lot of free space and it's a SATA2 hard disk, so 
it is internal.

On 07/10/2010 20.38, Petite Abeille wrote:
> On Oct 7, 2010, at 6:05 PM, Michele Pradella wrote:
>
>> I have a DB of about 3GB: the DB has about 23 millions of records. [..]
>> the statement is trying to delete about 5 millions records and it takes
>> about 4-5minutes.
>> Is there a way to try to speed up the DELETE?
> Considering that you want to delete about a quarter of the records, perhaps 
> it would be more efficient to recreate that table altogether, no?
>
> Pseudocode:
>
> create table new as select * from current where condition = keep;
> create index on new;
> drop table current;
> alter table rename new to current;
>




Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
I think that if the total records are 23 millions and the records to 
delete are 5 millions, creating a temp table of 23-5=18 millions records 
is slower than directly deleting 5 millions records.
Anyway, the table has got only 1 index, on the DateTime column, which is 
INTEGER.
About dropping all indexes first: recreating the index afterwards is the 
slowest operation I can do on the DB, so I cannot use that approach.
I can try to DELETE on the primary key instead of on the DateTime, but I think 
that the slowest operation is the deletion of the records, not the 
selection. Furthermore, I cannot be sure that the primary key is always 
incremental: usually it should be, but if for example the primary key wraps, 
it could not be the right thing to do.
I don't know if it could be faster to do more DELETEs of fewer records, or 
perhaps make a VIEW and then delete all the records matching the 
VIEW. I'm thinking about this to find the fastest solution, because the 
problem is that while sqlite is deleting the records I obviously cannot 
access the DB to insert new records, and all the other operations have to wait 
for the delete to complete...and 4-5 minutes is too much time to wait.
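One common shape for "more DELETEs of fewer records" is to chunk by rowid, 
which keeps each write transaction short so inserts and SELECTs get a turn in 
between; a sketch (the batch size and the cutoff literal are placeholders):

DELETE FROM car_plates
WHERE rowid IN (SELECT rowid FROM car_plates
                WHERE DateTime < 1286409600 LIMIT 10000);
-- repeat while changes() > 0; each pass is its own short transaction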


On 07/10/2010 19.14, P Kishor wrote:
> On Thu, Oct 7, 2010 at 11:05 AM, Michele Pradella
> <michele.prade...@selea.com>  wrote:
>>   Hi all, I have a question about how to speed up a DELETE statement.
>> I have a DB of about 3GB: the DB has about 23 millions of records.
>> The DB is indexed by a DateTime column (is a 64 bit integer), and
>> suppose you want to delete all records before a date.
>> Now I'm using a syntax like this (I try all the statement with the
>> sqlite shell):
>> suppose to use __int64 DateValue=the date limit you want to delete
>>
>> DELETE FROM table_name WHERE DateTime < DateValue;
> What is the speed of
>
> SELECT * FROM table WHERE DateTime >= DateValue;
>
> If the above speed is acceptable, then try the following
>
> CREATE TABLE tmp AS SELECT * FROM table WHERE DateTime >= DateValue;
> DROP TABLE table;
> ALTER TABLE tmp RENAME to table;
>
>
>> the statement is trying to delete about 5 millions records and it takes
>> about 4-5minutes.
>> Is there a way to try to speed up the DELETE? I already try to put the
>> DELETE statement between a BEGIN; COMMIT; statement, but same result.
>> After the delete complete I have a -wal file of about 600MB: this file
>> is not deleted even if I disconnect from the database.
>> Is that the right behavior? I thought that when the last DB connection
>> terminate the -wal file is reintegrated in the DB, but it's not.
>>




[sqlite] Speed up DELETE of a lot of records

2010-10-07 Thread Michele Pradella
  Hi all, I have a question about how to speed up a DELETE statement.
I have a DB of about 3GB: the DB has about 23 millions of records.
The DB is indexed by a DateTime column (a 64 bit integer), and
suppose you want to delete all records before a date.
Now I'm using a syntax like this (I try all the statement with the 
sqlite shell):
suppose to use __int64 DateValue=the date limit you want to delete

DELETE FROM table_name WHERE DateTime<=DateValue;

the statement is trying to delete about 5 million records and it takes
about 4-5 minutes.
Is there a way to try to speed up the DELETE? I already tried to put the
DELETE statement between a BEGIN; COMMIT; statement, but same result.
After the delete completes I have a -wal file of about 600MB: this file
is not deleted even if I disconnect from the database.
Is that the right behavior? I thought that when the last DB connection
terminates the -wal file is reintegrated in the DB, but it's not.


Re: [sqlite] When do I need SQLITE_THREADSAFE?

2010-10-06 Thread Michele Pradella
  I checked in the sqlite3.c code for SQLITE_THREADSAFE. I can't find
any difference between SQLITE_THREADSAFE=1 and SQLITE_THREADSAFE=2; I
found differences only between SQLITE_THREADSAFE=0 and SQLITE_THREADSAFE>0.
Have I missed something?

Il 06/10/2010 8.07, Michele Pradella ha scritto:
>So let me know if I understand:
>
> 1. SQLITE_THREADSAFE=0: locks are disabled entirely, and you can use
>this if the application is not multi-threaded, i.e. if there's only
>one thread that can use the SQLITE library.
> 2. SQLITE_THREADSAFE=1: the highest level of thread safety, and so
>the slowest mode, but you do not have to care about multiple accesses
>to the SQLITE library from different threads. You need this in a
>multi-threaded application to be sure to avoid damage to the DB.
> 3. SQLITE_THREADSAFE=2: a compromise between thread safety and
>speed. The library is thread safe but you have to avoid two
>threads using the same DB connection at the same time. So if you
>have for example 3 threads and every thread accesses the library with
>its own connection there's no problem. So you can use two different
>DB connections from two threads at the same time without problems. Is
>that right?
>
>
> Il 05/10/2010 18.34, Jay A. Kreibich ha scritto:
>> On Tue, Oct 05, 2010 at 12:44:59PM +0200, Zaher Dirkey scratched on the wall:
>>> On Tue, Oct 5, 2010 at 4:55 AM, Jay A. Kreibich<j...@kreibi.ch>   wrote:
>>>
>>>> On Mon, Oct 04, 2010 at 07:25:05PM -0700, Dustin Sallings scratched on the
>>>> wall:
>>>>
>>>>
>>>> The main difference between =1 and =2 is that =2 assumes you more or
>>>>less know what you're doing and will either lock a database handle as
>>>>you pass it between threads or you'll keep it private to a thread.
>>>>
>>>>=1 is designed to be more or less idiot proof, and will simply not
>>>>let you do something stupid with a database handle, like have two
>>>>threads try to execute two different statements at the same time.
>>>>
>>> What if two therad make insert or update then commit, what is happen to the
>>> second thread after the first made commit?.
>> The second thread will block if it attempts to use a database
>> connection that the first thread is already using.  Once the first
>> thread finishes (either with an explicit COMMIT/ROLLBACK, or because
>> all auto-commit transactions go out of scope) then the first thread
>> will release the database connection and the second thread will wake
>> up and be allowed to proceed.  That's why the =1 mode is called "serial"...
>> it automatically serializes the database statements.
>>
>> At least, I'm pretty sure that's how it works.  I generally avoid
>> threaded code, and when I do use it, I tend to use thread-specific
>> resources that are carefully locked and guarded.
>>
>>  -j
>>
>




Re: [sqlite] When do I need SQLITE_THREADSAFE?

2010-10-06 Thread Michele Pradella
  So let me know if I understand:

   1. SQLITE_THREADSAFE=0: locks are disabled entirely, and you can use
  this if the application is not multi-threaded, i.e. if there's only
  one thread that can use the SQLITE library.
   2. SQLITE_THREADSAFE=1: the highest level of thread safety, and so
  the slowest mode, but you do not have to care about multiple accesses
  to the SQLITE library from different threads. You need this in a
  multi-threaded application to be sure to avoid damage to the DB.
   3. SQLITE_THREADSAFE=2: a compromise between thread safety and
  speed. The library is thread safe but you have to avoid two
  threads using the same DB connection at the same time. So if you
  have for example 3 threads and every thread accesses the library with
  its own connection there's no problem. So you can use two different
  DB connections from two threads at the same time without problems. Is
  that right?
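
If that is right, a minimal C sketch of case 3 (hedged: it assumes a
library built with SQLITE_THREADSAFE=2, and the database file, table and
thread count here are illustrative):

#include <sqlite3.h>
#include <pthread.h>

/* In multi-thread mode every thread must use its own connection;
   nothing here shares a sqlite3* between threads. */
static void *worker(void *arg)
{
    sqlite3 *db;
    if (sqlite3_open((const char *)arg, &db) == SQLITE_OK) {
        sqlite3_exec(db, "INSERT INTO logs(DateTime) "
                         "VALUES (strftime('%s','now'))", 0, 0, 0);
        sqlite3_close(db);
    }
    return 0;
}

int main(void)
{
    pthread_t t[3];
    for (int i = 0; i < 3; i++)
        pthread_create(&t[i], 0, worker, "test.db");
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], 0);
    return 0;
}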


Il 05/10/2010 18.34, Jay A. Kreibich ha scritto:
> On Tue, Oct 05, 2010 at 12:44:59PM +0200, Zaher Dirkey scratched on the wall:
>> On Tue, Oct 5, 2010 at 4:55 AM, Jay A. Kreibich<j...@kreibi.ch>  wrote:
>>
>>> On Mon, Oct 04, 2010 at 07:25:05PM -0700, Dustin Sallings scratched on the
>>> wall:
>>>
>>>
>>>The main difference between =1 and =2 is that =2 assumes you more or
>>>   less know what you're doing and will either lock a database handle as
>>>   you pass it between threads or you'll keep it private to a thread.
>>>
>>>   =1 is designed to be more or less idiot proof, and will simply not
>>>   let you do something stupid with a database handle, like have two
>>>   threads try to execute two different statements at the same time.
>>>
>> What if two therad make insert or update then commit, what is happen to the
>> second thread after the first made commit?.
>The second thread will block if it attempts to use a database
>connection that the first thread is already using.  Once the first
>thread finishes (either with an explicit COMMIT/ROLLBACK, or because
>all auto-commit transactions go out of scope) then the first thread
>will release the database connection and the second thread will wake
>up and be allowed to proceed.  That's why the =1 mode is called "serial"...
>it automatically serializes the database statements.
>
>At least, I'm pretty sure that's how it works.  I generally avoid
>    threaded code, and when I do use it, I tend to use thread-specific
>resources that are carefully locked and guarded.
>
> -j
>




Re: [sqlite] COUNT very slow

2010-09-24 Thread Michele Pradella
  I can get a big speed up of COUNT if I first create a VIEW of what I
have to count and then run SELECT COUNT on the view.
Without VIEW: 9 Minutes
With VIEW: 8 Seconds!
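
In sketch form (the table, view name and cutoff are illustrative):

CREATE VIEW logs_old AS
    SELECT DateTime FROM logs WHERE DateTime <= yesterday;  -- cutoff as above
SELECT COUNT(*) FROM logs_old;
DROP VIEW logs_old;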

Il 24/09/2010 10.58, Martin Engelschalk ha scritto:
>
> Am 24.09.2010 10:38, schrieb Michele Pradella:
>> ok, thank you for the advices, I'll try to use a TRIGGER.
>> The DB already has an index.
>> Anyway if I have to count something like this:
>> select COUNT(*) from logs WHERE DateTime<=yesterday
>> I can't do it with a TRIGGER
> No, but in this case an index on DateTime will help (except when most of
> the records are older than yesterday).
> Also, you could keep track of the number of records for each day with a
> table containing DateTime and RecordCount.
>
>> Il 24/09/2010 10.29, Martin Engelschalk ha scritto:
>>>  Hello Michele,
>>>
>>> sqlite does not remember the number of records in a table. Therefore,
>>> counting them requires to scan the full table, which explains the slow
>>> performance.
>>>
>>> This topic has been discussed previously in this list. See
>>> http://www.mail-archive.com/sqlite-users@sqlite.org/msg10279.html
>>>
>>> If you need the result quickly, you have to maintain the number of
>>> records yourself in a different table, perhaps using triggers.
>>>
>>> Martin
>>>
>>>
>>> Am 24.09.2010 10:13, schrieb Michele Pradella:
>>>>   I have an SQLite DB of about 9GB with about 2.500.000 records.
>>>> I can't understand why the "select COUNT(*) from log" statement is
>>>> extremely slow, it takes me about 9-10 minutes!
>>>> I try with:
>>>> select COUNT(1) from logs
>>>> select COUNT(DateTime) from logs
>>>> same result. Have you idea of why it's so slow?




Re: [sqlite] COUNT very slow

2010-09-24 Thread Michele Pradella
  ok, thank you for the advice, I'll try to use a TRIGGER.
The DB already has an index.
Anyway if I have to count something like this:
select COUNT(*) from logs WHERE DateTime<=yesterday
I can't do it with a TRIGGER
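
For the plain total, though, Martin's trigger idea would look roughly
like this (a sketch with illustrative names; a per-day variant, as he
suggests below, would key the counter table on the day of DateTime):

CREATE TABLE logs_count (n INTEGER);
INSERT INTO logs_count VALUES ((SELECT COUNT(*) FROM logs));
CREATE TRIGGER logs_ai AFTER INSERT ON logs
    BEGIN UPDATE logs_count SET n = n + 1; END;
CREATE TRIGGER logs_ad AFTER DELETE ON logs
    BEGIN UPDATE logs_count SET n = n - 1; END;
-- afterwards: SELECT n FROM logs_count;  -- no full table scan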

Il 24/09/2010 10.29, Martin Engelschalk ha scritto:
>Hello Michele,
>
> sqlite does not remember the number of records in a table. Therefore,
> counting them requires to scan the full table, which explains the slow
> performance.
>
> This topic has been discussed previously in this list. See
> http://www.mail-archive.com/sqlite-users@sqlite.org/msg10279.html
>
> If you need the result quickly, you have to maintain the number of
> records yourself in a different table, perhaps using triggers.
>
> Martin
>
>
> Am 24.09.2010 10:13, schrieb Michele Pradella:
>> I have an SQLite DB of about 9GB with about 2.500.000 records.
>> I can't understand why the "select COUNT(*) from log" statement is
>> extremely slow, it takes me about 9-10 minutes!
>> I try with:
>> select COUNT(1) from logs
>> select COUNT(DateTime) from logs
>> same result. Have you idea of why it's so slow?




[sqlite] COUNT very slow

2010-09-24 Thread Michele Pradella
  I have an SQLite DB of about 9GB with about 2.500.000 records.
I can't understand why the "select COUNT(*) from log" statement is 
extremely slow, it takes me about 9-10 minutes!
I try with:
select COUNT(1) from logs
select COUNT(DateTime) from logs
same result. Do you have any idea why it's so slow?


Re: [sqlite] Performance problems and large memory size

2010-09-24 Thread Michele Pradella
  I read about optimization for page_size, and I'll try different sizes
to check whether I get a speed up with page size 4096 instead of 1024.
Anyway, did you make some tests about this speed up? Which operations are
faster with a 4096 page size: SELECT, INSERT or DELETE?
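
In case anyone repeats the test: as far as I know the page size of an
existing database only changes after a VACUUM, and cannot be changed at
all while the database is in WAL mode. A sketch:

PRAGMA journal_mode=DELETE;  -- a WAL database keeps its page size
PRAGMA page_size=4096;       -- takes effect on the next VACUUM
VACUUM;                      -- rebuilds the file with 4096-byte pages
PRAGMA page_size;            -- should now report 4096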

Il 22/09/2010 20.19, Max Vlasov ha scritto:
>1024 for everything except Windows.  The Windows filesystem module
>>   attempts to match the page size to the minimum write block of the
>>   filesystem.  For a typical NTFS volume, that's usually 4K.
>>
>>
> Jay, small correction, the default page_size on windows is still 1024
> (checked it), there's just an advice on the site wiki to change it to the
> cluster size immediately after the db creation (
> http://www.sqlite.org/cvstrac/wiki?p=PerformanceTuningWindows).
>
> Max




Re: [sqlite] Performance problems and large memory size

2010-09-22 Thread Michele Pradella
  ok, I think the default is 1024.
So for a cache size of 2000:
(1024+100)*2000 = about 2.2MB

Il 22/09/2010 16.30, Michele Pradella ha scritto:
>ok thank you, usually how big is the default page_size?
>
> Il 22/09/2010 16.17, Jay A. Kreibich ha scritto:
>> On Wed, Sep 22, 2010 at 12:02:33PM +0200, Michele Pradella scratched on the 
>> wall:
>>> I have a question about "PRAGMA cache_size"
>>> if I use the default value(2000) and I use the default value for the
>>> page size, what is the max memory size sqlite can reach in a request?
>> The *cache* can grow to something on the order of
>> (page_size + 100) * cache_size (there is some overhead), but the
>> total memory consumption can be higher if transient tables or indexes
>> are required to service the query.  These allocations are typically
>> short-lived, however.  Normally each database connection has its own
>> cache.
>>
>> If you need to hard-limit SQLite's total memory footprint, you use
>> sqlite3_config() and several other functions to provide a static
>> memory pool, or provide your own memory allocator.
>>
>>  -j
>>
>>
>




Re: [sqlite] Performance problems and large memory size

2010-09-22 Thread Michele Pradella
  ok thank you, usually how big is the default page_size?

Il 22/09/2010 16.17, Jay A. Kreibich ha scritto:
> On Wed, Sep 22, 2010 at 12:02:33PM +0200, Michele Pradella scratched on the 
> wall:
>>I have a question about "PRAGMA cache_size"
>> if I use the default value(2000) and I use the default value for the
>> page size, what is the max memory size sqlite can reach in a request?
>The *cache* can grow to something on the order of
>(page_size + 100) * cache_size (there is some overhead), but the
>total memory consumption can be higher if transient tables or indexes
>are required to service the query.  These allocations are typically
>short-lived, however.  Normally each database connection has its own
>cache.
>
>If you need to hard-limit SQLite's total memory footprint, you use
>sqlite3_config() and several other functions to provide a static
>memory pool, or provide your own memory allocator.
>
> -j
>
>
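
A hedged C sketch of the fixed-pool idea Jay describes above (the
SQLITE_CONFIG_HEAP option needs a library compiled with
SQLITE_ENABLE_MEMSYS5 or SQLITE_ENABLE_MEMSYS3; the pool size and
minimum allocation here are illustrative):

#include <sqlite3.h>

static char pool[16 * 1024 * 1024];  /* SQLite never allocates past 16MB */

int init_sqlite_memory(void)
{
    /* Must run before any other SQLite API call. */
    int rc = sqlite3_config(SQLITE_CONFIG_HEAP, pool, (int)sizeof(pool), 64);
    if (rc != SQLITE_OK) return rc;  /* e.g. MEMSYS5 not compiled in */
    return sqlite3_initialize();
}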




Re: [sqlite] Performance problems and large memory size

2010-09-22 Thread Michele Pradella
  I have a question about "PRAGMA cache_size":
if I use the default value (2000) and the default value for the
page size, what is the max memory size sqlite can reach in a request?

Il 21/09/2010 19.31, Pavel Ivanov ha scritto:
>> Is Sqlite somewhere caching data? If so, how do I disable it or decrease the 
>> amount of cached data?
> http://www.sqlite.org/pragma.html#pragma_cache_size
>
>
> Pavel
>
> On Tue, Sep 21, 2010 at 1:24 PM, "Richard Wähnelt"<deslo...@web.de>  wrote:
>> Hello there,
>>
>> I hope, someone can help me with the problem I'm having.
>>
>> The whole picture:
>> I'm running a .NET 3.5 application using System.Data.SQLite as provider. 
>> Inserting Data happens via Entity Framework and Linq to Entitites. For 
>> querying data I use DataAdapter/DataView/DataGridView combination with 
>> SQL-Strings. The application runs 24/7.
>> I created the database with just plain DDL commands and added two triggers, 
>> no changes to configuration were made. The size of the sqlite file is about 
>> 100 MB and there is data being inserted every 2 seconds. Each insert 
>> triggers either one of the triggers. After a while (considerably short time 
>> of 1 day) the memory consumption of my application as seen in Windows Task 
>> Manager grows from about 60 MB to 150 MB. As far as I can tell, this doesn't 
>> come from my application directly.
>> This becomes a problem because the system is only equipped with Intel Atom 
>> 1.6 GHz and 1 GB RAM (no other applications are running) and after a while 
>> my application slows down and doesn't compute input as fast as needed.
>>
>>
>> My question is now:
>> Is Sqlite somewhere caching data? If so, how do I disable it or decrease the 
>> amount of cached data?
>>
>> Or does this maybe happen in combination with the DataConnections for 
>> EntityFramework and/or DataAdapter and do I need to reset them somehow to 
>> clear memory?
>>
>> Any help is greatly appreciated.
>>
>> Kind Regards
>> deslokrw




Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-16 Thread Michele Pradella
  ok, I think I'll use the snapshot in the meantime. Thank you

Il 16/09/2010 15.43, Richard Hipp ha scritto:
> On Thu, Sep 16, 2010 at 9:02 AM, Michele Pradella<
> michele.prade...@selea.com>  wrote:
>
>>   After some days of test the application work fine and with VMMap I see
>> small value of mapped file and I see only 1 instance of -shm allocated,
>> some times 2 but it's all ok. So I'll wait for the next amalgamation
>> release to have an official sources for that fix.
>> When do you think will be released?
>>
> The next scheduled official release is for the middle of October.
>
> But you can download a development snapshot from
> http://www.sqlite.org/draft/download.html and use it in the mean time.  The
> snapshot has not been tested to the same extent that an official release
> has, but it is still stable and should not give you any problems.  (On the
> other hand, if you do encounter problems with a snapshot, please let us
> know.)
>
>
>
>> Il 13/09/2010 8.18, Michele Pradella ha scritto:
>>> Ok I think that the latest patch I used to try the WAL is the patch
>>> that fix the problem. I do not get wasted memory by mapped file ;) good
>>> job. Anyway I leave the application running this day. If I have news
>>> I'll tell you.
>>> Just a question: when you plan to release a new version of sqlite with
>>> this patches?
>>>
>>> Il 10/09/2010 20.46, Max Vlasov ha scritto:
>>>> On Fri, Sep 10, 2010 at 6:32 PM, Shane Harrelson<sh...@sqlite.org>
>> wrote:
>>>>> I tried to reproduce this, and could not.
>>>>>
>>>>> There are some questions inline below.Additionally, I want to
>>>>> verify that you've tried this with a version of SQLite containing the
>>>>> previously linked fix.
>>>>>
>>>>>
>>>> Shane, the fix helped, the thread is a bit mixed, the information I
>> posted
>>>> (and you had questions about) was about non fixed 3_7_2, now I finally
>> did
>>>> the test with the patched library.2,000,000 appends were made without
>> any
>>>> problem.
>>>>
>>>>
>>>> On Fri, Sep 10, 2010 at 5:37 PM, Michele Pradella<
>>>> michele.prade...@selea.com>wrote:
>>>>
>>>>> ...connection do the operation that in my situation cause the -shm
>> Mapped
>>>>> File grow up, Am I sure that sqlite do not waste application's memory
>>>>> even if I never close the DB connection? is just a question, and anyway
>>>>> I got my application running to test this behavior.
>>>>>
>>>>>
>>>> Michele, I think, giving this calculation, you can be sure of about
>> 2G*128/2
>>>> (~128G) size of the db till the mapped address space is out. But are you
>>>> sure you want appending without committing for months? I doubt you
>> consider
>>>> the saved information valuable in this case since  a chance of a power
>> or
>>>> system failure is higher and higher each day :)
>>>>
>>>> Max
>
>




Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-16 Thread Michele Pradella
  After some days of testing the application works fine: with VMMap I see
small values for the mapped files and only 1 instance of -shm allocated,
sometimes 2, but it's all ok. So I'll wait for the next amalgamation
release to have official sources for that fix.
When do you think it will be released?

Il 13/09/2010 8.18, Michele Pradella ha scritto:
>Ok I think that the latest patch I used to try the WAL is the patch
> that fix the problem. I do not get wasted memory by mapped file ;) good
> job. Anyway I leave the application running this day. If I have news
> I'll tell you.
> Just a question: when you plan to release a new version of sqlite with
> this patches?
>
> Il 10/09/2010 20.46, Max Vlasov ha scritto:
>> On Fri, Sep 10, 2010 at 6:32 PM, Shane Harrelson<sh...@sqlite.org>   wrote:
>>
>>> I tried to reproduce this, and could not.
>>>
>>> There are some questions inline below.Additionally, I want to
>>> verify that you've tried this with a version of SQLite containing the
>>> previously linked fix.
>>>
>>>
>> Shane, the fix helped, the thread is a bit mixed, the information I posted
>> (and you had questions about) was about non fixed 3_7_2, now I finally did
>> the test with the patched library.2,000,000 appends were made without any
>> problem.
>>
>>
>> On Fri, Sep 10, 2010 at 5:37 PM, Michele Pradella<
>> michele.prade...@selea.com>   wrote:
>>
>>> ...connection do the operation that in my situation cause the -shm Mapped
>>> File grow up, Am I sure that sqlite do not waste application's memory
>>> even if I never close the DB connection? is just a question, and anyway
>>> I got my application running to test this behavior.
>>>
>>>
>> Michele, I think, giving this calculation, you can be sure of about 2G*128/2
>> (~128G) size of the db till the mapped address space is out. But are you
>> sure you want appending without committing for months? I doubt you consider
>> the saved information valuable in this case since  a chance of a power or
>> system failure is higher and higher each day :)
>>
>> Max




Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-13 Thread Michele Pradella
  Ok, I think that the latest patch I used to try the WAL is the patch
that fixes the problem. I no longer get memory wasted by mapped files ;)
good job. Anyway I'll leave the application running for the day. If I
have news I'll tell you.
Just a question: when do you plan to release a new version of sqlite with
these patches?

Il 10/09/2010 20.46, Max Vlasov ha scritto:
> On Fri, Sep 10, 2010 at 6:32 PM, Shane Harrelson<sh...@sqlite.org>  wrote:
>
>> I tried to reproduce this, and could not.
>>
>> There are some questions inline below.Additionally, I want to
>> verify that you've tried this with a version of SQLite containing the
>> previously linked fix.
>>
>>
> Shane, the fix helped, the thread is a bit mixed, the information I posted
> (and you had questions about) was about non fixed 3_7_2, now I finally did
> the test with the patched library.2,000,000 appends were made without any
> problem.
>
>
> On Fri, Sep 10, 2010 at 5:37 PM, Michele Pradella<
> michele.prade...@selea.com>  wrote:
>
>> ...connection do the operation that in my situation cause the -shm Mapped
>> File grow up, Am I sure that sqlite do not waste application's memory
>> even if I never close the DB connection? is just a question, and anyway
>> I got my application running to test this behavior.
>>
>>
> Michele, I think, giving this calculation, you can be sure of about 2G*128/2
> (~128G) size of the db till the mapped address space is out. But are you
> sure you want appending without committing for months? I doubt you consider
> the saved information valuable in this case since  a chance of a power or
> system failure is higher and higher each day :)
>
> Max




Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
  The question is about the possibility, but I have to be sure that this
does not happen in my application, because my application must keep
running for months without a restart. Because in the first few tests I
always saw the -shm Mapped File grow (though less than before the patch)
and I see the memory cleaned up only after the DB connection is closed,
my question is: if I have a DB connection active for 1 month and this
connection does the operations that in my situation cause the -shm Mapped
File to grow, am I sure that sqlite does not waste the application's
memory even if I never close the DB connection? It is just a question,
and anyway I have my application running to test this behavior.

Il 10/09/2010 15.20, Max Vlasov ha scritto:
> On Fri, Sep 10, 2010 at 5:07 PM, Michele Pradella<
> michele.prade...@selea.com>  wrote:
>
>> what I worry about is that the Addressed space of
>> sqlite (during an operation of DELETE or UPDATE a lot of data in WAL
>> mode use case described before) could grow up till 2GB. This causes the
>> application to crash because it can allocate no more memory.
>>
>
> Are you talking about real tests or just a possibility? From what I see the
> actual size of wal file has nothing to do with the memory available, it's
> just structurally the same database pages saved separately from the main db
> file and shm file is the way to find those pages for readers and for pager
> to commit them. The shm file could be hungry for memory, but it's not a big
> deal since it's always wants 128 times less then the size of the data
> changed.
>
> Max




Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
  Ok, I understand that it's only address space and not allocated memory:
anyway if the addressed, and so reserved, memory grows you will see the
Virtual Bytes counter in the windows performance monitor grow. This is ok
in normal operation: what I worry about is that the address space of
sqlite (during an operation that DELETEs or UPDATEs a lot of data in WAL
mode, the use case described before) could grow up to 2GB. This causes
the application to crash because it can allocate no more memory.
So is there something in the code of sqlite to control and limit the
maximum amount of memory addressed by the WAL mechanism?
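
For reference, the knob that exists today is the auto-checkpoint
threshold, which bounds how far the -wal file grows between checkpoints
(a sketch; the value is illustrative, and whether it also bounds the -shm
mapping behavior is exactly the question above):

PRAGMA wal_autocheckpoint=100;    -- checkpoint roughly every 100 WAL pages
-- or force one from C at a moment the application chooses:
-- sqlite3_wal_checkpoint(db, NULL);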


Il 10/09/2010 14.51, Max Vlasov ha scritto:
> On Fri, Sep 10, 2010 at 3:52 PM, Michele Pradella<
> michele.prade...@selea.com>  wrote:
>
>>   After some tests, with the new sqlite3.c source, seams that the
>> behavior is better than before. So I see the -shm and -wal file grow up,
>> and in VMMap I can see the Mapped File of -shm growing up, but not so
>> much as before.
>>
> Great to hear.
>
>
>> The only thing I'm thinking about is the behavior that I obtain if I
>> have 1 connection that is kept active from the beginning of the
>> application till the end. In this situation (with the DB connection
>> always active) I can see only Mapped File grow up, till 5MBis there
>> a limit that make impossible to the Mapped File to waste all the memory
>> even if I leave the connection open?
>>
>
> It seems that now you're concerned sqlite doesn't use memory enough :)
> Actually memory mapping here is not the thing that should grow significantly
> (let's not take the bug into account). Every 8 bytes in shm (and in memory
> mapping) is dedicated to a single page in the database file, in theory you
> should divide total affected (changed/updated) database bytes by 128 (=
> 1024/8 when page_size = 1024) to see how much memory mapping is going to
> occupy. I suppose since the latest fix also takes alignment into account, we
> see 64k and 32k jumping, so you can safely take shm size, multiply it by 2
> and this will be an estimate for the mapping address space sqlite will need.
>
>
> And have in mind, it's address space, not memory used, there is much
> confusion about memory in windows (and possibly in any other modern OS).
> When I investigated this thing, I wished Task manager had a value called
> "Address space" and also Windows had an error "Out of Address space".
> Currently I wrote a simple program that emulates the bug, it increases the
> file by 32k and maps the whole current file leaving the handles open.
> Imagine, this program occupies 2G and gives an error in a fraction of a
> second. So there are no actual memory allocation, just reserving pages (this
> time Intel processor architecture pages) in 386-adress space. The actual
> error appears after the MapViewOfFile call and the text is 'Not enough
> storage is available to process this command' (Code: 9)
>
> Max




Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
  After some tests with the new sqlite3.c source, it seems that the
behavior is better than before. So I see the -shm and -wal files grow,
and in VMMap I can see the Mapped File of -shm growing, but not as
much as before.
With VMMap I can see only -shm mapped files of 32K or 64K and I'm able
to reach about 5MB of mapped files with a -shm of 2MB.
When the connection with the DB is closed I can see -shm and -wal
reintegrated into the DB and all the mapped files disappear.
The only thing I'm thinking about is the behavior I get if I
have 1 connection that is kept active from the beginning of the
application till the end. In this situation (with the DB connection
always active) I can only see the Mapped File grow, up to 5MB...is there
a limit that makes it impossible for the Mapped File to waste all the
memory even if I leave the connection open?
Anyway I'll leave my application running till tomorrow, so I can tell you
more about this situation.

Il 10/09/2010 12.16, Michele Pradella ha scritto:
>ok, Dan already sent me a sqlite3.c source, and I'm doing some
> tests...I let you know the results
>
> Il 10/09/2010 12.12, Max Vlasov ha scritto:
>> On Fri, Sep 10, 2010 at 12:53 PM, Dan Kennedy<danielk1...@gmail.com>   wrote:
>>
>>> The bug is fixed by Shane's patch linked earlier in the thread. With
>>> the patch, an 11MB -shm file is mapped into memory in around 350 chunks
>>> with an average size of 48KB. Total address space used is around
>>> (350*48KB).
>>> Instead of the (350*5.5MB) that 3.7.2 was using.
>>>
>>>
>> Dan, thanks
>> I tried the patched version, It seems now the sizes changes only between 32k
>> and 64k, I hope I will make additional test with large data insert today.
>>
>> Michele, you can download this file:
>> http://www.maxerist.net/tmp/sqlite3_sqlite3_f213e133f6.zip
>> I kind of injected modified os_win.c from the full package into 3_7_2
>> amalgamation.
>>
>> Max




Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
  ok, Dan already sent me a sqlite3.c source, and I'm doing some
tests...I'll let you know the results

Il 10/09/2010 12.12, Max Vlasov ha scritto:
> On Fri, Sep 10, 2010 at 12:53 PM, Dan Kennedy<danielk1...@gmail.com>  wrote:
>
>> The bug is fixed by Shane's patch linked earlier in the thread. With
>> the patch, an 11MB -shm file is mapped into memory in around 350 chunks
>> with an average size of 48KB. Total address space used is around
>> (350*48KB).
>> Instead of the (350*5.5MB) that 3.7.2 was using.
>>
>>
> Dan, thanks
> I tried the patched version, It seems now the sizes changes only between 32k
> and 64k, I hope I will make additional test with large data insert today.
>
> Michele, you can download this file:
> http://www.maxerist.net/tmp/sqlite3_sqlite3_f213e133f6.zip
> I kind of injected modified os_win.c from the full package into 3_7_2
> amalgamation.
>
> Max




Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
  I tried yesterday to apply the patch to my sqlite3.c and I think that
the problem is not fixed: but perhaps I'm wrong, or I missed something
applying the patch.
Can you provide me a sqlite3.c source with this patch applied correctly,
so I can test whether the issue is really fixed?

Il 10/09/2010 10.53, Dan Kennedy ha scritto:
>> Michele, thanks for pointing out to vmmap, sysinternals made them so
>> fast, I
>> can not track them all. This is an excellent utility.
>> I think that the development team already knows that is going on,
>> just my
>> speculation.
>>
>> As long as I see, every next file mapping wants to see not only
>> requested
>> 32k region, but also every prior, so every CreateFileMapping/
>> MapViewOfFile
>> wants more on every next step, 32k-64k-96k (this is visible in vmmap
>> and
>> corresponds to the code in winShmMap). And as long as I see, the
>> problem is
>> that Windows allocates separated ranges of memory space for every
>> region
>> even if they're intersecting, i.e every MapViewOfFile needs to find
>> a brand
>> new address space range for every new region request. So we have
>>
>> 2,000,000k = (32k*(X + 1)/2)*X
>>
>> where X - is the number of region requests when the memory address
>> space get
>> to 2g limit.
>> And the answer for this X here is about 350 that leads us to 350*32k
>> = 11M
>> shm file. So this size that I noticed in my tests confirms this theory
> This bug analysis is correct.
>
> The bug is fixed by Shane's patch linked earlier in the thread. With
> the patch, an 11MB -shm file is mapped into memory in around 350 chunks
> with an average size of 48KB. Total address space used is around
> (350*48KB).
> Instead of the (350*5.5MB) that 3.7.2 was using.
>
>> I don't know about the internal logic that requires this, but on
>> Windows 350
>> regions is a maximum in this case. Does it mean that linux share
>> address
>> space for superset/subset regions in contrary to Windows logic?
> It is separate. This bug was in the OS specific win32 layer
>
> Dan.
>
>
>
>
>




Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
  I forgot to tell you something: in my situation the -shm file is
always small in size (about 6-7MB) even when I get 2GB of Virtual Bytes.
But it seems that it is mapped too many times.
I see something like what you would obtain if you did this:
for (int i=0;i<1000;i++)
 "Map -shm in memory"
This is the strange behavior...because it seems that the file is mapped
into memory but never unmapped: probably it's unmapped only when you
close the connection, but if during the query you waste too much memory
you get "Disk I/O error" and the application reaches a critical state.


Il 10/09/2010 8.27, Michele Pradella ha scritto:
>Even in my use case I got "Disk I/O error" after I reached 2GB of
> virtual memory.
> Max tell us the size of the memory mapped file in VMMap tool when you
> got "Disk I/O error", and check the value of Virtual Bytes in windows
> performance counter
>
> Il 10/09/2010 6.54, Max Vlasov ha scritto:
>>> But as a side effect I got an unexpected result in another area, when I tried
>>> to append 1,500,000 records to this 1.7G file having 5G of free space on the
>>> disk, I got the error a user reported recently about win7 64bit, "Disk I/O
>>> error". (http://www.mail-archive.com/sqlite-users@sqlite.org/msg54935.html,
>>> but this seems was not related to WAL)
>>>
>>>
>> Now the problem is fully reproducible. A modified version of the steps:
>>
>> Windows 7 64bit Home Premium, sqlite 3.7.2 in dll
>>
>> 1. Create db with the table
>> CREATE TABLE [TestTable] (
>> [Id] INTEGER PRIMARY KEY AUTOINCREMENT,
>> [Text] VARCHAR(200)
>> )
>>
>> 2. Open the db that should currently be in journal_mode=delete
>>
>> 3. Change journal_mode=WAL;
>>
>> 4. BEGIN TRANSACTION
>>
>> 5. Make 1,300,000 repeated queries
>> INSERT INTO TestTable (Text) VALUES ("12345678912345 (the exact length
>> of the string = 1152)
>>
>> 6. While the queries are executed, when the shm file grows to 11M
>> (0xAC), the failure occurs with Disk I/O error (both result and extended
>> are 10 (SQLITE_IOERR)).
>>
>> There's a chance that there's something wrong with my program, can someone
>> do a similar test on another Windows 64bit system?
>>
>> Thanks
>>
>> Max




Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
  Even in my use case I got "Disk I/O error" after I reached 2GB of
virtual memory.
Max, tell us the size of the memory mapped file in the VMMap tool when
you got "Disk I/O error", and check the value of Virtual Bytes in the
windows performance counters.

Il 10/09/2010 6.54, Max Vlasov ha scritto:
>> But as a side effect I got an unexpected result in another area, when I tried
>> to append 1,500,000 records to this 1.7G file having 5G of free space on the
>> disk, I got the error a user reported recently about win7 64bit, "Disk I/O
>> error". (http://www.mail-archive.com/sqlite-users@sqlite.org/msg54935.html,
>> but this seems was not related to WAL)
>>
>>
> Now the problem is fully reproducible. A modified version of the steps:
>
> Windows 7 64bit Home Premium, sqlite 3.7.2 in dll
>
> 1. Create db with the table
> CREATE TABLE [TestTable] (
> [Id] INTEGER PRIMARY KEY AUTOINCREMENT,
> [Text] VARCHAR(200)
> )
>
> 2. Open the db that should currently be in journal_mode=delete
>
> 3. Change journal_mode=WAL;
>
> 4. BEGIN TRANSACTION
>
> 5. Make 1,300,000 repeated queries
> INSERT INTO TestTable (Text) VALUES ("12345678912345 (the exact length
> of the string = 1152)
>
> 6. While the queries are executed, when the shm file grows to 11M
> (0xAC), the failure occurs with Disk I/O error (both result and extended
> are 10 (SQLITE_IOERR)).
>
> There's a chance that there's something wrong with my program, can someone
> do a similar test on another Windows 64bit system?
>
> Thanks
>
> Max




Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
  Hi Shane, have you looked at the Virtual Bytes counter in the
performance monitor? In my situation it is that counter that grows up to
2GB and then the application crashes because the virtual address space
is exhausted.
I'm using Windows XP professional SP3.
I can reproduce the problem in my DB with this SQL statement:

DELETE FROM metadata INDEXED BY metadata_datetime_sourcename WHERE 
((DateTime<=timestamp_value) AND (Source='source_name'))

Usually timestamp_value identifies 1 week of records. So I delete 1 week
of records at a time.
The DELETE statements are inside a BEGIN; COMMIT; block and I can have
a lot of deletes, like this:
BEGIN;

DELETE FROM metadata INDEXED BY metadata_datetime_sourcename WHERE 
((DateTime<=timestamp_value1) AND (Source='source_name1'))
DELETE FROM metadata INDEXED BY metadata_datetime_sourcename WHERE 
((DateTime<=timestamp_value2) AND (Source='source_name2'))
DELETE FROM metadata INDEXED BY metadata_datetime_sourcename WHERE 
((DateTime<=timestamp_value3) AND (Source='source_name3'))
DELETE FROM metadata INDEXED BY metadata_datetime_sourcename WHERE 
((DateTime<=timestamp_value4) AND (Source='source_name4'))
DELETE FROM metadata INDEXED BY metadata_datetime_sourcename WHERE 
((DateTime<=timestamp_value5) AND (Source='source_name5'))

..

COMMIT;

there could be even 50 DELETEs inside one BEGIN; COMMIT;

My table has 3 indexes and has the size I told you yesterday.
We are using the default page size, because I did not change it.

I create the table like this:
CREATE TABLE IF NOT EXISTS metadata (Id INTEGER PRIMARY KEY 
AUTOINCREMENT,DateTime INTEGER,Source TEXT,SensorName TEXT,SensorValue TEXT)


Il 09/09/2010 23.25, Shane Harrelson ha scritto:
> Michele-
>
> I've looked at trying to reproduce your issue on an 32-bit Windows XP
> system using the latest code, and could not.
>
> Even assuming the "worst case" of a 512 byte page size, starting with
> a 1.2gb DB file, deleting all the records would result in a WAL file
> of roughly the same size containing 2mil+ pages.  Each of the WAL
> pages has an 8 byte entry in the SHM file.  The SHM file uses 32k
> regions which can each hold 4k entries.   For 2mil+ entries, we would
> need around 500 regions.  The SHM file is what is memory mapped, with
> each 32k region being mapped into memory.  The Windows implementation
> uses an average 48k for each region, so at most we would use around
> 24mb of memory.
>
> I attempted to reproduce this by creating a 1.5gb DB, containing 17mil
> records, each 65 bytes long, using a page size of 512 bytes.  Starting
> with this DB in WAL mode, I deleted all records.   Maximum shared
> memory usage did not exceed 24mb.
>
> If you're sure you added the fix Dan indicated correctly into your
> build, then we're going to need more info on exactly what you're
> doing.  What Windows version are you using specifically?  What page
> size are you using?   What SQL queries are you executing?  Can you
> provide a short series of statements with the CLI to reproduce this?
>
> HTH.
> -Shane
>
>
>
>
> On Thu, Sep 9, 2010 at 11:36 AM, Michele Pradella
> <michele.prade...@selea.com>  wrote:
>>   Hi Max, I got the problem in both situations:
>>
>>1. I have a journal_mode=DELETE database and I convert it to WAL.
>>2. I create a new database with WAL mode.
>>
>> I never check the handles in the task manager, but I always see the
>> mapped files in vmmap growing up. I think it's the same.
>> Anyway I have the memory wasted especially when I have a quite big
>> DB(about 1.2GB with  about 17milions of records) and I try to Delete a
>> lot of records: in this situation I see mapped files growing up and
>> waste a lot of memory (I reached the maximum 32bit windows memory limit
>> so my application crash).
>>
>> ps.With this DB the "SELECT count(ID) FROM table_name" it's very
>> slow...it take  minutes(with the sqlite shell)!
>>
>> Il 09/09/2010 17.04, Max Vlasov ha scritto:
>>> On Thu, Sep 9, 2010 at 11:37 AM, Dan Kennedy<danielk1...@gmail.com>
>>> wrote:
>>>
>>>> On Sep 9, 2010, at 1:12 PM, Michele Pradella wrote:
>>>>
>>>>>Hi, do you have some news about the wasted memory? have you found the
>>>>> reason for the windows backend?
>>>> Fixed here:
>>>>
>>>> http://www.sqlite.org/src/ci/f213e133f6
>>>>
>>>>
>>> Dan, don't know whether it is related, but I detected memory leak in 3.7.2
>>> related to handle count increasing.
>>>
>>> Steps to reproduce
>>> (Windows XP SP3, sqlite3.7.2.dll compiled with bcc)
>>>
>>> 1. Create or use previous db with the table
>>> CREATE TABLE [TestTable] (
>>> [Id] INTEGER PRIMARY KEY AUTOINCREMENT,
>>> [Text] VARCHAR(200)
>>> )

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
  Hi Max, I'm sure that it is a leak related to sqlite WAL, because if I
disable WAL with journal_mode=DELETE I do not have the leak. Moreover,
if I use the vmmap tool I see the size of the mapped files growing
and vmmap shows me that all the memory is used by -shm files mapped many
times. I'm using Windows XP 32bit, but even with win7 64bit I got the
same result.

Il 09/09/2010 22.21, Max Vlasov ha scritto:
> On Thu, Sep 9, 2010 at 7:16 PM, Dan Kennedy<danielk1...@gmail.com>  wrote:
>
>>> Dan, don't know whether it is related, but I detected memory leak in
>>> 3.7.2
>>> related to handle count increasing.
>> I think there will be one open handle for each 32KB of
>> shared-memory space in use. Or put another way, one open
>> handle for every 4000 pages in the WAL file. They should
>> all be closed when all connections to the database within
>> the process are closed.
>>
>>
>>
> Dan,
> thank you for your explanation, it makes perfect sense if we calculate the
> sizes and everything.
>
> I tried to test till the sizes Michele mentioned, this time on Win7 64 bit,
> I had almost perfect results, non-stopped appending till 1.7G and memory
> size was never more than 50MB, so I'd suggest to Michele to check the code,
> maybe there's a leak there not related to sqlite.
>
> But as a side effect I got an unexpected result in another area, when I tried
> to append 1,500,000 records to this 1.7G file having 5G of free space on the
> disk, I got the error a user reported recently about win7 64bit, "Disk I/O
> error". (http://www.mail-archive.com/sqlite-users@sqlite.org/msg54935.html,
> but this seems was not related to WAL)
>
> While I wrote this e-mail, I tried to perform another test, 1,200,000
> appends worked perfectly, the next series (without prior commit) failed
> almost immediately with the same Disk I/O Error. Free space is still 3.7G, so
> this was not related to the absence of free space. The size of shm file is
> 0xAC, and the section starting 0xAB8000 till the end filled with zeros.
> Please let me know if anything would be helpful to know from these files, I
> keep it in the state they were after the failure, I even did not perform
> Commit and Close.
>
> Max




Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-09 Thread Michele Pradella
  Hi Max, I got the problem in both situations:

   1. I have a journal_mode=DELETE database and I convert it to WAL.
   2. I create a new database in WAL mode.

I never checked the handles in the task manager, but I always see the
mapped files in vmmap growing; I think it's the same.
Anyway I get the memory wasted especially when I have a quite big
DB (about 1.2GB with about 17 million records) and I try to delete a
lot of records: in this situation I see the mapped files growing and
wasting a lot of memory (I reached the maximum 32bit windows memory
limit so my application crashed).

ps. With this DB the "SELECT count(ID) FROM table_name" is very
slow...it takes minutes (with the sqlite shell)!

Il 09/09/2010 17.04, Max Vlasov ha scritto:
> On Thu, Sep 9, 2010 at 11:37 AM, Dan Kennedy<danielk1...@gmail.com>  wrote:
>
>> On Sep 9, 2010, at 1:12 PM, Michele Pradella wrote:
>>
>>>   Hi, do you have some news about the wasted memory? have you found the
>>> reason for the windows backend?
>> Fixed here:
>>
>>http://www.sqlite.org/src/ci/f213e133f6
>>
>>
> Dan, don't know whether it is related, but I detected memory leak in 3.7.2
> related to handle count increasing.
>
> Steps to reproduce
> (Windows XP SP3, sqlite3.7.2.dll compiled with bcc)
>
> 1. Create or use previous db with the table
> CREATE TABLE [TestTable] (
> [Id] INTEGER PRIMARY KEY AUTOINCREMENT,
> [Text] VARCHAR(200)
> )
>
> 2. Open the db that is currently in journal_mode=delete
>
> 3. Change journal_mode=WAL;
>
> 4. BEGIN TRANSACTION
>
> 5. Make 50,000 repeated queries
> INSERT INTO TestTable (Text) VALUES ("12345678912345 (in my case the
> length of this string was about 1100 bytes)
>
> 6. See, while the queries are processing, how the handles in Task manager
> increase (total about 14). The following commit does not help in
> decreasing the number to the start value.
>
> I tried to look with Process explorer, it seems there are many handles
> titled "section".
> Also I could not reproduce this when the db is already in WAL mode when
> opened. Michele, can you tell us what is the mode when you initially open
> db?
>
> Max




Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-09 Thread Michele Pradella
  I have ported the changes of the fix
http://www.sqlite.org/src/ci/f213e133f6 to my code (with some difficulty,
because I ported them directly into my sqlite3.c source).
Anyway I think that the problem still exists: after a few tests I still
see with vmmap the mapped files growing like before the fix.
I think I ported the fix in the right way, and the problem still exists.
I'll do some other tests.
After this fix, did you ever see the problem again?

Il 09/09/2010 9.46, Michele Pradella ha scritto:
>ok thank you, today I'm going to port the difference to my source code
> and I'm going to try if the memory it's ok
>
> Il 09/09/2010 9.37, Dan Kennedy ha scritto:
>> On Sep 9, 2010, at 1:12 PM, Michele Pradella wrote:
>>
>>>Hi, do you have some news about the wasted memory? have you found the
>>> reason for the windows backend?
>> Fixed here:
>>
>>  http://www.sqlite.org/src/ci/f213e133f6
>>
>> Does the problem still show up for you using fossil tip?
>>
>>
>>
>>
>>
>>
>>> do you think it could be due to the windows implementation of the
>>> mmap?
>>>
>>> Il 02/09/2010 16.46, Richard Hipp ha scritto:
>>>> On Thu, Sep 2, 2010 at 9:59 AM, Marcus Grimm<mgr...@medcom-
>>>> online.de>wrote:
>>>>
>>>>> Michele Pradella wrote:
>>>>>> ok, I'll wait for the workaround.
>>>>>> I always use a BEGIN; COMMIT; transaction but often, after a
>>>>>> COMMIT; the
>>>>>> -wal file does not change in size, it seems it's not checkpointed.
>>>>>> Anyway do you think that with WAL journal mode I should continue
>>>>>> to use
>>>>>> BEGIN; COMMIT; statement? or not?
>>>>> as Richard mentioned, the wal mode is not intended to work well
>>>>> for bulk-insert kind of actions. You may try to split your insert
>>>>> cycles into smaller pieces.
>>>>>
>>>>> However, that might not help if you do sql statements which involve
>>>>> a huge implicit transaction, for example "CREATE INDEX .." on a
>>>>> huge table.
>>>>> At least on windows it can fail with IO error on a GB sized db.
>>>>>
>>>> We are working on that problem.  In the meantime, your workaround
>>>> is to
>>>> switch to journal_mode=DELETE before creating large indices.
>>>>
>>>>
>>>>> Btw, I think the wal file doesn't shrink because sqlite doesn't
>>>>> truncate
>>>>> that file after completing the checkpoint. That's by design I guess.
>>>>>
>>>> Correct.  The -wal file is deleted when the last connection to the
>>>> database
>>>> is closed.  But prior to that, the WAL file is kept open and is not
>>>> truncated.  This is a performance optimization.  Most filesystems
>>>> are faster
>>>> at overwriting an existing file than they are at appending to the
>>>> end of a
>>>> file.  (Note the qualifier "Most" in the previous sentence.  There
>>>> are
>>>> exceptions to the rule.  We try to optimize for the common case.)
>>>>
>>>>
>>>>> Marcus
>>>>>
>>>>>
>>>>>
>>>>>> On 02/09/2010 14.43, Richard Hipp wrote:
>>>>>>> On Thu, Sep 2, 2010 at 8:34 AM, Michele Pradella<
>>>>> michele.prade...@selea.com
>>>>>>>> wrote:
>>>>>>>> Hi,
>>>>>>>> I found a strange behavior of the sqlite 3.7.2 with WAL journal
>>>>>>>> mode.
>>>>>>>> Yesterday I found my application DB with a -wal file of 1,5GB
>>>>>>>> and a
>>>>> -shm
>>>>>>>> file of few MB (about 9MB) with a DB file of 1,2GB: in this
>>>>>>>> situation I got the process memory wasted by "mapped file" of
>>>>>>>> the -shm
>>>>>>> file. It seems that the file is mapped a lot of times in memory
>>>>>>> so the
>>>>>>> process memory becomes 2GB and it can't allocate more memory. In
>>>>>>>> that
>>>>>>> situation operations made on the DB cause I/O disk errors
>>>>>>>> probably due
>>>>> to
>>>>>>>> the wasted memory.
>>>>>>>>

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-09 Thread Michele Pradella
  OK, thank you. Today I'm going to port the difference to my source code 
and check whether the memory usage is OK.

On 09/09/2010 9.37, Dan Kennedy wrote:
> On Sep 9, 2010, at 1:12 PM, Michele Pradella wrote:
>
>>   Hi, do you have some news about the wasted memory? Have you found the
>> reason in the Windows backend?
> Fixed here:
>
> http://www.sqlite.org/src/ci/f213e133f6
>
> Does the problem still show up for you using fossil tip?
>
>
>
>
>
>
>> Do you think it could be due to the Windows implementation of
>> mmap?
>>
>> On 02/09/2010 16.46, Richard Hipp wrote:
>>> On Thu, Sep 2, 2010 at 9:59 AM, Marcus Grimm<mgr...@medcom-
>>> online.de>wrote:
>>>
>>>> Michele Pradella wrote:
>>>>>OK, I'll wait for the workaround.
>>>>> I always use a BEGIN; COMMIT; transaction but often, after a
>>>>> COMMIT, the
>>>>> -wal file does not change in size; it seems it's not checkpointed.
>>>>> Anyway, do you think that with WAL journal mode I should continue
>>>>> to use
>>>>> BEGIN; COMMIT; statements, or not?
>>>> as Richard mentioned, the wal mode is not intended to work well
>>>> for bulk-insert kind of actions. You may try to split your insert
>>>> cycles into smaller pieces.
>>>>
>>>> However, that might not help if you do sql statements which involve
>>>> a huge implicit transaction, for example "CREATE INDEX .." on a
>>>> huge table.
>>>> At least on windows it can fail with IO error on a GB sized db.
>>>>
>>> We are working on that problem.  In the meantime, your workaround
>>> is to
>>> switch to journal_mode=DELETE before creating large indices.
>>>
>>>
>>>> Btw, I think the wal file doesn't shrink because sqlite doesn't
>>>> truncate
>>>> that file after completing the checkpoint. That's by design I guess.
>>>>
>>> Correct.  The -wal file is deleted when the last connection to the
>>> database
>>> is closed.  But prior to that, the WAL file is kept open and is not
>>> truncated.  This is a performance optimization.  Most filesystems
>>> are faster
>>> at overwriting an existing file than they are at appending to the
>>> end of a
>>> file.  (Note the qualifier "Most" in the previous sentence.  There
>>> are
>>> exceptions to the rule.  We try to optimize for the common case.)
>>>
>>>
>>>> Marcus
>>>>
>>>>
>>>>
>>>>> On 02/09/2010 14.43, Richard Hipp wrote:
>>>>>> On Thu, Sep 2, 2010 at 8:34 AM, Michele Pradella<
>>>> michele.prade...@selea.com
>>>>>>> wrote:
>>>>>>>Hi,
>>>>>>> I found a strange behavior of the sqlite 3.7.2 with WAL journal
>>>>>>> mode.
>>>>>>> Yesterday I found my application DB with a -wal file of 1,5GB
>>>>>>> and a
>>>> -shm
>>>>>>> file of few MB (about 9MB) with a DB file of 1,2GB: in this
>>>>>>> situation I got the process memory wasted by "mapped file" of
>>>>>>> the -shm
>>>>>>> file. It seems that the file is mapped a lot of times in memory
>>>>>>> so the
>>>>>>> process memory becomes 2GB and it can't allocate more memory. In
>>>>>>> that
>>>>>>> situation operations made on the DB cause I/O disk errors
>>>>>>> probably due
>>>> to
>>>>>>> the wasted memory.
>>>>>>>
>>>>>> By coincidence, the SQLite developers were just discussing this
>>>>>> problem
>>>>>> earlier this morning.  There are technical issues with windows
>>>>>> that make
>>>> a
>>>>>> solution difficult.  We are trying to come up with a work-
>>>>>> around.  (The
>>>>>> problem you describe is specific to the windows backend and does
>>>>>> not
>>>> come up
>>>>>> in unix.)
>>>>>>
>>>>>>
>>>>>>> I'm doing some other tests to reproduce the problem, but I think
>>>>>>> that it
>>>>>>> could be when I have a lot of operations between a BEGIN; COMMIT;
>>>>>>> So is it OK to use BEGIN; COMMIT; with the WAL journal activated?
Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-09 Thread Michele Pradella
  Hi, do you have some news about the wasted memory? Have you found the 
reason in the Windows backend?
Do you think it could be due to the Windows implementation of mmap?

On 02/09/2010 16.46, Richard Hipp wrote:
> On Thu, Sep 2, 2010 at 9:59 AM, Marcus Grimm<mgr...@medcom-online.de>wrote:
>
>> Michele Pradella wrote:
>>>OK, I'll wait for the workaround.
>>> I always use a BEGIN; COMMIT; transaction but often, after a COMMIT, the
>>> -wal file does not change in size; it seems it's not checkpointed.
>>> Anyway, do you think that with WAL journal mode I should continue to use
>>> BEGIN; COMMIT; statements, or not?
>> as Richard mentioned, the wal mode is not intended to work well
>> for bulk-insert kind of actions. You may try to split your insert
>> cycles into smaller pieces.
>>
>> However, that might not help if you do sql statements which involve
>> a huge implicit transaction, for example "CREATE INDEX .." on a huge table.
>> At least on windows it can fail with IO error on a GB sized db.
>>
> We are working on that problem.  In the meantime, your workaround is to
> switch to journal_mode=DELETE before creating large indices.
>
>
>> Btw, I think the wal file doesn't shrink because sqlite doesn't truncate
>> that file after completing the checkpoint. That's by design I guess.
>>
> Correct.  The -wal file is deleted when the last connection to the database
> is closed.  But prior to that, the WAL file is kept open and is not
> truncated.  This is a performance optimization.  Most filesystems are faster
> at overwriting an existing file than they are at appending to the end of a
> file.  (Note the qualifier "Most" in the previous sentence.  There are
> exceptions to the rule.  We try to optimize for the common case.)
>
>
>> Marcus
>>
>>
>>
>>>
>>> On 02/09/2010 14.43, Richard Hipp wrote:
>>>> On Thu, Sep 2, 2010 at 8:34 AM, Michele Pradella<
>> michele.prade...@selea.com
>>>>> wrote:
>>>>>Hi,
>>>>> I found a strange behavior of the sqlite 3.7.2 with WAL journal mode.
>>>>> Yesterday I found my application DB with a -wal file of 1,5GB and a
>> -shm
>>>>> file of few MB (about 9MB) with a DB file of 1,2GB: in this
>>>>> situation I got the process memory wasted by "mapped file" of the -shm
>>>>> file. It seems that the file is mapped a lot of times in memory so the
>>>>> process memory becomes 2GB and it can't allocate more memory. In that
>>>>> situation operations made on the DB cause I/O disk errors probably due
>> to
>>>>> the wasted memory.
>>>>>
>>>> By coincidence, the SQLite developers were just discussing this problem
>>>> earlier this morning.  There are technical issues with windows that make
>> a
>>>> solution difficult.  We are trying to come up with a work-around.  (The
>>>> problem you describe is specific to the windows backend and does not
>> come up
>>>> in unix.)
>>>>
>>>>
>>>>> I'm doing some other tests to reproduce the problem, but I think it
>>>>> could be when I have a lot of operations between a BEGIN; COMMIT;
>>>>> So is it OK to use BEGIN; COMMIT; with the WAL journal activated?
>>>>> Is there some kind of limit on the number of operations between a BEGIN;
>>>>> COMMIT; statement?
>>>>>
>>>> SQLite will not checkpoint the journal until you commit your
>> transaction.
>>>> So if you leave the transaction open too long, the WAL file and the -shm
>>>> file will grow excessively large.  WAL works best with many smaller
>>>> transactions.  If you have one or two big transactions, then using a
>>>> traditional rollback-journal mode works better.
>>>>
>>>>
>>>>
>>>>> I tried to use PRAGMA wal_checkpoint; to try to resolve this situation,
>>>>> but it seems that command was ignored by SQLite because the -wal file does
>>>>> not change in size, nor does the DB file.
>>>>> ___
>>>>> sqlite-users mailing list
>>>>> sqlite-users@sqlite.org
>>>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>>>
>>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
>
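
The CREATE INDEX workaround quoted above, as a hedged C sketch (the table
and index names are placeholders; note that switching out of WAL requires
that no other connection is using the database at that moment):

/* Temporarily fall back to a rollback journal for the big implicit
** transaction, then return to WAL afterwards. */
sqlite3_exec(db, "PRAGMA journal_mode=DELETE;", 0, 0, 0);
sqlite3_exec(db, "CREATE INDEX idx_big ON BigTable(SomeColumn);", 0, 0, 0);
sqlite3_exec(db, "PRAGMA journal_mode=WAL;", 0, 0, 0);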


-- 
Selea s.r.l.


Michele Pradella R&D


SELEA s.r.l.

Via Aldo Moro 69
Italy - 46019 Cicognara (MN)
Tel +39 0375 889091
Fax +39 0375 889080
*michele.prade...@selea.com* <mailto:michele.prade...@selea.com>
*http://www.selea.com*
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-02 Thread Michele Pradella
  OK, I'll wait for the workaround.
I always use a BEGIN; COMMIT; transaction but often, after a COMMIT, the 
-wal file does not change in size; it seems it's not checkpointed.
Anyway, do you think that with WAL journal mode I should continue to use 
BEGIN; COMMIT; statements, or not?


On 02/09/2010 14.43, Richard Hipp wrote:
> On Thu, Sep 2, 2010 at 8:34 AM, Michele Pradella<michele.prade...@selea.com
>> wrote:
>>   Hi,
>> I found a strange behavior of the sqlite 3.7.2 with WAL journal mode.
>> Yesterday I found my application DB with a -wal file of 1,5GB and a -shm
>> file of few MB (about 9MB) with a DB file of 1,2GB: in this
>> situation I got the process memory wasted by "mapped file" of the -shm
>> file. It seems that the file is mapped a lot of times in memory so the
>> process memory becomes 2GB and it can't allocate more memory. In that
>> situation operations made on the DB cause I/O disk errors probably due to
>> the wasted memory.
>>
> By coincidence, the SQLite developers were just discussing this problem
> earlier this morning.  There are technical issues with windows that make a
> solution difficult.  We are trying to come up with a work-around.  (The
> problem you describe is specific to the windows backend and does not come up
> in unix.)
>
>
>> I'm doing some other tests to reproduce the problem, but I think it
>> could be when I have a lot of operations between a BEGIN; COMMIT;
>> So is it OK to use BEGIN; COMMIT; with the WAL journal activated?
>> Is there some kind of limit on the number of operations between a BEGIN;
>> COMMIT; statement?
>>
> SQLite will not checkpoint the journal until you commit your transaction.
> So if you leave the transaction open too long, the WAL file and the -shm
> file will grow excessively large.  WAL works best with many smaller
> transactions.  If you have one or two big transactions, then using a
> traditional rollback-journal mode works better.
>
>
>
>> I tried to use PRAGMA wal_checkpoint; to try to resolve this situation,
>> but it seems that command was ignored by SQLite because the -wal file does
>> not change in size, nor does the DB file.
>> _______
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
>
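
Richard's "many smaller transactions" advice, sketched in C (the batch
size of 1000 and the two placeholder functions are assumptions about the
application, not anything from this thread):

/* Commit every BATCH_ROWS rows so the WAL never accumulates one giant
** transaction and each COMMIT gives the checkpointer a chance to run. */
#define BATCH_ROWS 1000

int n = 0;
sqlite3_exec(db, "BEGIN;", 0, 0, 0);
while( app_has_more_rows() ){         /* placeholder: the app's loop   */
  app_insert_one_row(db);             /* placeholder: the app's INSERT */
  if( (++n % BATCH_ROWS)==0 ){
    sqlite3_exec(db, "COMMIT;", 0, 0, 0);
    sqlite3_exec(db, "BEGIN;", 0, 0, 0);
  }
}
sqlite3_exec(db, "COMMIT;", 0, 0, 0);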


-- 
Selea s.r.l.


Michele Pradella R&D


SELEA s.r.l.

Via Aldo Moro 69
Italy - 46019 Cicognara (MN)
Tel +39 0375 889091
Fax +39 0375 889080
*michele.prade...@selea.com* <mailto:michele.prade...@selea.com>
*http://www.selea.com*
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] memory wasted shm mapped file (3.7.2)

2010-09-02 Thread Michele Pradella
  Hi,
I found a strange behavior of SQLite 3.7.2 with WAL journal mode.
Yesterday I found my application DB with a -wal file of 1,5GB and a -shm 
file of a few MB (about 9MB), with a DB file of 1,2GB: in this
situation the process memory was wasted by "mapped file" views of the -shm 
file. It seems that the file is mapped a lot of times in memory, so the 
process memory becomes 2GB and it can't allocate more memory. In that 
situation, operations made on the DB cause I/O disk errors, probably due to 
the wasted memory.
I'm doing some other tests to reproduce the problem, but I think it 
could be when I have a lot of operations between a BEGIN; COMMIT;
So is it OK to use BEGIN; COMMIT; with the WAL journal activated?
Is there some kind of limit on the number of operations between a BEGIN; 
COMMIT; statement?
I tried to use PRAGMA wal_checkpoint; to resolve this situation, 
but it seems that command was ignored by SQLite because the -wal file does 
not change in size, nor does the DB file.
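
One likely explanation, with a hedged sketch: PRAGMA wal_checkpoint; runs
a passive checkpoint, which only copies as much of the WAL as no reader is
still using, so with a long-lived reader it can appear to be ignored. The
C-level call below exists in the 3.7.x API; the printed messages are
assumptions:

#include <stdio.h>
#include <sqlite3.h>

/* Attempt a checkpoint on the main database and report what happened.
** Even on SQLITE_OK the checkpoint may have stopped early if another
** connection was still reading from the WAL. */
static void try_checkpoint(sqlite3 *db){
  int rc = sqlite3_wal_checkpoint(db, "main");
  if( rc==SQLITE_BUSY ){
    printf("checkpoint lock unavailable (another checkpoint running?)\n");
  }else if( rc!=SQLITE_OK ){
    printf("checkpoint failed: %s\n", sqlite3_errmsg(db));
  }
}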
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users