[sqlite] AUTOINCREMENT BIGINT

2014-11-07 Thread Michele Pradella
Hi all, I have a question about the BIGINT data type: from the docs (http://www.sqlite.org/datatype3.html) I understand that INTEGER and BIGINT result in the same affinity (INTEGER), so the data types are the same, is that correct? Unfortunately, if I create a table with a field "Id BIGINT PRIMARY KEY
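For context, AUTOINCREMENT is accepted only on a column declared exactly INTEGER PRIMARY KEY (a rowid alias); a BIGINT column gets INTEGER affinity but is not a rowid alias, so the keyword is rejected. A minimal sketch with hypothetical table names:

    -- Accepted: INTEGER PRIMARY KEY is an alias for the rowid
    CREATE TABLE t_ok (Id INTEGER PRIMARY KEY AUTOINCREMENT, Value TEXT);
    -- Rejected with an error along the lines of
    -- "AUTOINCREMENT is only allowed on an INTEGER PRIMARY KEY":
    -- CREATE TABLE t_bad (Id BIGINT PRIMARY KEY AUTOINCREMENT, Value TEXT);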

Re: [sqlite] AUTOINCREMENT BIGINT

2014-11-07 Thread Michele Pradella
Ok, understood, thanks. On 07/11/2014 14:40, Richard Hipp wrote: On Fri, Nov 7, 2014 at 8:26 AM, Michele Pradella <michele.prade...@selea.com> wrote: Is there a way to use AUTOINCREMENT with BIGINT? What's the reason for this check? No. Furthermore, AUTOINCREMENT probably does not do w

[sqlite] best way to have a constraint over 2 fields

2015-07-17 Thread Michele Pradella
INDEX uq_ColA_ColB ON table(ColA, ColB) which one do you think is better in terms of performance? Keep in mind the table has millions of records and SELECT is the most frequent operation.
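The two alternatives being compared amount to a table-level UNIQUE constraint versus a separate unique index; both are backed by the same kind of B-tree, so lookup performance should be essentially identical. A sketch with hypothetical table names:

    -- Option 1: table-level constraint (an implicit unique index is created)
    CREATE TABLE t1 (ColA INTEGER, ColB INTEGER, UNIQUE(ColA, ColB));
    -- Option 2: explicit unique index on a plain table
    CREATE TABLE t2 (ColA INTEGER, ColB INTEGER);
    CREATE UNIQUE INDEX uq_ColA_ColB ON t2(ColA, ColB);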

[sqlite] compile 3.8.4 error

2014-03-10 Thread Michele Pradella
Compiling on Windows 32-bit with VS2010, line 73595: static const int iLn = __LINE__+4; gives an error: "error C2099: initializer is not a constant"

[sqlite] In-Memory database PRAGMA read_uncommitted

2016-04-23 Thread Michele Pradella
I have an In-Memory DB that is written and read from connections of the same process. All good with shared cache, but I found that table locks occur more often on the In-Memory DB than on the disk DB, probably because in memory we can't use WAL. Anyway, I found the PRAGMA read_uncommitted that from
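For reference, read_uncommitted only has an effect between connections that share the same cache, which is the setup described here. A rough sketch of how the pieces fit together, assuming an in-memory database opened through a shared-cache URI and a hypothetical table name:

    -- Every connection opens the same shared in-memory database, e.g.
    --   file::memory:?cache=shared   (URI filenames must be enabled)
    -- On the reading connection, allow reads while another shared-cache
    -- connection holds a write lock on the table:
    PRAGMA read_uncommitted = 1;
    SELECT count(*) FROM events;   -- hypothetical table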

[sqlite] In-Memory database PRAGMA read_uncommitted

2016-04-24 Thread Michele Pradella
On 2016-04-23 11:05, R Smith wrote: > On 2016/04/23 10:20 AM, Michele Pradella wrote: > >> I have an In-Memory DB that is written and read from connections of the >> same process. All good with shared cache, but I found that table locks >> occur more often on I

[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
Hi all, is there a way to benchmark queries to check which version is faster? I'm using the sqlite shell; the question is about how to make repeatable tests under the same conditions (for example, I need to totally disable the cache to avoid different results the second time a query is
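In the sqlite3 shell two built-ins cover most of this: ".timer on" prints the elapsed time of each statement, and EXPLAIN QUERY PLAN shows whether an index is chosen. A minimal session sketch (table, column and values taken loosely from later messages in this thread); note that operating-system file caching still warms up between runs, so a second execution is usually faster regardless:

    .timer on
    EXPLAIN QUERY PLAN
      SELECT * FROM car_plates WHERE DateTime >= 14550588 AND DateTime <= 14552315;
    SELECT count(*) FROM car_plates WHERE DateTime >= 14550588 AND DateTime <= 14552315;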

[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
(*field* LIKE 'EX011%') AND (DateTime>=14550588) AND (DateTime<=14552315) ORDER BY DateTime; If I try to force the use of an index on *field*, it seems sqlite can't use it, is that right?

[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella

[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
Ok, understood... anyway, trying with the sqlite shell it seems that (field LIKE 'AA%') is slower than (field>='AAA' AND field<='AAZ'). Do you think there's a way I can check if the optimization is working?
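The manual rewrite being compared here turns the prefix LIKE into an explicit range on the same column, which a plain index can always serve; EXPLAIN QUERY PLAN on each form shows whether the index is actually used. A sketch with hypothetical names:

    CREATE TABLE t (field TEXT);
    CREATE INDEX idx_field ON t(field);
    -- Prefix LIKE: indexable only when the LIKE-optimization rules apply
    SELECT * FROM t WHERE field LIKE 'AA%';
    -- Roughly equivalent explicit range (BINARY collation): always indexable
    SELECT * FROM t WHERE field >= 'AA' AND field < 'AB';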

[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
Moreover, if I use field LIKE 'AA%' together with INDEXED BY index_on_field, the shell returns "Error: no query solution", so it seems that with the LIKE operator sqlite can't use the index on field.

[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
With EXPLAIN QUERY PLAN the index on field is not used. Do you think I'm doing something wrong?

[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
(Plate LIKE 'EX011%')) SELECT DateTime,FileName,Plate,Type,CameraName,Id,Country,Reason,CarPlateType,VehicleType,GPS FROM car_plates INDEXED BY car_plates_plate WHERE ((Plate LIKE 'EX011%')) gives me an error.

[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
with an earlier sqlite version, and I do not know if this can cause the wrong index to be used. So, speaking about performance, which is better: PRAGMA case_sensitive_like=ON; or PRAGMA case_sensitive_like=OFF;?
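The relevant rule is roughly that a prefix LIKE can be rewritten into an index range only when the comparison semantics of LIKE match the collation of the index: either case-sensitive LIKE against an ordinary (BINARY) index, or the default case-insensitive LIKE against a COLLATE NOCASE column. A sketch, reusing the table and column names from this thread and a hypothetical index name:

    -- Variant A: case-sensitive LIKE + ordinary index
    PRAGMA case_sensitive_like = ON;
    CREATE INDEX idx_plate ON car_plates(Plate);
    SELECT * FROM car_plates WHERE Plate LIKE 'EX011%';
    -- Variant B: default case-insensitive LIKE + NOCASE column
    -- CREATE TABLE car_plates2 (Plate TEXT COLLATE NOCASE);
    -- CREATE INDEX idx_plate2 ON car_plates2(Plate);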

[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
teTime>? AND DateTime<... On 12/02/2016 13:44, Michele Pradella wrote: > ok, assume case_sensitive_like=OFF (default); according to point 6 of the > LIKE optimization: > http://www.sqlite.org/optoverview.html > should be t

[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
Y PLAN SELECT * FROM car_plates INDEXED BY car_plates_plate WHERE ((Plate LIKE 'EX011%')) AND (DateTime>=14550588) AND (DateTime<=14552315) with PRAGMA case_sensitive_like=OFF; you obviously obtain an error.

[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
If I remove the second expression ((CarPlateType==-1) AND ((Plate LIKE '~A00O%'))) it works.

[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
Splitting the query into 2 SELECTs with UNION lets me use the car_plates_plate index without problems... very strange... but I found a workaround.
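The workaround splits each branch of an OR'd WHERE clause into its own SELECT, so the planner can pick a different index for each branch, and UNION merges (and de-duplicates) the results. A rough reconstruction of the shape, not the exact query from the thread, using column names that appear earlier in it:

    SELECT Id, DateTime, Plate FROM car_plates
     WHERE (Plate LIKE 'EX011%') AND DateTime BETWEEN 14550588 AND 14552315
    UNION
    SELECT Id, DateTime, Plate FROM car_plates
     WHERE CarPlateType = -1 AND (Plate LIKE '~A00O%');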

[sqlite] query Benchmark

2016-02-12 Thread Michele Pradella
Already solved with a UNION of SELECTs.

[sqlite] query Benchmark

2016-02-15 Thread Michele Pradella
for the column and the default case_sensitive_like you will always get the slowest result in LIKE queries.

[sqlite] query Benchmark

2016-02-15 Thread Michele Pradella

[sqlite] query Benchmark

2016-02-15 Thread Michele Pradella
Anyway, thank you Simon, the point of view is clear now.

[sqlite] Multiple Column index

2016-02-19 Thread Michele Pradella
Hi all, I have a question about using an index over multiple columns. Take this test case: CREATE TABLE test (DateTime BIGINT,CarPlate VARCHAR(255)); CREATE INDEX indexA ON test(DateTime); CREATE INDEX indexB ON test(CarPlate); CREATE INDEX indexAB ON test(DateTime,CarPlate); now if you do [1] ->

[sqlite] Multiple Column index

2016-02-19 Thread Michele Pradella
Ok, understood, so there's no way to use that kind of two-column index on a select like: explain query plan select * from test where (CarPlate LIKE 'AA000%') AND (DateTime>1); because at least one field has to use the = operator, correct?

[sqlite] Multiple Column index

2016-02-19 Thread Michele Pradella
because at least one field has to use the = operator, correct? No, it can be one of = or IN or IS, but not the LIKE operator.

[sqlite] Multiple Column index

2016-02-19 Thread Michele Pradella
> (please don't top-post) > > Michele Pradella wrote: >> so there's no way to use that kind of double column index on a select like >> explain query plan select * from test where (CarPlate LIKE 'AA000%') AND >> (DateTime>1); >> because at least one fiel

[sqlite] Multiple Column index

2016-02-22 Thread Michele Pradella
; that query has to do if we use indexAB instead of that index?

[sqlite] Multiple Column index

2016-02-22 Thread Michele Pradella
>>> Your indexes are badly designed. >>> >>> You require the following two indexes: >>> CREATE INDEX indexAB ON test(DateTime,CarPlate); >>> CREATE INDEX indexBA ON test(CarPlate,DateTime); >>> >>> The indexes: CREATE INDEX indexA ON test(DateTime); CREATE INDEX indexB ON test(CarPlate);
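For a composite index, the leftmost column must be constrained by =, IN, or IS before later columns can be used, and a range constraint can only be applied to the last index column the planner uses; that is why the pair indexAB/indexBA covers both query shapes. A sketch with hypothetical queries against the test table from this thread:

    -- Served by indexAB (DateTime, CarPlate): equality on DateTime first
    SELECT * FROM test WHERE DateTime = 1455058800 AND CarPlate = 'AA000BB';
    -- Served by indexBA (CarPlate, DateTime): equality on CarPlate, range on DateTime
    SELECT * FROM test WHERE CarPlate = 'AA000BB' AND DateTime > 1455058800;
    -- A prefix LIKE on CarPlate cannot anchor either composite index unless the
    -- LIKE optimization applies (case_sensitive_like / NOCASE rules).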

[sqlite] Multiple Column index

2016-02-22 Thread Michele Pradella
> On 22 Feb 2016, at 9:07am, Michele Pradella > wrote: > >> Already done this check. My last question was about reducing the number of >> indexes on the table, avoiding killing a "quite unnecessary" index that, if used, does a >> better job than the other. >>

[sqlite] In-Memory DB cache_size

2016-03-17 Thread Michele Pradella
I checked the default cache_size of an In-Memory DB and it's 2000. Do you think for that kind of DB I can set cache_size to 0, like the default for TEMP DBs, or do you think it's better to leave it at 2000? Just wondering if it's correct to have a RAM cache for an In-Memory DB
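For reference, cache_size is expressed in pages by default; a sketch of how to inspect and change it, including the negative (KiB) form available in recent versions:

    PRAGMA page_size;            -- bytes per page
    PRAGMA cache_size;           -- current limit (in pages when positive)
    PRAGMA cache_size = 0;       -- the setting asked about here
    PRAGMA cache_size = -2000;   -- negative form: limit expressed in KiB (about 2 MB)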

[sqlite] In-Memory estimate size

2016-03-18 Thread Michele Pradella
Which is the best way to estimate the size of an In-Memory DB? Now I'm using (PRAGMA page_size)*(PRAGMA page_count) to get an idea of the size in RAM of the db. It's quite ok, but obviously it gives a value less than the real one (probably because indexes and other stuff are missing from the count). For
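A sketch of the estimate as it would be typed in the shell; page_count covers the B-tree pages of the database (tables and indexes), while other allocations of the connection, such as the schema and prepared statements, are not included, which is one reason the estimate comes out low:

    PRAGMA page_size;    -- bytes per page
    PRAGMA page_count;   -- pages in the (in-memory) database
    -- approximate bytes = page_size * page_count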

[sqlite] In-Memory estimate size

2016-03-20 Thread Michele Pradella
On 2016-03-18 17:18, Richard Hipp wrote: > On 3/18/16, Michele Pradella wrote: > >> Which is the best way to estimate the size of an In-Memory DB? >> Now I'm using (PRAGMA page_size)*(PRAGMA page_count) to have an idea of >> the size in RAM of the db. It's quite o

[sqlite] Good way for CEIL, or is there a better way

2016-05-09 Thread Michele Pradella
> I need to have a CEIL function in SQLite. This is the way I implemented it: > WITH percentage AS ( > SELECT date > , 100.0 * rank / outOf AS percentage > , CAST(100.0 * rank / outOf AS int) AS castedPercentage > FROM ranking > ) > SELECT date > ,
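A common pure-SQL ceiling idiom in SQLite, assuming no extension function is available: truncate with CAST and add 1 whenever a fractional part remains. The comparison works for negative values too, because CAST truncates toward zero. A sketch with a hypothetical value x:

    SELECT x,
           CAST(x AS INTEGER) + (x > CAST(x AS INTEGER)) AS ceil_x
      FROM (SELECT 100.0 * 7 / 9 AS x);   -- 77.77... -> 78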

[sqlite] Good way for CEIL, or is there a better way

2016-05-09 Thread Michele Pradella
> 2016-05-09 13:40 GMT+02:00 Michele Pradella : > >> I need to have a CEIL function in SQLite. This is the way I implemented it: >>> WITH percentage AS ( >>> SELECT date >>> , 100.0 * rank / outOf AS percentage >>&g

[sqlite] memory wasted shm mapped file (3.7.2)

2010-09-02 Thread Michele Pradella
Hi, I found a strange behavior of sqlite 3.7.2 with the WAL journal mode. Yesterday I found my application DB with a -wal file of 1.5GB and a -shm file of a few MB (about 9MB), with a DB file of 1.2GB: in this situation the process memory is wasted by the "mapped file" of the -shm file. It seems
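For context: in WAL mode the -wal file grows until a checkpoint copies its content back into the main database, and the -shm file is a memory-mapped index over the -wal file, so an unbounded -wal also means a growing mapping. The pragmas involved are sketched below; the actual problem discussed later in this thread turned out to be a bug in the Windows OS layer, not something these settings fix:

    PRAGMA journal_mode = WAL;         -- enable write-ahead logging
    PRAGMA wal_autocheckpoint = 1000;  -- default: try to checkpoint roughly every 1000 pages
    PRAGMA wal_checkpoint;             -- request a checkpoint explicitly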

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-02 Thread Michele Pradella
/2010 14.43, Richard Hipp wrote: > On Thu, Sep 2, 2010 at 8:34 AM, Michele Pradella<michele.prade...@selea.com >> wrote: >> Hi, >> I found a strange behavior of the sqlite 3.7.2 with WAL journal mode. >> Yesterday I found my application DB with a -wal file of 1,5G

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-09 Thread Michele Pradella
ine.de>wrote: > >> Michele Pradella wrote: >>> ok, I'll wait for the workaround. >>> I always use a BEGIN; COMMIT; transaction but often, after a COMMIT, the >>> -wal file does not change in size; it seems it's not checkpointed. >>> Anyway do you thi

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-09 Thread Michele Pradella
ok, thank you; today I'm going to port the difference to my source code and I'm going to check whether the memory is ok. On 09/09/2010 9:37, Dan Kennedy wrote: > On Sep 9, 2010, at 1:12 PM, Michele Pradella wrote: > >> Hi, do you have some news about the wasted memory? ha

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-09 Thread Michele Pradella
think I ported the fix the right way, and the problem still exists. I'll do some other tests. After this fix, did you ever see the problem again? On 09/09/2010 9:46, Michele Pradella wrote: > ok thank you, today I'm going to port the difference to my source code > and I'm going

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-09 Thread Michele Pradella
). P.S. With this DB the "SELECT count(ID) FROM table_name" is very slow... it takes minutes (with the sqlite shell)! On 09/09/2010 17:04, Max Vlasov wrote: > On Thu, Sep 9, 2010 at 11:37 AM, Dan Kennedy<danielk1...@gmail.com> wrote: > >> On Sep 9, 2010, at 1:1

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
g? Can you > provide a short series of statements with the CLI to reproduce this? > > HTH. > -Shane > > > > > On Thu, Sep 9, 2010 at 11:36 AM, Michele Pradella > <michele.prade...@selea.com> wrote: >> Hi Max, I got the problem in both situations:

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
)). > > There's a chance that there's something wrong with my program; can someone > do a similar test on another Windows 64-bit system? > > Thanks > > Max

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
e. On 10/09/2010 8:27, Michele Pradella wrote: > Even in my use case I got "Disk I/O error" after I reached 2GB of > virtual memory. > Max, tell us the size of the memory-mapped file in the VMMap tool when you > got "Disk I/O error", and check the value of Virtual B

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
; address >> space for superset/subset regions, contrary to Windows logic? > It is separate. This bug was in the OS-specific win32 layer > > Dan.

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
modified os_win.c from the full package into the 3_7_2 > amalgamation. > > Max

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
till tomorrow, so I can tell you more about this situation. On 10/09/2010 12:16, Michele Pradella wrote: > ok, Dan already sent me a sqlite3.c source, and I'm doing some > tests... I'll let you know the results > > On 10/09/2010 12:12, Max Vlasov wrote: >> On Fri, Sep 10

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
by the WAL mechanism? On 10/09/2010 14:51, Max Vlasov wrote: > On Fri, Sep 10, 2010 at 3:52 PM, Michele Pradella< > michele.prade...@selea.com> wrote: > >> After some tests with the new sqlite3.c source, it seems that the >> behavior is better than before. So I see the

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-10 Thread Michele Pradella
the DB connection? It's just a question, and anyway I have my application running to test this behavior. On 10/09/2010 15:20, Max Vlasov wrote: > On Fri, Sep 10, 2010 at 5:07 PM, Michele Pradella< > michele.prade...@selea.com> wrote: > >> what I worry about is tha

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-13 Thread Michele Pradella
ith the patched library. 2,000,000 appends were made without any > problem. > > > On Fri, Sep 10, 2010 at 5:37 PM, Michele Pradella< > michele.prade...@selea.com> wrote: > >> ...connection does the operation that in my situation causes the -shm mapped >> file to grow,

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-16 Thread Michele Pradella
will be released? On 13/09/2010 8:18, Michele Pradella wrote: > Ok, I think the latest patch I used to try the WAL is the patch > that fixes the problem. I no longer get memory wasted by the mapped file ;) good > job. Anyway, I'll leave the application running today. If I have news > I'll tell

Re: [sqlite] memory wasted shm mapped file (3.7.2)

2010-09-16 Thread Michele Pradella
ok, I think I'll use the snapshot in the meantime. Thank you. On 16/09/2010 15:43, Richard Hipp wrote: > On Thu, Sep 16, 2010 at 9:02 AM, Michele Pradella< > michele.prade...@selea.com> wrote: > >> After some days of testing the application works fine and with VMMap

Re: [sqlite] Performance problems and large memory size

2010-09-22 Thread Michele Pradella

Re: [sqlite] Performance problems and large memory size

2010-09-22 Thread Michele Pradella
ok, thank you; usually how big is the default page_size? On 22/09/2010 16:17, Jay A. Kreibich wrote: > On Wed, Sep 22, 2010 at 12:02:33PM +0200, Michele Pradella scratched on the > wall: >> I have a question about "PRAGMA cache_size" >> if I use the d

Re: [sqlite] Performance problems and large memory size

2010-09-22 Thread Michele Pradella
ok, I think the default is 1024. So for a cache_size of 2000 pages: (100+1024)*2000 = about 2.2MB. On 22/09/2010 16:30, Michele Pradella wrote: > ok thank you, usually how big is the default page_size? > > On 22/09/2010 16:17, Jay A. Kreibich wrote: >> On Wed, Sep 22, 2010 at 1

Re: [sqlite] Performance problems and large memory size

2010-09-24 Thread Michele Pradella

[sqlite] COUNT very slow

2010-09-24 Thread Michele Pradella
I have an SQLite DB of about 9GB with about 2,500,000 records. I can't understand why the "select COUNT(*) from logs" statement is extremely slow; it takes me about 9-10 minutes! I tried with: select COUNT(1) from logs and select COUNT(DateTime) from logs, same result. Have you any idea of why it's so

Re: [sqlite] COUNT very slow

2010-09-24 Thread Michele Pradella
g10279.html > > If you need the result quickly, you have to maintain the number of > records yourself in a different table, perhaps using triggers. > > Martin > > > On 24.09.2010 10:13, Michele Pradella wrote: >> I have an SQLite DB of about 9GB with about 2,500
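The trigger-maintained counter suggested here can look roughly like the sketch below (names are hypothetical, reusing the logs table from this thread): a one-row side table kept up to date on every insert and delete, so the count becomes an O(1) lookup instead of a full table scan:

    CREATE TABLE logs_count (n INTEGER NOT NULL);
    INSERT INTO logs_count SELECT count(*) FROM logs;   -- seed once
    CREATE TRIGGER logs_count_ins AFTER INSERT ON logs
      BEGIN UPDATE logs_count SET n = n + 1; END;
    CREATE TRIGGER logs_count_del AFTER DELETE ON logs
      BEGIN UPDATE logs_count SET n = n - 1; END;
    SELECT n FROM logs_count;   -- instead of SELECT count(*) FROM logs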

Re: [sqlite] COUNT very slow

2010-09-24 Thread Michele Pradella
I can get a big speed-up of COUNT if I first create a VIEW of what I have to count and then SELECT COUNT on the view. Without the VIEW: 9 minutes. With the VIEW: 8 seconds! On 24/09/2010 10:58, Martin Engelschalk wrote: > > On 24.09.2010 10:38, Michele Pradella wrote: >> o

Re: [sqlite] When do I need SQLITE_THREADSAFE?

2010-10-06 Thread Michele Pradella
That's why the =1 mode is called "serial"... >it automatically serializes the database statements. > >At least, I'm pretty sure that's how it works. I generally avoid > threaded code, and when I do use it, I tend to use thread-specific >resources that are carefully

Re: [sqlite] When do I need SQLITE_THREADSAFE?

2010-10-06 Thread Michele Pradella
I checked the sqlite3.c code for SQLITE_THREADSAFE. I can't find any difference between SQLITE_THREADSAFE=1 and SQLITE_THREADSAFE=2; I only found differences between SQLITE_THREADSAFE=0 and SQLITE_THREADSAFE>0. Have I missed something? On 06/10/2010 8:07, Michele Pradella wrote: >

[sqlite] Speed up DELETE of a lot of records

2010-10-07 Thread Michele Pradella
Hi all, I have a question about how to speed up a DELETE statement. I have a DB of about 3GB: the DB has about 23 million records. The DB is indexed by a DateTime column (a 64-bit integer), and suppose you want to delete all records before a date. Now I'm using a syntax like this (I try
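One approach discussed later in the thread is to delete in small batches instead of one huge statement, so each transaction stays short and the WAL and cache stay bounded. A sketch that does not need the optional DELETE ... LIMIT compile option, assuming a hypothetical logs table indexed on DateTime; the cutoff value is a placeholder:

    -- Repeat from the application until no more rows are affected:
    DELETE FROM logs
     WHERE rowid IN (SELECT rowid FROM logs
                      WHERE DateTime < 1286400000
                      LIMIT 10000);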

Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
t 7, 2010 at 11:05 AM, Michele Pradella > <michele.prade...@selea.com> wrote: >> Hi all, I have a question about how to speed up a DELETE statement. >> I have a DB of about 3GB: the DB has about 23 millions of records. >> The DB is indexed by a DateTime column (is a 64 bi

Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
ame, after closing the shell the -wal file remains. Moreover, the "create new" operation above terminates with an "Error: disk I/O error". The hard disk I use has a lot of free space and it's a SATA2 hard disk, so it is internal. On 07/10/2010 20:38, Petite Abeille wrote: > On Oct 7, 2

Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
I was thinking this too, but I'm keeping it as a last resort: my hope is that I can delete 5 million records in a few seconds. Science fiction? :) On 08/10/2010 9:00, Aldes Rossi wrote: > On 10/08/2010 08:30 AM, Michele Pradella wrote: >> I don't know if it could be faster to do mo

Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
I'll try to increase the cache size, and I'll try the operation on my DB with 3.7.3; anyway, I already ported the fix for the WAL issue from a recent snapshot. I'll try and let you know. On 08/10/2010 9:52, Marcus Grimm wrote: > Michele Pradella wrote: >> As I explained in a previous email

Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
with .quit the -wal file is reintegrated. I thought that Ctrl+C was like a ".quit", but it's not. Anyway, if I close the DB connection with Ctrl+C and then reopen the connection and close it with .quit, the -wal file is not reintegrated. On 08/10/2010 9:56, Michele Pradella wrote: >

Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
A DELETE on the primary key instead of the DateTime index takes the same time. On 08/10/2010 10:30, Michele Pradella wrote: > ok I'll try with 3.7.3 > DELETE is a little bit faster, and the -wal is reintegrated when I close > the connection. > Changing cache_size (I tried 1) DELETE tak

Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
> Anyway if I close the DB connection with Ctrl+C and then reopen >> the connection and close it with .quit the -wal file is not reintegrated. >> >> On 08/10/2010 9:56, Michele Pradella wrote: >>> I'll try to increase cache size, and I'll try operation on my Db with >>

Re: [sqlite] Speed up DELETE of a lot of records

2010-10-08 Thread Michele Pradella
the time used. It's the only way I can figure out at the moment. On 08/10/2010 15:55, Jay A. Kreibich wrote: > On Fri, Oct 08, 2010 at 09:09:09AM +0200, Michele Pradella scratched on the > wall: >> I was thinking this too, but I take this for last chance: my hope is I &g

Re: [sqlite] Speed up DELETE of a lot of records

2010-10-11 Thread Michele Pradella
frequently, fewer records. If someone has a smart idea, I'm listening ;) On 08/10/2010 22:54, Nicolas Williams wrote: > On Fri, Oct 08, 2010 at 05:49:18PM +0100, Simon Slavin wrote: >> On 8 Oct 2010, at 5:48pm, Stephan Wehner wrote: >>> On Fri, Oct 8, 2010 at 7:14 AM, Michele

Re: [sqlite] Speed up DELETE of a lot of records

2010-10-11 Thread Michele Pradella
down all the other DB operations. On 11/10/2010 11:46, Simon Slavin wrote: > On 11 Oct 2010, at 10:26am, Michele Pradella wrote: > >> Soft delete could reduce the SELECT speed because you always have to check >> the "deleted" column. >> Moreover

Re: [sqlite] Speed up DELETE of a lot of records

2010-10-11 Thread Michele Pradella
That's what I'm doing: every 15 minutes I'm deleting a small number of records at a time. On 11/10/2010 12:40, Simon Slavin wrote: > On 11 Oct 2010, at 10:56am, Michele Pradella wrote: > >> I know that in this use case UPDATE is lighter than DELETE, but if I >> make

Re: [sqlite] Speed up DELETE of a lot of records

2010-10-11 Thread Michele Pradella
> So each delete would be approx 100 times faster and would allow interruption > in between deletes. > > Michael D. Black

Re: [sqlite] Speed up DELETE of a lot of records

2010-10-12 Thread Michele Pradella
reibich<j...@kreibi.ch> wrote: >> On Mon, Oct 11, 2010 at 02:08:54PM +0200, Michele Pradella scratched on the >> wall: >>> Ok, so the main idea is always the same: split the DELETE to operate >>> on fewer records, but do it more often. >> Ano

[sqlite] INSERT OR UPDATE

2010-11-10 Thread Michele Pradella
Hi all, I have to INSERT a row into a DB, but I first have to check if the key I'm inserting already exists. Now I'm doing a "SELECT count..." first to check if the key exists and then INSERT or UPDATE the record. Do you know if there's a better or faster way to do that? Perhaps with an ON CONFLICT
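With a UNIQUE or PRIMARY KEY constraint on the key column, the SELECT-then-INSERT/UPDATE round trip can usually be collapsed into a single conflict-handling statement. INSERT OR REPLACE (available at the time of this thread) deletes the old row and inserts the new one, which also resets any columns not supplied; the ON CONFLICT ... DO UPDATE upsert added in SQLite 3.24.0 updates the existing row in place. A sketch with hypothetical names:

    CREATE TABLE kv (Key TEXT PRIMARY KEY, Value TEXT, Hits INTEGER DEFAULT 0);
    -- Replaces the whole row when the key already exists:
    INSERT OR REPLACE INTO kv (Key, Value) VALUES ('k1', 'v1');
    -- Upsert (3.24.0 and later): keeps the row, updates selected columns
    INSERT INTO kv (Key, Value) VALUES ('k1', 'v1')
      ON CONFLICT(Key) DO UPDATE SET Value = excluded.Value, Hits = Hits + 1;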

Re: [sqlite] FTS Question

2010-11-23 Thread Michele Pradella
say m...@t-online.de ? > > the - is an exclusion > > Thanks in advance! > > Ady

Re: [sqlite] Using stored Regular Expressions to match given String

2011-01-12 Thread Michele Pradella
example case, the second, > which has the text "main". > > Is that possible? Maybe adding a UDF? >

[sqlite] memory used by sqlite library

2011-02-25 Thread Michele Pradella
possible to set up a maximum amount of memory that the library can use?

Re: [sqlite] memory used by sqlite library

2011-02-25 Thread Michele Pradella
s.html > and > http://www.sqlite.org/c3ref/c_status_malloc_count.html > > -- > Marco Bambini > http://www.sqlabs.com > > On Feb 25, 2011, at 2:17 PM, Michele Pradella wrote: >> Do you know if there is a way to ask the sqlite library the am

[sqlite] Network file system

2016-08-05 Thread Michele Pradella
Hi all, I read the documentation about using sqlite on a network file system, so I know it is not a good environment because of file locking problems. Anyway, do you think I can have the same problem if I'm sure that only my process tries to write or read the database? So I have just one process using the network DB (for

Re: [sqlite] Network file system

2016-08-05 Thread Michele Pradella
On 05/08/2016 09:47, Simon Slavin wrote: On 5 Aug 2016, at 7:30am, Michele Pradella <michele.prade...@selea.com> wrote: Hi all, I read the documentation about using sqlite on a network file system, so I know it is not a good environment because of file locking problems. Anyway do you think I ca

Re: [sqlite] Network file system

2016-08-06 Thread Michele Pradella

[sqlite] Transactions

2017-02-05 Thread Michele Pradella
Hi all, I have a question about transactions and SQLite: do you think transactions are useful only when you have to run a sequence of statements that depend on each other and you need a way to roll back all of them if something goes wrong? Or can you use transactions even with not
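Beyond atomicity, wrapping many independent small statements in one explicit transaction is also a throughput tool: the journal is synced once per COMMIT instead of once per statement. A sketch with a hypothetical table:

    BEGIN;
      INSERT INTO events (ts, msg) VALUES (1, 'a');
      INSERT INTO events (ts, msg) VALUES (2, 'b');
      -- ... many more independent inserts ...
    COMMIT;   -- one sync for the whole batch instead of one per INSERT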

Re: [sqlite] Network file system

2016-08-21 Thread Michele Pradella

Re: [sqlite] unique values from a subset of data based on two fields

2018-06-29 Thread Michele Pradella
Select DISTINCT name,id,status from names where status = 1

[sqlite] sqlite3_interrupt

2019-01-18 Thread Michele Pradella
Hi all, I was looking at sqlite3_interrupt() to make my application close faster, without waiting for long-running DB operations. I read in the documentation that it should not be a problem to call it during an insert, update or delete: if a transaction is running, it is automatically rolled back. Do you