[sqlite] Segmentation fault when using window function with (nested?) subquery
This query causes a segmentation fault in SQLite 3.26.0 (on macOS 10.14.1):

SELECT * FROM (
  SELECT * FROM (SELECT 1 AS c)
  WHERE c IN ( SELECT (row_number() OVER()) FROM (VALUES (0)) )
)

However, no error occurs with the following queries. The only difference is whether the inner query is wrapped with "SELECT * FROM":

SELECT * FROM (
  SELECT 1 AS c
  WHERE c IN ( SELECT (row_number() OVER()) FROM (VALUES (0)) )
)

SELECT * FROM (SELECT 1 AS c)
WHERE c IN ( SELECT (row_number() OVER()) FROM (VALUES (0)) )

I found a similar bug report about a window function, but I think it is unrelated because it was fixed before 3.26.0:
http://sqlite.1065341.n5.nabble.com/Segmentation-Fault-When-Using-Window-Function-td104584.html

--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
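For anyone trying to reproduce this, here is a minimal sketch using Python's bundled sqlite3 module. On builds that include the eventual fix, the query simply returns its single row instead of crashing; on an affected 3.26.0 build, executing it may take the whole process down.

```python
import sqlite3

# The crashing statement from the report. On affected builds (3.26.0)
# this may crash the process; on fixed builds it returns the row (1,).
QUERY = """
SELECT * FROM (
  SELECT * FROM (SELECT 1 AS c)
  WHERE c IN (SELECT (row_number() OVER ()) FROM (VALUES (0)))
)
"""

conn = sqlite3.connect(":memory:")
print("SQLite version:", sqlite3.sqlite_version)  # window functions need >= 3.25
rows = conn.execute(QUERY).fetchall()
print(rows)  # [(1,)] on builds with the fix
conn.close()
```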
[sqlite] FW: Bug in division?
Any reason I haven't heard back about this bug? Thanks

From: Gene Connor [mailto:neothreeei...@hotmail.com]
Sent: Wednesday, April 30, 2014 12:21 AM
To: sqlite-users@sqlite.org
Subject: Bug in division?

SELECT DISTINCT 2/4 AS RESULT FROM TABLE;      returns 0
SELECT DISTINCT 2/4.0 AS RESULT FROM TABLE;    returns 0.5 (correct)
SELECT DISTINCT 2.0/4 AS RESULT FROM TABLE;    returns 0.5 (correct)
SELECT DISTINCT 2.0/4.0 AS RESULT FROM TABLE;  returns 0.5 (correct)
SELECT DISTINCT 2.0/4.01 AS RESULT FROM TABLE; returns 0.49875 (correct)

As long as one or both numbers has at least one decimal place, it calculates correctly. Verified that it also happens in queries using tables and real data.

Windows SQLite version 3.8.4.3 2014-04-03 16:53:12

Thanks
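This is documented behavior rather than a bug: like C, SQLite performs integer division when both operands are integers. A quick sketch of the behavior and the usual CAST workaround, assuming Python's sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Integer / integer is integer division, as in C -- documented behavior,
# not a bug.
assert conn.execute("SELECT 2/4").fetchone()[0] == 0

# As soon as either operand is REAL, the division is floating point.
assert conn.execute("SELECT 2/4.0").fetchone()[0] == 0.5
assert conn.execute("SELECT 2.0/4").fetchone()[0] == 0.5

# With integer columns, CAST one operand to REAL to force real division.
assert conn.execute("SELECT CAST(2 AS REAL)/4").fetchone()[0] == 0.5

conn.close()
print("all division checks passed")
```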
[sqlite] Forensics data control
Hello all,

I would like to ask for some guidance. We sell a file auditing product and, as you would expect, the database can get large. SQLite has been a perfect engine for us since we only do inserts and searches. Now we want to add some sophisticated forensics features. Currently we have fancy filtering but no real OLAP sort of capability. Has anyone seen anything like this that we might be able to integrate into our product? We're using C# on Windows. Has anyone used DevExpress's OLAP control? It looks pretty cool (if I could figure out what I'm doing), but I'm not sure it works with a raw DataSet or DataTable.

Thanks,
Gene
Re: [sqlite] except and select
The customer's requirements have gotten worse and these queries are getting very ugly, so I think I should either:

1) do a select into a temp table and then run a bunch of queries to remove records based on another query (this might require substantial code changes), or
2) use the EXCEPT clause in the query.

My questions are:

1) With option one, would the statement be "DELETE FROM table_name WHERE orderno IN (SELECT orderno FROM table_name WHERE name = 'something')"?

2) With option two, is there a way to EXCEPT just certain records by a key, or does the entire record have to match? Like, can I say:

   SELECT * FROM table_name
   EXCEPT SELECT orderno FROM table_name WHERE name = 'something'
   EXCEPT SELECT orderno FROM table_name WHERE name = 'something else'

2.a) Does the second EXCEPT apply to the original query?

Thank you for your guidance,
Gene

-----Original Message-----
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Pavel Ivanov
Sent: Wednesday, July 29, 2009 9:33 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] except and select

Maybe this way:

select t.* from table_name t
left outer join table_name t_del
  on t_del.operation = 'Deleted' and t_del.Filename = t.RenameTo
where t.operation = 'Renamed' and t_del.ID is null
union all
select t.* from table_name t
left outer join table_name t_ren
  on t_ren.operation = 'Renamed' and t_ren.RenameTo = t.Filename
where t.operation = 'Deleted' and t_ren.ID is null

Pavel

On Wed, Jul 29, 2009 at 10:17 AM, Gene Allen wrote:
> I need to come up with a query where I can remove records that match a
> certain condition.
>
> I have file auditing data, and I need the query to show all the files that
> were deleted and the files that were renamed, but not the files that were
> deleted AND renamed to. The problem is that the Delete is one record and
> the Rename is another one.
>
> For example, in the sample table below, I want records 3 and 4, since A was
> deleted (record 1) AND renamed to (record 2):
>
>   ID  Filename  operation  RenamedTo
>   1   A         Deleted
>   2   B         Renamed    A
>   3   C         Renamed    D
>   4   E         Deleted
>
> I tried to use an EXCEPT, but that won't work since the data won't match up
> between the records. Records 1 and 2 don't match.
>
> Due to a limitation in my program, I have to do this in a single select
> statement.
>
> Any advice would be wonderful!
>
> Thanks,
>
> Gene
[sqlite] except and select
I need to come up with a query where I can remove records that match a certain condition.

I have file auditing data, and I need the query to show all the files that were deleted and the files that were renamed, but not the files that were deleted AND renamed to. The problem is that the Delete is one record and the Rename is another one.

For example, in the sample table below, I want records 3 and 4, since A was deleted (record 1) AND renamed to (record 2):

  ID  Filename  operation  RenamedTo
  1   A         Deleted
  2   B         Renamed    A
  3   C         Renamed    D
  4   E         Deleted

I tried to use an EXCEPT, but that won't work since the data won't match up between the records. Records 1 and 2 don't match.

Due to a limitation in my program, I have to do this in a single select statement.

Any advice would be wonderful!

Thanks,

Gene
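The anti-join approach suggested elsewhere in the thread can be checked against this sample data. The sketch below assumes a hypothetical table named audit and uses the column name RenamedTo throughout (the thread spells it both RenameTo and RenamedTo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE audit (ID INTEGER, Filename TEXT, operation TEXT, RenamedTo TEXT);
INSERT INTO audit VALUES
  (1, 'A', 'Deleted', NULL),
  (2, 'B', 'Renamed', 'A'),
  (3, 'C', 'Renamed', 'D'),
  (4, 'E', 'Deleted', NULL);
""")

# Anti-join in both directions: keep renames whose target was never
# deleted, and deletes whose filename was never a rename target.
rows = conn.execute("""
SELECT t.* FROM audit t
LEFT OUTER JOIN audit t_del
  ON t_del.operation = 'Deleted' AND t_del.Filename = t.RenamedTo
WHERE t.operation = 'Renamed' AND t_del.ID IS NULL
UNION ALL
SELECT t.* FROM audit t
LEFT OUTER JOIN audit t_ren
  ON t_ren.operation = 'Renamed' AND t_ren.RenamedTo = t.Filename
WHERE t.operation = 'Deleted' AND t_ren.ID IS NULL
""").fetchall()

print(sorted(r[0] for r in rows))  # records 3 and 4
conn.close()
```

The single-statement constraint from the question is satisfied: the whole thing is one SELECT with a UNION ALL.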
Re: [sqlite] corrupt database recovery
We weren't logging the encryption failure.

>> The feof() does return true until you attempt to read PAST the end of a
>> file.

The "does" in the above line should be "doesn't".

>> while code

Yeah, this code was written as a 'do...while' instead of a 'while...do'.

-----Original Message-----
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Machin
Sent: Thursday, May 28, 2009 7:19 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] corrupt database recovery

On 29/05/2009 9:34 AM, Gene Allen wrote:
> Yeah.
>
> Since my code works in blocks (read/compress/encrypt/write, loop), almost
> all the real data was being written to the compressed file; however, any
> finalization and flushing of the stream wasn't occurring (since the encrypt
> was failing)

and the encrypt failure wasn't logged?

> so the last bit of any SQLite database wouldn't be written.

If so, pragma integrity_check should report that some of the pages actually written contain pointers to pages that are past the end of the file, shouldn't it?

[snip]

>> Well...a more structured test exposed the problem and it was this:
>>
>> The feof() does return true until you attempt to read PAST the end of a
>> file.

If feof doesn't continue to return true, it is broken.

>> So the code worked great until the file's length was a multiple of the
>> buffer size (in my case 262,144 bytes). As you can imagine that doesn't
>> happen too often in the real world.
>>
>> Since I assumed that feof would return true when there wasn't any more
>> data in the file, I would start another pass at reading a chunk of data
>> (which wouldn't find anything) and run thru the compression/encryption
>> code. The compression code handled it correctly, but the encryption
>> required a DWORD boundary (blowfish), and since 0 is on such a boundary
>> but at the wrong end...it would fail.

Silently? Unlogged?

In any case, I would have thought using feof() was not needed ...
long time since I've written C in earnest, but isn't something like this the standard idiom:

#define BUFSIZE 262144

char buff[BUFSIZE];
size_t nbytes;
FILE *f;

f = fopen("filename", "rb");
while ((nbytes = fread(buff, 1, BUFSIZE, f)) > 0) {
    do_something(buff, nbytes);
}

??

HTH,
John
Re: [sqlite] corrupt database recovery
Yeah.

Since my code works in blocks (read/compress/encrypt/write, loop), almost all the real data was being written to the compressed file; however, any finalization and flushing of the stream wasn't occurring (since the encrypt was failing), so the last bit of any SQLite database wouldn't be written. I'm guessing that also contributes to how infrequently I saw a problem.

-----Original Message-----
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Allen Fowler
Sent: Thursday, May 28, 2009 5:17 PM
To: General Discussion of SQLite Database; kennethinbox-sql...@yahoo.com
Subject: Re: [sqlite] corrupt database recovery

> I think I found my defect: my old stress tests were based on doing
> compression/encryption/decryption/decompression passes on files of random
> sizes; so I would do about 10 million passes or so and say...that's pretty
> good.
>
> Well...a more structured test exposed the problem and it was this:
>
> The feof() does return true until you attempt to read PAST the end of a
> file. So the code worked great until the file's length was a multiple of the
> buffer size (in my case 262,144 bytes). As you can imagine that doesn't
> happen too often in the real world.
>
> Since I assumed that feof would return true when there wasn't any more
> data in the file, I would start another pass at reading a chunk of data
> (which wouldn't find anything) and run thru the compression/encryption code.
> The compression code handled it correctly, but the encryption
> required a DWORD boundary (blowfish), and since 0 is on such a boundary
> but at the wrong end...it would fail.

But what was causing the DB to be corrupt, but a dump to work fine? What structure was getting damaged?
Re: [sqlite] corrupt database recovery
I have not! But I just might...thanks for the tip.

I think I found my defect: my old stress tests were based on doing compression/encryption/decryption/decompression passes on files of random sizes; so I would do about 10 million passes or so and say...that's pretty good.

Well...a more structured test exposed the problem and it was this:

The feof() does return true until you attempt to read PAST the end of a file. So the code worked great until the file's length was a multiple of the buffer size (in my case 262,144 bytes). As you can imagine that doesn't happen too often in the real world.

Since I assumed that feof would return true when there wasn't any more data in the file, I would start another pass at reading a chunk of data (which wouldn't find anything) and run thru the compression/encryption code. The compression code handled it correctly, but the encryption required a DWORD boundary (blowfish), and since 0 is on such a boundary but at the wrong end...it would fail.

I fixed the code to test for 0 bytes being read, instead of relying solely on feof(), and all seems well. :)

I would like to say thank you for all your help and advice.

Gene

-----Original Message-----
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Ken
Sent: Thursday, May 28, 2009 11:32 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] corrupt database recovery

Gene,

I'm sure others have suggested this, but have you tried running your code through valgrind? Can you remove the custom VFS?

--- On Wed, 5/27/09, Gene Allen wrote:
> From: Gene Allen
> Subject: Re: [sqlite] corrupt database recovery
> To: mgr...@medcom-online.de, "'General Discussion of SQLite Database'"
> Date: Wednesday, May 27, 2009, 4:51 PM
>
> I've reviewed the code over and over again, but have yet to find it. But it
> is a good bit of very complicated code (blowfish and compression code).
>
> That's why I was asking about the format of the database. Since I'm able to
> successfully do a .dump but the integrity_check whines, I'm thinking that
> maybe if I focus on what could cause that, it might give me a clue.
>
> For example, if the tree is stored at the end of the file, maybe I'm falling
> out of the loop without writing the final bytes. But if the tree is not
> localized to one area of the file, I'll have to rethink my approach.
>
> [snip]
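Gene's fix generalizes beyond C: stop a read loop when a read returns zero bytes, not when an end-of-file flag trips. A minimal Python sketch of the failure-prone case (input length an exact multiple of the chunk size), with the chunk size shrunk from the 262,144 bytes in the thread:

```python
import io

CHUNK = 4096  # stand-in for the 262,144-byte buffer in the report

def process(stream):
    """Read fixed-size chunks; stop when a read returns zero bytes, so
    downstream code is never handed an empty final chunk."""
    chunk_sizes = []
    while True:
        data = stream.read(CHUNK)
        if not data:          # 0 bytes read: genuine end of input
            break
        chunk_sizes.append(len(data))
    return chunk_sizes

# A payload whose length is an exact multiple of CHUNK -- the case that
# tripped the feof()-based loop by producing one extra, empty pass.
payload = b"x" * (CHUNK * 3)
print(process(io.BytesIO(payload)))  # three full chunks, no empty pass
```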
Re: [sqlite] corrupt database recovery
That's pretty much my test harness. Right now I'm running a test on every file size from 0 to 10 MB. If that works correctly (which I expect it will), I think I'm going to have to spend the time and effort setting up a massive test environment and stress the mess out of the whole system. I'm trying to avoid that, since it would take at least a week to set up.

Thank you for the support, I need it.

-----Original Message-----
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Doug
Sent: Wednesday, May 27, 2009 5:20 PM
To: 'General Discussion of SQLite Database'
Subject: Re: [sqlite] corrupt database recovery

I'm sorry that I can't help with the SQLite part, but couldn't you write a simple test harness that would create a file (of random size?) with random contents, encrypt/compress to a secondary file, then decompress/decrypt to a third file. Compare the first and third files. If they match, delete and run the test again. If they don't match, you now have your debugging input. I'd think letting something like that run for a day or so would flush out boundary/buffer errors. It's not deterministic, but should be easy enough to bang out and let run on a secondary machine while you continue your main search.

Good luck either way.

Doug

> -----Original Message-----
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Gene Allen
> Sent: Wednesday, May 27, 2009 4:52 PM
> To: mgr...@medcom-online.de; 'General Discussion of SQLite Database'
> Subject: Re: [sqlite] corrupt database recovery
>
> I've reviewed the code over and over again, but have yet to find it. But it
> is a good bit of very complicated code (blowfish and compression code).
>
> That's why I was asking about the format of the database. Since I'm able to
> successfully do a .dump but the integrity_check whines, I'm thinking that
> maybe if I focus on what could cause that, it might give me a clue.
>
> For example, if the tree is stored at the end of the file, maybe I'm falling
> out of the loop without writing the final bytes. But if the tree is not
> localized to one area of the file, I'll have to rethink my approach.
>
> [snip]
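Doug's harness can be sketched in a few lines. zlib stands in here for the real compress+encrypt pipeline under test (the Blowfish layer is omitted), and the size list deliberately includes exact multiples of a chunk size, the boundary that purely random sizes rarely hit:

```python
import os
import zlib

def roundtrip_ok(data: bytes) -> bool:
    """Compress then decompress and compare -- zlib stands in for the
    real compress+encrypt pipeline under test."""
    return zlib.decompress(zlib.compress(data)) == data

CHUNK = 262144  # buffer size from the thread

# Sweep the boundary cases deliberately instead of hoping random sizes
# hit them: 0 bytes, one byte either side of each chunk multiple, plus
# a few random sizes for good measure.
sizes = {0, 1, CHUNK - 1, CHUNK, CHUNK + 1, 2 * CHUNK, 2 * CHUNK + 1}
sizes |= {int.from_bytes(os.urandom(2), "big") for _ in range(5)}

for n in sorted(sizes):
    assert roundtrip_ok(os.urandom(n)), f"round trip failed at size {n}"
print("all sizes passed")
```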
Re: [sqlite] corrupt database recovery
The problem is happening in the field, and the program deals with a corrupt database by renaming it, logging the error, and creating a new database, so I only end up with the corrupt one.

I would GUESS that the database doesn't have any free pages except those at the end, since it's an auditing product so we never delete anything...but I could be wrong.

The only reason I think I'm 'dropping a byte' is that all the compression/encryption code is byte based and the database gets corrupt (invalid pages), but I'm able to .dump it successfully. That, and I try to take the blame first, since it's usually my fault. (Just ask my wife!)

Right now I'm running a large test on the encryption/compression code to see if I can get it to break.

-----Original Message-----
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Marcus Grimm
Sent: Wednesday, May 27, 2009 5:09 PM
To: 'General Discussion of SQLite Database'
Subject: Re: [sqlite] corrupt database recovery

Sorry, I don't know where sqlite stores any particular kind of data, like an index or a raw table. My suspicion is that, since sqlite usually looks for a free page, the pages are randomly distributed over the file, or added at the end if no free page has been found.

What I don't understand: what makes you think that you may be missing some bytes of a tree (and thus of a page)? That is sqlite internal, unless you are using your own VFS, I guess...do you?

By the way: did your application crash prior to or around the time you got this corrupted DB?

> I've reviewed the code over and over again, but have yet to find it. But it
> is a good bit of very complicated code (blowfish and compression code).
>
> That's why I was asking about the format of the database. Since I'm able to
> successfully do a .dump but the integrity_check whines, I'm thinking that
> maybe if I focus on what could cause that, it might give me a clue.
>
> For example, if the tree is stored at the end of the file, maybe I'm falling
> out of the loop without writing the final bytes. But if the tree is not
> localized to one area of the file, I'll have to rethink my approach.
>
> [snip]
Re: [sqlite] corrupt database recovery
I've reviewed the code over and over again, but have yet to find it. But it is a good bit of very complicated code (blowfish and compression code).

That's why I was asking about the format of the database. Since I'm able to successfully do a .dump but the integrity_check whines, I'm thinking that maybe if I focus on what could cause that, it might give me a clue.

For example, if the tree is stored at the end of the file, maybe I'm falling out of the loop without writing the final bytes. But if the tree is not localized to one area of the file, I'll have to rethink my approach.

-----Original Message-----
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Marcus Grimm
Sent: Wednesday, May 27, 2009 2:44 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] corrupt database recovery

So, if you think it is a coding error on your side, it will be hard work to find the source. I can only imagine side effects on a used API like sqlite3, with the classics:

- stack frame overflow in a function that calls sqlite functions
- using a local stack variable as a return pointer and reusing it externally
- memory overwriting on malloc'ed pointers, or reusing already free'd memory pages

One and two might be found using a code review: identify local variables or particular arrays that are written. So wherever you call anything from sqlite, check the stack declaration and review the usage of these variables... of course, just don't do number two... ;)

Memory errors might be detected using special debug versions of the malloc/free library, by code review, or manually by adding some test variables wherever you malloc or free a pointer.

I'm sure you already did some of these... anyway, good luck.

Are you using threads? Would be another beautiful issue to trace... ;)

Marcus

> It is a server class machine running Windows 2003 with 4 GB. No, it's a
> local drive with 20GB free on it.
>
> I'm sure that it's a coding error on my part. SQLite is very stable, in my
> opinion. I'm just trying to get a rough idea on where I'm screwing up the
> database.
>
> [snip]
Re: [sqlite] corrupt database recovery
It is a server class machine running Windows 2003 with 4 GB. No, it's a local drive with 20GB free on it.

I'm sure that it's a coding error on my part. SQLite is very stable, in my opinion. I'm just trying to get a rough idea on where I'm screwing up the database.

-----Original Message-----
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
Sent: Wednesday, May 27, 2009 12:58 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] corrupt database recovery

What platform? Any chance they are using a network drive?

John

Gene wrote:
> My code is outside the database layer. So I do all my database work, then
> compress and encrypt it. No errors are returned anywhere. I'm guessing
> that it's going to be an uninitialized variable or byte alignment problems
> somewhere.
>
> This code is running on hundreds of machines without a problem, and I've
> never reproduced it, but every now and again I get a support ticket showing
> a corrupt database. So I'm trying to figure out WHERE to look.
>
> [snip]
Re: [sqlite] corrupt database recovery
My code is outside the database layer. So I do all my database work, then compress and encrypt it. No errors are returned anywhere. I'm guessing that it's going to be an uninitialized variable or byte alignment problems somewhere. This code is running on hundreds of machines without a problem and I've never reproduced it, but every now and again I get a support ticket showing a corrupt database. So I'm trying to figure out WHERE to look. -Original Message- From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick Sent: Wednesday, May 27, 2009 10:59 AM To: General Discussion of SQLite Database Subject: Re: [sqlite] corrupt database recovery Gene Allen wrote: > Ok...it's happened again and I've decided that I need to track this down > once and for all! > > Here is what I'm seeing: I get errors when I do an integrity_check (see > below), but I can .dump it to a text file and then .read it into another > database ok. > > It seems to me that I'm screwing up an index or something. Are indexes > stored at the end of the database file? All I can think of is that my > compression/encryption routines are messing something up and I'm trying to > figure out 'where' to look. > > I guess the real question is, what would I have to do to make an > integrity_check fail, but still let a dump work correctly? > > Many thanks for any advice on tracking down this ugliness. > SNIP Personally, I'd refactor the code to allow me to verify the operation of the compression/encryption routines independently of the database operation. How are you injecting the compression/encryption into the database layer? John
Re: [sqlite] corrupt database recovery
Ok...it's happened again and I've decided that I need to track this down once and for all! Here is what I'm seeing: I get errors when I do a integrity_check (see below), but I can .dump it to a text file and then .read it into another database ok. It seems to me that I'm screwing up an index or something. Are indexes stored at the end of the database file? All I can think of is that my compression/encryption routines are messing something up and I'm trying to figure out 'where' to look. I guess the real question is, what would I have to do to make an integrity_check fail, but still let a dump work correctly? Many thanks for any advice on tracking down this ugliness. Gene >>Output details sqlite> pragma integrity_check; *** in database main *** On tree page 3 cell 26: invalid page number 469 On tree page 3 cell 26: Child page depth differs On tree page 3 cell 27: invalid page number 490 On tree page 3 cell 28: invalid page number 511 On tree page 3 cell 29: invalid page number 533 On tree page 3 cell 30: invalid page number 554 On tree page 3 cell 31: invalid page number 576 On tree page 3 cell 32: invalid page number 598 On tree page 3 cell 33: invalid page number 620 On tree page 3 cell 34: invalid page number 642 On tree page 3 cell 35: invalid page number 666 On tree page 3 cell 36: invalid page number 688 On tree page 3 cell 37: invalid page number 709 On tree page 3 cell 38: invalid page number 730 On tree page 3 cell 39: invalid page number 752 On page 3 at right child: invalid page number 773 On tree page 419 cell 86: invalid page number 462 On tree page 419 cell 86: Child page depth differs On tree page 419 cell 87: invalid page number 463 I said...h...so I do a .dump to see if I can sqlite> .output c:\\a.sql sqlite> .dump sqlite> .quit resulting in: 05/27/2009 10:08 AM 496,051 a.sql 1 File(s)496,051 bytes 0 Dir(s) 104,974,647,296 bytes free -Original Message- From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John 
Machin Sent: Saturday, April 25, 2009 8:15 PM To: General Discussion of SQLite Database Subject: Re: [sqlite] corrupt database recovery On 26/04/2009 5:47 AM, Gene wrote: > Every now and again, we have a database that gets corrupt in the field (bad > coding on our end, not sqlite). > Hi Gene, The obvious question: why not fix the bad code? What does 'PRAGMA integrity_check;' say about these corrupt databases? > When we get one of these corrupt databases, we recover what we can by get > one row at a time by rowid, like 'select * from mytable where rowid = 1' and > we inc the row number every time. Each row we successfully recover we > insert into a clean database. How do you know when to stop incrementing the row number? Does 'select min(rowid), max(rowid) from mytable' give you believable answers? What happens when you do 'select * from mytable' ? Approximately how many rows are there? How many 'select * from mytable where rowid = ' attempts fail, and for what reasons (previously deleted, some other result code(s))? Are the failures restricted to a relatively small range of rowids? > Works pretty well, except over time we've added more and more columns, each > one that has to be pulled and bound to get it into the new table. What is causing this "pull and bind" problem, the fact that some columns weren't present initially? or just the sheer number of columns i.e. you need to "pull and bind" all columns (not just the later additions)? In any case, please explain what you mean by "pulled" and "bound". > We tried > to do an 'attach' so we could do a 'select into' the clean database directly > from the corrupt one. But as soon as we attempt to 'attach' the corrupt > database, we understandable get a 'database is malformed' error. It's not quite so understandable why 'select * from mytable where rowid = 1' doesn't get an error. > Is there an easier way to pull the good records out of a corrupt database > and put them into a new one without binding each column by hand? 
Can you give us an example of a row or two of (a) what you get from the 'select * from mytable where rowid = ' (b) the insert statement that you need to do to insert that data into the clean database? Doesn't have to be real data -- e.g. assume 3 columns initially, now grown to 5. What rules/procedure/recipe do you follow when producing (b) from (a) by hand? Assuming that 'select * from mytable' doesn't work, and subject to understanding the pulling and binding by hand thing, I would have thought the solution would look something like this: Write a script that loops around doing 'select * from mytable where rowid = ?' on a connection to your corrupt database and doing 'insert into mytable values(?,?,?,?,? etc etc etc)' on a connection to your clean database.
[sqlite] some help with datetime
Hello all, I have a table with a unixepoch column in it. I need to do some fancy calculation based on that time: hourly summaries, weekly averages, standard dev, etc. Right now I'm only using it for reporting and simple analysis and it's often formatted like this: datetime(eventtime, 'unixepoch', 'localtime') What I need is a count of all the records grouped by week or by day or by month so I can do some more advanced processing on it. I'm missing something simple since I can't seem to get it to work. Here is an example of how I'm trying to get the week number: date(datetime(eventtime, 'unixepoch', 'localtime'), '%W') based on http://www.sqlite.org/lang_datefunc.html it seemed to me that the date function needs a string so I passed in a formatted string. No joy. Can someone point out the error in my ways? Thank you, Gene
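For reference, SQLite's date() takes timestring modifiers, not a format string; strftime() is the function that accepts a format such as '%W'. A minimal sketch using Python's sqlite3 (the 'localtime' modifier from the question is omitted here so the result does not depend on the machine's time zone; the table and column are toy stand-ins for the poster's schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (eventtime INTEGER)")  # unix epoch seconds

# Two events in week 00 of 1970, one on Mon 1970-01-05 (week 01).
con.executemany("INSERT INTO events VALUES (?)", [(0,), (86400,), (345600,)])

# %W = week of year (00-53), Monday as the first day of the week.
rows = con.execute(
    "SELECT strftime('%W', eventtime, 'unixepoch') AS week, COUNT(*)"
    " FROM events GROUP BY week ORDER BY week"
).fetchall()
```

Adding 'localtime' after 'unixepoch' gives local-time buckets, and swapping '%W' for '%Y-%m-%d' or '%Y-%m' gives daily or monthly groupings.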
Re: [sqlite] corrupt database recovery
It's not hard to update it...I'm just whining that it's out of date and I hate having to deal with matching everything up so I was hoping that I could get past the Attach. I think your python script below might ease my maintenance woes a bit. I'll port it to C# and just build my insert on the fly. Thank you. Gene -Original Message- From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Machin Sent: Saturday, April 25, 2009 9:30 PM To: General Discussion of SQLite Database Subject: Re: [sqlite] corrupt database recovery On 26/04/2009 11:28 AM, Gene wrote: > You are exactly right John...that is indeed what the code looks > like...except we have over 25 columns (it's a flat table). I presume that you are referring to this: """ Write a script that loops around doing 'select * from mytable where rowid = ?' on a connection to your corrupt database and doing 'insert into mytable values(?,?,?,?,? etc etc etc)' on a connection to your clean database. """ If that's correct, then surely the only maintenance you need to do to the above when an extra column is added to your table is to add an extra two characters ',?' to the insert statement ... you don't even have to do that e.g. # Python TABLE_NAME = "mytable" NUMBER_OF_COLS = 25 question_marks = ",".join("?" * NUMBER_OF_COLS) insert_sql = "insert into %s values (%s)" % (TABLE_NAME, question_marks) AND the output from the select should be able to be pumped straight into the insert with no changes at all. AND there might even be a pragma or suchlike that will enable you to easily find the number of columns on the fly in your script ... > > We've already fixed the bad code, but there are some customers who have old > versions...it didn't break very often with the old code, but it still did. > > I haven't tried a select Min or max on the row id but a select count(*) > returns an error...that's how I know I need to do the row by row recovery
method. Select * from mytable also returns an error. > > The tables usually have tens of thousands of rows, sometimes over a couple > hundred thousand but that's rare. > > What seems to work is that I do a select * from myTable where rowId = 'X' > incrementing X until I get an error. After I get the error, every row higher than > X also returns an error. So as soon as I get an error, I stop trying to > recover more rows. Does this usually mean that you are able to recover almost all of the rows? > The pull and bind code is just ugly and we don't update our 'recovery > utility' as quickly as we make changes to the database so it tends to get > out of date. That's all. I don't understand what is "the pull and bind" code and why you would need anything other than what I've outlined. Cheers, John
Re: [sqlite] corrupt database recovery
You are exactly right John...that is indeed what the code looks like...except we have over 25 columns (it's a flat table). We've already fixed the bad code, but there are some customers who have old versions...it didn't break very often with the old code, but it still did. I haven't tried a select Min or max on the row id but a select count(*) returns an error...that's how I know I need to do the row by row recovery method. Select * from mytable also returns an error. The tables usually have tens of thousands of rows, sometimes over a couple hundred thousand but that's rare. What seems to work is that I do a select * from myTable where rowId = 'X' incrementing X until I get an error. After I get the error, every row higher than X also returns an error. So as soon as I get an error, I stop trying to recover more rows. The pull and bind code is just ugly and we don't update our 'recovery utility' as quickly as we make changes to the database so it tends to get out of date. That's all. I haven't tried a PRAGMA integrity_check; in a long time so I can't remember what it tells me. I'll run it again. Thanks for your comments John! -Original Message- From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Machin Sent: Saturday, April 25, 2009 8:15 PM To: General Discussion of SQLite Database Subject: Re: [sqlite] corrupt database recovery On 26/04/2009 5:47 AM, Gene wrote: > Every now and again, we have a database that gets corrupt in the field (bad > coding on our end, not sqlite). > Hi Gene, The obvious question: why not fix the bad code? What does 'PRAGMA integrity_check;' say about these corrupt databases? > When we get one of these corrupt databases, we recover what we can by getting > one row at a time by rowid, like 'select * from mytable where rowid = 1' and > we inc the row number every time. Each row we successfully recover we > insert into a clean database. How do you know when to stop incrementing the row number?
Does 'select min(rowid), max(rowid) from mytable' give you believable answers? What happens when you do 'select * from mytable' ? Approximately how many rows are there? How many 'select * from mytable where rowid = ' attempts fail, and for what reasons (previously deleted, some other result code(s))? Are the failures restricted to a relatively small range of rowids? > Works pretty well, except over time we've added more and more columns, each > one that has to be pulled and bound to get it into the new table. What is causing this "pull and bind" problem, the fact that some columns weren't present initially? or just the sheer number of columns i.e. you need to "pull and bind" all columns (not just the later additions)? In any case, please explain what you mean by "pulled" and "bound". > We tried > to do an 'attach' so we could do a 'select into' the clean database directly > from the corrupt one. But as soon as we attempt to 'attach' the corrupt > database, we understandable get a 'database is malformed' error. It's not quite so understandable why 'select * from mytable where rowid = 1' doesn't get an error. > Is there an easier way to pull the good records out of a corrupt database > and put them into a new one without binding each column by hand? Can you give us an example of a row or two of (a) what you get from the 'select * from mytable where rowid = ' (b) the insert statement that you need to do to insert that data into the clean database? Doesn't have to be real data -- e.g. assume 3 columns initially, now grown to 5. What rules/procedure/recipe do you follow when producing (b) from (a) by hand? Assuming that 'select * from mytable' doesn't work, and subject to understanding the pulling and binding by hand thing, I would have thought the solution would look something like this: Write a script that loops around doing 'select * from mytable where rowid = ?' on a connection to your corrupt database and doing 'insert into mytable values(?,?,?,?,? 
etc etc etc)' on a connection to your clean database. HTH, John
[sqlite] corrupt database recovery
Every now and again, we have a database that gets corrupt in the field (bad coding on our end, not sqlite). When we get one of these corrupt databases, we recover what we can by getting one row at a time by rowid, like 'select * from mytable where rowid = 1', and we increment the row number every time. Each row we successfully recover we insert into a clean database. Works pretty well, except over time we've added more and more columns, each one of which has to be pulled and bound to get it into the new table. We tried to do an 'attach' so we could do a 'select into' the clean database directly from the corrupt one. But as soon as we attempt to 'attach' the corrupt database, we understandably get a 'database is malformed' error. Is there an easier way to pull the good records out of a corrupt database and put them into a new one without binding each column by hand? Many thanks, Gene
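The row-by-row salvage loop described above (combined with John Machin's later suggestion of building the INSERT placeholder string programmatically) can be sketched in Python. This is a hedged illustration, not the poster's actual C# code: the table name, column count, and `max_rowid` bound are assumptions, and the demo runs against a healthy database since a corrupt one can't be constructed portably here.

```python
import sqlite3

def salvage(src, dst, table, ncols, max_rowid):
    """Copy rows one rowid at a time from a (possibly damaged) database
    into a clean one, stopping at the first read error, as the thread
    describes. Returns the number of rows copied."""
    insert = "INSERT INTO %s VALUES (%s)" % (table, ",".join("?" * ncols))
    copied = 0
    for rowid in range(1, max_rowid + 1):
        try:
            row = src.execute(
                "SELECT * FROM %s WHERE rowid = ?" % table, (rowid,)
            ).fetchone()
        except sqlite3.DatabaseError:
            break  # reached the damaged region; later rowids error too
        if row is not None:  # None = rowid was never used or was deleted
            dst.execute(insert, row)
            copied += 1
    dst.commit()
    return copied

# Demo on an intact source database.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for con in (src, dst):
    con.execute("CREATE TABLE mytable (a, b, c)")
src.executemany("INSERT INTO mytable VALUES (?,?,?)",
                [(1, "x", 0.5), (2, "y", 1.5)])
copied = salvage(src, dst, "mytable", ncols=3, max_rowid=10)
```

Adding a column to the schema then only requires bumping `ncols`, rather than editing per-column pull-and-bind code.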
Re: [sqlite] Hundreds of thousands of INSERTs
I don't know if it will help but I have a similar product in that it runs 24/7/365 and generates gobs of rows a day. What I did was create a new database for each month, like FA0901.bsa would be Jan 09. FA0810.bsa was Oct 08. This allowed performance to stay really high, allowed for end-user grooming (just delete the old db files) and backup/restore was just as simple: since the files were small, they would easily fit on a CD. When a query was done, all I did was create a temp database and insert data into it for each month and serve that database up. This made subqueries much faster since there weren't 20 gazillion rows to munge thru. Granted, I did keep my database schema flat so I could do this easily. I don't know if it will help you, but that's what I did. Good luck, Gene -Original Message- From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Paolo Pisati Sent: Tuesday, February 17, 2009 7:00 AM To: General Discussion of SQLite Database Subject: Re: [sqlite] Hundreds of thousands of INSERTs Igor Tandetnik wrote: > "Paolo Pisati" wrote > in message news:499a954e.5020...@oltrelinux.com > >> I've a perl script that reads rows from a file, does some processing >> and then insert the resulting data in a db, one INSERT per row. >> Usually the input file is made of hundreds of thousands, thus a >> typical run consists of hundreds of thousands of INSERTions. >> >> the performance was good until i decided to build indexes: >> > > Try creating indexes at the end, after all the inserts are done. > unfortunately, that's not an option for me: in the middle of all these inserts, someone could want to retrieve some data and without indexes the operation would be painfully slow - at minimum, this db has to hold around 14 days worth of logs, thus a select without indexes could take too long and block the remaining inserts. -- bye, P.
Re: [sqlite] Getting the "position" (like LIMIT) for a query
That is what I did, did my select into a temp table filtered and sorted just the way I wanted it and used the rowid since it would match the index in the list control. -Original Message- From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Nicolas Williams Sent: Saturday, January 10, 2009 4:27 PM To: General Discussion of SQLite Database Subject: Re: [sqlite] Getting the "position" (like LIMIT) for a query On Sat, Jan 10, 2009 at 04:33:19PM -0500, Igor Tandetnik wrote: > "Lukas Haase" wrote in message > news:gkat07$n2...@ger.gmane.org > > "SELECT keyword FROM keywords ORDER BY keyword LIMIT %d, %d", iFrom, iTo-iFrom." > > > > I use an SQLite database to fill a virtual list control in Windows. > > In this control, I just tell the control the number of my elements > > and the control tells me for which range data is needed. > > http://www.sqlite.org/cvstrac/wiki?p=ScrollingCursor One thing I've done before is to use a rowid to track each "page" of results: SELECT rowid, keyword FROM keywords WHERE rowid > :pagestart ORDER BY keyword LIMIT :pagesize then remember the last rowid from the result set and use it to start the next result set. If you have a JOIN then you can use this trick for one table in the query, just pick it carefully. Paging backwards efficiently is not as easy. You want to come up with a query that can start at some rowid and scan _backwards_ through the table. This: SELECT rowid, keyword FROM keywords WHERE rowid < :pagestart ORDER BY keyword LIMIT :pagesize doesn't do what you want, of course (EXPLAIN will show that it uses the Next instruction instead of Prev). This: SELECT rowid, keyword FROM (SELECT rowid, keyword FROM keywords WHERE rowid < :pagestart ORDER BY rowid DESC LIMIT :pagesize) ORDER BY keyword; does use the Prev instruction, instead of Next, to scan the table in the sub-select, so it will process the fewest possible rows. Using rowid makes this very fast.
I'm surprised that the wiki page for scrolling cursors doesn't mention this. Incidentally, I think many, if not all queries that have an outer table scan, and many, if not all joins that have an inner table scan but not an outer table scan, could be programmatically modified to create a scrolling cursor, forward and backwards. All that has to be done is: pick one table whose rowid to extract, add the where clause for rowid < or > than some variable, add the limit clause, and the sub-select for paging backwards. But parsing the select just to do that is hard, while the SQLite3 parser is uniquely positioned to do it for the user. So perhaps there's a small RFE here? Cheers, Nico --
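The rowid-keyset paging that Nicolas describes can be sketched as follows. This is a simplified illustration: it orders pages by rowid (mixing an ORDER BY keyword with a rowid keyset, as in the quoted queries, only behaves when the two orders happen to agree), and the `keywords` table is the one from the original question.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE keywords (keyword TEXT)")
con.executemany("INSERT INTO keywords VALUES (?)",
                [("alpha",), ("bravo",), ("charlie",), ("delta",), ("echo",)])

def page_forward(con, last_rowid, size):
    # Remember the last rowid of the previous page; start just after it.
    return con.execute(
        "SELECT rowid, keyword FROM keywords WHERE rowid > ?"
        " ORDER BY rowid LIMIT ?", (last_rowid, size)).fetchall()

def page_backward(con, first_rowid, size):
    # The inner query scans backwards (Prev) so only `size` rows are read,
    # then the outer query restores ascending order for display.
    return con.execute(
        "SELECT rowid, keyword FROM"
        " (SELECT rowid, keyword FROM keywords WHERE rowid < ?"
        "  ORDER BY rowid DESC LIMIT ?)"
        " ORDER BY rowid", (first_rowid, size)).fetchall()

page1 = page_forward(con, 0, 2)              # first two rows
page2 = page_forward(con, page1[-1][0], 2)   # next two rows
back = page_backward(con, page2[0][0], 2)    # back to the first page
```

Unlike `LIMIT offset, count`, neither direction ever re-reads the rows that were skipped.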
[sqlite] testing for "cannot commit transaction - SQL statements in progress"
Sometimes when I do a sqlite3_exec with the sql "Commit;" I get a SQLITE_ERROR returned and the error text is "cannot commit transaction - SQL statements in progress" If I wait for a few seconds and try the sqlite3_exec again, it works correctly. (testing in the debugger) Is there any way to test for this condition so I handle it properly? Gene Allen
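For context, that error means some prepared statement on the connection is still active (stepped but not yet reset or finalized) when COMMIT runs, so the reliable fix is to finish every statement before committing rather than to wait and retry. A hedged sketch of that discipline in Python's sqlite3 (where closing the cursor plays the role the C API's sqlite3_finalize/sqlite3_reset play for the poster's code; the table is a toy example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

cur = con.execute("SELECT x FROM t")  # this read statement is now in progress
first = cur.fetchone()

# Drain or close the reader before committing, so no statement is still
# active on the connection when COMMIT executes.
cur.close()
con.commit()
```

In C, the equivalent check is to make sure every sqlite3_stmt has been passed to sqlite3_reset or sqlite3_finalize before issuing COMMIT.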
[sqlite] transaction question
I have a large sqlite database and I'm inserting a bunch of records into it. My question is this: shouldn't the -journal file be getting larger since I've wrapped all the inserts inside a transaction? I'm watching the file sizes change and the main file is getting larger, but the journal file is staying at 44 KB. Is there something different I have to do while large
Re: [sqlite] database disk image is malformed
Sorry, mispost
Re: [sqlite] database disk image is malformed
That's an error that may be left over from 1.7. If your filter is using a mapped drive, try using a UNC. I'm not near a computer, so I can't verify the error message. I will as soon as I get back to the office and kick on the generator, I will verify it. If you could run the User Companion on the start menu, it will package up your rules and FTP them to me. Gene
[sqlite] Multithreaded SQLite
I'm using SQLite v. 3.5 in my multithreaded application. I have 2 threads both writing to the same database, different tables but same database. All is good until one of the threads goes under HEAVY load, then the other thread blocks never to return (on a bind statement). I WAS using the same database connection, but now I'm switching it to use separate connections both to the same file. Is opening the database in each thread really the only thing I need to worry about when multiple threads are using the same database file, as long as they are using separate tables? thank you for your time, Gene
[sqlite] SQLite Select Optimization help
Good morning all, I'm wondering if someone can help me with my incredibly slow select statement. My program executes a SINGLE select statement and then generates a report based on the output. So far, it's worked great... until now. Yesterday, I wanted to add a new report that lists all the files that have more than one user accessing them and the users that accessed them. (this is a file auditing product) Here is the select that I used:

select Distinct ar.username userName, ar.filename fileName, multiused.accesscount accessCount
from auditrecords ar
inner join (
    Select filename, accessCount from (
        SELECT eventTime, Upper(AuditRecords.fileName) fileName,
               Count(Distinct Upper(AuditRecords.userName)) accessCount
        FROM AuditRecords
        GROUP BY Upper(AuditRecords.fileName)
    ) as used
    where used.accessCount > 1
) as multiused on Upper(ar.filename) = multiused.filename

On a database of 2.5 million records, it ran over 11 hours before I killed it. On smaller datasets it works fine. Any advice? Gene
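One direction to try (a sketch, not a guaranteed fix): aggregate once with HAVING so the multi-user file list is computed a single time, then join the already-deduplicated (file, user) pairs back to it, instead of joining every raw audit row through Upper(). Whether this helps at 2.5 million rows depends on available memory, and SQLite of that era has no expression indexes to accelerate the Upper() comparison. Demo on toy data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE AuditRecords (userName TEXT, fileName TEXT)")
con.executemany("INSERT INTO AuditRecords VALUES (?, ?)",
                [("alice", "report.doc"),   # report.doc: two distinct users
                 ("bob", "report.doc"),
                 ("alice", "notes.txt")])   # notes.txt: one user, excluded

result = con.execute("""
    SELECT multi.f AS fileName, users.u AS userName, multi.cnt AS accessCount
    FROM (SELECT Upper(fileName) AS f,
                 Count(DISTINCT Upper(userName)) AS cnt
          FROM AuditRecords
          GROUP BY f
          HAVING cnt > 1) AS multi           -- small: only multi-user files
    JOIN (SELECT DISTINCT Upper(fileName) AS f, Upper(userName) AS u
          FROM AuditRecords) AS users        -- deduplicated (file, user) pairs
      ON users.f = multi.f
    ORDER BY fileName, userName
""").fetchall()
```

The unused eventTime column and the extra nesting level from the original query are dropped, and the DISTINCT on the outer join happens over far fewer rows.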
[sqlite] Attempting to merge large databases
I have several database files that I need to merge. Each one has the same schema and about ½ million records. Here is what I'm currently doing (but in code):

C:\Development\Main\Common\SQLite>sqlite3 c:\test\a.db3
SQLite version 3.2.7
Enter ".help" for instructions
sqlite> attach 'c:\test\b.db3' as toMerge;
sqlite> insert into AuditRecords select * from toMerge.AuditRecords;
sqlite> detach database toMerge;

Is there a faster/better way to do this? Should I wrap the insert in a transaction or something? Thanks a lot, Gene
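Wrapping the bulk INSERT in one explicit transaction is the usual speedup, since otherwise each statement gets its own journal cycle. A sketch of the same ATTACH-and-copy merge in Python's sqlite3 (throwaway temp files stand in for a.db3/b.db3; `isolation_level=None` puts the connection in autocommit so BEGIN/COMMIT can be issued by hand and DETACH can run outside a transaction):

```python
import os
import sqlite3
import tempfile

# Build two small stand-in databases with the same schema.
workdir = tempfile.mkdtemp()
a = os.path.join(workdir, "a.db3")
b = os.path.join(workdir, "b.db3")
for path, n in ((a, 2), (b, 3)):
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE AuditRecords (id INTEGER, info TEXT)")
    con.executemany("INSERT INTO AuditRecords VALUES (?, 'x')",
                    [(i,) for i in range(n)])
    con.commit()
    con.close()

dst = sqlite3.connect(a, isolation_level=None)  # manage transactions by hand
dst.execute("ATTACH ? AS toMerge", (b,))
dst.execute("BEGIN")  # one transaction so the whole copy is one journal cycle
dst.execute("INSERT INTO AuditRecords SELECT * FROM toMerge.AuditRecords")
dst.execute("COMMIT")
dst.execute("DETACH toMerge")
total = dst.execute("SELECT COUNT(*) FROM AuditRecords").fetchone()[0]
```

For a one-shot merge, dropping indexes on the destination table first and recreating them after the copy can also help, at the cost of a rebuild at the end.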