[sqlite] Forensics data control
Hello all, I would like to ask for some guidance. We sell a file auditing product and, as you would expect, the database can get large. SQLite has been a perfect engine for us since we only do inserts and searches. Now we want to add some sophisticated forensics features. Currently we have fancy filtering but no real OLAP sort of capability. Has anyone seen anything like this that we might be able to integrate into our product? We're using C# on Windows. Has anyone used DevExpress's OLAP control? It looks pretty cool (if I could figure out what I'm doing) but I'm not sure it works with a raw DataSet or DataTable.

Thanks,

Gene

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] except and select
The customer's requirements have gotten worse and these queries are getting very ugly, so I think I should either:

1) do a select into a temp table and then run a bunch of queries to remove records based on another query (this might require substantial code changes), or
2) use the 'except' clause in the query.

My questions are:

1) With option one, would the statement be "Delete from table_name where orderno in (Select orderno from table_name where name = 'something')"?

2) With option two, is there a way to except just certain records by a key, or does the entire record have to match? For example, can I say

Select * from table_name
except Select orderno from table_name where name = 'something'
except Select orderno from table_name where name = 'something else'

2.a) Does the second except apply to the original query?

Thank you for your guidance,

Gene

-----Original Message-----
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Pavel Ivanov
Sent: Wednesday, July 29, 2009 9:33 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] except and select

Maybe this way:

select t.* from table_name t
left outer join table_name t_del
  on t_del.operation = 'Deleted' and t_del.Filename = t.RenameTo
where t.operation = 'Renamed' and t_del.ID is null
union all
select t.* from table_name t
left outer join table_name t_ren
  on t_ren.operation = 'Renamed' and t_ren.RenameTo = t.Filename
where t.operation = 'Deleted' and t_ren.ID is null

Pavel

On Wed, Jul 29, 2009 at 10:17 AM, Gene Allen<g...@bystorm.com> wrote:
> I need to come up with a query where I can remove records that match a
> certain condition.
>
> I have file auditing data and I need the query to show all the files that
> were deleted and the files that were renamed, but not the files that were
> deleted AND renamed to. The problem is that the Delete is one record and
> the Rename is another one.
> For example, in the sample table below, I want records 3 and 4, since A was
> deleted (record 1) AND renamed to (record 2):
>
> ID  Filename  Operation  RenamedTo
> 1   A         Deleted
> 2   B         Renamed    A
> 3   C         Renamed    D
> 4   E         Deleted
>
> I tried to use an except, but that won't work since the data won't match up
> between the records. Record 1 and 2 don't match.
>
> Due to a limitation in my program, I have to do this in a single select
> statement.
>
> Any advice would be wonderful!
>
> Thanks,
>
> Gene
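Pavel's two anti-joins can be checked against the sample table with a quick sketch. This is a hedged reconstruction: the CREATE TABLE schema is inferred from his query, not from a posted schema (note the post's table header spells the column "RenamedTo" while the query uses "RenameTo"; the query's spelling is used here).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE table_name
               (ID INTEGER, Filename TEXT, operation TEXT, RenameTo TEXT)""")
con.executemany("INSERT INTO table_name VALUES (?, ?, ?, ?)", [
    (1, "A", "Deleted", None),
    (2, "B", "Renamed", "A"),
    (3, "C", "Renamed", "D"),
    (4, "E", "Deleted", None),
])

# Renamed rows with no matching Deleted row, plus Deleted rows with no
# matching Renamed row -- two anti-joins glued together with UNION ALL.
rows = con.execute("""
    SELECT t.* FROM table_name t
    LEFT OUTER JOIN table_name t_del
      ON t_del.operation = 'Deleted' AND t_del.Filename = t.RenameTo
    WHERE t.operation = 'Renamed' AND t_del.ID IS NULL
    UNION ALL
    SELECT t.* FROM table_name t
    LEFT OUTER JOIN table_name t_ren
      ON t_ren.operation = 'Renamed' AND t_ren.RenameTo = t.Filename
    WHERE t.operation = 'Deleted' AND t_ren.ID IS NULL
""").fetchall()
ids = sorted(r[0] for r in rows)  # records 3 and 4, as requested
```

Record 2 is excluded by the first branch (a Deleted row for 'A' exists) and record 1 by the second (a Renamed-to-'A' row exists), leaving records 3 and 4.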
[sqlite] except and select
I need to come up with a query where I can remove records that match a certain condition. I have file auditing data and I need the query to show all the files that were deleted and the files that were renamed, but not the files that were deleted AND renamed to. The problem is that the Delete is one record and the Rename is another one. For example, in the sample table below, I want records 3 and 4, since A was deleted (record 1) AND renamed to (record 2):

ID  Filename  Operation  RenamedTo
1   A         Deleted
2   B         Renamed    A
3   C         Renamed    D
4   E         Deleted

I tried to use an except, but that won't work since the data won't match up between the records. Record 1 and 2 don't match.

Due to a limitation in my program, I have to do this in a single select statement.

Any advice would be wonderful!

Thanks,

Gene
Re: [sqlite] corrupt database recovery
We weren't logging the encryption failure.

>> The feof() does return true until you attempt to read PAST the end of a
>> file.

The "does" in the above line should be "doesn't".

>> while code

Yeah, this code was written as a 'do...while' instead of a 'while...do'.

-----Original Message-----
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Machin
Sent: Thursday, May 28, 2009 7:19 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] corrupt database recovery

On 29/05/2009 9:34 AM, Gene Allen wrote:
> Yeah.
>
> Since my code works in blocks, read/compress/encrypt/write, loop. Almost
> all the real data was being written to the compressed file, however any
> finalization and flushing of the stream wasn't occurring (since the encrypt
> was failing)

and the encrypt failure wasn't logged?

> so the last bit of any SQLite database wouldn't be written.

If so, pragma integrity_check should report that some of the pages actually written contain pointers to pages that are past the end of the file, shouldn't it?

[snip]

>> Well...a more structured test exposed the problem and it was this:
>>
>> The feof() does return true until you attempt to read PAST the end of a
>> file.

If feof doesn't continue to return true, it is broken.

>> So the code worked great until the file's length was a multiple of the
>> buffer size (in my case 262,144 bytes). As you can imagine that doesn't
>> happen too often in the real world.
>>
>> Since I assumed that feof would return true when there wasn't any more
>> data in the file, I would start another pass at reading a chunk of data
>> (which wouldn't find anything) and run through the compression/encryption
>> code. The compression code handled it correctly, but the encryption
>> required a DWORD boundary (blowfish), and since 0 is on such a boundary
>> but at the wrong end...it would fail.

Silently? Unlogged?

In any case, I would have thought using feof() was not needed ...
long time since I've written C in earnest, but isn't something like this the standard idiom:

#define BUF_SIZE 262144

char buff[BUF_SIZE];
size_t nbytes;
FILE *f;

f = fopen("filename", "rb");
while ((nbytes = fread(buff, 1, BUF_SIZE, f)) > 0) {
    do_something(buff, nbytes);
}

??

HTH,
John
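The idiom John sketches — stop when the read itself returns zero bytes, instead of asking an EOF flag beforehand — is exactly the fix Gene describes later in the thread. A hedged Python rendering of the same loop (file contents and chunk size are illustrative) shows why a file whose length is an exact multiple of the buffer size never triggers an empty processing pass:

```python
import os
import tempfile

CHUNK = 262144  # the buffer size from the post


def read_chunks(path):
    """Collect chunk sizes; stop on a zero-length read, not on an EOF flag."""
    sizes = []
    with open(path, "rb") as f:
        while True:
            buf = f.read(CHUNK)
            if not buf:  # zero bytes read: the real end of the file
                break
            sizes.append(len(buf))
    return sizes


# The troublesome case: a file whose length is an exact multiple of CHUNK.
with tempfile.NamedTemporaryFile(delete=False) as tf:
    tf.write(b"\x00" * (2 * CHUNK))
sizes = read_chunks(tf.name)
os.unlink(tf.name)
# sizes holds two full chunks and no spurious zero-byte pass that could
# reach a compress/encrypt stage.
```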
Re: [sqlite] corrupt database recovery
Yeah. Since my code works in blocks (read/compress/encrypt/write, loop), almost all the real data was being written to the compressed file; however, any finalization and flushing of the stream wasn't occurring (since the encrypt was failing), so the last bit of any SQLite database wouldn't be written. I'm guessing that also contributes to how infrequently I saw a problem.

-----Original Message-----
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Allen Fowler
Sent: Thursday, May 28, 2009 5:17 PM
To: General Discussion of SQLite Database; kennethinbox-sql...@yahoo.com
Subject: Re: [sqlite] corrupt database recovery

> I think I found my defect: my old stress tests were based on doing
> compression/encryption/decryption/decompression passes on files of random
> sizes; so I would do about 10 million passes or so and say...that's pretty
> good.
>
> Well...a more structured test exposed the problem and it was this:
>
> feof() doesn't return true until you attempt to read PAST the end of a
> file. So the code worked great until the file's length was a multiple of
> the buffer size (in my case 262,144 bytes). As you can imagine that
> doesn't happen too often in the real world.
>
> Since I assumed that feof would return true when there wasn't any more
> data in the file, I would start another pass at reading a chunk of data
> (which wouldn't find anything) and run through the compression/encryption
> code. The compression code handled it correctly, but the encryption
> required a DWORD boundary (blowfish), and since 0 is on such a boundary
> but at the wrong end...it would fail.

But what was causing the DB to be corrupt while a dump worked fine? What structure was getting damaged?
Re: [sqlite] corrupt database recovery
I have not! But I just might...thanks for the tip.

I think I found my defect: my old stress tests were based on doing compression/encryption/decryption/decompression passes on files of random sizes; so I would do about 10 million passes or so and say...that's pretty good.

Well...a more structured test exposed the problem and it was this:

feof() doesn't return true until you attempt to read PAST the end of a file. So the code worked great until the file's length was a multiple of the buffer size (in my case 262,144 bytes). As you can imagine that doesn't happen too often in the real world.

Since I assumed that feof would return true when there wasn't any more data in the file, I would start another pass at reading a chunk of data (which wouldn't find anything) and run through the compression/encryption code. The compression code handled it correctly, but the encryption required a DWORD boundary (blowfish), and since 0 is on such a boundary but at the wrong end...it would fail.

I fixed the code to test for 0 bytes being read, instead of relying solely on feof(), and all seems well. :)

I would like to say thank you for all your help and advice.

Gene

-----Original Message-----
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Ken
Sent: Thursday, May 28, 2009 11:32 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] corrupt database recovery

Gene,

I'm sure others have suggested, but have you tried running your code through valgrind? Can you remove the custom VFS?

--- On Wed, 5/27/09, Gene Allen <g...@bystorm.com> wrote:

> From: Gene Allen <g...@bystorm.com>
> Subject: Re: [sqlite] corrupt database recovery
> To: mgr...@medcom-online.de, "'General Discussion of SQLite Database'" <sqlite-users@sqlite.org>
> Date: Wednesday, May 27, 2009, 4:51 PM
> I've reviewed the code over and over again, but have yet to find it. But
> it is a good bit of very complicated code (blowfish and compression code).
> That's why I was asking about the format of the database. Since I'm able
> to successfully do a .dump but the integrity_check whines, I'm thinking
> that maybe if I focus on what could cause that, it might give me a clue.
>
> For example, if the tree is stored at the end of the file, maybe I'm
> falling out of the loop without writing the final bytes. But if the tree
> is not localized to one area of the file, I'll have to rethink my approach.
>
> -----Original Message-----
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Marcus Grimm
> Sent: Wednesday, May 27, 2009 2:44 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] corrupt database recovery
>
> So, if you think it is a coding error on your side, it will be hard work
> to find the source. I can only imagine side effects on a used API like
> sqlite3, with the classics:
>
> - stack frame overrun in a function that calls sqlite functions
> - using a local stack variable as a return pointer and reusing it
>   externally
> - overwriting malloc'ed pointers or reusing already free'd memory pages
>
> One and two might be found by a code review that identifies local
> variables or particular arrays that are written: wherever you call
> anything from sqlite, check the stack declaration and review the usage of
> these variables... of course, just don't do number two... ;)
>
> Memory errors might be detected using special debug versions of the
> malloc/free library, by code review, or manually by adding some test
> variables wherever you malloc or free a pointer.
>
> I'm sure you already did some of these... anyway, good luck.
>
> Are you using threads? Would be another beautiful issue to trace...;)
>
> Marcus
>
> > It is a server class machine running Windows 2003 with 4 GB. No, it's a
> > local drive with 20GB free on it.
> >
> > I'm sure that it's a coding error on my part.
> > SQLite is very stable, in my opinion. I'm just trying to get a rough
> > idea of where I'm screwing up the database.
> >
> > -----Original Message-----
> > From: sqlite-users-boun...@sqlite.org
> > [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
> > Sent: Wednesday, May 27, 2009 12:58 PM
> > To: General Discussion of SQLite Database
> > Subject: Re: [sqlite] corrupt database recovery
> >
> > What platform? Any chance they are using a network drive?
> >
> > John
> >
> > Gene wrote:
> >> My code is outside the database layer. So I
Re: [sqlite] corrupt database recovery
I've reviewed the code over and over again, but have yet to find it. But it is a good bit of very complicated code (blowfish and compression code).

That's why I was asking about the format of the database. Since I'm able to successfully do a .dump but the integrity_check whines, I'm thinking that maybe if I focus on what could cause that, it might give me a clue.

For example, if the tree is stored at the end of the file, maybe I'm falling out of the loop without writing the final bytes. But if the tree is not localized to one area of the file, I'll have to rethink my approach.

-----Original Message-----
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Marcus Grimm
Sent: Wednesday, May 27, 2009 2:44 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] corrupt database recovery

So, if you think it is a coding error on your side, it will be hard work to find the source. I can only imagine side effects on a used API like sqlite3, with the classics:

- stack frame overrun in a function that calls sqlite functions
- using a local stack variable as a return pointer and reusing it externally
- overwriting malloc'ed pointers or reusing already free'd memory pages

One and two might be found by a code review that identifies local variables or particular arrays that are written: wherever you call anything from sqlite, check the stack declaration and review the usage of these variables... of course, just don't do number two... ;)

Memory errors might be detected using special debug versions of the malloc/free library, by code review, or manually by adding some test variables wherever you malloc or free a pointer.

I'm sure you already did some of these... anyway, good luck.

Are you using threads? Would be another beautiful issue to trace...;)

Marcus

> It is a server class machine running Windows 2003 with 4 GB. No, it's a
> local drive with 20GB free on it.
>
> I'm sure that it's a coding error on my part.
> SQLite is very stable, in my opinion. I'm just trying to get a rough idea
> of where I'm screwing up the database.
>
> -----Original Message-----
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
> Sent: Wednesday, May 27, 2009 12:58 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] corrupt database recovery
>
> What platform? Any chance they are using a network drive?
>
> John
>
> Gene wrote:
>> My code is outside the database layer. So I do all my database work,
>> then compress and encrypt it. No errors are returned anywhere. I'm
>> guessing that it's going to be an uninitialized variable or byte
>> alignment problem somewhere.
>>
>> This code is running on hundreds of machines without a problem and I've
>> never reproduced it, but every now and again I get a support ticket
>> showing a corrupt database. So I'm trying to figure out WHERE to look.
>>
>> -----Original Message-----
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
>> Sent: Wednesday, May 27, 2009 10:59 AM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] corrupt database recovery
>>
>> Gene Allen wrote:
>>> Ok...it's happened again and I've decided that I need to track this
>>> down once and for all!
>>>
>>> Here is what I'm seeing: I get errors when I do an integrity_check (see
>>> below), but I can .dump it to a text file and then .read it into
>>> another database OK.
>>>
>>> It seems to me that I'm screwing up an index or something. Are indexes
>>> stored at the end of the database file? All I can think of is that my
>>> compression/encryption routines are messing something up and I'm trying
>>> to figure out 'where' to look.
>>>
>>> I guess the real question is, what would I have to do to make an
>>> integrity_check fail, but still let a dump work correctly?
>>>
>>> Many thanks for any advice on tracking down this ugliness.
>>
>> SNIP
>>
>> Personally, I'd refactor the code to allow me to verify the operation of
>> the compression/encryption routines independently of the database
>> operation. How are you injecting the compression/encryption into the
>> database layer?
>>
>> John
Re: [sqlite] corrupt database recovery
It is a server class machine running Windows 2003 with 4 GB. No, it's a local drive with 20GB free on it.

I'm sure that it's a coding error on my part. SQLite is very stable, in my opinion. I'm just trying to get a rough idea of where I'm screwing up the database.

-----Original Message-----
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
Sent: Wednesday, May 27, 2009 12:58 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] corrupt database recovery

What platform? Any chance they are using a network drive?

John

Gene wrote:
> My code is outside the database layer. So I do all my database work, then
> compress and encrypt it. No errors are returned anywhere. I'm guessing
> that it's going to be an uninitialized variable or byte alignment problem
> somewhere.
>
> This code is running on hundreds of machines without a problem and I've
> never reproduced it, but every now and again I get a support ticket
> showing a corrupt database. So I'm trying to figure out WHERE to look.
>
> -----Original Message-----
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
> Sent: Wednesday, May 27, 2009 10:59 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] corrupt database recovery
>
> Gene Allen wrote:
>> Ok...it's happened again and I've decided that I need to track this down
>> once and for all!
>>
>> Here is what I'm seeing: I get errors when I do an integrity_check (see
>> below), but I can .dump it to a text file and then .read it into another
>> database OK.
>>
>> It seems to me that I'm screwing up an index or something. Are indexes
>> stored at the end of the database file? All I can think of is that my
>> compression/encryption routines are messing something up and I'm trying
>> to figure out 'where' to look.
>>
>> I guess the real question is, what would I have to do to make an
>> integrity_check fail, but still let a dump work correctly?
>>
>> Many thanks for any advice on tracking down this ugliness.
>
> SNIP
>
> Personally, I'd refactor the code to allow me to verify the operation of
> the compression/encryption routines independently of the database
> operation. How are you injecting the compression/encryption into the
> database layer?
>
> John
Re: [sqlite] corrupt database recovery
Ok...it's happened again and I've decided that I need to track this down once and for all!

Here is what I'm seeing: I get errors when I do an integrity_check (see below), but I can .dump it to a text file and then .read it into another database OK.

It seems to me that I'm screwing up an index or something. Are indexes stored at the end of the database file? All I can think of is that my compression/encryption routines are messing something up and I'm trying to figure out 'where' to look.

I guess the real question is, what would I have to do to make an integrity_check fail, but still let a dump work correctly?

Many thanks for any advice on tracking down this ugliness.

Gene

>>Output details

sqlite> pragma integrity_check;
*** in database main ***
On tree page 3 cell 26: invalid page number 469
On tree page 3 cell 26: Child page depth differs
On tree page 3 cell 27: invalid page number 490
On tree page 3 cell 28: invalid page number 511
On tree page 3 cell 29: invalid page number 533
On tree page 3 cell 30: invalid page number 554
On tree page 3 cell 31: invalid page number 576
On tree page 3 cell 32: invalid page number 598
On tree page 3 cell 33: invalid page number 620
On tree page 3 cell 34: invalid page number 642
On tree page 3 cell 35: invalid page number 666
On tree page 3 cell 36: invalid page number 688
On tree page 3 cell 37: invalid page number 709
On tree page 3 cell 38: invalid page number 730
On tree page 3 cell 39: invalid page number 752
On page 3 at right child: invalid page number 773
On tree page 419 cell 86: invalid page number 462
On tree page 419 cell 86: Child page depth differs
On tree page 419 cell 87: invalid page number 463

I said...hmmm...so I do a .dump to see if I can:

sqlite> .output c:\\a.sql
sqlite> .dump
sqlite> .quit

resulting in:

05/27/2009 10:08 AM   496,051 a.sql
1 File(s)   496,051 bytes
0 Dir(s)    104,974,647,296 bytes free

-----Original Message-----
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Machin
Sent: Saturday, April 25, 2009 8:15 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] corrupt database recovery

On 26/04/2009 5:47 AM, Gene wrote:
> Every now and again, we have a database that gets corrupt in the field
> (bad coding on our end, not sqlite).

Hi Gene,

The obvious question: why not fix the bad code? What does 'PRAGMA integrity_check;' say about these corrupt databases?

> When we get one of these corrupt databases, we recover what we can by
> getting one row at a time by rowid, like 'select * from mytable where
> rowid = 1', and we increment the row number every time. Each row we
> successfully recover we insert into a clean database.

How do you know when to stop incrementing the row number? Does 'select min(rowid), max(rowid) from mytable' give you believable answers? What happens when you do 'select * from mytable'? Approximately how many rows are there? How many 'select * from mytable where rowid = ' attempts fail, and for what reasons (previously deleted, some other result code(s))? Are the failures restricted to a relatively small range of rowids?

> Works pretty well, except over time we've added more and more columns,
> each one of which has to be pulled and bound to get it into the new table.

What is causing this "pull and bind" problem: the fact that some columns weren't present initially, or just the sheer number of columns, i.e. you need to "pull and bind" all columns (not just the later additions)? In any case, please explain what you mean by "pulled" and "bound".

> We tried to do an 'attach' so we could do a 'select into' the clean
> database directly from the corrupt one. But as soon as we attempt to
> 'attach' the corrupt database, we understandably get a 'database is
> malformed' error.

It's not quite so understandable why 'select * from mytable where rowid = 1' doesn't get an error.

> Is there an easier way to pull the good records out of a corrupt database
> and put them into a new one without binding each column by hand?

Can you give us an example of a row or two of (a) what you get from the 'select * from mytable where rowid = ' (b) the insert statement that you need to do to insert that data into the clean database? Doesn't have to be real data -- e.g. assume 3 columns initially, now grown to 5. What rules/procedure/recipe do you follow when producing (b) from (a) by hand?

Assuming that 'select * from mytable' doesn't work, and subject to understanding the pulling and binding by hand thing, I would have thought the solution would look something like this:

Write a script that loops around doing 'select * from mytable where rowid = ?' on a connection to your corrupt database and doing 'insert into mytable values(?,?,?,?,? etc etc etc)' on a connection to your clean database.

HTH,
John
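John's outline — loop over rowids on the corrupt file and re-insert each recovered row into a clean one — can be sketched as below. This is a hedged illustration, not tested recovery code: the table name is the thread's placeholder, the stop-at-first-error policy follows Gene's description, and the clean database is assumed to already contain the same schema.

```python
import sqlite3

def salvage(corrupt_path, clean_path, table="mytable"):
    """Copy rows one rowid at a time until the first unreadable row."""
    src = sqlite3.connect(corrupt_path)
    dst = sqlite3.connect(clean_path)
    # Build "INSERT INTO ... VALUES (?,?,...)" from the column count,
    # so added columns need no code change.
    ncols = len(src.execute("PRAGMA table_info(%s)" % table).fetchall())
    insert = "INSERT INTO %s VALUES (%s)" % (table, ",".join("?" * ncols))
    rowid, copied = 1, 0
    while True:
        try:
            row = src.execute(
                "SELECT * FROM %s WHERE rowid = ?" % table, (rowid,)
            ).fetchone()
        except sqlite3.DatabaseError:
            break  # per the thread: every rowid past the first error also fails
        if row is None:
            break  # no such rowid; assumes contiguous rowids, as in the post
        dst.execute(insert, row)
        copied += 1
        rowid += 1
    dst.commit()
    src.close()
    dst.close()
    return copied
```

The `PRAGMA table_info` call answers John's aside about finding the number of columns on the fly, so the script needs no edits when the schema grows.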
[sqlite] some help with datetime
Hello all,

I have a table with a unixepoch column in it. I need to do some fancy calculations based on that time: hourly summaries, weekly averages, standard dev, etc. Right now I'm only using it for reporting and simple analysis, and it's often formatted like this:

datetime(eventtime, 'unixepoch', 'localtime')

What I need is a count of all the records grouped by week, day, or month, so I can do some more advanced processing on it. I'm missing something simple, since I can't seem to get it to work. Here is an example of how I'm trying to get the week number:

date(datetime(eventtime, 'unixepoch', 'localtime'), '%W')

Based on http://www.sqlite.org/lang_datefunc.html it seemed to me that the date function needs a string, so I passed in a formatted string. No joy. Can someone point out the error in my ways?

Thank you,

Gene
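The likely slip above is that the format string belongs to strftime(), not date() — date() only accepts modifiers. A hedged sketch of weekly counts over a unixepoch column (the table and column names echo the post; the timestamps are invented, and 'localtime' is omitted here so the result doesn't depend on the machine's timezone):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (eventtime INTEGER)")
con.executemany("INSERT INTO events VALUES (?)", [
    (1248825600,),  # 2009-07-29 00:00 UTC
    (1248912000,),  # 2009-07-30 00:00 UTC, same week
    (1249430400,),  # 2009-08-05 00:00 UTC, the following week
])

# strftime() takes the same modifiers as datetime(); '%Y-%W' yields a
# year-week label suitable for GROUP BY.
rows = con.execute("""
    SELECT strftime('%Y-%W', eventtime, 'unixepoch') AS week,
           COUNT(*) AS n
    FROM events
    GROUP BY week
    ORDER BY week
""").fetchall()
# two buckets: the first week holds 2 events, the next holds 1
```

The same pattern gives daily ('%Y-%m-%d') or monthly ('%Y-%m') buckets.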
Re: [sqlite] corrupt database recovery
It's not hard to update it...I'm just whining that it's out of date, and I hate having to deal with matching everything up, so I was hoping that I could get past the Attach. I think your Python script below might ease my maintenance woes a bit. I'll port it to C# and just build my insert on the fly.

Thank you.

Gene

-----Original Message-----
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Machin
Sent: Saturday, April 25, 2009 9:30 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] corrupt database recovery

On 26/04/2009 11:28 AM, Gene wrote:
> You are exactly right John...that is indeed what the code looks
> like...except we have over 25 columns (it's a flat table).

I presume that you are referring to this:

"""
Write a script that loops around doing 'select * from mytable where rowid = ?' on a connection to your corrupt database and doing 'insert into mytable values(?,?,?,?,? etc etc etc)' on a connection to your clean database.
"""

If that's correct, then surely the only maintenance you need to do to the above when an extra column is added to your table is to add an extra two characters ',?' to the insert statement ... and you don't even have to do that, e.g.

# Python
TABLE_NAME = "mytable"
NUMBER_OF_COLS = 25
question_marks = ",".join("?" * NUMBER_OF_COLS)
insert_sql = "insert into %s values (%s)" % (TABLE_NAME, question_marks)

AND the output from the select should be able to be pumped straight into the insert with no changes at all. AND there might even be a pragma or suchlike that will enable you to easily find the number of columns on the fly in your script ...

> We've already fixed the bad code, but there are some customers who have
> old versions...it didn't break very often with the old code, but it still
> did.
>
> I haven't tried a select min or max on the rowid, but a select count(*)
> returns an error...that's how I know I need to do the row-by-row recovery
> method. Select * from mytable also returns an error.
>
> The tables usually have tens of thousands of rows, sometimes over a
> couple hundred thousand, but that's rare.
>
> What seems to work is that I do a select * from myTable where rowId = 'X',
> incrementing X until I get an error. After I get the error, every row
> higher than X also returns an error. So as soon as I get an error, I stop
> trying to recover more rows.

Does this usually mean that you are able to recover almost all of the rows?

> The pull and bind code is just ugly, and we don't update our 'recovery
> utility' as quickly as we make changes to the database, so it tends to
> get out of date. That's all.

I don't understand what "the pull and bind" code is and why you would need anything other than what I've outlined.

Cheers,
John
[sqlite] testing for "cannot commit transaction - SQL statements in progress"
Sometimes when I do a sqlite3_exec with the SQL "Commit;" I get SQLITE_ERROR returned, and the error text is "cannot commit transaction - SQL statements in progress". If I wait for a few seconds and try the sqlite3_exec again, it works correctly (testing in the debugger). Is there any way to test for this condition so I can handle it properly?

Gene Allen
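This error typically means some prepared statement on the same connection is still "in progress" — usually a SELECT that was stepped but never reset or finalized — rather than a timing problem that waiting cures. A hedged Python sketch of the pattern (table and data invented): finish or close every reader before issuing COMMIT.

```python
import sqlite3

# isolation_level=None keeps the module from managing transactions, so
# BEGIN/COMMIT behave like plain sqlite3_exec calls.
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE t (x INTEGER)")

con.execute("BEGIN")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])

cur = con.execute("SELECT x FROM t")
cur.fetchone()         # the SELECT statement is mid-traversal here
cur.close()            # finalize the reader; an open one can block COMMIT
con.execute("COMMIT")  # succeeds once no statement is in progress

total = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

In C terms: sqlite3_reset() or sqlite3_finalize() the pending statements before the "Commit;" exec, rather than retrying on a timer.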
[sqlite] transaction question
I have a large sqlite database and I'm inserting a bunch of records into it. My question is this: shouldn't the -journal be getting larger, since I've wrapped all the inserts inside a transaction? I'm watching the file sizes change and the main file is getting larger, but the journal file is staying at 44kb. Is there something different I have to do while large
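For what it's worth, a small journal here is expected: the rollback journal stores the original content of pages the transaction modifies, not the new rows themselves, so mostly-appending inserts leave it small while the main file grows. A hedged sketch of batching inserts in one explicit transaction (file and table names are illustrative):

```python
import os
import sqlite3

con = sqlite3.connect("large.db3")  # illustrative filename
con.execute("""CREATE TABLE IF NOT EXISTS AuditRecords
               (id INTEGER PRIMARY KEY, filename TEXT)""")

# "with con" wraps the inserts in a single transaction, committing on exit;
# only pre-change images of touched pages go to the -journal file.
with con:
    con.executemany(
        "INSERT INTO AuditRecords (filename) VALUES (?)",
        [("file%d" % i,) for i in range(10000)],
    )

count = con.execute("SELECT COUNT(*) FROM AuditRecords").fetchone()[0]
con.close()
os.remove("large.db3")  # tidy up the illustrative file
```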
Re: [sqlite] database disk image is malformed
Sorry, mispost
Re: [sqlite] database disk image is malformed
That's an error that may be left over from 1.7. If your filter is using a mapped drive, try using a UNC path. I'm not near a computer, so I can't verify the error message; as soon as I get back to the office and kick on the generator, I will verify it. If you could run the User Companion on the Start menu, it will package up your rules and FTP them to me.

Gene
[sqlite] SQLite Select Optimization help
Good morning all,

I'm wondering if someone can help me with my incredibly slow select statement. My program executes a SINGLE select statement and then generates a report based on the output. So far it's worked great...until now. Yesterday I wanted to add a new report that lists all the files that have more than one user accessing them, and the users that accessed them (this is a file auditing product). Here is the select that I used:

select Distinct ar.username userName, ar.filename fileName,
       multiused.accesscount accessCount
from auditrecords ar
inner join (
    Select filename, accessCount
    from (
        SELECT eventTime, Upper(AuditRecords.fileName) fileName,
               Count(Distinct Upper(AuditRecords.userName)) accessCount
        FROM AuditRecords
        GROUP BY Upper(AuditRecords.fileName)
    ) as used
    where used.accessCount > 1
) as multiused on Upper(ar.filename) = multiused.filename

On a database of 2.5 million records, it ran for over 11 hours before I killed it. On smaller datasets it works fine. Any advice?

Gene
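One possible restructuring — an assumption, not a verified fix for the 11-hour case — is to aggregate once with GROUP BY ... HAVING (dropping the unused eventTime column and one level of nesting) and then join back. The schema below is reconstructed from the post and the sample rows are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE AuditRecords
               (userName TEXT, fileName TEXT, eventTime INTEGER)""")
con.executemany("INSERT INTO AuditRecords VALUES (?, ?, ?)", [
    ("alice", "a.txt", 1),
    ("bob",   "A.TXT", 2),   # same file as a.txt, ignoring case
    ("alice", "b.txt", 3),
    ("alice", "b.txt", 4),   # b.txt has only one distinct user
])

rows = con.execute("""
    SELECT DISTINCT ar.userName, ar.fileName, m.accessCount
    FROM AuditRecords ar
    JOIN (SELECT UPPER(fileName) AS fn,
                 COUNT(DISTINCT UPPER(userName)) AS accessCount
          FROM AuditRecords
          GROUP BY UPPER(fileName)
          HAVING accessCount > 1) AS m
      ON UPPER(ar.fileName) = m.fn
    ORDER BY ar.userName
""").fetchall()
# only the a.txt/A.TXT pair qualifies: two distinct users
```

On a 2.5-million-row table the join on `Upper(ar.filename)` cannot use an ordinary index on filename; materializing the aggregate into a temp table first, or storing a pre-uppercased column and indexing it, may matter far more than the query shape.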
[sqlite] Attempting to merge large databases
I have several database files that I need to merge. Each one has the same schema and about ½ million records. Here is what I'm currently doing (but in code):

C:\Development\Main\Common\SQLite>sqlite3 c:\test\a.db3
SQLite version 3.2.7
Enter ".help" for instructions
sqlite> attach 'c:\test\b.db3' as toMerge;
sqlite> insert into AuditRecords select * from toMerge.AuditRecords;
sqlite> detach database toMerge;

Is there a faster/better way to do this? Should I wrap the insert in a transaction or something?

Thanks a lot,

Gene
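A sketch of the same ATTACH-based merge from code, batched in one transaction (paths, the helper name, and the multi-source loop are illustrative; a single INSERT ... SELECT is already atomic on its own, so the explicit transaction mainly pays off when merging several files in one pass):

```python
import sqlite3

def merge(dest_path, source_paths, table="AuditRecords"):
    """ATTACH each source and bulk-copy its rows into dest in one transaction."""
    con = sqlite3.connect(dest_path)
    # ATTACH before the transaction starts; the filename can be bound
    # as a parameter, the schema name cannot.
    for i, src in enumerate(source_paths):
        con.execute("ATTACH ? AS m%d" % i, (src,))
    with con:  # one transaction around all the INSERT ... SELECTs
        for i in range(len(source_paths)):
            con.execute("INSERT INTO %s SELECT * FROM m%d.%s" % (table, i, table))
    for i in range(len(source_paths)):
        con.execute("DETACH m%d" % i)
    con.close()
```

Note the default limit on simultaneously attached databases is low (historically around 10), so very many source files would need to be merged in batches.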