Re: [sqlite] corrupt database recovery

2009-05-27 Thread Marcus Grimm
I might have missed something, but:
What are you compressing?
Are you compressing the database file while sqlite is running?


> The problem is happening in the field and the program deals with a corrupt
> database by renaming it, logging the error and creating a new database.
> So
> I only end up with the corrupt one.
>
> I would GUESS that the database doesn't have any free pages except those
> at
> the end since it's an auditing product so we never delete anything...but I
> could be wrong.
>
> The only reason I think I'm 'dropping a byte' is that all the
> compression/encryption code is byte based and the database gets corrupt
> (invalid pages) but I'm able to .dump it successfully.That and I try
> to
> take the blame first, since it's usually my fault. (just ask my wife!)
>
> Right now I'm running a large test on the encryption/compression code to
> see
> if I can get it to break.
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Marcus Grimm
> Sent: Wednesday, May 27, 2009 5:09 PM
> To: 'General Discussion of SQLite Database'
> Subject: Re: [sqlite] corrupt database recovery
>
> sorry, I don't know where sqlite stores any kind of data like
> index or raw table, my suspect is that, since sqlite
> usually looks for a free page, the pages are randomly
> distributed over the file or added at the end if no free
> page has been found.
>
> what I don't understand:
> what makes you thinking that you may miss some bytes of a tree
> (and thus of a page) ?
> That is sqlite internal, except you are using your own
> vfs, I guess, ...do you ?
>
> by the way: did your application crash prior or nearby you had this
> corrupted DB ?
>
>> I've reviewed the code over and over again, but am yet to find it.  But
>> it
>> is a good bit of very complicated code (blowfish and compression code).
>>
>> That's why I was asking about the format of the database.  Since I'm
>> able
>> to
>> successfully do a .dump but the integrity_check whines, I'm thinking
>> that
>> maybe if I focus on where what could cause that, it might give me a
>> clue.
>>
>> For example, if the tree is stored at the end of file maybe I'm falling
>> out
>> the loop without writing the final bytes.  But if the tree is not
>> localized
>> to one area of the file, I'll have to rethink my approach.
>>
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Marcus Grimm
>> Sent: Wednesday, May 27, 2009 2:44 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] corrupt database recovery
>>
>> so, if you think it is a coding error on your side it will
>> be a hard work to find the source. I can only image side effects
>> on an used API like sqlite3 with the classics:
>>
>> - stack frame overload in a function that calls sqlite functions
>> - using a local stack variable as a return pointer and reuse
>>   this external.
>> - memory overwriting on malloc'ed pointers or reusing of allready
>>   free'd memory pages.
>>
>> one and two might be found using a code review and identify
>> local variables or particular arrays that are written:
>> So wherever you call anything from sqlite check the stack
>> declaration and review the usage of these variables...
>> of course just don't do number two... ;)
>>
>> memory errors might be detected using special debug versions of the
>> malloc/free library, by code review, or manually by adding some
>> test variables whereever you malloc or free a pointer.
>>
>> I'm sure you allready did some of these... anyway, good luck
>>
>> are you using threads? would be another beautiful issue to trace...;)
>>
>> Marcus
>>
>>
>>
>>> It is a server class machine running Windows 2003 with 4 GB.  No, it's
>>> a
>>> local drive with 20GB free on it.
>>>
>>> I'm sure that it's a coding error on my part.  SQLite is very stable,
>>> in
>>> my
>>> opinion.  I'm just trying to get a rough idea on where I'm screwing up
>>> the
>>> database.
>>>
>>>
>>> -Original Message-
>>> From: sqlite-users-boun...@sqlite.org
>>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
>>> Sent: Wednesday, May 27, 2009 12:58 PM
>>> To: General Discussion of SQLite Database
>>> Subject: Re: [sqlite] corrupt database recovery
>>>
>>> What platform?  Any chance they are using a network drive?
>>>
>>>
>>> John
>>>
>>> Gene wrote:
 My code is outside the database layer.  So I do all my database work,
 then
 compress and encrypt it.  No errors are returned anywhere.  I'm
 guessing
 that it's going to be an uninitialized variable or byte alignment
 problems
 somewhere.

 This code is running on hundreds of machines without a problem and
 I've
 never reproduced it but every now and again I get a support ticket
 showing
>>> a
 corrupt database.  So I'm trying to figure out WHERE to look.

 -Original Message-
 

Re: [sqlite] Some index questions

2009-05-27 Thread John Machin
On 28/05/2009 12:24 AM, Dan wrote:
> 
> If a single column index is like the index found in textbooks,
> a compound index with two fields is like the phone book. Sorted first by
> surname, then by first name. The "rowid", if you like, is the phone  
> number.
> 
> So, it's easy to find the set of phone numbers for everybody with the
> surname "Jones". It's easy to find the set of phone numbers for people
> called "Barry Jones". Quite difficult to find all the people called  
> "Barry"
> though.

And even more difficult to find all the people nicknamed "Bazzer" :-)
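
In SQL terms, the analogy works out roughly like this (a sketch with
illustrative names; the last query is the "find all the Barrys" case):

  CREATE TABLE phonebook (surname TEXT, firstname TEXT, number TEXT);
  CREATE INDEX pb_name ON phonebook (surname, firstname);

  -- both of these can use the compound index:
  SELECT number FROM phonebook WHERE surname = 'Jones';
  SELECT number FROM phonebook WHERE surname = 'Jones' AND firstname = 'Barry';

  -- this one cannot (the leading column is missing), so it scans the table:
  SELECT number FROM phonebook WHERE firstname = 'Barry';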


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQL question - not sure if it's possible at all

2009-05-27 Thread Dennis Volodomanov
Igor Tandetnik wrote:
> Dennis Volodomanov  wrote:
>   
>> Let's say I've got this query:
>>
>> SELECT * FROM (SELECT * FROM Data WHERE PlotOnGraph=1 ORDER BY Date
>> ASC) LIMIT ?2 OFFSET ?1
>> 
>
> Why an extra layer? Why not just
>
> SELECT * FROM Data WHERE PlotOnGraph=1 ORDER BY Date ASC
> LIMIT ?2 OFFSET ?1
>   

No real reason. I am porting older code and kept it "safe" (keeping old 
SQL unchanged for the time being). I'll change that later.

>> and let's say, "Data" has an "ID" field (primary index).
>>
>> Is it possible to find out the *position* in the returned result set
>> of a Data with a given ID?
>> 
>
> select count(*) from Data
> where PlotOnGraph=1
> and Date < (select Date from Data where ID=?);
>   

Looks interesting - I'll try it later tonight!

Thank you for the help.

   Dennis

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQlite performance on multi process env

2009-05-27 Thread zhrahman

I am doing a simple test on sqlite where, in step one, I just write 10,000
records to a simple database having three fields. In step 2, after the write
is done, I want 100 processes reading from the database and doing simple
SQLs, no updates on the database, just simple select read operations. I am
getting horrible performance when I increase the number of processes. Each
process only opens the database connection in read-only mode. Still, I am
guessing some lock is placed on the database and that is why even the read
operations slow down drastically as the number of processes increases. Any
idea what I need to do to boost performance? For 100 processes doing reads
it takes like 10 seconds to do 1 SQL by each process. If I reduce the number
of processes from 1000 to 1 it only finishes in a second.
-- 
View this message in context: 
http://www.nabble.com/SQlite-performance-on-multi-process-env-tp23752056p23752056.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQL question - not sure if it's possible at all

2009-05-27 Thread Igor Tandetnik
Dennis Volodomanov  wrote:
> Let's say I've got this query:
>
> SELECT * FROM (SELECT * FROM Data WHERE PlotOnGraph=1 ORDER BY Date
> ASC) LIMIT ?2 OFFSET ?1

Why an extra layer? Why not just

SELECT * FROM Data WHERE PlotOnGraph=1 ORDER BY Date ASC
LIMIT ?2 OFFSET ?1

> and let's say, "Data" has an "ID" field (primary index).
>
> Is it possible to find out the *position* in the returned result set
> of a Data with a given ID?

select count(*) from Data
where PlotOnGraph=1
and Date < (select Date from Data where ID=?);

Igor Tandetnik 
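
A tiny worked sketch of that approach (the schema and values below are made
up for illustration, and it assumes Date values are distinct):

  CREATE TABLE Data (ID INTEGER PRIMARY KEY, Date INTEGER, PlotOnGraph INTEGER);
  INSERT INTO Data (ID, Date, PlotOnGraph) VALUES (1, 10, 1);
  INSERT INTO Data (ID, Date, PlotOnGraph) VALUES (2, 20, 1);
  INSERT INTO Data (ID, Date, PlotOnGraph) VALUES (9, 30, 1);
  INSERT INTO Data (ID, Date, PlotOnGraph) VALUES (7, 40, 1);
  INSERT INTO Data (ID, Date, PlotOnGraph) VALUES (3, 50, 0);

  -- zero-based position of ID = 9 among the plotted rows, ordered by Date:
  SELECT count(*) FROM Data
   WHERE PlotOnGraph = 1
     AND Date < (SELECT Date FROM Data WHERE ID = 9);
  -- returns 2, i.e. ID 9 is the third plotted row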



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] corrupt database recovery

2009-05-27 Thread Gene
That's pretty much my test harness.  Right now, I'm running a test on every
file size from 0 to 10 MB.

If that works correctly (which I expect it will), I think I'm going to have
to spend the time and effort setting up a massive test environment and
stress the mess out of the whole system.  I'm trying to avoid that since it
would take at least a week to set that up.

Thank you for the support, I need it. 

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Doug
Sent: Wednesday, May 27, 2009 5:20 PM
To: 'General Discussion of SQLite Database'
Subject: Re: [sqlite] corrupt database recovery

I'm sorry that I can't help with the SQLite part, but couldn't you write a
simple test harness that would create a file (of random size?) with random
contents, encrypt/compress to a secondary file, then decompress/decrypt to a
third file.  Compare first and third files.  If they match, delete and run
test again.  If they don't match you now have your debugging input.  I'd
think letting something like that run for a day or so would flush out
boundary/buffer errors.  It's not deterministic, but should be easy enough
to bang out and let run on a secondary machine while you continue your main
search.

Good luck either way.

Doug


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Gene Allen
> Sent: Wednesday, May 27, 2009 4:52 PM
> To: mgr...@medcom-online.de; 'General Discussion of SQLite Database'
> Subject: Re: [sqlite] corrupt database recovery
> 
> I've reviewed the code over and over again, but am yet to find it.  But
> it
> is a good bit of very complicated code (blowfish and compression code).
> 
> That's why I was asking about the format of the database.  Since I'm
> able to
> successfully do a .dump but the integrity_check whines, I'm thinking
> that
> maybe if I focus on where what could cause that, it might give me a
> clue.
> 
> For example, if the tree is stored at the end of file maybe I'm falling
> out
> the loop without writing the final bytes.  But if the tree is not
> localized
> to one area of the file, I'll have to rethink my approach.
> 
> 
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Marcus Grimm
> Sent: Wednesday, May 27, 2009 2:44 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] corrupt database recovery
> 
> so, if you think it is a coding error on your side it will
> be a hard work to find the source. I can only image side effects
> on an used API like sqlite3 with the classics:
> 
> - stack frame overload in a function that calls sqlite functions
> - using a local stack variable as a return pointer and reuse
>   this external.
> - memory overwriting on malloc'ed pointers or reusing of allready
>   free'd memory pages.
> 
> one and two might be found using a code review and identify
> local variables or particular arrays that are written:
> So wherever you call anything from sqlite check the stack
> declaration and review the usage of these variables...
> of course just don't do number two... ;)
> 
> memory errors might be detected using special debug versions of the
> malloc/free library, by code review, or manually by adding some
> test variables whereever you malloc or free a pointer.
> 
> I'm sure you allready did some of these... anyway, good luck
> 
> are you using threads? would be another beautiful issue to trace...;)
> 
> Marcus
> 
> 
> 
> > It is a server class machine running Windows 2003 with 4 GB.  No,
> it's a
> > local drive with 20GB free on it.
> >
> > I'm sure that it's a coding error on my part.  SQLite is very stable,
> in
> > my
> > opinion.  I'm just trying to get a rough idea on where I'm screwing
> up the
> > database.
> >
> >
> > -Original Message-
> > From: sqlite-users-boun...@sqlite.org
> > [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
> > Sent: Wednesday, May 27, 2009 12:58 PM
> > To: General Discussion of SQLite Database
> > Subject: Re: [sqlite] corrupt database recovery
> >
> > What platform?  Any chance they are using a network drive?
> >
> >
> > John
> >
> > Gene wrote:
> >> My code is outside the database layer.  So I do all my database
> work,
> >> then
> >> compress and encrypt it.  No errors are returned anywhere.  I'm
> guessing
> >> that it's going to be an uninitialized variable or byte alignment
> >> problems
> >> somewhere.
> >>
> >> This code is running on hundreds of machines without a problem and
> I've
> >> never reproduced it but every now and again I get a support ticket
> >> showing
> > a
> >> corrupt database.  So I'm trying to figure out WHERE to look.
> >>
> >> -Original Message-
> >> From: sqlite-users-boun...@sqlite.org
> >> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
> >> Sent: Wednesday, May 27, 2009 10:59 AM
> >> To: General Discussion of 

Re: [sqlite] corrupt database recovery

2009-05-27 Thread Gene
The problem is happening in the field and the program deals with a corrupt
database by renaming it, logging the error and creating a new database.  So
I only end up with the corrupt one.

I would GUESS that the database doesn't have any free pages except those at
the end, since it's an auditing product and we never delete anything... but I
could be wrong.

The only reason I think I'm 'dropping a byte' is that all the
compression/encryption code is byte-based and the database gets corrupt
(invalid pages), yet I'm able to .dump it successfully.  That, and I try to
take the blame first, since it's usually my fault (just ask my wife!).

Right now I'm running a large test on the encryption/compression code to see
if I can get it to break.  

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Marcus Grimm
Sent: Wednesday, May 27, 2009 5:09 PM
To: 'General Discussion of SQLite Database'
Subject: Re: [sqlite] corrupt database recovery

sorry, I don't know where sqlite stores any kind of data like
index or raw table, my suspect is that, since sqlite
usually looks for a free page, the pages are randomly
distributed over the file or added at the end if no free
page has been found.

what I don't understand:
what makes you thinking that you may miss some bytes of a tree
(and thus of a page) ?
That is sqlite internal, except you are using your own
vfs, I guess, ...do you ?

by the way: did your application crash prior or nearby you had this
corrupted DB ?

> I've reviewed the code over and over again, but am yet to find it.  But it
> is a good bit of very complicated code (blowfish and compression code).
>
> That's why I was asking about the format of the database.  Since I'm able
> to
> successfully do a .dump but the integrity_check whines, I'm thinking that
> maybe if I focus on where what could cause that, it might give me a clue.
>
> For example, if the tree is stored at the end of file maybe I'm falling
> out
> the loop without writing the final bytes.  But if the tree is not
> localized
> to one area of the file, I'll have to rethink my approach.
>
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Marcus Grimm
> Sent: Wednesday, May 27, 2009 2:44 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] corrupt database recovery
>
> so, if you think it is a coding error on your side it will
> be a hard work to find the source. I can only image side effects
> on an used API like sqlite3 with the classics:
>
> - stack frame overload in a function that calls sqlite functions
> - using a local stack variable as a return pointer and reuse
>   this external.
> - memory overwriting on malloc'ed pointers or reusing of allready
>   free'd memory pages.
>
> one and two might be found using a code review and identify
> local variables or particular arrays that are written:
> So wherever you call anything from sqlite check the stack
> declaration and review the usage of these variables...
> of course just don't do number two... ;)
>
> memory errors might be detected using special debug versions of the
> malloc/free library, by code review, or manually by adding some
> test variables whereever you malloc or free a pointer.
>
> I'm sure you allready did some of these... anyway, good luck
>
> are you using threads? would be another beautiful issue to trace...;)
>
> Marcus
>
>
>
>> It is a server class machine running Windows 2003 with 4 GB.  No, it's a
>> local drive with 20GB free on it.
>>
>> I'm sure that it's a coding error on my part.  SQLite is very stable, in
>> my
>> opinion.  I'm just trying to get a rough idea on where I'm screwing up
>> the
>> database.
>>
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
>> Sent: Wednesday, May 27, 2009 12:58 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] corrupt database recovery
>>
>> What platform?  Any chance they are using a network drive?
>>
>>
>> John
>>
>> Gene wrote:
>>> My code is outside the database layer.  So I do all my database work,
>>> then
>>> compress and encrypt it.  No errors are returned anywhere.  I'm
>>> guessing
>>> that it's going to be an uninitialized variable or byte alignment
>>> problems
>>> somewhere.
>>>
>>> This code is running on hundreds of machines without a problem and I've
>>> never reproduced it but every now and again I get a support ticket
>>> showing
>> a
>>> corrupt database.  So I'm trying to figure out WHERE to look.
>>>
>>> -Original Message-
>>> From: sqlite-users-boun...@sqlite.org
>>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
>>> Sent: Wednesday, May 27, 2009 10:59 AM
>>> To: General Discussion of SQLite Database
>>> Subject: Re: [sqlite] corrupt database recovery
>>>
>>> Gene Allen wrote:
>>>
 Ok...it's happened again and I've 

Re: [sqlite] corrupt database recovery

2009-05-27 Thread Doug
I'm sorry that I can't help with the SQLite part, but couldn't you write a
simple test harness that would create a file (of random size?) with random
contents, encrypt/compress to a secondary file, then decompress/decrypt to a
third file.  Compare first and third files.  If they match, delete and run
test again.  If they don't match you now have your debugging input.  I'd
think letting something like that run for a day or so would flush out
boundary/buffer errors.  It's not deterministic, but should be easy enough
to bang out and let run on a secondary machine while you continue your main
search.

Good luck either way.

Doug


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Gene Allen
> Sent: Wednesday, May 27, 2009 4:52 PM
> To: mgr...@medcom-online.de; 'General Discussion of SQLite Database'
> Subject: Re: [sqlite] corrupt database recovery
> 
> I've reviewed the code over and over again, but am yet to find it.  But
> it
> is a good bit of very complicated code (blowfish and compression code).
> 
> That's why I was asking about the format of the database.  Since I'm
> able to
> successfully do a .dump but the integrity_check whines, I'm thinking
> that
> maybe if I focus on where what could cause that, it might give me a
> clue.
> 
> For example, if the tree is stored at the end of file maybe I'm falling
> out
> the loop without writing the final bytes.  But if the tree is not
> localized
> to one area of the file, I'll have to rethink my approach.
> 
> 
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Marcus Grimm
> Sent: Wednesday, May 27, 2009 2:44 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] corrupt database recovery
> 
> so, if you think it is a coding error on your side it will
> be a hard work to find the source. I can only image side effects
> on an used API like sqlite3 with the classics:
> 
> - stack frame overload in a function that calls sqlite functions
> - using a local stack variable as a return pointer and reuse
>   this external.
> - memory overwriting on malloc'ed pointers or reusing of allready
>   free'd memory pages.
> 
> one and two might be found using a code review and identify
> local variables or particular arrays that are written:
> So wherever you call anything from sqlite check the stack
> declaration and review the usage of these variables...
> of course just don't do number two... ;)
> 
> memory errors might be detected using special debug versions of the
> malloc/free library, by code review, or manually by adding some
> test variables whereever you malloc or free a pointer.
> 
> I'm sure you allready did some of these... anyway, good luck
> 
> are you using threads? would be another beautiful issue to trace...;)
> 
> Marcus
> 
> 
> 
> > It is a server class machine running Windows 2003 with 4 GB.  No,
> it's a
> > local drive with 20GB free on it.
> >
> > I'm sure that it's a coding error on my part.  SQLite is very stable,
> in
> > my
> > opinion.  I'm just trying to get a rough idea on where I'm screwing
> up the
> > database.
> >
> >
> > -Original Message-
> > From: sqlite-users-boun...@sqlite.org
> > [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
> > Sent: Wednesday, May 27, 2009 12:58 PM
> > To: General Discussion of SQLite Database
> > Subject: Re: [sqlite] corrupt database recovery
> >
> > What platform?  Any chance they are using a network drive?
> >
> >
> > John
> >
> > Gene wrote:
> >> My code is outside the database layer.  So I do all my database
> work,
> >> then
> >> compress and encrypt it.  No errors are returned anywhere.  I'm
> guessing
> >> that it's going to be an uninitialized variable or byte alignment
> >> problems
> >> somewhere.
> >>
> >> This code is running on hundreds of machines without a problem and
> I've
> >> never reproduced it but every now and again I get a support ticket
> >> showing
> > a
> >> corrupt database.  So I'm trying to figure out WHERE to look.
> >>
> >> -Original Message-
> >> From: sqlite-users-boun...@sqlite.org
> >> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
> >> Sent: Wednesday, May 27, 2009 10:59 AM
> >> To: General Discussion of SQLite Database
> >> Subject: Re: [sqlite] corrupt database recovery
> >>
> >> Gene Allen wrote:
> >>
> >>> Ok...it's happened again and I've decided that I need to track this
> >>> down
> >>> once and for all!
> >>>
> >>> Here is what I'm seeing: I get errors when I do a integrity_check
> (see
> >>> below), but I can .dump it to a text file and then .read it into
> >>> another
> >>> database ok.
> >>>
> >>> It seems to me that I'm screwing up an index or something.  Are
> indexes
> >>> stored at the end of the database file?  All I can think of is that
> my
> >>> compression/encryption routines are messing something up and I'm
> trying
> > to
> >>> figure out 

Re: [sqlite] corrupt database recovery

2009-05-27 Thread Marcus Grimm
Sorry, I don't know exactly where sqlite stores any particular kind of data,
such as an index or a raw table. My suspicion is that, since sqlite
usually looks for a free page first, the pages end up randomly
distributed over the file, or are added at the end if no free
page has been found.
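
If you still have one of the corrupt files, two read-only pragmas at least
show how many free pages the file claims to have (a sketch; it assumes the
database header is still readable):

  PRAGMA page_count;      -- total number of pages in the database file
  PRAGMA freelist_count;  -- number of pages currently on the free list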

What I don't understand:
what makes you think that you may be missing some bytes of a tree
(and thus of a page)?
That is sqlite-internal, unless you are using your own
VFS, I guess... do you?

By the way: did your application crash shortly before you noticed this
corrupted DB?

> I've reviewed the code over and over again, but am yet to find it.  But it
> is a good bit of very complicated code (blowfish and compression code).
>
> That's why I was asking about the format of the database.  Since I'm able
> to
> successfully do a .dump but the integrity_check whines, I'm thinking that
> maybe if I focus on where what could cause that, it might give me a clue.
>
> For example, if the tree is stored at the end of file maybe I'm falling
> out
> the loop without writing the final bytes.  But if the tree is not
> localized
> to one area of the file, I'll have to rethink my approach.
>
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Marcus Grimm
> Sent: Wednesday, May 27, 2009 2:44 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] corrupt database recovery
>
> so, if you think it is a coding error on your side it will
> be a hard work to find the source. I can only image side effects
> on an used API like sqlite3 with the classics:
>
> - stack frame overload in a function that calls sqlite functions
> - using a local stack variable as a return pointer and reuse
>   this external.
> - memory overwriting on malloc'ed pointers or reusing of allready
>   free'd memory pages.
>
> one and two might be found using a code review and identify
> local variables or particular arrays that are written:
> So wherever you call anything from sqlite check the stack
> declaration and review the usage of these variables...
> of course just don't do number two... ;)
>
> memory errors might be detected using special debug versions of the
> malloc/free library, by code review, or manually by adding some
> test variables whereever you malloc or free a pointer.
>
> I'm sure you allready did some of these... anyway, good luck
>
> are you using threads? would be another beautiful issue to trace...;)
>
> Marcus
>
>
>
>> It is a server class machine running Windows 2003 with 4 GB.  No, it's a
>> local drive with 20GB free on it.
>>
>> I'm sure that it's a coding error on my part.  SQLite is very stable, in
>> my
>> opinion.  I'm just trying to get a rough idea on where I'm screwing up
>> the
>> database.
>>
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
>> Sent: Wednesday, May 27, 2009 12:58 PM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] corrupt database recovery
>>
>> What platform?  Any chance they are using a network drive?
>>
>>
>> John
>>
>> Gene wrote:
>>> My code is outside the database layer.  So I do all my database work,
>>> then
>>> compress and encrypt it.  No errors are returned anywhere.  I'm
>>> guessing
>>> that it's going to be an uninitialized variable or byte alignment
>>> problems
>>> somewhere.
>>>
>>> This code is running on hundreds of machines without a problem and I've
>>> never reproduced it but every now and again I get a support ticket
>>> showing
>> a
>>> corrupt database.  So I'm trying to figure out WHERE to look.
>>>
>>> -Original Message-
>>> From: sqlite-users-boun...@sqlite.org
>>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
>>> Sent: Wednesday, May 27, 2009 10:59 AM
>>> To: General Discussion of SQLite Database
>>> Subject: Re: [sqlite] corrupt database recovery
>>>
>>> Gene Allen wrote:
>>>
 Ok...it's happened again and I've decided that I need to track this
 down
 once and for all!

 Here is what I'm seeing: I get errors when I do a integrity_check (see
 below), but I can .dump it to a text file and then .read it into
 another
 database ok.

 It seems to me that I'm screwing up an index or something.  Are
 indexes
 stored at the end of the database file?  All I can think of is that my
 compression/encryption routines are messing something up and I'm
 trying
>> to
 figure out 'where' to look.

 I guess the real question is, what would I have to do to make an
 integrity_check fail, but still let a dump work correctly?

 Many thanks for any advice on tracking down this ugliness.


>>>
>>> SNIP
>>>
>>> Personally, I'd refactor the code to allow me to verify the operation
>>> of
>>> the compression/encryption routines independently of the database
>>> operation.  How are you injecting the compression/encryption into the
>>> database 

Re: [sqlite] corrupt database recovery

2009-05-27 Thread Gene Allen
I've reviewed the code over and over again, but have yet to find the problem.
It is a good bit of very complicated code (Blowfish and compression code).

That's why I was asking about the format of the database.  Since I'm able to
successfully do a .dump but the integrity_check whines, I'm thinking that
maybe if I focus on what could cause that, it might give me a clue.

For example, if the tree is stored at the end of the file, maybe I'm falling
out of the loop without writing the final bytes.  But if the tree is not
localized to one area of the file, I'll have to rethink my approach.


-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Marcus Grimm
Sent: Wednesday, May 27, 2009 2:44 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] corrupt database recovery

so, if you think it is a coding error on your side it will
be a hard work to find the source. I can only image side effects
on an used API like sqlite3 with the classics:

- stack frame overload in a function that calls sqlite functions
- using a local stack variable as a return pointer and reuse
  this external.
- memory overwriting on malloc'ed pointers or reusing of allready
  free'd memory pages.

one and two might be found using a code review and identify
local variables or particular arrays that are written:
So wherever you call anything from sqlite check the stack
declaration and review the usage of these variables...
of course just don't do number two... ;)

memory errors might be detected using special debug versions of the
malloc/free library, by code review, or manually by adding some
test variables whereever you malloc or free a pointer.

I'm sure you allready did some of these... anyway, good luck

are you using threads? would be another beautiful issue to trace...;)

Marcus



> It is a server class machine running Windows 2003 with 4 GB.  No, it's a
> local drive with 20GB free on it.
>
> I'm sure that it's a coding error on my part.  SQLite is very stable, in
> my
> opinion.  I'm just trying to get a rough idea on where I'm screwing up the
> database.
>
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
> Sent: Wednesday, May 27, 2009 12:58 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] corrupt database recovery
>
> What platform?  Any chance they are using a network drive?
>
>
> John
>
> Gene wrote:
>> My code is outside the database layer.  So I do all my database work,
>> then
>> compress and encrypt it.  No errors are returned anywhere.  I'm guessing
>> that it's going to be an uninitialized variable or byte alignment
>> problems
>> somewhere.
>>
>> This code is running on hundreds of machines without a problem and I've
>> never reproduced it but every now and again I get a support ticket
>> showing
> a
>> corrupt database.  So I'm trying to figure out WHERE to look.
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
>> Sent: Wednesday, May 27, 2009 10:59 AM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] corrupt database recovery
>>
>> Gene Allen wrote:
>>
>>> Ok...it's happened again and I've decided that I need to track this
>>> down
>>> once and for all!
>>>
>>> Here is what I'm seeing: I get errors when I do a integrity_check (see
>>> below), but I can .dump it to a text file and then .read it into
>>> another
>>> database ok.
>>>
>>> It seems to me that I'm screwing up an index or something.  Are indexes
>>> stored at the end of the database file?  All I can think of is that my
>>> compression/encryption routines are messing something up and I'm trying
> to
>>> figure out 'where' to look.
>>>
>>> I guess the real question is, what would I have to do to make an
>>> integrity_check fail, but still let a dump work correctly?
>>>
>>> Many thanks for any advice on tracking down this ugliness.
>>>
>>>
>>
>> SNIP
>>
>> Personally, I'd refactor the code to allow me to verify the operation of
>> the compression/encryption routines independently of the database
>> operation.  How are you injecting the compression/encryption into the
>> database layer?
>>
>>
>> John
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>> 
>>
>>
>>
>>
>
> ___
> sqlite-users mailing 

[sqlite] SQL question - not sure if it's possible at all

2009-05-27 Thread Dennis Volodomanov
Hello all,

I'm not sure if this can be done in SQL, but if it can, I'd appreciate 
your help.

Let's say I've got this query:

SELECT * FROM (SELECT * FROM Data WHERE PlotOnGraph=1 ORDER BY Date ASC) 
LIMIT ?2 OFFSET ?1

and let's say, "Data" has an "ID" field (primary index).

Is it possible to find out the *position* in the returned result set of 
a Data with a given ID? So, let's say the returned result was (just IDs):

1   2   3   4   5   6   9   7   11   14   12   13   10

Can I find out that ID=9 is number 7 (or 6 if zero-based, doesn't 
matter) using only SQL?

Thanks in advance!

  Dennis

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] corrupt database recovery

2009-05-27 Thread Marcus Grimm
So, if you think it is a coding error on your side, it will
be hard work to find the source. I can only imagine side effects
on a used API like sqlite3, with the classics:

- stack frame overflow (e.g. overrunning a local buffer) in a function
  that calls sqlite functions
- using a local stack variable as a return pointer and reusing it
  externally
- overwriting memory past malloc'ed pointers, or reusing already
  free'd memory pages

One and two might be found by a code review that identifies
local variables or particular arrays that are written to:
wherever you call anything from sqlite, check the stack
declarations and review the usage of these variables...
and of course just don't do number two... ;)

Memory errors might be detected using special debug versions of the
malloc/free library, by code review, or manually by adding some
test variables wherever you malloc or free a pointer.

I'm sure you already did some of these... anyway, good luck.

Are you using threads? That would be another beautiful issue to trace... ;)

Marcus



> It is a server class machine running Windows 2003 with 4 GB.  No, it's a
> local drive with 20GB free on it.
>
> I'm sure that it's a coding error on my part.  SQLite is very stable, in
> my
> opinion.  I'm just trying to get a rough idea on where I'm screwing up the
> database.
>
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
> Sent: Wednesday, May 27, 2009 12:58 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] corrupt database recovery
>
> What platform?  Any chance they are using a network drive?
>
>
> John
>
> Gene wrote:
>> My code is outside the database layer.  So I do all my database work,
>> then
>> compress and encrypt it.  No errors are returned anywhere.  I'm guessing
>> that it's going to be an uninitialized variable or byte alignment
>> problems
>> somewhere.
>>
>> This code is running on hundreds of machines without a problem and I've
>> never reproduced it but every now and again I get a support ticket
>> showing
> a
>> corrupt database.  So I'm trying to figure out WHERE to look.
>>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
>> Sent: Wednesday, May 27, 2009 10:59 AM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] corrupt database recovery
>>
>> Gene Allen wrote:
>>
>>> Ok...it's happened again and I've decided that I need to track this
>>> down
>>> once and for all!
>>>
>>> Here is what I'm seeing: I get errors when I do a integrity_check (see
>>> below), but I can .dump it to a text file and then .read it into
>>> another
>>> database ok.
>>>
>>> It seems to me that I'm screwing up an index or something.  Are indexes
>>> stored at the end of the database file?  All I can think of is that my
>>> compression/encryption routines are messing something up and I'm trying
> to
>>> figure out 'where' to look.
>>>
>>> I guess the real question is, what would I have to do to make an
>>> integrity_check fail, but still let a dump work correctly?
>>>
>>> Many thanks for any advice on tracking down this ugliness.
>>>
>>>
>>
>> SNIP
>>
>> Personally, I'd refactor the code to allow me to verify the operation of
>> the compression/encryption routines independently of the database
>> operation.  How are you injecting the compression/encryption into the
>> database layer?
>>
>>
>> John
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>> 
>>
>>
>>
>>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Feature request: reset file mtime on rollback

2009-05-27 Thread Jim Wilcoxson
It would be handy for file synchronization if SQLite stored the
database file's mtime in the journal and reset it if a rollback
occurs.  It's difficult to do this in an application:

1. If a DB is opened, a transaction started, some stuff happens, and a
rollback is executed, the file mtime only gets changed if a cache page
had to be written.  Only SQLite knows whether cache pages were
written.

2. Suppose the DB has mtime T.  An app runs and starts modifying the
DB (and pages are written from cache), but it crashes before the
commit.  When the app starts again, if the app looks at mtime before
opening the DB, the DB appears to be modified.  After the open, the
rollback will occur and the DB will be back in its prior state (the one it
had when mtime was T), but the mtime is still updated.

Not a huge deal, just would be nice.

Jim
-- 
Software first.  Software lasts!
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] corrupt database recovery

2009-05-27 Thread Gene Allen
It is a server-class machine running Windows 2003 with 4 GB.  No, it's a
local drive with 20 GB free on it.

I'm sure that it's a coding error on my part.  SQLite is very stable, in my
opinion.  I'm just trying to get a rough idea of where I'm screwing up the
database.


-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
Sent: Wednesday, May 27, 2009 12:58 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] corrupt database recovery

What platform?  Any chance they are using a network drive?


John

Gene wrote:
> My code is outside the database layer.  So I do all my database work, then
> compress and encrypt it.  No errors are returned anywhere.  I'm guessing
> that it's going to be an uninitialized variable or byte alignment problems
> somewhere.
>
> This code is running on hundreds of machines without a problem and I've
> never reproduced it but every now and again I get a support ticket showing
a
> corrupt database.  So I'm trying to figure out WHERE to look.
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
> Sent: Wednesday, May 27, 2009 10:59 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] corrupt database recovery
>
> Gene Allen wrote:
>   
>> Ok...it's happened again and I've decided that I need to track this down
>> once and for all!
>>
>> Here is what I'm seeing: I get errors when I do a integrity_check (see
>> below), but I can .dump it to a text file and then .read it into another
>> database ok.
>>
>> It seems to me that I'm screwing up an index or something.  Are indexes
>> stored at the end of the database file?  All I can think of is that my
>> compression/encryption routines are messing something up and I'm trying
to
>> figure out 'where' to look.  
>>
>> I guess the real question is, what would I have to do to make an
>> integrity_check fail, but still let a dump work correctly?
>>
>> Many thanks for any advice on tracking down this ugliness.
>>   
>> 
>
> SNIP
>
> Personally, I'd refactor the code to allow me to verify the operation of 
> the compression/encryption routines independently of the database 
> operation.  How are you injecting the compression/encryption into the 
> database layer?
>
>
> John
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   
> 
>
>
>
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] corrupt database recovery

2009-05-27 Thread John Elrick
What platform?  Any chance they are using a network drive?


John

Gene wrote:
> My code is outside the database layer.  So I do all my database work, then
> compress and encrypt it.  No errors are returned anywhere.  I'm guessing
> that it's going to be an uninitialized variable or byte alignment problems
> somewhere.
>
> This code is running on hundreds of machines without a problem and I've
> never reproduced it but every now and again I get a support ticket showing a
> corrupt database.  So I'm trying to figure out WHERE to look.
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
> Sent: Wednesday, May 27, 2009 10:59 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] corrupt database recovery
>
> Gene Allen wrote:
>   
>> Ok...it's happened again and I've decided that I need to track this down
>> once and for all!
>>
>> Here is what I'm seeing: I get errors when I do a integrity_check (see
>> below), but I can .dump it to a text file and then .read it into another
>> database ok.
>>
>> It seems to me that I'm screwing up an index or something.  Are indexes
>> stored at the end of the database file?  All I can think of is that my
>> compression/encryption routines are messing something up and I'm trying to
>> figure out 'where' to look.  
>>
>> I guess the real question is, what would I have to do to make an
>> integrity_check fail, but still let a dump work correctly?
>>
>> Many thanks for any advice on tracking down this ugliness.
>>   
>> 
>
> SNIP
>
> Personally, I'd refactor the code to allow me to verify the operation of 
> the compression/encryption routines independently of the database 
> operation.  How are you injecting the compression/encryption into the 
> database layer?
>
>
> John
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>   
> 
>
>
>
>   

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Some index questions

2009-05-27 Thread Jay A. Kreibich
On Wed, May 27, 2009 at 12:34:00PM -0500, Jay A. Kreibich scratched on the wall:
> On Wed, May 27, 2009 at 10:23:04AM -0400, D. Richard Hipp scratched on the 
> wall:
> > Indices can be used for sorting as well as for lookup.  And sometime  
> > content can be pulled from indices to avoid an additional search to  
> > find the original table entry.  For these reasons, compound indices  
> > can be useful.


>   One additional point.  This is getting a bit into the nitty gritty
>   details, but SQLite is capable of servicing data requests directly
>   from indexes, if all the data is available.

  Duh.  I guess Dr. Hipp explained that pretty clearly already.   I got
  so focused on his sorting example (which is something I'm trying to
  understand more clearly) I kind of missed the bit where he already
  explained all this.

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Some index questions

2009-05-27 Thread Jay A. Kreibich
On Wed, May 27, 2009 at 10:23:04AM -0400, D. Richard Hipp scratched on the wall:
> 
> On May 27, 2009, at 11:13 AM, Marcus Grimm wrote:
> > I guess such compound indexes have only a benefit for specific
> > queries, like the above that use all the time exactly these two
> > columns, otherwise two seperate indexes will do the same job but maybe
> > a little slower due to additional overhead to scan two index tables.
> >
> 
> Indices can be used for sorting as well as for lookup.  And sometime  
> content can be pulled from indices to avoid an additional search to  
> find the original table entry.  For these reasons, compound indices  
> can be useful.


  Don't forget that indexes are also used to enforce UNIQUE
  constraints.  You have to have compound indexes to support
  compound keys.  Although, if the index exists solely for the purpose
  of enforcing a UNIQUE constraint, the column order has less meaning.



  One additional point.  This is getting a bit into the nitty gritty
  details, but SQLite is capable of servicing data requests directly
  from indexes, if all the data is available.  For example, consider a
  Person table and a Group table.  If you want to build a many-to-many
  relationship between these tables, you typically create a bridge table
  that might be called Person_Group_Membership and consists of the columns
  (ROWID, person_id, group_id).  You need a UNIQUE index over
  (person_id, group_id) to make sure memberships are properly unique.
  This index can also be used for person_id directed lookups (e.g.
  WHERE person_id = 34).  Many people would also create an index on
  (group_id) for group_id directed lookups.  This would allow a
  group_id to be looked up in the index, getting a ROWID.  The ROWID
  could then be looked up in the main table, getting the required
  person_id.

  If, however, the second index is made across both fields as
  (group_id, person_id) and SQLite is aware that it is touching the
  Person_Group_Membership table just to translate a group_id into a
  person_id (or set thereof), it can service such requests directly
  out of the index data so that a second lookup in the main table is
  not required.  This effectively doubles the speed of the lookup at the
  cost of a slightly larger index.

  This is also another example that the indexes {person_id, group_id}
  and {group_id, person_id} are very different and are used for
  different things-- and that there are legit situations to have both.
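
  A sketch of that layout (column types and index names here are assumed,
  not from the original post):

    CREATE TABLE Person_Group_Membership (
      person_id INTEGER NOT NULL,
      group_id  INTEGER NOT NULL
    );

    -- enforces unique memberships and serves person_id-directed lookups:
    CREATE UNIQUE INDEX pgm_person_group
      ON Person_Group_Membership (person_id, group_id);

    -- serves group_id-directed lookups, and lets SQLite answer the query
    -- below from the index alone, without visiting the table:
    CREATE INDEX pgm_group_person
      ON Person_Group_Membership (group_id, person_id);

    SELECT person_id FROM Person_Group_Membership WHERE group_id = 34;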

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] corrupt database recovery

2009-05-27 Thread Jim Wilcoxson
What platform is this?  I just posted a note today that my Linux box
running 2.6-25 and ext3 isn't doing fsyncs like it should, so I would
be susceptible to DB corruption if my machine crashed during DB I/O.
I posted a C program you could run on the specific machine with a
corrupt DB to see if it's really doing synchronous I/O.

The other thing you might want to check is hardware.  Running memtest
on the machine overnight could show something, unless you are on
server-class hardware with ECC memory.

Jim


On 5/27/09, Gene  wrote:
> My code is outside the database layer.  So I do all my database work, then
> compress and encrypt it.  No errors are returned anywhere.  I'm guessing
> that it's going to be an uninitialized variable or byte alignment problems
> somewhere.
>
> This code is running on hundreds of machines without a problem and I've
> never reproduced it but every now and again I get a support ticket showing a
> corrupt database.  So I'm trying to figure out WHERE to look.
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
> Sent: Wednesday, May 27, 2009 10:59 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] corrupt database recovery
>
> Gene Allen wrote:
>> Ok...it's happened again and I've decided that I need to track this down
>> once and for all!
>>
>> Here is what I'm seeing: I get errors when I do a integrity_check (see
>> below), but I can .dump it to a text file and then .read it into another
>> database ok.
>>
>> It seems to me that I'm screwing up an index or something.  Are indexes
>> stored at the end of the database file?  All I can think of is that my
>> compression/encryption routines are messing something up and I'm trying to
>> figure out 'where' to look.
>>
>> I guess the real question is, what would I have to do to make an
>> integrity_check fail, but still let a dump work correctly?
>>
>> Many thanks for any advice on tracking down this ugliness.
>>
>
> SNIP
>
> Personally, I'd refactor the code to allow me to verify the operation of
> the compression/encryption routines independently of the database
> operation.  How are you injecting the compression/encryption into the
> database layer?
>
>
> John
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


-- 
Software first.  Software lasts!
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] corrupt database recovery

2009-05-27 Thread Gene
My code is outside the database layer.  So I do all my database work, then
compress and encrypt it.  No errors are returned anywhere.  I'm guessing
that it's going to be an uninitialized variable or byte alignment problems
somewhere.

This code is running on hundreds of machines without a problem and I've
never reproduced it but every now and again I get a support ticket showing a
corrupt database.  So I'm trying to figure out WHERE to look.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Elrick
Sent: Wednesday, May 27, 2009 10:59 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] corrupt database recovery

Gene Allen wrote:
> Ok...it's happened again and I've decided that I need to track this down
> once and for all!
>
> Here is what I'm seeing: I get errors when I do a integrity_check (see
> below), but I can .dump it to a text file and then .read it into another
> database ok.
>
> It seems to me that I'm screwing up an index or something.  Are indexes
> stored at the end of the database file?  All I can think of is that my
> compression/encryption routines are messing something up and I'm trying to
> figure out 'where' to look.  
>
> I guess the real question is, what would I have to do to make an
> integrity_check fail, but still let a dump work correctly?
>
> Many thanks for any advice on tracking down this ugliness.
>   

SNIP

Personally, I'd refactor the code to allow me to verify the operation of 
the compression/encryption routines independently of the database 
operation.  How are you injecting the compression/encryption into the 
database layer?


John
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Foreign Keys and Indexes

2009-05-27 Thread Igor Tandetnik
s.breith...@staubli.com wrote:
> So when I have a query over serveral tables normally the primay key
> is set as index when the table is joined.

Only if you actually join on primary key column(s). Doing so is common, 
but not mandatory.

> If I need to add a where
> clause
> to a field of the joined table no single index on that field can be
> used.

SQLite may choose to join in a "reverse" order. Consider:

select * from A join B on A.b_id = B.id
where B.x = 42;

One way to satisfy this query would be to scan all rows in A, for each 
such row look up matching rows in B using the primary key (assuming B.id 
is in fact its primary key), and look for those where B.x=42 (an index 
on B(id, x) could be useful here). Alternatively, one could scan all 
rows in B looking for those with B.x=42 (possibly using an index on 
B(x) ), and for each such row look up a matching row in A (possibly 
using an index on A(b_id) ). The SQLite engine decides which plan is better,
depending on which indexes exist.
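
To see which plan the engine actually picks, you can ask it directly (a
sketch; the schema below just mirrors the A/B example above):

  CREATE TABLE B (id INTEGER PRIMARY KEY, x INTEGER);
  CREATE TABLE A (b_id INTEGER REFERENCES B(id));
  CREATE INDEX a_bid ON A (b_id);
  CREATE INDEX b_x   ON B (x);

  EXPLAIN QUERY PLAN
  SELECT * FROM A JOIN B ON A.b_id = B.id WHERE B.x = 42;
  -- the output lists, for each table, the index (if any) chosen for it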

Igor Tandetnik 



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Table-Alias Confusion or Bug?

2009-05-27 Thread Ralf Junker
At 17:33 27.05.2009, D. Richard Hipp wrote:

>On May 27, 2009, at 11:26 AM, Ralf Junker wrote:
>
>> Hello!
>>
>> Given this SQL:
>>
>>  drop table if exists t;
>>
>>  create table t (c);
>>
>>  select *
>>from t as ta
>>where t.c = ta.c;
>>
>> SQLite3 chokes with the following error:
>>
>>  SQL error near line 5: no such column: t.c
>>
>> Question:
>>
>> Is this the expected behaviour? I know that the where clause does  
>> not make sense, but shouldn't SQLite see both "t.c" and "ta.c" as  
>> the same columns instead of reporting the error?
>
>This is the behavior that I expected because I deliberately coded it  
>that way.  Once you us an AS clause, the table is thereafter known  
>only by its AS name, not its original name.

Thanks for the clarification!

Ralf 
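
For completeness, the statement from the original post parses once the
WHERE clause refers only to the alias, e.g.:

  select *
    from t as ta
   where ta.c = ta.c;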

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] corrupt database recovery

2009-05-27 Thread John Elrick
Gene Allen wrote:
> Ok...it's happened again and I've decided that I need to track this down
> once and for all!
>
> Here is what I'm seeing: I get errors when I do a integrity_check (see
> below), but I can .dump it to a text file and then .read it into another
> database ok.
>
> It seems to me that I'm screwing up an index or something.  Are indexes
> stored at the end of the database file?  All I can think of is that my
> compression/encryption routines are messing something up and I'm trying to
> figure out 'where' to look.  
>
> I guess the real question is, what would I have to do to make an
> integrity_check fail, but still let a dump work correctly?
>
> Many thanks for any advice on tracking down this ugliness.
>   

SNIP

Personally, I'd refactor the code to allow me to verify the operation of 
the compression/encryption routines independently of the database 
operation.  How are you injecting the compression/encryption into the 
database layer?


John
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Foreign Keys and Indexes

2009-05-27 Thread s . breitholz
> s.breith...@staubli.com wrote:
> > Now I noticed that the speed of selects are much faster with fields
> > that are indexed
> 
> Well, that's pretty much the point of having an index in the first 
> place. Why do you find this fact surprising?

Because the indexed field that improved my program was a unix timestamp 
already in ascending order.
(There was another topic on the mailing list about a where clause on 
timestamps searching t > a AND t < b.)

> 
> > I've read that only one index is used per where (I think join is
> > nothing
> > else here)
> 
> No - one index per table. SQLite could use as many indexes as tables 
> mentioned in the FROM clause.
> 
> > The questions:
> > Does it mean only one index can be used per join or one index for each
> > table of the join?
> 
> One for each table.

Ahh OK! So in the output I can see which index is used for each table in 
the whole query. 
That clears up nearly all the questions I posted. Thanks

> 
> > Does the output mean, that the query runs over the fast index and
> > there is
> > nothing to improve (as long
> > as I use the database always in that way) or does it just explain
> > that one
> > index is used?
> 
> I don't understand this question.

Nearly explained in your answer above.
So when I have a query over several tables, normally the primary key
is used as the index when the table is joined. If I need to add a where clause
to a field of the joined table, no single index on that field can be used. 
So I have to think about the best multi-column index including the ID... 

> 
> > Or in other words, does this print mean that I use the database in
> > perfect
> > way (speaking only of indexes):
> > 0|0|TABLE ActiveError AS AE USING PRIMARY KEY ORDER BY
> > 1|1|TABLE ErrorDefinition AS ED USING PRIMARY KEY
> > 2|2|TABLE ErrorGroupDefinition AS EG1 USING PRIMARY KEY
> > 3|3|TABLE ErrorGroupDefinition AS EG2 USING PRIMARY KEY
> 
> Well, it's hard to prove that there's absolutely no better way to 
> execute a query - especially since you don't actually show the query in 
> question, nor the definition of tables involved.
> 
> Igor Tandetnik 
> 
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Best regards / Mit freundlichen Grüssen

Stefan Breitholz

---
Staeubli GmbH - Theodor-Schmidt-Str. 19
DE - 95448 Bayreuth
Phone: +49(0)921/883-126 Fax: +49(0)921/883-58126
mailto:s.breith...@staubli.com
http://www.staubli.com

Registered under HRB 175 - Local Court Bayreuth - Managing Director: Karl 
Kirschner
---


This e-mail and any attachment (the 'message') are confidential and privileged 
and intended solely for the person or the entity to which it is addressed. If 
you have received it in error, please advise the sender by return e-mail and 
delete it immediately. Any use not in accordance with its purpose, any 
dissemination or reproduction, either whole or partial, by entities other than 
the intended recipient is strictly prohibited.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Table-Alias Confusion or Bug?

2009-05-27 Thread D. Richard Hipp

On May 27, 2009, at 11:26 AM, Ralf Junker wrote:

> Hello!
>
> Given this SQL:
>
>  drop table if exists t;
>
>  create table t (c);
>
>  select *
>from t as ta
>where t.c = ta.c;
>
> SQLite3 chokes with the following error:
>
>  SQL error near line 5: no such column: t.c
>
> Question:
>
> Is this the expected behaviour? I know that the where clause does  
> not make sense, but shouldn't SQLite see both "t.c" and "ta.c" as  
> the same columns instead of reporting the error?


This is the behavior that I expected because I deliberately coded it  
that way.  Once you use an AS clause, the table is thereafter known  
only by its AS name, not its original name.
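
(In other words, either of the following variants of the example parses, 
because the table is referred to by one consistent name throughout:

   select * from t where t.c = t.c;          -- no alias at all
   select * from t as ta where ta.c = ta.c;  -- alias used everywhere
)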


D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Table-Alias Confusion or Bug?

2009-05-27 Thread Ralf Junker
Hello!

Given this SQL:

  drop table if exists t;

  create table t (c);

  select *
from t as ta 
where t.c = ta.c;

SQLite3 chokes with the following error:

  SQL error near line 5: no such column: t.c

Question:

Is this the expected behaviour? I know that the where clause does not make 
sense, but shouldn't SQLite see both "t.c" and "ta.c" as the same columns 
instead of reporting the error?

Thanks for any response,

Ralf

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] corrupt database recovery

2009-05-27 Thread Gene Allen
Ok...it's happened again and I've decided that I need to track this down
once and for all!

Here is what I'm seeing: I get errors when I do an integrity_check (see
below), but I can .dump it to a text file and then .read it into another
database ok.
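
(For reference, the dump-and-reload sequence described here looks roughly like 
this in the sqlite3 shell; the file names are placeholders:

   sqlite3 corrupt.db
   sqlite> .output dump.sql
   sqlite> .dump
   sqlite> .quit

   sqlite3 rebuilt.db
   sqlite> .read dump.sql
   sqlite> pragma integrity_check;
)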

It seems to me that I'm screwing up an index or something.  Are indexes
stored at the end of the database file?  All I can think of is that my
compression/encryption routines are messing something up and I'm trying to
figure out 'where' to look.  

I guess the real question is, what would I have to do to make an
integrity_check fail, but still let a dump work correctly?

Many thanks for any advice on tracking down this ugliness.

Gene

>>Output details
sqlite> pragma integrity_check;
*** in database main ***
On tree page 3 cell 26: invalid page number 469
On tree page 3 cell 26: Child page depth differs
On tree page 3 cell 27: invalid page number 490
On tree page 3 cell 28: invalid page number 511
On tree page 3 cell 29: invalid page number 533
On tree page 3 cell 30: invalid page number 554
On tree page 3 cell 31: invalid page number 576
On tree page 3 cell 32: invalid page number 598
On tree page 3 cell 33: invalid page number 620
On tree page 3 cell 34: invalid page number 642
On tree page 3 cell 35: invalid page number 666
On tree page 3 cell 36: invalid page number 688
On tree page 3 cell 37: invalid page number 709
On tree page 3 cell 38: invalid page number 730
On tree page 3 cell 39: invalid page number 752
On page 3 at right child: invalid page number 773
On tree page 419 cell 86: invalid page number 462
On tree page 419 cell 86: Child page depth differs
On tree page 419 cell 87: invalid page number 463

I said... hmm... so I do a .dump to see if I can recover it:
sqlite> .output c:\\a.sql
sqlite> .dump
sqlite> .quit

resulting in:
05/27/2009  10:08 AM   496,051 a.sql
   1 File(s)         496,051 bytes
   0 Dir(s)  104,974,647,296 bytes free


-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Machin
Sent: Saturday, April 25, 2009 8:15 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] corrupt database recovery

On 26/04/2009 5:47 AM, Gene wrote:
> Every now and again, we have a database that gets corrupt in the field
(bad
> coding on our end, not sqlite). 
> 

Hi Gene,

The obvious question: why not fix the bad code?

What does 'PRAGMA integrity_check;' say about these corrupt databases?

> When we get one of these corrupt databases, we recover what we can by get
> one row at a time by rowid, like 'select * from mytable where rowid = 1'
and
> we inc the row number every time.  Each row we successfully recover we
> insert into a clean database.

How do you know when to stop incrementing the row number?

Does 'select min(rowid), max(rowid) from mytable' give you believable 
answers?

What happens when you do 'select * from mytable' ?

Approximately how many rows are there?

How many 'select * from mytable where rowid = ' attempts 
fail, and for what reasons (previously deleted, some other result code(s))?

Are the failures restricted to a relatively small range of rowids?

> Works pretty well, except over time we've added more and more columns,
each
> one that has to be pulled and bound to get it into the new table.

What is causing this "pull and bind" problem, the fact that some columns 
  weren't present initially? or just the sheer number of columns i.e. 
you need to "pull and bind" all columns (not just the later additions)? 
In any case, please explain what you mean by "pulled" and "bound".

>  We tried
> to do an 'attach' so we could do a 'select into' the clean database
directly
> from the corrupt one.  But as soon as we attempt to 'attach' the corrupt
> database, we understandable get a 'database is malformed' error.

It's not quite so understandable why 'select * from mytable where rowid 
= 1' doesn't get an error.

> Is there an easier way to pull the good records out of a corrupt database
> and put them into a new one without binding each column by hand?

Can you give us an example of a row or two of (a) what you get from the 
'select * from mytable where rowid = ' (b) the insert 
statement that you need to do to insert that data into the clean 
database? Doesn't have to be real data -- e.g. assume 3 columns 
initially, now grown to 5.

What rules/procedure/recipe do you follow when producing (b) from (a) by 
hand?

Assuming that 'select * from mytable' doesn't work, and subject to 
understanding the pulling and binding by hand thing, I would have 
thought the solution would look something like this:

Write a script that loops around doing 'select * from mytable where 
rowid = ?' on a connection to your corrupt database and doing 'insert 
into mytable values(?,?,?,?,? etc etc etc)' on a connection to your 
clean database.

HTH,
John

___
sqlite-users mailing list
sqlite-users@sqlite.org

[sqlite] Sqlite3 wrapper for Liberty Basic

2009-05-27 Thread Merv
I am using the above wrapper and was informed that it was created by D. 
R. Hipp.
When I run my program from source everything works great but when I run 
it from .exe I often get a Smalltalk halt stating that an un-continuable 
protection violation has occurred.
This halt may occur at startup and always occurs when exiting the program.
I have other programs using this wrapper that work in .exe mode but are 
not as extensive as this one.
If you would like the source code please let me know.
Thank you for your help.
Merv

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Some index questions

2009-05-27 Thread Marcus Grimm
Thanks again Dan,

the phonebook is an excellent example of when to use a compound index.
So far I was thinking that for such cases it is enough to
have one index and let sqlite test against the other value from
the original table, like:

CREATE TABLE T1(A, B, C);
CREATE INDEX T1x ON T1(B);

SELECT * FROM T1 WHERE (B=4) AND (C=5);

if the temporary result for B=4 is small, compared to
the number of rows in that table, the additional test against
C=5 will not harm very much, I guess.
Of course, as I have learned now, a compound index would be
better, and even better if the fetch could concentrate on
index data only (as DRH mentioned)...
This sounds reasonable if the datasize of C is not very big.
In case C is a char like a UID it might be worse since the
index will become big (in terms of bytes) -- then it is time to
make experiments, I guess.

Thanks again

Marcus

Dan wrote:
> On May 27, 2009, at 10:13 PM, Marcus Grimm wrote:
> 
...
>>>
>>>   WHERE b = 3 AND c = 4;
>>>   WHERE b = 3 AND c > 4;
>>>
>>> but cannot be used to optimize:
>>>
>>>   WHERE c = 4;
>> Ahh... I picked the wrong example... Great, this is exactly in line with
>> how I was thinking it might work.
>> I guess such compound indexes only have a benefit for specific
>> queries, like the one above that always uses exactly these two
>> columns; otherwise two separate indexes will do the same job, but maybe
>> a little slower due to the additional overhead of scanning two index tables.
> 
> If a single column index is like the index found in textbooks,
> a compound index with two fields is like the phone book. Sorted first by
> surname, then by first name. The "rowid", if you like, is the phone  
> number.
> 
> So, it's easy to find the set of phone numbers for everybody with the
> surname "Jones". It's easy to find the set of phone numbers for people
> called "Barry Jones". Quite difficult to find all the people called  
> "Barry"
> though.
> 
> If you had two separate indexes, one on surname and one on first-name,
> you could quickly find all the Jones's using the first index, or all the
> Barry's using the second index. But to find the set of people called
> "Barry Jones" would be much more work than it was with the compound  
> index.
> 
> Dan.
> 
> 
> 
>> Thanks again for the useful advice.
>>
>> kind regards
>>
>> Marcus
>>
>>
>>
>>
>>> Dan.
>>>
>>>
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Jim Wilcoxson
I should have mentioned that when running my C test program on Linux,
with the hard drive cache enabled, it ran instantly.   But it should
have taken quite a while to do 3000 fsyncs.  So this problem with
defeating the hard drive cache is not an SQLite issue; it's a Linux
issue.

I also ran the same C test program on a virtual machine, without any
hdparm tweaking, and the syncs worked correctly: I got about 115 TPS.
This system is also using ext3, but a different version of Linux:

$ uname -a
Linux prirun 2.6.24-19-xen #1 SMP Sat Jul 12 00:15:59 UTC 2008 x86_64
Dual-Core AMD Opteron(tm) Processor 2212 AuthenticAMD GNU/Linux

So I don't know why syncs work correctly in one place but not the
other.  Could be my hardware config, the virtualization layer fixing
things, different version of Linux, ...

If you see SQLite run much slower on ReiserFS than ext3, it probably
means that ReiserFS is doing the syncs correctly, but ext3 is getting
cached.

Jim

On 5/27/09, Jim Wilcoxson  wrote:
> I'm running on Linux with ext3 and just wrote a Python test program to
> insert rows into a table with a single column, no indexing, and doing
> a commit after each insert.  When I first ran it, I got around 440
> inserts per second, which is clearly impossible.  This is a 7200rpm
> drive, so even if I could write a row on every revolution, the maximum
> insert rate would be 120 per second.  I tried adding "pragma
> sychronous=normal", and it sped up to 520 TPS.  With synchronous=full
> it slowed back to 420-460, so this must be the Python default.  Adding
> synchronous=off increased the rate to over 6000 TPS -- basically
> writing to memory instead of disk.
>
> After using hdparm to turn off the drive cache (hdparm -W 0 /dev/sda),
> the insert rate was 15 rows per second.  So it seems that for my
> particular hardware configuration, the Linux fsync call isn't doing
> what it should be doing.  I have built the SQLite stuff from source,
> so perhaps my build has a problem.  If you look on the Linux kernel
> mailing list archives, there are several discussions about drive write
> caching not playing nice with ext3's efforts to ensure data is
> actually on the disk.
>
> $ uname -a
> Linux amd 2.6.25-ARCH #1 SMP PREEMPT Sat Jun 14 18:07:19 CEST 2008
> i686 AMD Athlon(tm) 64 Processor 3200+ AuthenticAMD GNU/Linux
>
> Here is a test program you can run, to show your system's maximum
> physical I/O rate:
>
> #include <stdio.h>
> #include <stdlib.h>
> #include <fcntl.h>
> #include <unistd.h>
> #include <time.h>
>
> #define MAX 3000
>
> main () {
>   int fd;
>   int n;
>   int loops;
>   time_t elap;
>   time_t start;
>
>   if ((fd=open("big", O_RDWR+O_CREAT, 0777)) == -1) {
> perror("Error opening file");
> exit(1);
>   }
>
>   start = time(NULL);
>   for(loops=0; loops<MAX; loops++) {
> if (lseek(fd, 0, SEEK_SET) == -1) {
>   perror("Error seeking file");
>   exit(1);
> }
> n = write(fd, "x", 1);
> if (n != 1) {
>   perror("Error writing file");
>   exit(1);
> }
> fsync(fd);
>   }
>   elap = time(NULL)-start;
>   printf("Time: %d seconds; TPS=%f\n", elap, MAX*1.0/elap);
> }
>
> On my system, I get these results:
>
> [...@amd ~]$ cc -o sync sync.c
> [...@amd ~]$ ./sync
> Time: 25 seconds; TPS=120.00
> [...@amd ~]$
>
> Running vmstat concurrently, we see this:
>
> [...@amd]$ vmstat 5
> procs ---memory-- ---swap-- -io -system-- cpu
>  r  b   swpd   free   buff  cache   si   sobibo   in   cs us sy id wa
>  0  0  0 791108 127632  7635600 0 9   10   33  1  1 98  1
>  0  1  0 791100 127648  7635600 0   439  102  212  0  1 13 86
>  0  1  0 791100 127664  7635600 0   478  119  242  0  1  0 99
>  0  1  0 790976 127672  7635600 0   478  119  242  0  1  0 99
>  0  1  0 790976 127688  7635600 0   481  120  244  0  1  0 99   <-- steady state
>  0  1  0 790976 127696  7635600 0   482  120  244  0  1  0 99
>  0  0  0 790984 127700  7635600 0   167   40   75  0  0 71 29
>  0  0  0 790984 127712  7635600 0 316  0  0 100  0
>
> During the steady state, there are 480KBytes written per second.
> Linux does I/O in 4K chunks, so dividing 480K by 4K gives you I/O's
> per second: 120.
>
> Jim
>
> On 5/27/09, Marcus Grimm  wrote:
>> Thanks Nick,
>>
>> good point. ahh yes, I've read about this somewhere...
>>
>> My extension is currently ".db", a quick check indicates
>> that using ".abc" gives a slight speed improvement, maybe 10%.
>> But that is already very close to the variation I get between
>> different test runs, so I'm not really sure if I have this
>> "microsoft effect" here.
>>
>> Anyway, thanks for the reminder.
>>
>> Well, I think now there is nothing wrong here,
>> it is just as it is... Currently I achieve about 10 transactions/second,
>> maybe
>> not that bad... still slower than the "few dozen" that are mentioned on the
>> www pages, but I agree that there are too many issues that may affect the
>> disk speed.

Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Jim Wilcoxson
I'm running on Linux with ext3 and just wrote a Python test program to
insert rows into a table with a single column, no indexing, and doing
a commit after each insert.  When I first ran it, I got around 440
inserts per second, which is clearly impossible.  This is a 7200rpm
drive, so even if I could write a row on every revolution, the maximum
insert rate would be 120 per second.  I tried adding "pragma
synchronous=normal", and it sped up to 520 TPS.  With synchronous=full
it slowed back to 420-460, so this must be the Python default.  Adding
synchronous=off increased the rate to over 6000 TPS -- basically
writing to memory instead of disk.
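
(The synchronous setting being varied above is an ordinary pragma issued on 
the connection; roughly, assuming default build options:

   PRAGMA synchronous = OFF;     -- no fsync at commit: fastest, unsafe on power loss
   PRAGMA synchronous = NORMAL;  -- syncs less often than FULL
   PRAGMA synchronous = FULL;    -- the usual default: sync at every commit
)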

After using hdparm to turn off the drive cache (hdparm -W 0 /dev/sda),
the insert rate was 15 rows per second.  So it seems that for my
particular hardware configuration, the Linux fsync call isn't doing
what it should be doing.  I have built the SQLite stuff from source,
so perhaps my build has a problem.  If you look on the Linux kernel
mailing list archives, there are several discussions about drive write
caching not playing nice with ext3's efforts to ensure data is
actually on the disk.

$ uname -a
Linux amd 2.6.25-ARCH #1 SMP PREEMPT Sat Jun 14 18:07:19 CEST 2008
i686 AMD Athlon(tm) 64 Processor 3200+ AuthenticAMD GNU/Linux

Here is a test program you can run, to show your system's maximum
physical I/O rate:

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define MAX 3000

main () {
  int fd;
  int n;
  int loops;
  time_t elap;
  time_t start;

  if ((fd=open("big", O_RDWR+O_CREAT, 0777)) == -1) {
perror("Error opening file");
exit(1);
  }

  start = time(NULL);
  for(loops=0; loops<MAX; loops++) {
    if (lseek(fd, 0, SEEK_SET) == -1) {
      perror("Error seeking file");
      exit(1);
    }
    n = write(fd, "x", 1);
    if (n != 1) {
      perror("Error writing file");
      exit(1);
    }
    fsync(fd);
  }
  elap = time(NULL)-start;
  printf("Time: %d seconds; TPS=%f\n", elap, MAX*1.0/elap);
}

On 5/27/09, Marcus Grimm  wrote:
> Thanks Nick,
>
> good point. ahh yes, I've read about this somewhere...
>
> My extension is currently ".db", a quick check indicates
> that using ".abc" gives a slight speed improvement, maybe 10%.
> But that is already very close to the variation I get between
> different test runs, so I'm not really sure if I have this
> "microsoft effect" here.
>
> Anyway, thanks for the reminder.
>
> Well, I think now there is nothing wrong here,
> it is just as it is... Currently I achieve about 10 transactions/second,
> maybe
> not that bad... still slower than the "few dozen" that are mentioned on the
> www pages, but I agree that there are too many issues that may affect the disk
> speed.
>
>
> Marcus
>
>
> Brandon, Nicholas (UK) wrote:
>>> my developemnt system is a Win XP, with of course NTFS, I
>>> don't know which drive it has, I guess a standard 7200 rpm.
>>>
>>
>> What file extension (i.e. the letters after the dot in the filename) do
>> you give the database?
>>
>> I faintly recall there is a windows peculiarity with system restore or
>> something similar that archives certain file extensions in the
>> background. That may contribute to your slow down.
>>
>> Nick
>>
>> 
>> This email and any attachments are confidential to the intended
>> recipient and may also be privileged. If you are not the intended
>> recipient please delete it from your system and notify the sender.
>> You should not copy it or use it for any purpose nor disclose or
>> distribute its contents to any other person.
>> 
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> 

Re: [sqlite] Foreign Keys and Indexes

2009-05-27 Thread Igor Tandetnik
s.breith...@staubli.com wrote:
> Now I noticed that the speed of selects are much faster with fields
> that are indexed

Well, that's pretty much the point of having an index in the first 
place. Why do you find this fact surprising?

> I've read that only one index is used per where (I think join is
> nothing
> else here)

No - one index per table. SQLite could use as many indexes as tables 
mentioned in the FROM clause.

> The questions:
> Does it mean only one index can be used per join or one index for each
> table of the join?

One for each table.

> Does the output mean, that the query runs over the fast index and
> there is
> nothing to improve (as long
> as I use the database always in that way) or does it just explain
> that one
> index is used?

I don't understand this question.

> Or in other words, does this print mean that I use the database in
> perfect
> way (speaking only of indexes):
> 0|0|TABLE ActiveError AS AE USING PRIMARY KEY ORDER BY
> 1|1|TABLE ErrorDefinition AS ED USING PRIMARY KEY
> 2|2|TABLE ErrorGroupDefinition AS EG1 USING PRIMARY KEY
> 3|3|TABLE ErrorGroupDefinition AS EG2 USING PRIMARY KEY

Well, it's hard to prove that there's absolutely no better way to 
execute a query - especially since you don't actually show the query in 
question, nor the definition of tables involved.

Igor Tandetnik 



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Some index questions

2009-05-27 Thread Marcus Grimm
Richard,

thank you very much for the example and advice.
Ahh, yes. I missed the point that sqlite might also use an
index as a kind of buffer to fetch data.

kind regards

Marcus

D. Richard Hipp wrote:
> On May 27, 2009, at 11:13 AM, Marcus Grimm wrote:
>> I guess such compound indexes only have a benefit for specific
>> queries, like the one above that always uses exactly these two
>> columns; otherwise two separate indexes will do the same job, but maybe
>> a little slower due to the additional overhead of scanning two index tables.
>>
> 
> Indices can be used for sorting as well as for lookup.  And sometime  
> content can be pulled from indices to avoid an additional search to  
> find the original table entry.  For these reasons, compound indices  
> can be useful.
> 
> For example:
> 
>   CREATE TABLE t1(a,b,c);
>   CREATE INDEX t1abc ON t1(a,b,c);
> 
> Then if you say:
> 
>   SELECT c FROM t1 WHERE a=5 ORDER BY b;
> 
> The query above will run faster with the index on a,b,c than it would  
> with any other index.  The first column of the index is used to do the  
> search.  Then the index is scanned in order to output rows in order by  
> b.  Finally, the c value is pulled directly from the index, avoiding a  
> lookup of the original table.
> 
> 
> 
> D. Richard Hipp
> d...@hwaci.com
> 
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 

-- 
Marcus Grimm, MedCom GmbH Darmstadt, Rundeturmstr. 12, 64283 Darmstadt
Tel: +49(0)6151-95147-10
Fax: +49(0)6151-95147-20
--
MedCom slogans of the month:
"Vacation ? -- Every day at MedCom is a paid vacation!"
"Friday I have monday in my mind."
"MedCom -- Every week a vacation, every day an event, every hour a cliffhanger,
every minute a climax."
"Damned, it's weekend again!"
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Some index questions

2009-05-27 Thread Dan

On May 27, 2009, at 10:13 PM, Marcus Grimm wrote:

> Hi Dan,
>
> thanks for your answers: this was exactly what I was looking for.
>
> Dan wrote:
>> On May 27, 2009, at 9:08 PM, Marcus Grimm wrote:
>>>
>>> The difference, I guess, to a "real" sql table, is that it is sorted
>>> with respect to the indexed column and not by row_id, something that
>>> makes them different to a standard sql table, am I right ?
>>
>> Pretty much. Obviously there are a ton of details, but what you  
>> have is
>> a good mental model for practical purposes.
>
> I can imagine that you folks have a bunch of issues with indexes...
> Thanks, for confirming my "mental model".
>
>>
>>> I often have to deal with columns which are UIDs that have a length
>>> of say 128 bytes and that will be the majority (in terms of byte- 
>>> size)
>>> of that table. If I would now create an index on such a UID I will
>>> basically
>>> double the database size, correct ?
>>> (I know I can experiment with this by myself, but maybe a sql-guru here
>>> has already
>>> the answer)
>>
>> If you have a schema:
>>
>>   CREATE TABLE t1(a INTEGER PRIMARY KEY, b UUID);
>>
>> And then create an index on column b, you will probably double the  
>> size
>> of the database, yes.
>
> Okay, that was the key question for me.
>
>>>
>>> CREATE TABLE T1(A, B, C);
>>> CREATE INDEX T1Idx ON T1(B,C);
>>
>> It's like a table that contains columns B and C, and the rowid. The
>> table is sorted in the same order as the results of:
>>
>>   SELECT b, c, rowid FROM t1 ORDER BY b, c;
>>
>
> An interesting way to express how this index is built. Thank you.
>
>>
>>> ...
>>> SELECT * FROM T1 WHERE B=3;
>>>
>>> as far as I know this will most likely not use the index, but then
>>> I'm curious what
>>> is the benefit or application of a compound index compared to two
>>> single indexes ?
>>
>> Such a query can use the compound index T1Idx above. It can do the
>> following
>> too:
>>
>>   WHERE b = 3 AND c = 4;
>>   WHERE b = 3 AND c > 4;
>>
>> but cannot be used to optimize:
>>
>>   WHERE c = 4;
>
> Ahh... I picked the wrong example... Great, this is exactly in line with
> how I was thinking it might work.
> I guess such compound indexes only have a benefit for specific
> queries, like the one above that always uses exactly these two
> columns; otherwise two separate indexes will do the same job, but maybe
> a little slower due to the additional overhead of scanning two index tables.

If a single column index is like the index found in textbooks,
a compound index with two fields is like the phone book. Sorted first by
surname, then by first name. The "rowid", if you like, is the phone  
number.

So, it's easy to find the set of phone numbers for everybody with the
surname "Jones". It's easy to find the set of phone numbers for people
called "Barry Jones". Quite difficult to find all the people called  
"Barry"
though.

If you had two separate indexes, one on surname and one on first-name,
you could quickly find all the Jones's using the first index, or all the
Barry's using the second index. But to find the set of people called
"Barry Jones" would be much more work than it was with the compound  
index.
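
(The phone book picture, translated into a minimal SQL sketch with made-up 
table and index names:

   CREATE TABLE phonebook(surname TEXT, firstname TEXT, phone TEXT);
   CREATE INDEX pb_name ON phonebook(surname, firstname);

   -- these can use pb_name:
   SELECT phone FROM phonebook WHERE surname = 'Jones';
   SELECT phone FROM phonebook WHERE surname = 'Jones' AND firstname = 'Barry';

   -- this cannot, because the leading column is not constrained:
   SELECT phone FROM phonebook WHERE firstname = 'Barry';
)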

Dan.



> Thanks again for the useful advice.
>
> kind regards
>
> Marcus
>
>
>
>
>>
>> Dan.
>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] accessing a sqlite db with multiple java processes.

2009-05-27 Thread Laurent Burgy
my bad...
but I don't know how to block the writing to the database to wait until the
release of the lock... It seems to me that when the writing is impossible
because there's a lock, the driver raises the exception...
Maybe the good thing to do would be to wait for a regular FileLock to be
released and then write...


On Wed, May 27, 2009 at 10:19 AM, Martin.Engelschalk <
engelsch...@codeswift.com> wrote:

> http://www.sqlite.org/faq.html#q5




-- 
Laurent
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Some index questions

2009-05-27 Thread D. Richard Hipp

On May 27, 2009, at 11:13 AM, Marcus Grimm wrote:
> I guess such compound indexes only have a benefit for specific
> queries, like the one above that always uses exactly these two
> columns; otherwise two separate indexes will do the same job, but maybe
> a little slower due to the additional overhead of scanning two index tables.
>

Indices can be used for sorting as well as for lookup.  And sometime  
content can be pulled from indices to avoid an additional search to  
find the original table entry.  For these reasons, compound indices  
can be useful.

For example:

  CREATE TABLE t1(a,b,c);
  CREATE INDEX t1abc ON t1(a,b,c);

Then if you say:

  SELECT c FROM t1 WHERE a=5 ORDER BY b;

The query above will run faster with the index on a,b,c than it would  
with any other index.  The first column of the index is used to do the  
search.  Then the index is scanned in order to output rows in order by  
b.  Finally, the c value is pulled directly from the index, avoiding a  
lookup of the original table.



D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] accessing a sqlite db with multiple java processes.

2009-05-27 Thread Martin.Engelschalk
Hi,

I imagine that your problem has nothing to do with your driver.
Have you looked at http://www.sqlite.org/faq.html#q5 ?

Martin

Laurent Burgy schrieb:
> Hi,
> I was using sqlite with only one java process (using the sqlitejdbc driver)
> and everything was ok... now I'm trying to run several instances of the same
> program on the same db... and I've a bunch of "database is locked"
> exceptions.
>
> I was wondering if it was possible to access the same db with several jobs?
>
> If so (and maybe this is not the right place to ask), what are the best
> practices using sqlitejdbc ? (or any other jdbc driver)...
>
>
> thanks
>
>   
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] accessing a sqlite db with multiple java processes.

2009-05-27 Thread Laurent Burgy
Hi,
I was using sqlite with only one java process (using the sqlitejdbc driver)
and everything was ok... now I'm trying to run several instances of the same
program on the same db... and I've a bunch of "database is locked"
exceptions.

I was wondering if it was possible to access the same db with several jobs?

If so (and maybe this is not the right place to ask), what are the best
practices using sqlitejdbc ? (or any other jdbc driver)...


thanks

-- 
Laurent
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Some index questions

2009-05-27 Thread Marcus Grimm
Hi Dan,

thanks for your answers: this was exactly what I was looking for.

Dan wrote:
> On May 27, 2009, at 9:08 PM, Marcus Grimm wrote:
>>
>> The difference, I guess, to a "real" sql table, is that it is sorted
>> with respect to the indexed column and not by row_id, something that
>> makes them different to a standard sql table, am I right ?
> 
> Pretty much. Obviously there are a ton of details, but what you have is
> a good mental model for practical purposes.

I can imagine that you folks have a bunch of issues with indexes...
Thanks, for confirming my "mental model".

> 
>> I often have to deal with columns which are UIDs that have a length
>> of say 128 bytes and that will be the majority (in terms of byte-size)
>> of that table. If I would now create an index on such a UID I will  
>> basically
>> double the database size, correct ?
>> (I know I can experiment with this by myself, but maybe a sql-guru here  
>> has already
>> the answer)
> 
> If you have a schema:
> 
>CREATE TABLE t1(a INTEGER PRIMARY KEY, b UUID);
> 
> And then create an index on column b, you will probably double the size
> of the database, yes.

Okay, that was the key question for me.

>>
>> CREATE TABLE T1(A, B, C);
>> CREATE INDEX T1Idx ON T1(B,C);
> 
> It's like a table that contains columns B and C, and the rowid. The
> table is sorted in the same order as the results of:
> 
>SELECT b, c, rowid FROM t1 ORDER BY b, c;
> 

An interesting way to express how this index is built. Thank you.

> 
>> ...
>> SELECT * FROM T1 WHERE B=3;
>>
>> as far as I know this will most likely not use the index, but then  
>> I'm curious what
>> is the benefit or application of a compound index compared to two  
>> single indexes ?
> 
> Such a query can use the compound index T1Idx above. It can do the  
> following
> too:
> 
>WHERE b = 3 AND c = 4;
>WHERE b = 3 AND c > 4;
> 
> but cannot be used to optimize:
> 
>WHERE c = 4;

Ahh... I picked the wrong example... Great, this is exactly in line with
how I was thinking it might work.
I guess such compound indexes only have a benefit for specific
queries, like the one above that always uses exactly these two
columns; otherwise two separate indexes will do the same job, but maybe
a little slower due to the additional overhead of scanning two index tables.

Thanks again for the useful advice.

kind regards

Marcus




> 
> Dan.
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Some index questions

2009-05-27 Thread Dan

On May 27, 2009, at 9:08 PM, Marcus Grimm wrote:

> Hi List,
>
> this is not very sqlite specific but hopefully somebody will give
> me some info on this, as I haven't yet found a nice description of  
> this:
>
> I'm curious how an index works internally; my suspicion is that an index
> can be seen as a kind of table that has two columns which hold a  
> copy of
> a) the row_ids of the indexed table.
> b) the value of the indexed column.
>
> The difference, I guess, to a "real" sql table, is that it is sorted
> with respect to the indexed column and not by row_id, something that
> makes them different to a standard sql table, am I right ?

Pretty much. Obviously there are a ton of details, but what you have is
a good mental model for practical purposes.

> I often have to deal with columns which are UIDs that have a length
> of say 128 bytes and that will be the majority (in terms of byte-size)
> of that table. If I would now create an index on such a UID I will  
> basically
> double the database size, correct ?
> (I know I can experiment with this by myself, but maybe a sql-guru here  
> has already
> the answer)

If you have a schema:

   CREATE TABLE t1(a INTEGER PRIMARY KEY, b UUID);

And then create an index on column b, you will probably double the size
of the database, yes.

> How does a compound index work:
>
> Is it the same if I create two indexes compared to a single but  
> compound index ?
> I guess no, because reading the optimizer hints from the sqlite doc  
> I understand
> that sqlite will not use that index if I ask only for one of the  
> column names, like:
>
> CREATE TABLE T1(A, B, C);
> CREATE INDEX T1Idx ON T1(B,C);

It's like a table that contains columns B and C, and the rowid. The
table is sorted in the same order as the results of:

   SELECT b, c, rowid FROM t1 ORDER BY b, c;


> ...
> SELECT * FROM T1 WHERE B=3;
>
> as far as I know this will most likely not use the index, but then  
> I'm curious what
> is the benefit or application of a compound index compared to two  
> single indexes ?

Such a query can use the compound index T1Idx above. It can do the  
following
too:

   WHERE b = 3 AND c = 4;
   WHERE b = 3 AND c > 4;

but cannot be used to optimize:

   WHERE c = 4;

Dan.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Types for strings, non-expert question

2009-05-27 Thread D. Richard Hipp

On May 27, 2009, at 9:36 AM, Maurício wrote:

> Hi,
>
> I see that in most functions strings are typed as
>
> char *
>
> while in 'sqlite_column_text' and 'sqlite_value_text'
> they are typed as
>
> unsigned char *
>

That was just bad design on my part.  I don't know what I was thinking  
when I did that.  It has been a pain for everyone to deal with ever  
since.  But we cannot change it now without breaking compatibility.

On the other hand, as long as you use the characters as characters and  
not as integers, does it really matter if they are signed or  
unsigned?  Just cast the result to whatever you need and it will work.


D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Some index questions

2009-05-27 Thread Marcus Grimm
Thanks Puneet,

I know Igor's nice text book example... :-)

My questions are a little more technical,
along the lines of "if I were writing a DB engine"... to judge better
when or when not to use or create an index, etc.

I think it is useful to understand what is behind all this,
in the same way that I would recommend every young programmer take
a little course in assembly programming to understand what
a compiler does and does not do, why it is fast or slow, etc. --
but maybe I'm old fashioned.

I know, maybe my post is a  bit too unspecific. :-)

Marcus

P Kishor wrote:
> On Wed, May 27, 2009 at 7:38 PM, Marcus Grimm  wrote:
>> Hi List,
>>
>> this is not very sqlite specific but hopefully somebody will give
>> me some info on this, as I haven't yet found a nice description of this:
>>
>> I'm curious how an index works internally; my suspicion is that an index
>> can be seen as a kind of table that has two columns which hold a copy of
>> a) the row_ids of the indexed table.
>> b) the value of the indexed column.
> 
> See http://www.mail-archive.com/sqlite-users@sqlite.org/msg37474.html
> for a very, very nice explanation from Igor of how indexes work. The
> explanation uses the same metaphor of an index in a text book.
> 
> 
>> The difference, I guess, to a "real" sql table, is that it is sorted
>> with respect to the indexed column and not by row_id, something that
>> makes them different to a standard sql table, am I right ?
>>
>> I often have to deal with columns which are UIDs that have a length
>> of say 128 bytes and that will be the majority (in terms of byte-size)
>> of that table. If I would now create an index on such a UID I will basically
>> double the database size, correct ?
>> (I know I can experiment with this by myself, but maybe a sql-guru here has 
>> already
>> the answer)
>>
>> How does a compound index work:
>>
>> Is it the same if I create two indexes compared to a single but compound 
>> index ?
>> I guess no, because reading the optimizer hints from the sqlite doc I 
>> understand
>> that sqlite will not use that index if I ask only for one of the column 
>> names, like:
>>
>> CREATE TABLE T1(A, B, C);
>> CREATE INDEX T1Idx ON T1(B,C);
>> ...
>> SELECT * FROM T1 WHERE B=3;
>>
>> as far as I know this will most likely not use the index, but then I'm 
>> curious what
>> is the benefit or application of a compound index compared to two single 
>> indexes ?
>>
>> Again sorry for being not very specific in the questions.
>>
>> Thanks
>>
>> Marcus Grimm
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> 
> 
> 

-- 
Marcus Grimm, MedCom GmbH Darmstadt, Rundeturmstr. 12, 64283 Darmstadt
Tel: +49(0)6151-95147-10
Fax: +49(0)6151-95147-20
--
MedCom slogans of the month:
"Vacation ? -- Every day at MedCom is a paid vacation!"
"Friday I have monday in my mind."
"MedCom -- Every week a vacation, every day an event, every hour a cliffhanger,
every minute a climax."
"Damned, it's weekend again!"
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Types for strings, non-expert question

2009-05-27 Thread Maurício
Hi,

I see that in most functions strings are typed as

 char *

while in 'sqlite_column_text' and 'sqlite_value_text'
they are typed as

 unsigned char *

Is there something I should have in mind about
that when writing a binding to sqlite3 from another
language? Can I type both as being the same?

Thanks,
Maurício

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Some index questions

2009-05-27 Thread P Kishor
On Wed, May 27, 2009 at 7:38 PM, Marcus Grimm  wrote:
> Hi List,
>
> this is not very sqlite specific but hopefully somebody will give
> me some info on this, as I haven't yet found a nice description of this:
>
> I'm curious how an index works internally; my suspicion is that an index
> can be seen as a kind of table that has two columns which hold a copy of
> a) the row_ids of the indexed table.
> b) the value of the indexed column.

See http://www.mail-archive.com/sqlite-users@sqlite.org/msg37474.html
for a very, very nice explanation from Igor of how indexes work. The
explanation uses the same metaphor of an index in a text book.


>
> The difference, I guess, to a "real" sql table, is that it is sorted
> with respect to the indexed column and not by row_id, something that
> makes them different to a standard sql table, am I right ?
>
> I often have to deal with columns which are UIDs that have a length
> of say 128 bytes and that will be the majority (in terms of byte-size)
> of that table. If I would now create an index on such a UID I will basically
> double the database size, correct ?
> (I know I can experiment with this by myself, but maybe a sql-guru here has 
> already
> the answer)
>
> How does a compound index work:
>
> Is it the same if I create two indexes compared to a single but compound 
> index ?
> I guess no, because reading the optimizer hints from the sqlite doc I 
> understand
> that sqlite will not use that index if I ask only for one of the column 
> names, like:
>
> CREATE TABLE T1(A, B, C);
> CREATE INDEX T1Idx ON T1(B,C);
> ...
> SELECT * FROM T1 WHERE B=3;
>
> as far as I know this will most likely not use the index, but then I'm 
> curious what
> is the benefit or application of a compound index compared to two single 
> indexes ?
>
> Again sorry for being not very specific in the questions.
>
> Thanks
>
> Marcus Grimm
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
Puneet Kishor http://www.punkish.org/
Carbon Model http://carbonmodel.org/
Charter Member, Open Source Geospatial Foundation http://www.osgeo.org/
Science Commons Fellow, Geospatial Data http://sciencecommons.org
Nelson Institute, UW-Madison http://www.nelson.wisc.edu/
---
collaborate, communicate, compete
===
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Some index questions

2009-05-27 Thread Marcus Grimm
Hi List,

this is not very sqlite specific but hopefully somebody will give
me some info on this, as I haven't yet found a nice description of this:

I'm curious how an index works internally; my suspicion is that an index
can be seen as a kind of table that has two columns which hold a copy of
a) the row_ids of the indexed table.
b) the value of the indexed column.

The difference, I guess, to a "real" sql table, is that it is sorted
with respect to the indexed column and not by row_id, something that
makes them different to a standard sql table, am I right ?

I often have to deal with columns which are UIDs that have a length
of say 128 bytes and that will be the majority (in terms of byte-size)
of that table. If I would now create an index on such a UID I will basically
double the database size, correct ?
(I know I can experiment with this by myself, but maybe a sql-guru here has 
already
the answer)

How does a compound index work:

Is it the same if I create two indexes compared to a single compound index ?
I guess no, because reading the optimizer hints from the sqlite doc I understand
that sqlite will not use that index if I ask only for one of the column names, 
like:

CREATE TABLE T1(A, B, C);
CREATE INDEX T1Idx ON T1(B,C);
...
SELECT * FROM T1 WHERE B=3;

as far as I know this will most likely not use the index, but then I'm curious 
what
is the benefit or application of a compound index compared to two single 
indexes ?

Again sorry for being not very specific in the questions.

Thanks

Marcus Grimm
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Fast data duplication

2009-05-27 Thread John Machin
On 27/05/2009 9:47 PM, Igor Tandetnik wrote:
> "Vasil Boshnyakov" 
> wrote in message news:000c01c9de8b$16510a40$42f31e...@bg
>> The short description is: we need to copy many records of a table in
>> the same table but changing the "Name" value. So we have added a new
>> function which process the names:
>>
>> Insert into users ItemID, Name
>> Select ItemID, newName(Name) from users where itemActive = 1;
>>
>> That works great but we need one more step: how to match the pairs
>> "item comes from the Select <-> new item result of the Insert". We
>> need to track the copy history: itemID -> newItemID.
> 
> What is this newItemID you speak of? As far as I can tell from your 
> (syntactically invalid) statement, new records are inserted with the 
> same ItemId as the old ones.

Vasil, please tell us the schema, otherwise we can't help you. Some 
explanation of what you are really trying to do might also aid us. There 
are other concerns beside Igor's, like "itemActive" would seem from its 
name to belong to an "items" table, not to a "users" table.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Fast data duplication

2009-05-27 Thread Vasil Boshnyakov
Thank you for the help, I already got the solution of the problem. Thanks!

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Igor Tandetnik
Sent: Wednesday, May 27, 2009 2:47 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Fast data duplication

"Vasil Boshnyakov" 
wrote in message news:000c01c9de8b$16510a40$42f31e...@bg
> The short description is: we need to copy many records of a table in
> the same table but changing the "Name" value. So we have added a new
> function which process the names:
>
> Insert into users ItemID, Name
> Select ItemID, newName(Name) from users where itemActive = 1;
>
> That works great but we need one more step: how to match the pairs
> "item comes from the Select <-> new item result of the Insert". We
> need to track the copy history: itemID -> newItemID.

What is this newItemID you speak of? As far as I can tell from your 
(syntactically invalid) statement, new records are inserted with the 
same ItemId as the old ones.

Igor Tandetnik 



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] ISO-8859-1

2009-05-27 Thread Igor Tandetnik
"Igor Tandetnik"  wrote in
message news:gvjb4m$hc...@ger.gmane.org
> "Wili"  wrote in
> message news:23741187.p...@talk.nabble.com
>> it is possible using charset ISO-8859-1 in sqlite?
>
> No, not really. Convert your strings to UTF-8 before passing them to
> SQLite, and to whatever encoding you want when getting them out.

Or to/from UTF-16. Might be easier - all you have to do to convert 
ISO-8859-1 to UTF-16 is zero-expand each byte to a 16-bit value, and to 
convert back, just drop those extra zeros. Not that converting to/from 
UTF-8 is much more complicated.

Igor Tandetnik



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] ISO-8859-1

2009-05-27 Thread Wili


Hi Igor,

thx for the fast answer!

lg
\^/ili

-- 
View this message in context: 
http://www.nabble.com/ISO-8859-1-tp23741187p23741440.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] ISO-8859-1

2009-05-27 Thread Igor Tandetnik
"Wili"  wrote in
message news:23741187.p...@talk.nabble.com
> is it possible to use charset ISO-8859-1 in sqlite?

No, not really. Convert your strings to UTF-8 before passing them to 
SQLite, and to whatever encoding you want when getting them out.

Now, you could probably get away with storing ill-formed UTF-8 strings 
(which is what ISO-8859-1 strings are, as far as SQLite is concerned), 
as long as all you do is put them into the database and get them out 
later. Comparing them, calling things like length() or substr() on them, 
searching them with LIKE operator and so on would likely produce 
incorrect results. All in all, it's a bad idea, don't do it if you could 
possibly avoid it.

Igor Tandetnik



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Marcus Grimm
Thanks Nick,

good point. ahh yes, I've read about this somewhere...

My extension is currently ".db", a quick check indicates
that using ".abc" gives a slight speed improvement, maybe 10%.
But that is already very close to the variation I get between
different test runs, so I'm not really sure if I have this
"microsoft effect" here.

Anyway, thanks for the reminder.

Well, I think now there is nothing wrong here,
it is just as it is... Currently I achieve about 10 transactions/second, maybe
not that bad... still slower than the "few dozen" that are mentioned on the
www pages, but I agree that there too many issues that may affect the disk 
speed.


Marcus


Brandon, Nicholas (UK) wrote:
>> my development system is a Win XP, with of course NTFS, I 
>> don't know which drive it has, I guess a standard 7200 rpm.
>>
> 
> What file extension (i.e. the letters after the dot in the filename) do
> you give the database?
> 
> I faintly recall there is a windows peculiarity with system restore or
> something similar that archives certain file extensions in the
> background. That may contribute to your slow down.
> 
> Nick
> 
> 
> This email and any attachments are confidential to the intended
> recipient and may also be privileged. If you are not the intended
> recipient please delete it from your system and notify the sender.
> You should not copy it or use it for any purpose nor disclose or
> distribute its contents to any other person.
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] ISO-8859-1

2009-05-27 Thread Wili

Hello,

is it possible to use charset ISO-8859-1 in sqlite?

lg
\^/ili
-- 
View this message in context: 
http://www.nabble.com/ISO-8859-1-tp23741187p23741187.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread python
Dr. Hipp,

> Your OS and filesystem configuration have a big impact too. I've noticed, for 
> example, that transactions are really slow on ReiserFS on Linux compared to 
> Ext3.

In your experience, which Linux file system(s) provide the highest
performance platform for SQLite?

Which Linux file systems do you recommend avoiding for SQLite use?

Thank you,
Malcolm
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Brandon, Nicholas (UK)

> my development system is a Win XP, with of course NTFS, I 
> don't know which drive it has, I guess a standard 7200 rpm.
> 

What file extension (i.e. the letters after the dot in the filename) do
you give the database?

I faintly recall there is a windows peculiarity with system restore or
something similar that archives certain file extensions in the
background. That may contribute to your slow down.

Nick


This email and any attachments are confidential to the intended
recipient and may also be privileged. If you are not the intended
recipient please delete it from your system and notify the sender.
You should not copy it or use it for any purpose nor disclose or
distribute its contents to any other person.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Fast data duplication

2009-05-27 Thread Igor Tandetnik
"Vasil Boshnyakov" 
wrote in message news:000c01c9de8b$16510a40$42f31e...@bg
> The short description is: we need to copy many records of a table in
> the same table but changing the "Name" value. So we have added a new
> function which process the names:
>
> Insert into users ItemID, Name
> Select ItemID, newName(Name) from users where itemActive = 1;
>
> That works great but we need one more step: how to match the pairs
> "item comes from the Select <-> new item result of the Insert". We
> need to track the copy history: itemID -> newItemID.

What is this newItemID you speak of? As far as I can tell from your 
(syntactically invalid) statement, new records are inserted with the 
same ItemId as the old ones.
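
(Presumably something along these lines was intended -- note that the INSERT's 
column list needs parentheses; newName() is the poster's own application-defined 
function and the column names are from the original post:

   INSERT INTO users (ItemID, Name)
   SELECT ItemID, newName(Name) FROM users WHERE itemActive = 1;
)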

Igor Tandetnik 



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Index expressions and constraints

2009-05-27 Thread Igor Tandetnik
"Tima Fourie" 
wrote in message news:363e35f87b5c4cbbb2e34692737dd...@c1
> 1. Let's say I have a field called Field1 in table Table1 with the
> following rows:
>
> "A B C"
> "D E F"
> "G H I"
>
> Is there some way I can create an index on just a substring of a
> column?

You could install a custom collation 
(http://sqlite.org/c3ref/create_collation.html) that would only look at 
and compare substrings, then create an index using that collation.

However, it might be a better idea to normalize your schema, by storing 
these values in three separate columns.
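
(A minimal sketch of that normalization, with made-up column names, so a plain 
index can serve the substring-style lookups:

   CREATE TABLE Table1(part1 TEXT, part2 TEXT, part3 TEXT);
   CREATE INDEX t1_part1 ON Table1(part1);

   -- instead of matching a substring of 'A B C', query the column directly:
   SELECT * FROM Table1 WHERE part1 = 'A';
)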

> 2. Is there some hidden way we can limit what goes into an index in
> the first place and then do a select that would return only those
> entries.

No. I suppose you could create a table that contains a subset of rowids 
in another table, then join with it to restrict your searches (though 
it's not quite clear why you would want to do that).

Igor Tandetnik 





Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Marcus Grimm
Thank you for the hint.
My current approach to copying the database file is even more
restrictive -- it is encapsulated by an EXCLUSIVE transaction. ;)

In my case it is unlikely that it will block anybody from using
the server since it is a typical "working-hours" application and
the backup will usually take place around midnight.

kind regards

Marcus


Nuno Lucas wrote:
> On Wed, May 27, 2009 at 12:51 PM, Marcus Grimm  
> wrote:
>> Yes, I understood that this is unsafe and I'll not use it right now.
>> But my feeling is that it will be an option for the future. My databases
>> are not so big (say in the megabyte range) and currently my server application
>> will do a backup every day (more or less a simple file copy). So even if
>> I see a corrupt database the customer will lose only one day of work,
>> at most. This should be tolerable since I estimate the likelihood
>> of such an event to be extremely low (power failure or OS crash at exactly
>> the "wrong" time); power failure, by the way, can be worked around by using
>> a battery pack, which is common on server hardware, I guess.
> 
> OK, but note that backing up SQLite database files can leave you with a
> corrupted database on restore if you don't back up the sqlite database
> file AND the journal file as an atomic operation.
> 
> A workaround for this situation is to start an immediate transaction
> [1] (which ensures no other writes are pending, while still allowing
> reads to proceed), back up the database (copy the file) and only then
> end the transaction (a simple "BEGIN IMMEDIATE" --> "END" operation).
> 
> 
> Regards,
> ~Nuno Lucas
> 
> [1] http://www.sqlite.org/lang_transaction.html
> 


Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Nuno Lucas
On Wed, May 27, 2009 at 12:51 PM, Marcus Grimm  wrote:
> Yes, I understood that this is unsafe and I'll not use it right now.
> But my feeling is that it will be an option for the future. My databases
> are not so big (say in the megabyte range) and currently my server application
> will do a backup every day (more or less a simple file copy). So even if
> I see a corrupt database the customer will lose only one day of work,
> at most. This should be tolerable since I estimate the likelihood
> of such an event to be extremely low (power failure or OS crash at exactly
> the "wrong" time); power failure, by the way, can be worked around by using
> a battery pack, which is common on server hardware, I guess.

OK, but note that backing up SQLite database files can leave you with a
corrupted database on restore if you don't back up the sqlite database
file AND the journal file as an atomic operation.

A workaround for this situation is to start an immediate transaction
[1] (which ensures no other writes are pending, while still allowing
reads to proceed), back up the database (copy the file) and only then
end the transaction (a simple "BEGIN IMMEDIATE" --> "END" operation).


Regards,
~Nuno Lucas

[1] http://www.sqlite.org/lang_transaction.html
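
For illustration, a minimal C sketch of that sequence; copy_file() and the
file names are placeholders and error handling is kept to a bare minimum:

#include <sqlite3.h>
#include <stdio.h>

static int copy_file(const char *src, const char *dst){
  FILE *in = fopen(src, "rb"), *out = fopen(dst, "wb");
  char buf[8192];
  size_t n;
  if( in==0 || out==0 ){ if(in) fclose(in); if(out) fclose(out); return 1; }
  while( (n = fread(buf, 1, sizeof(buf), in))>0 ) fwrite(buf, 1, n, out);
  fclose(in); fclose(out);
  return 0;
}

int backup_db(sqlite3 *db, const char *dbfile, const char *bakfile){
  int rc = sqlite3_exec(db, "BEGIN IMMEDIATE", 0, 0, 0); /* no new writers past here */
  if( rc!=SQLITE_OK ) return rc;
  rc = copy_file(dbfile, bakfile);    /* the database file on disk is now stable */
  sqlite3_exec(db, "END", 0, 0, 0);   /* release the reserved lock */
  return rc;
}

Because nothing is written inside the transaction, there should be no live
journal belonging to the copied file, so the copy alone is enough to restore.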


Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Marcus Grimm
My development system is Win XP, with NTFS of course; I don't
know which drive it has, I guess a standard 7200 rpm one.

I also tried on Win 2000 and Vista; as expected, Vista appears to be
the slowest, even though that machine theoretically has the best hardware.

All measurements vary +- 20%.

Just out of curiosity: is Linux considered to be faster than Windows here?

kind regards

Marcus

D. Richard Hipp wrote:
> On May 27, 2009, at 7:51 AM, Marcus Grimm wrote:
> 
>>
>> Nuno Lucas wrote:
>>> On Wed, May 27, 2009 at 5:58 AM, Marcus Grimm  wrote:
>>>>> On Tue, May 26, 2009 at 9:47 PM, Marcus Grimm
>>>>> http://www.sqlite.org/faq.html#q19
>>>> the faq as well as the speed comparison speak about a few dozen
>>>> transactions per second... that's why I'm wondering why I'm
>>>> almost ten times slower on Windows...
>>> You don't say much about your hardware or how you are timing the
>>> transaction speed, so being 10 times slower than those numbers is within
>>> the error margin.
>> my hardware is just a standard desktop computer, not the fastest...
>> Okay, if my measurements are within the typical speed range, then that
>> explains why I'm seeing so much slower speeds.
> 
> 
> Your OS and filesystem configuration have a big impact too.  I've
> noticed, for example, that transactions are really slow on ReiserFS on
> Linux compared to Ext3.  What OS are you using?  And what filesystem?
> 
> 
> D. Richard Hipp
> d...@hwaci.com
> 
> 
> 
> 


Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread D. Richard Hipp

On May 27, 2009, at 7:51 AM, Marcus Grimm wrote:

>
>
> Nuno Lucas wrote:
>> On Wed, May 27, 2009 at 5:58 AM, Marcus Grimm  wrote:
>>>> On Tue, May 26, 2009 at 9:47 PM, Marcus Grimm
>>>> http://www.sqlite.org/faq.html#q19
>>> the faq as well as the speed comparison speak about a few dozen
>>> transactions per second... that's why I'm wondering why I'm
>>> almost ten times slower on Windows...
>>
>> You don't say much about your hardware or how you are timing the
>> transaction speed, so being 10 times slower than those numbers is within
>> the error margin.
>
> my hardware is just a standard desktop computer, not the fastest...
> Okay, if my measurements are within the typical speed range, then that
> explains why I'm seeing so much slower speeds.


Your OS and filesystem configuration have a big impact too.  I've
noticed, for example, that transactions are really slow on ReiserFS on
Linux compared to Ext3.  What OS are you using?  And what filesystem?


D. Richard Hipp
d...@hwaci.com





Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Marcus Grimm


Nuno Lucas wrote:
> On Wed, May 27, 2009 at 5:58 AM, Marcus Grimm  wrote:
>>> On Tue, May 26, 2009 at 9:47 PM, Marcus Grimm 
>>> http://www.sqlite.org/faq.html#q19
>> the faq as well as the speed comparison speak about a few dozen
>> transactions per second... that's why I'm wondering why I'm
>> almost ten times slower on Windows...
> 
> You don't say much about your hardware or how you are timing the
> transaction speed, so being 10 times slower than those numbers is within the
> error margin.

my hardware is just a standard desktop computer, not the fastest...
Okay, if my measurements are within the typical speed range, then that
explains why I'm seeing so much slower speeds.

> 
> Do the math yourself. It's more about disk rotation speed than any
> other factor, as you have already seen when wrapping those INSERTs
> within a transaction.

Yes, I know... there was also a funny proof here some weeks ago that
referred to the speed of light and the distance from the disk to the memory,
as far as I can remember.
I'm very well aware of the speed limitations; it was just that I was puzzled
by my results compared to the indication given on the sqlite pages.

> 
> For comparison, use PRAGMA SYNCHRONOUS=OFF, which will return as soon
> as the OS lets you (it doesn't depend on the disk access).

Correct; I ran some experiments today, and switching synchronous to off
gives a dramatic speed improvement. Using the "normal" sync mode
I achieve roughly a 20% improvement as well.
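
For anyone repeating that comparison, the setting involved is just the
synchronous pragma on an open connection; a small helper sketch (illustrative
only):

#include <sqlite3.h>

/* level: 0 = OFF (fastest, unsafe on crash), 1 = NORMAL, 2 = FULL (default). */
static int set_synchronous(sqlite3 *db, int level){
  static const char *azSql[] = { "PRAGMA synchronous = OFF;",
                                 "PRAGMA synchronous = NORMAL;",
                                 "PRAGMA synchronous = FULL;" };
  if( level<0 || level>2 ) return SQLITE_MISUSE;
  return sqlite3_exec(db, azSql[level], 0, 0, 0);
}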

Yes, I understood that this is unsafe and I'll not use it right now.
But my feeling is that it will be an option for the future. My databases
are not so big (say in the megabyte range) and currently my server application
will do a backup every day (more or less a simple file copy). So even if
I see a corrupt database the customer will lose only one day of work,
at most. This should be tolerable since I estimate the likelihood
of such an event to be extremely low (power failure or OS crash at exactly
the "wrong" time); power failure, by the way, can be worked around by using
a battery pack, which is common on server hardware, I guess.

I agree that the new async vfs might help as well; I haven't tried it yet.

Thanks for sharing your thoughts

kind regards

Marcus


> 
> But note that this option is not safe and I would discourage people
> from using it until they understand very well what they are doing
> (it's not easy, and many times impossible, to recover a corrupt SQLite
> database). If you really want speed, you can try the new async VFS,
> which will do the atomic writes in a background thread.


> 
> 
> Regards,
> ~Nuno Lucas
> 
>> thanks for the reply
>>
>> Marcus
> 



[sqlite] Index expressions and constraints

2009-05-27 Thread Tima Fourie
Hi All

1. Let's say I have a field called Field1 in table Table1 with the following 
rows:

"A B C"
"D E F"
"G H I"

Is there some way I can create an index on just a substring of a column,
without extracting the substring into its own column,
something like

CREATE INDEX I1 ON Table1 (substr(Field1, 3, 1))

2. Is there some hidden way we can limit what goes into an index in the first 
place and then do a select that would return only 
those entries.

In the dbase ".mdx" era you could supply expressions as a "tag" that would then
determine what gets stored in the index / btree. (A real performance boost
could then be had if one could use just the index value.)

eg.:

CREATE INDEX I1 ON Table1 (substr(Field1, 4, 1)>='E')

should then index only the last two rows from above, and doing something like

SELECT * FROM Table1 USING I1

should then return only the last two rows.

Many thanks!
Tima 



Re: [sqlite] Slow Transaction Speed?

2009-05-27 Thread Nuno Lucas
On Wed, May 27, 2009 at 5:58 AM, Marcus Grimm  wrote:
>> On Tue, May 26, 2009 at 9:47 PM, Marcus Grimm 
>> http://www.sqlite.org/faq.html#q19
>
> the faq as well as the speed comparison speak about a few dozen
> transactions per second... that's why I'm wondering why I'm
> almost ten times slower on Windows...

You don't say much about your hardware or how you are timing the
transaction speed, so being 10 times slower than those numbers is within the
error margin.

Do the math yourself. It's more about disk rotation speed than any
other factor, as you have already seen when wrapping those INSERTs
within a transaction.
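
For context, the back-of-the-envelope version of that math, assuming a
7200 rpm drive and at least one full disk revolution per synced commit (a
simplification, not a benchmark):

#include <stdio.h>

int main(void){
  const double rpm = 7200.0;
  const double revs_per_sec = rpm / 60.0;            /* 120 revolutions/s */
  printf("hard ceiling:     ~%.0f commits/sec\n", revs_per_sec);
  printf("two syncs/commit: ~%.0f commits/sec\n", revs_per_sec / 2.0);
  return 0;
}

That lands right in the "few dozen transactions per second" range quoted from
the FAQ above.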

For comparison, use PRAGMA SYNCHRONOUS=OFF, which will return as soon
as the OS lets you (it doesn't depend on the disk access).

But note that this option is not safe and I would discourage people
from using it until they understand very well what they are doing
(it's not easy, and many times impossible, to recover a corrupt SQLite
database). If you really want speed, you can try the new async VFS,
which will do the atomic writes in a background thread.


Regards,
~Nuno Lucas

>
> thanks for the reply
>
> Marcus


Re: [sqlite] Alignment bug in SQLIte 3.6.14.2

2009-05-27 Thread Dan

Looks like the same issue as reported here:

   http://www.sqlite.org/cvstrac/tktview?tn=3869

Fix will be included in 3.6.15. Or just grab the patch here if you're
in a hurry:

   http://www.sqlite.org/cvstrac/chngview?cn=

Dan.



On May 27, 2009, at 12:12 PM, Brian Dantes wrote:

> We've run into an alignment issue in SQLite 3.6.14.2 which only
> seems to cause a problem under Solaris Sparc in our testing.
>
> Failure narrowed down to:
> src/rowset.c:186
> pEntry->v = rowid
> pEntry is 0xX4
> __alignof(*pEntry) is 8 (because of an i64)
>
> However, sizeof(RowSet) is 28 (and 28%8 = 4), and pEntry starts 1 RowSet
> after (freshly allocated, 8-byte aligned) memory (see sqlite3RowSetInit).
> So it crashes. This is definitely a bug in sqlite.
>
> Suggested patch that seems to work for us:
>
> = sqlite/src/rowset.c 1.1 vs edited =
> --- 1.1/sqlite/src/rowset.c    2009-05-19 14:07:53 -07:00
> +++ edited/sqlite/src/rowset.c 2009-05-26 15:43:56 -07:00
> @@ -127,6 +127,7 @@
>  */
>  RowSet *sqlite3RowSetInit(sqlite3 *db, void *pSpace, unsigned int N){
>    RowSet *p;
> +  int n;
>    assert( N >= sizeof(*p) );
>    p = pSpace;
>    p->pChunk = 0;
> @@ -134,8 +135,14 @@
>    p->pEntry = 0;
>    p->pLast = 0;
>    p->pTree = 0;
> -  p->pFresh = (struct RowSetEntry*)&p[1];
> -  p->nFresh = (u16)((N - sizeof(*p))/sizeof(struct RowSetEntry));
> +  /* Alignment must be a power of 2, and at least equal to
> +     __alignof(struct RowSetEntry) */
> +  #define MIN_ALIGNMENT 8
> +  n = sizeof(*p);
> +  /* Round up to next alignment */
> +  n = (n - 1) / MIN_ALIGNMENT * MIN_ALIGNMENT + MIN_ALIGNMENT;
> +  p->pFresh = (struct RowSetEntry*)((char *)p + n);
> +  p->nFresh = (u16)((N - n)/sizeof(struct RowSetEntry));
>    p->isSorted = 1;
>    p->iBatch = 0;
>    return p;
>
>
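
A small worked example of the rounding step in the suggested patch
(illustrative; 28 is the sizeof(RowSet) quoted in the report and may differ
between builds):

#include <assert.h>
#include <stdio.h>

/* Round n up to a multiple of align -- the same arithmetic as the patch. */
static unsigned round_up(unsigned n, unsigned align){
  return (n - 1) / align * align + align;
}

int main(void){
  assert( round_up(28, 8)==32 );  /* 28 % 8 == 4, so pFresh moves to offset 32 */
  assert( round_up(32, 8)==32 );  /* already-aligned sizes are left alone */
  printf("pFresh offset: %u\n", round_up(28, 8));
  return 0;
}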



Re: [sqlite] suggested changes to file format document

2009-05-27 Thread Dan


Thanks very much for catching these. Document is now updated with the
fixes in the fossil docsrc repository.

Dan.



> 1. In the following, s/less than/less than or equal to/
>
> """
> 2.3.3.4 Index B-Tree Cell Format
> [snip 2 paragraphs]
>
> If the record is small enough, it is stored verbatim in the cell. A
> record is deemed to be small enough to be completely stored in the  
> cell
> if it consists of less than:
> max-local := (usable-size - 12) * max-embedded-fraction / 255 - 23
> bytes.
> """
>
> 2. The formula in the following is incorrect.
>
> """
> [H31190] When a table B-Tree cell is stored partially in an overflow
> page chain, the prefix stored on the B-Tree leaf page consists of the
> two variable length integer fields, followed by the first N bytes of  
> the
> database record, where N is determined by the following algorithm:
> min-local := (usable-size - 12) * 255 / 32 - 23
> """
>
> It should be:
> min-local := (usable-size - 12) * 32 / 255 - 23
>
> 3. In description of first 100 bytes of file: """The number of unused
> bytes on each page (single byte field, byte offset 20), is always  
> set to
> 0x01.""" ... should be 0x00.
>
> 4. In section 2.3.2 Database Record Format, in the table describing
> type/size codes:
>
> """Even values greater than 12 are used to signify a blob of data  
> (type
> SQLITE_BLOB) (n-12)/2 bytes in length, where n is the integer value
> stored in the record header."""
>
> s/greater than/greater than or equal to/
>
> 5. In section 2.3.1 Variable Length Integer Format, in the examples
> """
> Decimal     Hexadecimal    Variable Length Integer
> [snip]
> -78056      0xFFFECD56     0xFF 0xFF 0xFF 0xFF 0xFF 0xFF 0xFD 0xCD 0x56
> """
> s/78056/78506/
>
> 6. In description of sqlite_master:
>
> """[H30300] If the associated database table is a virtual table, the
> fourth field of the schema table record shall contain an SQL NULL  
> value."""
>
> Looks like an integer zero to me:
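
For anyone cross-checking items 1 and 2, plugging a 1024-byte usable size and
the customary embedded payload fractions of 64 and 32 (assumed defaults) into
the corrected formulas gives:

#include <stdio.h>

/* Pure integer arithmetic from the formulas quoted above. */
int main(void){
  int usable = 1024;
  int max_local = (usable - 12) * 64 / 255 - 23;
  int min_local = (usable - 12) * 32 / 255 - 23;
  printf("max-local = %d\n", max_local);   /* 230 */
  printf("min-local = %d\n", min_local);   /* 103 */
  return 0;
}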
