[sqlite] Thread safe in 3.2.5
I recently updated my system to version 3.2.5 from 3.0.8. I configured with --enable-threadsafe because I use multiple threads to access a single database. It had been working fine (it's the apr_dbd driver for the apr.apache.org project), but after the upgrade I started seeing error 21 (SQLITE_MISUSE, "library routine called out of sequence"). I noticed there was a change in the 3.2.4 version related to configuring threadsafe operation, stating that the threadsafe option wasn't working. I don't understand why it was changed; I hadn't had any problems until now. Any help would be appreciated.

Thanks,
Rick Keiner
Re: [sqlite] FAQ clarification
Thanks. I filed this as Ticket 1431 and requested they add your diagram to lockingv3.html.

Sam

Christian Smith wrote:
>On Fri, 16 Sep 2005, Amin Azez wrote:
>
>>FAQ 7
>>
>>(7) Can multiple applications or multiple instances of the same
>>application access a single database file at the same time?
>>
>>Multiple processes can have the same database open at the same time.
>>Multiple processes can be doing a SELECT at the same time. But only one
>>process can be making changes to the database at once.
>>
>>This answer fails to make clear whether or not the multiple readers can
>>read while the single writer is writing, or whether the writer blocks
>>the readers and the readers block pending writers (like mysql non-innodb
>>tables)
>>
>>Could someone please clarify this point.
>
>Locking in SQLite is detailed here:
>http://www.sqlite.org/lockingv3.html
>
>In summary:
>SQLite uses multiple readers/single writer locking. A writer can operate
>concurrently with readers until it is ready to commit or spill data from
>its cache. In this case, it waits for readers to finish, then gets an
>exclusive write lock and writes its data. Thus, the following concurrency
>is available to SQLite:
>
>           time >
>Reader  >-----|
>Reader    >-----|
>Reader      >------|
>Writer   >---c***|
>Reader         >***-----|
>
>Key:
>-  Executing query
>c  Commit
>*  Blocked by lock
>>  Start of query
>|  End of query
>
>The last reader above is blocked from starting by the writer until the
>writer commits. The writer, in turn, is blocked if it tries to commit
>before the earlier readers have finished.
>
>It might be worth raising a ticket (http://www.sqlite.org/cvstrac/tktnew)
>to have the FAQ reference the locking document.
>
>>Sam
>
>Christian
RE: [sqlite] determining number of 'used' pages?
Excellent! This is exactly what I am looking for. Thanks

> -----Original Message-----
> From: Dennis Jenkins [mailto:[EMAIL PROTECTED]]
> Sent: 16 September 2005 12:58
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] determining number of 'used' pages?
>
> Mark Allan wrote:
>
> >Hi,
> >
> >I am using SQLite on an embedded software product. The SQLite database
> >file is saved and read from a NOR flash chip. We have found that the
> >writing of data to this flash chip can be quite slow. Therefore we need
> >to minimise the writes that are made by SQLite.
> >
> >We have disabled the creation of the journal file when writing data to
> >the database and this has halved the write time. If anyone can suggest
> >any way we can further reduce the amount of file writes made, or speed
> >up the writing of data to the file system, these would be gratefully
> >received.
> >
> >The main area where we now have a performance problem, however, is with
> >deleting records. The problem is due to the need to 'vacuum' the
> >database when we delete records. We do this as we need to know the size
> >of the database file to show a capacity readout to the user. I have
> >been looking into the SQLite code to try and find if there is a way in
> >which I can read the number of 'used' pages. If I can determine the
> >number of used pages in the database then I can use this to generate
> >the capacity report and I can disable the vacuuming of the database,
> >which will save us much time (about 3-4 seconds!).
> >
> >Please can someone advise me as to how I can determine the number of
> >used pages? Looking at the vacuum code, it seems it needs to create a
> >temporary file and database to perform the vacuum. Is there a way I can
> >determine the number of 'used' pages without the overhead of having to
> >create a temporary database on the filesystem?
> >
> >Thanks in advance for your help.
> >
> >Mark
>
> I ported some code from sqlite2 to sqlite3 that will do what you want.
> You can get it from "http://unwg.no-ip.com/freepages.c". What you want
> is in the function at the very bottom of the file:
>
> int sqlite3_get_page_stats(sqlite3* db, long *lTotal, long *lFree,
> long *lSize);
>
> Happy hacking!
Re: [sqlite] sqlite on embedded board
Thank you, I'll try it!

Áron

> I can't tell what you've been doing, but probably
> what you need to do is something like
>
> 1. Do all your work in the memory db, without
> touching the flash db.
>
> 2. Then, just before you detach, update the flash
> db with
>
> insert [or replace] into flash.db ...
> select ... from memory.db ...
>
> Regards
Re: [sqlite] sqlite on embedded board
I can't tell what you've been doing, but probably what you need to do is something like:

1. Do all your work in the memory db, without touching the flash db.

2. Then, just before you detach, update the flash db with

insert [or replace] into flash.db ...
select ... from memory.db ...

Regards
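The two steps above can be sketched concretely with ATTACH. This is a minimal sketch using Python's sqlite3 module rather than the C API discussed elsewhere in the thread; the flash.db file name and the log table are illustrative, with main as the on-disk database and mem as the attached in-memory one:

```python
import sqlite3

# Open the (slow) on-disk database, then attach a fast in-memory one.
con = sqlite3.connect("flash.db")  # illustrative file name
con.execute("CREATE TABLE IF NOT EXISTS log(id INTEGER PRIMARY KEY, msg TEXT)")
con.execute("ATTACH DATABASE ':memory:' AS mem")
con.execute("CREATE TABLE mem.log(id INTEGER PRIMARY KEY, msg TEXT)")

# 1. Do all the work against the memory db, without touching the flash db.
for i in range(100):
    con.execute("INSERT INTO mem.log(msg) VALUES (?)", (f"entry {i}",))

# 2. Just before detaching, update the flash db in one statement, so the
#    slow medium sees a single burst of writes inside one transaction.
with con:
    con.execute("INSERT OR REPLACE INTO main.log SELECT * FROM mem.log")
con.execute("DETACH DATABASE mem")
```

The one-statement copy matters: all the flash writes happen in a single transaction instead of one per row.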
Re: [sqlite] FAQ clarification
Christian Smith wrote:
>>I went through the link you had sent. This page mentions 5 different
>>types of locks which are provided by the pager module in SQLite. Could
>>you please clarify these 2 doubts -
>>
>>1. My application uses our own mutex variables to allow single
>>reader/writer operation - this is no longer required.

I'd suggest you keep a wrapper between SQLite and your application though, with the option of locking out other instances of itself.

>Correct. SQLite handles locking and concurrency. But you must handle the
>case where you cannot execute because of a lock. Check out:
>
>http://www.sqlite.org/capi3ref.html#sqlite3_busy_handler
>http://www.sqlite.org/capi3ref.html#sqlite3_busy_timeout
>
>Alternatively, handle SQLITE_BUSY in your code to retry a failed query
>some time in the future.

According to past discussions on the list, there are instances where you'd get SQLITE_BUSY even if you have set the sqlite3_busy_timeout or a busy handler, so you always have to check for that return value. If you're using threads you also need to check for SQLITE_SCHEMA in that same loop (I am assuming you would retry on SCHEMA and BUSY errors). This is partly why a wrapper between SQLite and the application seems useful.

>>2. The 5 lock types mentioned on the documentation page are acquired by
>>processes/threads on their own, and as a programmer I can leave all
>>these details for the pager to handle.

You can manipulate the locking using the different transaction levels:

http://www.sqlite.org/lang_transaction.html

Regards,
Kervin
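A sketch of the retry-on-SQLITE_BUSY loop described above, using Python's sqlite3 module (where SQLITE_BUSY surfaces as an OperationalError with a "database is locked" message); the helper name and retry parameters are illustrative:

```python
import sqlite3
import time

def execute_with_retry(con, sql, params=(), retries=5, delay=0.1):
    """Run a statement, retrying while SQLite reports the database is locked.

    This mirrors checking for SQLITE_BUSY in C even when a busy handler
    or busy timeout is installed.
    """
    for attempt in range(retries):
        try:
            return con.execute(sql, params)
        except sqlite3.OperationalError as e:
            if "locked" not in str(e) or attempt == retries - 1:
                raise
            time.sleep(delay)  # back off, then retry the failed query

con = sqlite3.connect(":memory:")
con.execute("PRAGMA busy_timeout = 2000")  # analogue of sqlite3_busy_timeout()
con.execute("CREATE TABLE t(x)")
execute_with_retry(con, "INSERT INTO t VALUES (?)", (1,))
```

In a threaded application the same loop is also a natural place to handle SQLITE_SCHEMA-style errors, as suggested above.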
Re: [sqlite] Checkins 2694 and 2697 (order of columns in primary key)?
On Fri, 2005-09-16 at 07:34 -0400, Ned Batchelder wrote:
> I saw checkin 2694 ("The table_info pragma now gives the order of columns
> in the primary key"), and rejoiced. I currently have to parse the sql
> from sqlite_master to dig up this information myself. Then came checkin
> 2697 ("Undo check-in [2694]"). What happened?

That patch only works for tables with 255 or fewer columns in the primary key. Granted, that accounts for *most* tables. But I still do not like introducing arbitrary limits like that. And once I put something in a release, I'm obliged to support it forever. So I want to ponder the issue a bit more first.

--
D. Richard Hipp <[EMAIL PROTECTED]>
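For what it's worth, later SQLite releases did end up exposing this: in modern versions the pk column of PRAGMA table_info gives each column's 1-based position within the primary key (0 for non-key columns). A quick sketch with Python's bundled sqlite3, which ships a modern SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(a, b, c, PRIMARY KEY(b, a))")

# Each table_info row is (cid, name, type, notnull, dflt_value, pk);
# pk is the column's 1-based position in the primary key, 0 otherwise.
info = {row[1]: row[5] for row in con.execute("PRAGMA table_info(t)")}
# info == {'a': 2, 'b': 1, 'c': 0}: b comes first in the key, then a.
```

No parsing of sqlite_master needed once the pragma carries the ordering.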
Re: [sqlite] How to load a SQLite schema from file using SQLite C-API
>I have SQLite schema in an ascii file. I would like to be able to load
>this schema via the SQLite C-API. How do I do this?

You can look at the source code for the SQLite shell and see how it implements the .read command, but it may be simpler just to invoke the SQLite shell using system() or exec().

Regards
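A sketch of the read-and-execute approach, using Python's sqlite3 module (executescript plays the role that reading the file and passing its text to sqlite3_exec() would in C); the schema file here is created just for the example:

```python
import os
import sqlite3
import tempfile

# Write a toy schema file purely for the sake of the example.
schema_sql = "CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT);"
fd, path = tempfile.mkstemp(suffix=".sql")
with os.fdopen(fd, "w") as f:
    f.write(schema_sql)

# Load the schema: read the whole file and execute it as a script,
# essentially what the shell's .read command does.
con = sqlite3.connect(":memory:")
with open(path) as f:
    con.executescript(f.read())
os.unlink(path)
```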
[sqlite] Checkins 2694 and 2697 (order of columns in primary key)?
I saw checkin 2694 ("The table_info pragma now gives the order of columns in the primary key"), and rejoiced. I currently have to parse the sql from sqlite_master to dig up this information myself. Then came checkin 2697 ("Undo check-in [2694]"). What happened?

--Ned.
http://nedbatchelder.com
Re: [sqlite] determining number of 'used' pages?
Mark Allan wrote:
>Hi,
>
>I am using SQLite on an embedded software product. The SQLite database
>file is saved and read from a NOR flash chip. We have found that the
>writing of data to this flash chip can be quite slow. Therefore we need
>to minimise the writes that are made by SQLite.
>
>We have disabled the creation of the journal file when writing data to
>the database and this has halved the write time. If anyone can suggest
>any way we can further reduce the amount of file writes made, or speed
>up the writing of data to the file system, these would be gratefully
>received.
>
>The main area where we now have a performance problem, however, is with
>deleting records. The problem is due to the need to 'vacuum' the
>database when we delete records. We do this as we need to know the size
>of the database file to show a capacity readout to the user. I have been
>looking into the SQLite code to try and find if there is a way in which
>I can read the number of 'used' pages. If I can determine the number of
>used pages in the database then I can use this to generate the capacity
>report and I can disable the vacuuming of the database, which will save
>us much time (about 3-4 seconds!).
>
>Please can someone advise me as to how I can determine the number of
>used pages? Looking at the vacuum code, it seems it needs to create a
>temporary file and database to perform the vacuum. Is there a way I can
>determine the number of 'used' pages without the overhead of having to
>create a temporary database on the filesystem?
>
>Thanks in advance for your help.
>
>Mark

I ported some code from sqlite2 to sqlite3 that will do what you want. You can get it from "http://unwg.no-ip.com/freepages.c". What you want is in the function at the very bottom of the file:

int sqlite3_get_page_stats(sqlite3* db, long *lTotal, long *lFree, long *lSize);

Happy hacking!
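In later SQLite releases the same numbers became available directly as pragmas, with no temporary database and no VACUUM. A sketch using Python's sqlite3 module (PRAGMA freelist_count postdates the 3.2.x versions discussed here, so this assumes a reasonably modern library):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE blobs(data BLOB)")
con.executemany("INSERT INTO blobs VALUES (?)",
                [(b"x" * 4096,) for _ in range(50)])
con.commit()

page_size = con.execute("PRAGMA page_size").fetchone()[0]
total = con.execute("PRAGMA page_count").fetchone()[0]     # pages in the db
free = con.execute("PRAGMA freelist_count").fetchone()[0]  # unused pages
used_bytes = (total - free) * page_size  # capacity figure without a VACUUM

# Deleting rows grows the freelist instead of shrinking the file,
# which is exactly what these pragmas let you observe without vacuuming.
con.execute("DELETE FROM blobs")
con.commit()
free_after = con.execute("PRAGMA freelist_count").fetchone()[0]
```

The (total - free) * page_size figure gives a capacity readout while leaving VACUUM disabled.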
[sqlite] Database corruption problem
Anybody who is using SQLite database files that exceed 1GiB should take note of ticket #1432. http://www.sqlite.org/cvstrac/tktview?tn=1432 The problem has been fixed in CVS. Following some additional testing, I will be releasing version 3.2.6. This will hopefully occur before Monday morning. -- D. Richard Hipp <[EMAIL PROTECTED]>
Re: [sqlite] FAQ clarification
Hi Ritesh,

Please direct responses to the SQLite list, as others may be able to provide input as well. I'm a SQLite user as you are, not a one-man SQLite support team :) Inline.

On Fri, 16 Sep 2005, Ritesh Kapoor wrote:

>Hi Christian,
>
>I went through the link you had sent. This page mentions 5 different
>types of locks which are provided by the pager module in SQLite. Could
>you please clarify these 2 doubts -
>
>1. My application uses our own mutex variables to allow single
>reader/writer operation - this is no longer required.

Correct. SQLite handles locking and concurrency. But you must handle the case where you cannot execute because of a lock. Check out:

http://www.sqlite.org/capi3ref.html#sqlite3_busy_handler
http://www.sqlite.org/capi3ref.html#sqlite3_busy_timeout

Alternatively, handle SQLITE_BUSY in your code to retry a failed query some time in the future.

>2. The 5 lock types mentioned on the documentation page are acquired by
>processes/threads on their own, and as a programmer I can leave all
>these details for the pager to handle.

Yes. The lock states are internal to SQLite and completely opaque to the developer.

>Further, I am looking for some kind of example or documentation where
>the aggregate queries and these two functions are explained in more
>detail -
>sqlite3_create_function()
>sqlite3_aggregate_function()

There is comprehensive documentation on the website:

http://www.sqlite.org/docs.html

API reference:

http://www.sqlite.org/capi3ref.html

>Please let me know if you are aware of this.
>
>Thanks and Regards,
>Ritesh
>
>Christian Smith wrote:
>
>>On Fri, 16 Sep 2005, Amin Azez wrote:
>>
>>>FAQ 7
>>>
>>>(7) Can multiple applications or multiple instances of the same
>>>application access a single database file at the same time?
>>>
>>>Multiple processes can have the same database open at the same time.
>>>Multiple processes can be doing a SELECT at the same time. But only
>>>one process can be making changes to the database at once.
>>>
>>>This answer fails to make clear whether or not the multiple readers
>>>can read while the single writer is writing, or whether the writer
>>>blocks the readers and the readers block pending writers (like mysql
>>>non-innodb tables)
>>>
>>>Could someone please clarify this point.
>>
>>Locking in SQLite is detailed here:
>>http://www.sqlite.org/lockingv3.html
>>
>>In summary:
>>SQLite uses multiple readers/single writer locking. A writer can
>>operate concurrently with readers until it is ready to commit or spill
>>data from its cache. In this case, it waits for readers to finish, then
>>gets an exclusive write lock and writes its data. Thus, the following
>>concurrency is available to SQLite:
>>
>>           time >
>>Reader  >-----|
>>Reader    >-----|
>>Reader      >------|
>>Writer   >---c***|
>>Reader         >***-----|
>>
>>Key:
>>-  Executing query
>>c  Commit
>>*  Blocked by lock
>>>  Start of query
>>|  End of query
>>
>>The last reader above is blocked from starting by the writer until the
>>writer commits. The writer, in turn, is blocked if it tries to commit
>>before the earlier readers have finished.
>>
>>It might be worth raising a ticket
>>(http://www.sqlite.org/cvstrac/tktnew) to have the FAQ reference the
>>locking document.
>>
>>>Sam
>>
>>Christian

Christian

--
 /"\
 \ /  ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
  X                         - AGAINST MS ATTACHMENTS
 / \
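Since examples of sqlite3_create_function() and aggregates were requested: here is a sketch of a scalar and an aggregate user-defined function using Python's sqlite3 module, whose create_function/create_aggregate mirror the C API's xFunc and xStep/xFinal callbacks. The function names halve and spread are made up for the example:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Scalar UDF: the analogue of registering a function with
# sqlite3_create_function() in the C API.
con.create_function("halve", 1, lambda x: x / 2)

# Aggregate UDF: step()/finalize() mirror the C API's xStep/xFinal
# callbacks for aggregate functions.
class Spread:
    def __init__(self):
        self.lo = None
        self.hi = None

    def step(self, value):
        if self.lo is None or value < self.lo:
            self.lo = value
        if self.hi is None or value > self.hi:
            self.hi = value

    def finalize(self):
        return self.hi - self.lo

con.create_aggregate("spread", 1, Spread)

con.execute("CREATE TABLE nums(n)")
con.executemany("INSERT INTO nums VALUES (?)", [(2,), (8,), (5,)])
```

After registration the functions are usable in any SQL on that connection, e.g. SELECT halve(n), spread(n) FROM nums.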
[sqlite] determining number of 'used' pages?
Hi,

I am using SQLite on an embedded software product. The SQLite database file is saved and read from a NOR flash chip. We have found that the writing of data to this flash chip can be quite slow. Therefore we need to minimise the writes that are made by SQLite.

We have disabled the creation of the journal file when writing data to the database and this has halved the write time. If anyone can suggest any way we can further reduce the amount of file writes made, or speed up the writing of data to the file system, these would be gratefully received.

The main area where we now have a performance problem, however, is with deleting records. The problem is due to the need to 'vacuum' the database when we delete records. We do this as we need to know the size of the database file to show a capacity readout to the user. I have been looking into the SQLite code to try and find if there is a way in which I can read the number of 'used' pages. If I can determine the number of used pages in the database then I can use this to generate the capacity report and I can disable the vacuuming of the database, which will save us much time (about 3-4 seconds!).

Please can someone advise me as to how I can determine the number of used pages? Looking at the vacuum code, it seems it needs to create a temporary file and database to perform the vacuum. Is there a way I can determine the number of 'used' pages without the overhead of having to create a temporary database on the filesystem?

Thanks in advance for your help.

Mark
Re: [sqlite] weird problem with windows 2000
On Thu, 15 Sep 2005, dan greene wrote:

>I wrote a little program to test SQLite3; essentially, a program to add
>200 records to a table called notes with one column, subject. The loop I
>used to enter the rows is shown below:
>
>// time the additions
>t1 = clock();
>// add some rows
>for (i = 1; i < 201; i++)
>{
>    char *statement;
>    char buffer[50];
>    sprintf(buffer, "my subject%d", i);
>    statement = sqlite3_mprintf("insert into notes(subject) values('%q');", buffer);
>    stat = sqlite3_exec(db, statement, 0, 0, &errmsg);
>    if (stat != SQLITE_OK)
>    {
>        printf("insert error at i=%d: %s\n", i, errmsg);
>        sqlite3_free(errmsg);
>        sqlite3_free(statement);
>        break;
>    }
>    sqlite3_free(statement);
>} // for
>t2 = clock();
>printf("added %d records to notes\n", i - 1);
>printf("elapsed time: %d\n", (int)(t2 - t1));
>
>When I ran this program on the win2000 machine with NTFS, this loop took
>on the order of 25 seconds as reported in the second printf! When I ran
>the same program on my win98 machine, it took 1.1 seconds. Putting a
>BEGIN; and COMMIT; around the entire loop dropped the execution on both
>machines into the tens of milliseconds.
>
>Any thoughts as to what is happening on the WIN2000 NTFS machine to slow
>down the single inserts so drastically?

Basically, Windows 2000 is making sure your data is safe, whereas Win98 plays fast and loose. Try the same test, pulling the power cord out half way through, and see which box has an intact database.

SQLite will sync the journal to disk before modifying the database file on commit, so that it can recover in the above instance. This is a synchronous operation on Win2000, but appears to be asynchronous on Win98. Thus, Win2000 is waiting on the slow mechanical hard disk to actually write data, while Win98 is simply caching the write and not waiting on the hard disk. Win2000 is working correctly; Win98 is not.

That both have around the same execution speed in a single transaction is expected, as both do a single sync at the end, and thus the sync time becomes less significant in the running time of the whole program.

>Cheers
>Dan Greene

Christian

--
 /"\
 \ /  ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
  X                         - AGAINST MS ATTACHMENTS
 / \
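The effect Christian describes can be measured directly: the same 200 inserts, first one-per-transaction (a journal sync per statement) and then batched inside a single BEGIN/COMMIT (one sync in total). A sketch in Python's sqlite3 module rather than the C code quoted above; the demo.db file name is illustrative:

```python
import sqlite3
import time

# isolation_level=None puts the connection in autocommit mode, so each
# INSERT below is its own transaction unless we issue BEGIN explicitly.
con = sqlite3.connect("demo.db", isolation_level=None)  # illustrative name
con.execute("DROP TABLE IF EXISTS notes")
con.execute("CREATE TABLE notes(subject TEXT)")

# One transaction per insert: a journal sync for every statement.
t1 = time.perf_counter()
for i in range(1, 201):
    con.execute("INSERT INTO notes(subject) VALUES (?)", (f"my subject{i}",))
autocommit_secs = time.perf_counter() - t1

# The same 200 inserts inside a single transaction: one sync at COMMIT.
t1 = time.perf_counter()
con.execute("BEGIN")
for i in range(1, 201):
    con.execute("INSERT INTO notes(subject) VALUES (?)", (f"my subject{i}",))
con.execute("COMMIT")
batched_secs = time.perf_counter() - t1
```

On hardware with honest disk syncing, the batched timing should be dramatically smaller, which is exactly the BEGIN/COMMIT observation in the original post.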
Re: [sqlite] FAQ clarification
On Fri, 16 Sep 2005, Amin Azez wrote:

>FAQ 7
>
>(7) Can multiple applications or multiple instances of the same
>application access a single database file at the same time?
>
>Multiple processes can have the same database open at the same time.
>Multiple processes can be doing a SELECT at the same time. But only one
>process can be making changes to the database at once.
>
>This answer fails to make clear whether or not the multiple readers can
>read while the single writer is writing, or whether the writer blocks
>the readers and the readers block pending writers (like mysql non-innodb
>tables)
>
>Could someone please clarify this point.

Locking in SQLite is detailed here:

http://www.sqlite.org/lockingv3.html

In summary: SQLite uses multiple readers/single writer locking. A writer can operate concurrently with readers until it is ready to commit or spill data from its cache. In this case, it waits for readers to finish, then gets an exclusive write lock and writes its data. Thus, the following concurrency is available to SQLite:

           time >
Reader  >-----|
Reader    >-----|
Reader      >------|
Writer   >---c***|
Reader         >***-----|

Key:
-  Executing query
c  Commit
*  Blocked by lock
>  Start of query
|  End of query

The last reader above is blocked from starting by the writer until the writer commits. The writer, in turn, is blocked if it tries to commit before the earlier readers have finished.

It might be worth raising a ticket (http://www.sqlite.org/cvstrac/tktnew) to have the FAQ reference the locking document.

>Sam

Christian

--
 /"\
 \ /  ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
  X                         - AGAINST MS ATTACHMENTS
 / \
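The blocking shown in the diagram can be reproduced with two connections. A sketch using Python's sqlite3 module, where BEGIN IMMEDIATE takes the writer's RESERVED lock up front and a blocked lock attempt surfaces as OperationalError (SQLITE_BUSY in C):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "locking-demo.db")

writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE t(x)")

# A short timeout so the blocked attempt below fails quickly.
reader = sqlite3.connect(path, isolation_level=None, timeout=0.1)

# The writer takes a RESERVED lock and holds an open transaction.
writer.execute("BEGIN IMMEDIATE")
writer.execute("INSERT INTO t VALUES (1)")

# Readers still run concurrently with the uncommitted writer and see
# the pre-transaction state of the database.
rows = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # 0

# A second would-be writer, however, is blocked until the first commits.
blocked = False
try:
    reader.execute("BEGIN IMMEDIATE")
except sqlite3.OperationalError:
    blocked = True

writer.execute("COMMIT")
```

This is the multiple-readers/single-writer behaviour summarized above: the SELECT proceeds while the write transaction is open, but the second write attempt waits on the lock.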
[sqlite] FAQ clarification
FAQ 7:

(7) Can multiple applications or multiple instances of the same application access a single database file at the same time?

Multiple processes can have the same database open at the same time. Multiple processes can be doing a SELECT at the same time. But only one process can be making changes to the database at once.

This answer fails to make clear whether or not the multiple readers can read while the single writer is writing, or whether the writer blocks the readers and the readers block pending writers (like mysql non-innodb tables).

Could someone please clarify this point.

Sam