Re: [sqlite] nfs 'sillynames'
a followup on this post: i was running some straces on sqlite while it was using transactions and think i may have found the source of the problem. we see this in the strace output:

    close(5) = 0
    unlink("/dmsp/moby-1-1/ahoward/shared/silly/db-journal") = 0

i think what is happening is:

    close(5) = 0
        <-- here a remote client opens db-journal
    unlink("/dmsp/moby-1-1/ahoward/shared/silly/db-journal") = 0

I think this file is created every time you start a transaction, maybe even if you only perform read-only commands. The journal is what is used to keep track of the changes to the database. Anyone else have any thoughts?

John LeSueur
Re: [sqlite] nfs 'sillynames'
On Fri, 3 Sep 2004, Ara.T.Howard wrote:

> if you are unfamiliar with nfs sillynames: they occur when a file that is
> open on one client is removed or renamed on another. i am seeing a lot of
> these appear in an NFS directory i'm using to store a sqlite database
> accessed by many clients.
>
> the access protocol is a meta-transaction wrapped around an actual sqlite
> transaction, using an additional empty lockfile (db.lock). this is to
> ensure single-writer/multiple-reader semantics for the entire network, e.g.
>
>   lock_type = read   # or perhaps write
>   acquire fcntl lock of lock_type on db.lock
>   open db.lock
>   start_transaction(lock_type)
>   db.execute(sql)
>   end_transaction
>   close db.lock
>   release fcntl lock
>
> NONE of the applications have exited for about 30 days. ALL of the
> applications are acquiring write locks on db.lock, so only one process at a
> time is accessing the db. NONE of the users removes, renames, etc. the
> file: only reads and writes are done via the sqlite api. the semantics are
> definitely single writer (and potentially many readers); if this were not
> so the application would crash horribly almost instantly. the api i'm using
> throws an exception if it gets SQLITE_BUSY, and i have no busy handler, so
> i am positive that the locking works, and that no readers ever attempt to
> write (upgrade lock) and vice versa.
>
> i'm simply mystified as to what's creating the sillynames. they all appear
> to be the product of the sqlite api: opening them up in vi shows them to be
> binary files that are obviously part of the database, since i can recognize
> many strings from the database in them. this worries me: it seems to imply
> that sqlite, at some point, does a rename or remove while some remote
> client has an open file handle. could this be because ALL of my operations
> (even reads) are inside a transaction?
>
> more info... the sillynamed files are only ever a minute or so old and
> disappear almost as quickly (this would be after the last client called a
> close).
> the application is working great but i'd like to understand this as it
> concerns me somewhat. the sqlite lib version is the latest 2.8 branch. the
> nfs server/client impl are the latest patched redhat enterprise versions.
> kind regards.

a followup on this post: i was running some straces on sqlite while it was using transactions and think i may have found the source of the problem. we see this in the strace output:

    close(5) = 0
    unlink("/dmsp/moby-1-1/ahoward/shared/silly/db-journal") = 0

i think what is happening is:

    close(5) = 0
        <-- here a remote client opens db-journal
    unlink("/dmsp/moby-1-1/ahoward/shared/silly/db-journal") = 0

and the sillyname is created, only to disappear when the remote client closes the db-journal. the locking seems like it should prevent this, but the files themselves are definitely subsets of the original, so i'm thinking there is some race condition here.

-a
--
===
| EMAIL :: Ara [dot] T [dot] Howard [at] noaa [dot] gov
| PHONE :: 303.497.6469
| A flower falls, even though we love it;
| and a weed grows, even though we do not love it.
| --Dogen
===
RE: [sqlite] One more question on the C API performance
Regarding: "... why the complex query takes x milliseconds to execute from the sqlite command line but it takes 15 seconds with the APIs?"

If you were to double the timeout limit, does the query then require 30 seconds? If so, that would certainly point to a timeout you're experiencing.

Donald Griggs
Desk: 803-735-7834
Opinions are not necessarily those of Misys Healthcare Systems nor its board of directors.
Re: [sqlite] One more question on the C API performance
"Zahraie Ramin-p96152" <[EMAIL PROTECTED]> writes:

>> Did you try simpler queries? (select * from dvrpAudioFileTable)?
>> Do you see such times in that case as well?
>>
>> Paolo
>
> Thanks for the suggestion. This query took only 180 milliseconds. The
> question is why the complex query takes x milliseconds to execute from the
> sqlite command line but it takes 15 seconds with the APIs?

The sqlite command line tool *uses* the APIs. Why don't you take a look at the source for the sqlite command line tool and see what it's doing differently than you are? There's clearly _something_ different; it's just a matter of tracking it down. You should find the code remarkably similar to what you're already doing.

Derrell
Re: [sqlite] SQLite & stack size
It's not really (or not only) Windows that is the culprit, but rather compilers that implement lazy memory managers. In my experience, Borland had rather slow allocation functions, always beaten by Microsoft (sorry :( ). There are also C compilers, like LCC, that (as I'm told) do not implement any memory manager but pass the calls straight through to the OS. As far as I know, the MS Visual C memory manager is quite fast. If one has problems with the mallocs, maybe they could be placed inside ifdefs to use one style or another.

----- Original Message -----
From: "D. Richard Hipp" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, September 03, 2004 8:43 PM
Subject: Re: [sqlite] SQLite & stack size

> Christian Smith wrote:
> >
> > How often does the balancer run?
> >
>
> Not so much, it turns out. Long ago, it used to run a lot
> more often and was a high runner. But I've since optimized
> it out of a lot of situations.
>
> So allocating with malloc() isn't a big performance hit
> after all (at least not on systems where malloc performs
> well - I haven't tried it on windows...) I've checked in
> the changes so that btree.c now allocates all of its big
> temporary data structures using malloc instead of allocating
> them off of the stack.
>
> Let me know if you see any problems
>
> --
> D. Richard Hipp -- [EMAIL PROTECTED] -- 704.948.4565
[sqlite] nfs 'sillynames'
if you are unfamiliar with nfs sillynames: they occur when a file that is open on one client is removed or renamed on another. i am seeing a lot of these appear in an NFS directory i'm using to store a sqlite database accessed by many clients.

the access protocol is a meta-transaction wrapped around an actual sqlite transaction, using an additional empty lockfile (db.lock). this is to ensure single-writer/multiple-reader semantics for the entire network, e.g.

  lock_type = read   # or perhaps write
  acquire fcntl lock of lock_type on db.lock
  open db.lock
  start_transaction(lock_type)
  db.execute(sql)
  end_transaction
  close db.lock
  release fcntl lock

NONE of the applications have exited for about 30 days. ALL of the applications are acquiring write locks on db.lock, so only one process at a time is accessing the db. NONE of the users removes, renames, etc. the file: only reads and writes are done via the sqlite api. the semantics are definitely single writer (and potentially many readers); if this were not so the application would crash horribly almost instantly. the api i'm using throws an exception if it gets SQLITE_BUSY, and i have no busy handler, so i am positive that the locking works, and that no readers ever attempt to write (upgrade lock) and vice versa.

i'm simply mystified as to what's creating the sillynames. they all appear to be the product of the sqlite api: opening them up in vi shows them to be binary files that are obviously part of the database, since i can recognize many strings from the database in them. this worries me: it seems to imply that sqlite, at some point, does a rename or remove while some remote client has an open file handle. could this be because ALL of my operations (even reads) are inside a transaction?

more info... the sillynamed files are only ever a minute or so old and disappear almost as quickly (this would be after the last client called a close). the application is working great but i'd like to understand this as it concerns me somewhat.
the sqlite lib version is the latest 2.8 branch. the nfs server/client impl are the latest patched redhat enterprise versions. kind regards.

-a
--
===
| EMAIL :: Ara [dot] T [dot] Howard [at] noaa [dot] gov
| PHONE :: 303.497.6469
| A flower falls, even though we love it;
| and a weed grows, even though we do not love it.
| --Dogen
===
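The meta-transaction protocol described above can be sketched with POSIX fcntl locks. This is a minimal illustration, not the poster's actual code: the function names and error handling are invented, and the sqlite transaction itself is elided.

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <stdexcept>
#include <string>

// Acquire a whole-file fcntl lock (F_RDLCK for readers, F_WRLCK for the
// single writer) on the lock file, blocking until it is granted.
// Returns the open descriptor; the lock is held until release_db_lock().
int acquire_db_lock(const std::string& path, short lock_type) {
    int fd = open(path.c_str(), O_RDWR | O_CREAT, 0644);
    if (fd < 0) throw std::runtime_error("cannot open lock file");
    struct flock fl = {};
    fl.l_type = lock_type;
    fl.l_whence = SEEK_SET;  // l_start = 0, l_len = 0: lock the entire file
    if (fcntl(fd, F_SETLKW, &fl) < 0) {  // the W variant blocks until granted
        close(fd);
        throw std::runtime_error("fcntl lock failed");
    }
    return fd;
}

// Drop the lock and close the descriptor. Closing alone would also release
// the lock, but the explicit unlock makes the intent clear.
void release_db_lock(int fd) {
    struct flock fl = {};
    fl.l_type = F_UNLCK;
    fl.l_whence = SEEK_SET;
    fcntl(fd, F_SETLK, &fl);
    close(fd);
}
```

A reader would call acquire_db_lock(path, F_RDLCK), run its sqlite transaction, then release; the writer does the same with F_WRLCK.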
RE: [sqlite] One more question on the C API performance
> -----Original Message-----
> From: Paolo Vernazza [mailto:[EMAIL PROTECTED]
> Sent: Friday, September 03, 2004 2:03 PM
> To: [EMAIL PROTECTED]
> Subject: Re: [sqlite] One more question on the C API performance
>
> > The query takes about 19 seconds at best to complete. Does
> > anyone know why there is such a discrepancy?
> >
> > Thanks very much in advance.
>
> Did you try simpler queries? (select * from dvrpAudioFileTable)?
> Do you see such times in that case as well?
>
> Paolo

Hi Paolo,

Thanks for the suggestion. This query took only 180 milliseconds. The question is why the complex query takes x milliseconds to execute from the sqlite command line but it takes 15 seconds with the APIs?

Thanks again and take care.
RZ
Re: [sqlite] One more question on the C API performance
> The query takes about 19 seconds at best to complete. Does anyone know why
> there is such a discrepancy?
>
> Thanks very much in advance.

Did you try simpler queries? (select * from dvrpAudioFileTable)? Do you see such times in that case as well?

Paolo
RE: [sqlite] One more question on the C API performance
> > Thanks again for the reply. That very well may be! The callback puts the
> > column data in a std::map and then the std::map in a std::vector. This is
> > so column and row data can be easily accessed. That very well may be the
> > bottleneck. I will try to come up with another way of doing this.
>
> the std template library is a hog in this respect. if you trace the ctors
> you'll see an insertion into a std container results in three of them. so
> if you have them nested three deep you'll call 9 ctors for each insert.
>
> at least that's how it used to be...

Thanks for the reply, Ara. I performed some tests and unfortunately the STL was not the culprit. For some reason, as I stated in another message, it takes 15 seconds to receive a callback after an 'sqlite_exec' call is made.

Regards
RE: [sqlite] One more question on the C API performance
On Fri, 3 Sep 2004, Zahraie Ramin-p96152 wrote:

> > -----Original Message-----
> > From: John LeSueur [mailto:[EMAIL PROTECTED]
> > Sent: Friday, September 03, 2004 11:09 AM
> > To: [EMAIL PROTECTED]
> > Subject: Re: [sqlite] One more question on the C API performance
> >
> > > 1000 rows of data return instantaneously. However when this very query
> > > is executed through the use of the C API, using code very similar to
> > > this:
> > >
> > > http://www.hwaci.com/sw/sqlite/quickstart.html
> > >
> > > the query takes about 19 seconds at best to complete. Does anyone know
> > > why there is such a discrepancy?
> >
> > What does your callback do? Is it possible that's the bottleneck?
>
> Hi John
>
> Thanks again for the reply. That very well may be! The callback puts the
> column data in a std::map and then the std::map in a std::vector. This is
> so column and row data can be easily accessed. That very well may be the
> bottleneck. I will try to come up with another way of doing this.

the std template library is a hog in this respect. if you trace the ctors you'll see an insertion into a std container results in three of them. so if you have them nested three deep you'll call 9 ctors for each insert.

at least that's how it used to be...

-a
--
===
| EMAIL :: Ara [dot] T [dot] Howard [at] noaa [dot] gov
| PHONE :: 303.497.6469
| A flower falls, even though we love it;
| and a weed grows, even though we do not love it.
| --Dogen
===
RE: [sqlite] One more question on the C API performance
> -----Original Message-----
> From: Zahraie Ramin-p96152
> Sent: Friday, September 03, 2004 11:18 AM
> To: [EMAIL PROTECTED]
> Subject: RE: [sqlite] One more question on the C API performance
>
> > -----Original Message-----
> > From: John LeSueur [mailto:[EMAIL PROTECTED]
> > Sent: Friday, September 03, 2004 11:09 AM
> > To: [EMAIL PROTECTED]
> > Subject: Re: [sqlite] One more question on the C API performance
> >
> > > 1000 rows of data return instantaneously. However when this very query
> > > is executed through the use of the C API, using code very similar to
> > > this:
> > >
> > > http://www.hwaci.com/sw/sqlite/quickstart.html
> > >
> > > the query takes about 19 seconds at best to complete. Does anyone know
> > > why there is such a discrepancy?
> >
> > What does your callback do? Is it possible that's the bottleneck?
>
> Hi John
>
> Thanks again for the reply. That very well may be! The callback puts the
> column data in a std::map and then the std::map in a std::vector. This is
> so column and row data can be easily accessed. That very well may be the
> bottleneck. I will try to come up with another way of doing this.

John,

Unfortunately STL is not the problem. It takes approximately 15 seconds from the time 'sqlite_exec' is executed to the time when the first callback is received.

Regards
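The callback layout discussed in this thread can be reproduced without linking sqlite at all, which makes the per-row copying cost easy to measure in isolation. A sketch under stated assumptions: the type aliases and function name are invented, but the callback signature matches the shape sqlite_exec() expects.

```cpp
#include <map>
#include <string>
#include <vector>

// One row per std::map (keyed by column name), all rows collected in a
// std::vector, as described in the thread. Names here are illustrative.
using Row = std::map<std::string, std::string>;
using RowSet = std::vector<Row>;

// Same shape as the sqlite_exec() callback:
// (user pointer, column count, column values, column names).
int collect_row(void* user, int argc, char** argv, char** col_names) {
    RowSet* rows = static_cast<RowSet*>(user);
    Row row;
    for (int i = 0; i < argc; ++i)
        row[col_names[i]] = argv[i] ? argv[i] : "";  // SQL NULLs become ""
    rows->push_back(row);  // copies the whole map: the suspected (and here
                           // ruled out) per-row cost of this layout
    return 0;              // non-zero would abort sqlite_exec()
}
```

Feeding this callback a few thousand synthetic rows and timing it is one way to confirm, as the poster did, that the STL copies are not where the 15 seconds go.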
Re: [sqlite] SQLite & stack size
Christian Smith wrote:

> How often does the balancer run?

Not so much, it turns out. Long ago, it used to run a lot more often and was a high runner. But I've since optimized it out of a lot of situations.

So allocating with malloc() isn't a big performance hit after all (at least not on systems where malloc performs well - I haven't tried it on Windows...). I've checked in the changes so that btree.c now allocates all of its big temporary data structures using malloc instead of allocating them off of the stack.

Let me know if you see any problems.

--
D. Richard Hipp -- [EMAIL PROTECTED] -- 704.948.4565
RE: [sqlite] One more question on the C API performance
> -----Original Message-----
> From: John LeSueur [mailto:[EMAIL PROTECTED]
> Sent: Friday, September 03, 2004 11:09 AM
> To: [EMAIL PROTECTED]
> Subject: Re: [sqlite] One more question on the C API performance
>
> > 1000 rows of data return instantaneously. However when this very query is
> > executed through the use of the C API, using code very similar to this:
> >
> > http://www.hwaci.com/sw/sqlite/quickstart.html
> >
> > the query takes about 19 seconds at best to complete. Does anyone know
> > why there is such a discrepancy?
> >
> > Thanks very much in advance.
> >
> > Regards
>
> What does your callback do? Is it possible that's the bottleneck?

Hi John,

Thanks again for the reply. That very well may be! The callback puts the column data in a std::map and then the std::map in a std::vector. This is so column and row data can be easily accessed. That very well may be the bottleneck. I will try to come up with another way of doing this.

Thanks.
RZ
Re: [sqlite] One more question on the C API performance
> 1000 rows of data return instantaneously. However when this very query is
> executed through the use of the C API, using code very similar to this:
>
> http://www.hwaci.com/sw/sqlite/quickstart.html
>
> the query takes about 19 seconds at best to complete. Does anyone know why
> there is such a discrepancy?
>
> Thanks very much in advance.
>
> Regards

What does your callback do? Is it possible that's the bottleneck?

John LeSueur
[sqlite] One more question on the C API performance
Hello

As you may recall, yesterday I asked for help on optimizing a query. I received a series of outstanding suggestions that I incorporated, which made a great difference. As a matter of review, here is my schema:

Create TABLE dvrpAudioFileTable(fileId varchar(256), filePath varchar(256), timeRangeStartSeconds INTEGER, timeRangeStartuSeconds INTEGER, timeRangeStartTotal FLOAT, timeRangeEndSeconds INTEGER, timeRangeEnduSeconds INTEGER, timeRangeEndTotal FLOAT);
Create TABLE dvrpAudioSourceTable(sourceIndex VARCHAR(256), dsuIPAddress INTEGER, dsuPortNumber INTEGER, sourceType SMALLINT, workStationIpAddress INTEGER, sourceName VARCHAR(256));
Create TABLE dvrpDataFileTable(fileId varchar(256), filePath varchar(256), timeRangeStartSeconds INTEGER, timeRangeStartuSeconds INTEGER, timeRangeStartTotal FLOAT, timeRangeEndSeconds INTEGER, timeRangeEnduSeconds INTEGER, timeRangeEndTotal FLOAT);
Create Table dvrpIndexTable(sourceIndex VARCHAR(256), fileId INTEGER);

CREATE INDEX FILEIDX ON dvrpAudioFileTable(fileId);
CREATE INDEX SOURCEIDX ON dvrpAudioSourceTable(sourceIndex);
CREATE INDEX SOURCEINDEX ON dvrpIndexTable(fileId, sourceIndex);
CREATE INDEX TIMEINDEX on dvrpAudioFileTable(timeRangeStartTotal, timeRangeEndTotal);

The indices were created after a series of tests based on the suggestions I received yesterday. This set seems to give the best performance. When I execute the following query from the sqlite command line:

select a.sourceName, a.dsuIPAddress, a.dsuPortNumber, a.sourceType, a.workStationIpAddress,
       b.filePath, b.timeRangeStartSeconds, b.timeRangeStartuSeconds, b.timeRangeEndSeconds, b.timeRangeEnduSeconds
from dvrpAudioFileTable b, dvrpIndexTable c, dvrpAudioSourceTable a
where b.timeRangeStartTotal >= 9 and b.timeRangeEndTotal <= 9000
  and c.fileId = b.fileId and a.sourceIndex = c.sourceIndex
order by a.sourceIndex, b.fileId;

1000 rows of data return instantaneously.
However, when this very query is executed through the use of the C API, using code very similar to this:

http://www.hwaci.com/sw/sqlite/quickstart.html

the query takes about 19 seconds at best to complete. Does anyone know why there is such a discrepancy?

Thanks very much in advance.

Regards
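One way to narrow down where the 19 seconds go is to time the call into the API separately from the work done inside the callback. A small, hedged sketch: the callable being timed stands in for the sqlite_exec() call, and no sqlite linkage is assumed here.

```cpp
#include <chrono>

// Returns wall-clock time, in milliseconds, spent in the callable.
// Wrapping the sqlite_exec() call in one of these, and the callback body
// in another, separates query cost from row-handling cost.
template <typename F>
long long time_ms(F&& work) {
    auto t0 = std::chrono::steady_clock::now();
    work();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
}
```

If the wrapper around sqlite_exec() reports ~19 s while the summed callback time is tiny, the cost is in query execution (plans, timeouts, locks) rather than in the caller's row handling.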
Re: [sqlite] SQLite & stack size
On 3-Sep-04, at 9:31 AM, Christian Smith wrote:

> How often does the balancer run? Could the space for the balance routine be
> allocated with the sqlite structure? It is opaque anyway, and is allocated
> only once, so there should be no penalty in performance over the current
> stack implementation, and only a single thread should ever be using a
> sqlite structure at a time.

Or you could set the thread stack size to something reasonable when you create it, which accomplishes the same thing.

ck
Re: [sqlite] SQLite & stack size
On Fri, 3 Sep 2004, D. Richard Hipp wrote:

> b.bum wrote:
> > What changed in SQLite3 such that the stack size is significantly larger
> > for common operations?
>
> The balance routine needs 10x the page size of space to do its
> job. That space needs to come from somewhere. I chose the stack
> since the stack is readily at hand. Other options include
> malloc(), with a performance penalty, or alloca(), which does
> not work right on many systems.

How often does the balancer run? Could the space for the balance routine be allocated with the sqlite structure? It is opaque anyway, and is allocated only once, so there should be no penalty in performance over the current stack implementation, and only a single thread should ever be using a sqlite structure at a time.

Christian
--
/"\
\ / ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X  - AGAINST MS ATTACHMENTS
/ \
Re: [sqlite] SQLite & stack size
Thank you for the precise explanation -- very helpful.

On Sep 3, 2004, at 8:19 AM, D. Richard Hipp wrote:

> alloca(), which does not work right on many systems.

... and would be on the stack anyway.

b.bum
Re: [sqlite] SQLite & stack size
b.bum wrote:

> What changed in SQLite3 such that the stack size is significantly larger
> for common operations?

Version 3 supports databases with different page sizes. The routine in question (the "balance" routine in btree.c) has always needed stack space that is roughly 10x the size of one database page. In version 2, the database page size was fixed at compile-time to 1024 bytes, so the stack space needed was about 10K. Version 3 can read a database with any page size up to SQLITE_MAX_PAGE_SIZE, currently set to 8192, IIRC. So we need about 80K of space in the balancer. You can raise SQLITE_MAX_PAGE_SIZE as large as 65536 if you want, but then you really need a lot of stack...

The balance routine needs 10x the page size of space to do its job. That space needs to come from somewhere. I chose the stack since the stack is readily at hand. Other options include malloc(), with a performance penalty, or alloca(), which does not work right on many systems.

--
D. Richard Hipp -- [EMAIL PROTECTED] -- 704.948.4565
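The 10x rule above works out as follows; a tiny sketch of the arithmetic (the helper name is made up, not SQLite's):

```cpp
// Rough stack requirement of the balance routine: ~10x one database page,
// per the explanation above.
constexpr long balancer_stack_bytes(long page_size) {
    return 10 * page_size;
}

// SQLite 2, fixed 1024-byte pages:              ~10 KB of stack
// SQLite 3 default SQLITE_MAX_PAGE_SIZE (8192): ~80 KB
// Maximum allowed page size (65536):            ~640 KB
```

This is why a 64 KB thread stack that was comfortable under version 2 overflows under version 3's default build.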
[sqlite] [OT] Shell globbing - Re: [sqlite] trying to compile SQLite
On Fri, 3 Sep 2004, Greg Miller wrote:

> Christian Smith wrote:
>
> > I found it funny, while looking through Dr Dobbs journal some time ago,
> > that a columnist (Al Stevens, I think!) was surprised that under UNIX,
> > such things as filename globbing are done by the shell, and all main()
> > usually gets is a list of valid filenames. Under DOS and Windows, given:
> >
> >   C:\Temp> grep foo *
> >
> > the '*' would have to be interpreted and expanded by the grep program,
> > but under UNIX, the list of files is generated by the shell. Of course,
> > you can get decent shells on Windows as well, but generally only as
> > ports of UNIX shells.
>
> I guess the UNIX folks just didn't know any better way back then.
> Putting globbing in the API instead of the shell is a much better
> approach, but that wasn't all that obvious when UNIX first came along.

You condone the DOS/Windows (lack of) approach? Put it in a library, yes. But it is still best done by the shell (IMHO). The shell is written once, so for the common case, globbing only has to be implemented in one place. Having every utility do its own globbing is code reuse for the sake of code reuse. I prefer just having a list of files the user wants manipulated.

Christian
--
/"\
\ / ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X  - AGAINST MS ATTACHMENTS
/ \
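The "put it in a library" option does in fact exist on UNIX too: POSIX provides glob(3), so a program that wants DOS-style expansion of its own arguments can get it without reimplementing anything. A minimal sketch (the wrapper name is illustrative):

```cpp
#include <glob.h>
#include <string>
#include <vector>

// Expands a pattern the way the shell would, via the POSIX glob(3)
// library call. Returns an empty vector when nothing matches.
std::vector<std::string> expand_pattern(const std::string& pattern) {
    std::vector<std::string> names;
    glob_t g = {};
    if (glob(pattern.c_str(), 0, nullptr, &g) == 0) {
        for (size_t i = 0; i < g.gl_pathc; ++i)
            names.push_back(g.gl_pathv[i]);  // matches, sorted by default
    }
    globfree(&g);
    return names;
}
```

So the argument here is not library-vs-impossible, but library-vs-shell: the shell expands patterns once, for every program, before main() ever runs.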
[sqlite] SQLite & stack size
On Sep 2, 2004, at 11:41 PM, Jakub Adamek wrote:

> Nuno, I am much surprised that version 3.0.5 helped you. It didn't help me,
> neither my port nor yours. BUT your remark helped me! You are right that it
> is because of stack space, and the default setting in Windows CE projects
> is 0x10000, i.e. 65 kB. After changing it to 0x100000, i.e. 1 MB, my test
> program, which first added 1000 rows of size 0..2000, then deleted all of
> them and created 1000 tables, works fine. Did you perhaps also change this
> setting?

That is an interesting observation, and CE is not the only platform to have a stack-size related issue with SQLite 3. On Mac OS X, threads started via NSThread have a relatively limited stack size. While it didn't appear to fail with SQLite2, it is quite easy to cause SQLite3 to overflow an NSThread's stack. The workaround is to create a pthread by hand, setting the stack size appropriately along the way.

The larger question is what changed in SQLite3 such that the stack size is significantly larger for common operations? I would imagine that this could be of concern for the embedded market.

b.bum
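The pthread workaround mentioned above might look like this. The 1 MB figure and function names are illustrative, and the thread body is a stub where the SQLite work would go:

```cpp
#include <pthread.h>

// Stub for the thread body; in the real workaround this is where the
// database would be opened and the deep-stack SQLite 3 calls made.
static void* db_worker(void*) {
    return nullptr;
}

// Creates the thread by hand with an explicit 1 MB stack, instead of the
// small default some thread APIs (e.g. NSThread) give you, then joins it.
bool run_on_big_stack() {
    pthread_attr_t attr;
    if (pthread_attr_init(&attr) != 0) return false;
    pthread_attr_setstacksize(&attr, 1024 * 1024);  // 1 MB
    pthread_t tid;
    int rc = pthread_create(&tid, &attr, db_worker, nullptr);
    pthread_attr_destroy(&attr);
    if (rc != 0) return false;
    return pthread_join(tid, nullptr) == 0;
}
```

The same idea applies on Windows CE: the linker's stack-reserve setting plays the role of pthread_attr_setstacksize() there.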
Re: [sqlite] sqlite3_busy_timeout() not working with 3.0.x?
On Fri, Sep 03, 2004 at 09:20:05AM -0400, D. Richard Hipp wrote: > > It does if you put the timeout on pDb2. You're right. That's what I get for hacking up test cases at 1 AM. Thanks, Matt
Re: [sqlite] Is version 3 seriously broken?
Nuno, hurrah, it works on first tests. I will run more later. Clever idea! I am now looking forward to the release... Thanks for your work.

Jakub

Nuno Lucas wrote:

> Jakub Adamek, jumping for joy, wrote:
> > Nuno, I am much surprised that version 3.0.5 helped you. It didn't help
> > me, neither my port nor yours. BUT your remark helped me! You are right
> > that it is because of stack space, and the default setting in Windows CE
> > projects is 0x10000, i.e. 65 kB. After changing it to 0x100000, i.e.
> > 1 MB, my test program, which first added 1000 rows of size 0..2000, then
> > deleted all of them and created 1000 tables, works fine. Did you perhaps
> > also change this setting?
>
> Now that you talk about it, I remembered an irritating bug in VC++ that
> forces us to rebuild all after changing linking options (clean + build
> doesn't work). You are right, it's the same...
>
> But the good news is that I found the cure (well, thinking about it, I
> should have remembered it earlier). There is a SQLITE_MAX_PAGE_SIZE
> constant exactly for the sake of embedded systems. I committed a new
> version where I defined that to be 1024 (instead of the default of 8192)
> and it works now. I tried 2048, but it isn't low enough either. If someone
> wants more, they will have to increase the stack size in the linking
> options, and that should be done by someone who understands what they
> want to do.
>
> > Are you going to merge 3.0.6?
>
> Already done and committed, stay tuned for the release...
>
> Regards,
> ~Nuno Lucas
Re: [sqlite] temp files
On Fri, 03 Sep 2004 17:07:36 +0300, Dmytro Bogovych <[EMAIL PROTECTED]> wrote:

> On Fri, 03 Sep 2004 09:12:31 -0400, D. Richard Hipp <[EMAIL PROTECTED]> wrote:
> > Unable to reproduce. I put a breakpoint on sqlite3pager_opentemp() and
> > did lots of UPDATEs in the style shown, but no temporary file was ever
> > opened.
>
> I've attached the database files and more additional info.

Seems attachments are not permitted in this mailing list. I've posted them (7 and 1.4 KB each) to:

  http://www.quickoutliner.com/Untitled.qof
  http://www.quickoutliner.com/Untitled.qof-journal

--
With best regards,
Dmytro Bogovych
Re: [sqlite] temp files
On Fri, 03 Sep 2004 09:12:31 -0400, D. Richard Hipp <[EMAIL PROTECTED]> wrote:

> Unable to reproduce. I put a breakpoint on sqlite3pager_opentemp() and did
> lots of UPDATEs in the style shown, but no temporary file was ever opened.

I've attached the database files and more additional info. If that does not help, I'll try to make a minimal test to reproduce this behaviour. However, the current database is very small. Here is a log of the SQL statements (obtained from sqlite3_trace()):

  PRAGMA synchronous = NORMAL
  PRAGMA cache_size = 1000
  PRAGMA temp_store = MEMORY
  BEGIN;
  select parent, child, number from TREE where parent is null order by number
  COMMIT;
  BEGIN;
  select caption, encrypted, compressed, created, modified, type, children, refcount, origsize from ITEMS where ROWID=?
  COMMIT;
  BEGIN;
  select blob_data from CFG where id = ?
  COMMIT;
  BEGIN;
  select child from TREE where parent=? order by number
  select caption, encrypted, compressed, created, modified, type, children, refcount, origsize from ITEMS where ROWID=?
  COMMIT;
  BEGIN;
  select caption, encrypted, compressed, created, modified, type, children, refcount, origsize from ITEMS where ROWID=?
  COMMIT;
  BEGIN;
  select caption, data, encrypted, compressed, created, modified, type, children, refcount, origsize from ITEMS where ROWID=?
  COMMIT;
  BEGIN;
  select caption, data, encrypted, compressed, created, modified, type, children, refcount, origsize from ITEMS where ROWID=?
  COMMIT;
  BEGIN;
  insert into ITEMS (caption, data, encrypted, compressed, created, modified, type, children) values (?, ?, ?, ?, ?, ?, ?, ?)
  update TREE set number = number + 1 where child = ? and parent = ? and number > ?

Here I hit the breakpoint on sqlite3pager_opentemp. The call stack is:

  vdbeapi.c, line 159, sqlite3_step:           rc = sqlite3VdbeExec(p);
  vdbe.c, line 2114, sqlite3VdbeExec:          rc = sqlite3BtreeBeginStmt(pBt);
  btree.c, line 1459, sqlite3BtreeBeginStmt:   rc = pBt->readOnly ? SQLITE_OK : sqlite3pager_stmt_begin(pBt->pPager);
  pager.c, line 2952, sqlite3pager_stmt_begin: rc = sqlite3pager_opentemp(zTemp, &pPager->stfd);

And the breakpoint in sqlite3pager_opentemp was triggered. In any case SQLite is great. Thank you!

--
With best regards,
Dmytro Bogovych
http://www.quickoutliner.com/
Re: [sqlite] sqlite3_busy_timeout() not working with 3.0.x?
Matt Wilson wrote:

> On Thu, Sep 02, 2004 at 08:24:21PM -0400, D. Richard Hipp wrote:
> > If you do separate sqlite3_open() calls for each statement, the behavior
> > will be more along the lines of what you expect.
>
> Attached test case still doesn't seem to wait.

It does if you put the timeout on pDb2.

--
D. Richard Hipp -- [EMAIL PROTECTED] -- 704.948.4565
Re: [sqlite] temp files
Dmytro Bogovych wrote:

> I'm trying to run a simple update on the following table:
>
>   CREATE TABLE tree (parent INTEGER, child INTEGER, number INTEGER, children INTEGER)
>
> The query itself:
>
>   update TREE set number = number + 1 where child = ? and parent = ? and number > ?
>
> During execution of this query the following function is called:
>
>   static int sqlite3pager_opentemp(char *zFile, OsFile *fd)
>
> from pager.c, and a temporary file is created in my temp directory.

Unable to reproduce. I put a breakpoint on sqlite3pager_opentemp() and did lots of UPDATEs in the style shown, but no temporary file was ever opened.

--
D. Richard Hipp -- [EMAIL PROTECTED] -- 704.948.4565
[sqlite] temp files
Greetings. I'm trying to run a simple update on the following table:

  CREATE TABLE tree (parent INTEGER, child INTEGER, number INTEGER, children INTEGER)

The query itself:

  update TREE set number = number + 1 where child = ? and parent = ? and number > ?

During execution of this query the following function is called:

  static int sqlite3pager_opentemp(char *zFile, OsFile *fd)

from pager.c, and a temporary file is created in my temp directory. Is this expected behaviour? Can I use any of SQLite's caching to avoid creating temporary files so often? I'm already using:

  PRAGMA synchronous = NORMAL
  PRAGMA cache_size = 1000
  PRAGMA temp_store = MEMORY

--
With best regards,
Dmytro Bogovych
Re: [sqlite] Is version 3 seriously broken?
Jakub Adamek, jumping for joy, wrote:

> Nuno, I am much surprised that version 3.0.5 helped you. It didn't help me,
> neither my port nor yours. BUT your remark helped me! You are right that it
> is because of stack space, and the default setting in Windows CE projects
> is 0x10000, i.e. 65 kB. After changing it to 0x100000, i.e. 1 MB, my test
> program, which first added 1000 rows of size 0..2000, then deleted all of
> them and created 1000 tables, works fine. Did you perhaps also change this
> setting?

Now that you talk about it, I remembered an irritating bug in VC++ that forces us to rebuild all after changing linking options (clean + build doesn't work). You are right, it's the same...

But the good news is that I found the cure (well, thinking about it, I should have remembered it earlier). There is a SQLITE_MAX_PAGE_SIZE constant exactly for the sake of embedded systems. I committed a new version where I defined that to be 1024 (instead of the default of 8192) and it works now. I tried 2048, but it isn't low enough either. If someone wants more, they will have to increase the stack size in the linking options, and that should be done by someone who understands what they want to do.

> Are you going to merge 3.0.6?

Already done and committed, stay tuned for the release...

Regards,
~Nuno Lucas
Re: [sqlite] trying to compile SQLite
Christian Smith wrote:

> I found it funny, while looking through Dr Dobbs journal some time ago,
> that a columnist (Al Stevens, I think!) was surprised that under UNIX,
> such things as filename globbing are done by the shell, and all main()
> usually gets is a list of valid filenames. Under DOS and Windows, given:
>
>   C:\Temp> grep foo *
>
> the '*' would have to be interpreted and expanded by the grep program, but
> under UNIX, the list of files is generated by the shell. Of course, you
> can get decent shells on Windows as well, but generally only as ports of
> UNIX shells.

I guess the UNIX folks just didn't know any better way back then. Putting globbing in the API instead of the shell is a much better approach, but that wasn't all that obvious when UNIX first came along.

--
http://www.velocityvector.com/ | "F--- 'em all. That's how I feel."
http://www.classic-games.com/  | -- Michael Moore on small business
http://www.indie-games.com/    |