[sqlite] help with win32/iis install
Greetings. I recently installed PHP 5 on a Windows 2003 server (IIS 6). I enabled the PDO and SQLite extensions, but using the test script from http://www.tanguay.at/installPhp5.php5?step=8 , I get the error message: "failed to open/create the database." Any ideas? Thank you, -- greenshire
Re: [sqlite] SQLiteSpy vs. SQLite.dll speed comparison
Michael, the author of SQLiteSpy is a member of this forum, so he can confirm or debunk my theory as to why you are getting a big difference in execution time. I suspect that SQLiteSpy may be storing the SQL in its compiled form (from sqlite3_prepare), so that when you run it you skip the compile phase. It is hard to imagine that compiler optimization alone makes a fourfold difference in execution speed, particularly in a well-written program like SQLite. We do a similar thing in our SQLite applications: compile the SQL when the program loads, then use the compiled VDBE code as input to sqlite3_bind and sqlite3_step at execution time. It improves performance no end and also traps any schema mismatches before any processing starts. Note that if you use the sqlite3_exec function you cannot have pre-compiled SQL, because sqlite3_exec is a wrapper around sqlite3_prepare/step/reset/finalize. It is a big waste of processing time to compile the same statement thousands of times unnecessarily. If you use sqlite3_exec, I suggest that you replace it; it is only there for compatibility with legacy programs. If you use gcc you can use the option -finline-functions to get some extra speed at the cost of a larger executable, although SQLite is not written with thousands of one-line functions, so the improvement will not be great. michael cuthbertson wrote: Thanks to Christian and John for the pointers regarding compilers. I have not compiled the SQLite sources myself but have used the supplied binary. Could either of you give me some tips for compiling the SQLite sources for either VS 6 or VS 8? John, I will follow your advice on inline functions. The absolute last worry I have is the size of my exe. My concerns are speed (1) and runtime memory (2), i.e., in-memory DB size. BTW, here are some times to demonstrate what I am facing: a table with 14 columns, 8 indexes, only 4k rows.
This is a secondary (subset) table - not a view - that I created since running this query against my complete table of 440k rows was impossibly slow. The following query takes 75 ms in my code - just the sqlite3_exec and callbacks - while SQLiteSpy takes 20 ms, including display. select * from (select f1, f2, f3, f4, f5 from table where f6 = 2563351070 and f2 <= '2006-01-01' and f2 >= '2004-01-01') order by f1 limit 32 offset 855; This query is used to refresh a scrolling display, where any of the constants are actually variables. The subquery returns about 1000 rows. Note that LIMIT 1 is only marginally faster than LIMIT 32. Also, as the offset increases, the execution time increases about 50% as fast, which I do not understand, since the offset would merely seem to be an index into the result set, which should be low cost.
RE: [sqlite] SQLiteSpy vs. SQLite.dll speed comparison
Thanks to Christian and John for the pointers regarding compilers. I have not compiled the SQLite sources myself but have used the supplied binary. Could either of you give me some tips for compiling the SQLite sources for either VS 6 or VS 8? John, I will follow your advice on inline functions. The absolute last worry I have is the size of my exe. My concerns are speed (1) and runtime memory (2), i.e., in-memory DB size. BTW, here are some times to demonstrate what I am facing: a table with 14 columns, 8 indexes, only 4k rows. This is a secondary (subset) table - not a view - that I created since running this query against my complete table of 440k rows was impossibly slow. The following query takes 75 ms in my code - just the sqlite3_exec and callbacks - while SQLiteSpy takes 20 ms, including display. select * from (select f1, f2, f3, f4, f5 from table where f6 = 2563351070 and f2 <= '2006-01-01' and f2 >= '2004-01-01') order by f1 limit 32 offset 855; This query is used to refresh a scrolling display, where any of the constants are actually variables. The subquery returns about 1000 rows. Note that LIMIT 1 is only marginally faster than LIMIT 32. Also, as the offset increases, the execution time increases about 50% as fast, which I do not understand, since the offset would merely seem to be an index into the result set, which should be low cost.
[sqlite] Retrieving ROWID value on duplicate conflict
Hi folks, I'm unsure if SQLite provides the functionality I seek. On a duplicate entry/conflict I would like to retrieve the rowid of the conflicting entry, possibly updating columns or replacing the row entirely while ensuring the rowid doesn't change. An INSERT OR REPLACE deletes the existing row and inserts the specified data with the next monotonic number as the rowid. Unfortunately for me this behaviour is undesirable. I'm looking for a way to do one of the following: * Replace the conflicting row while preserving its rowid, then get last_insert_rowid(). * Update or 'touch' the conflicting row to retrieve the rowid, in the hope that last_insert_rowid() will be updated. * Retrieve just the rowid due to a duplicate conflict. I understand the latter can be done in the logic of the program by handling the duplicate-value error, but I was hoping there would be a way to deal with this directly in SQLite. Any advice would be much appreciated. Thanks, Naveen Nathan
RE: [sqlite] Temp dir
>Set the global variable sqlite3_temp_directory to any >directory you want and it tries that directory first. Ok, fair enough. But why do you try to open the directory? Why can't you just try to create the temp file there and deal with it if that's not allowed? I'm asking because I have permissions for tmp folders (/tmp, /usr/tmp, /var/tmp) set at a reasonable: drwxrwx-wt 4 root adm 4096 Jul 21 15:18 tmp/ and I'm running an application (trac) as a non-privileged user. Pretty standard so far. Said user/program therefore is not allowed to read the entire tmp dir, nor do I want it to. It is, however, allowed to create files and operate on them. I think that if you just tried to create the file and handled the exceptions after that (no dir access, file already exists, etc.) then this would work just as well, no? Of course, an obvious workaround in the meantime is for the author of trac to go ahead and set sqlite3_temp_directory per your suggestion. Respectfully, Christopher Taylor
Re: [sqlite] Temp dir
chtaylo3 <[EMAIL PROTECTED]> wrote:
> I have a question about os_unix.c
>
> On line 854, inside the function sqlite3UnixTempFileName, you declare:
> static const char *azDirs[] = {
>     0,
>     "/var/tmp",
>     "/usr/tmp",
>     "/tmp",
>     ".",
> };
>
> I'm guessing this is where sqlite attempts to create a temp copy of the
> database it's opening.

Nope. This is where it puts temporary tables you create using CREATE TEMP TABLE.

> I'm having difficulties with this because I'm running apache & trac (
> http://trac.edgewall.org/ ) as a non-privileged user (of course) and
> consequently it does not have read access to either of these directories. Is
> there a chance it could, at runtime, evaluate the environment variable $TMP
> and use that as one of the options?
>
> It looks like creating the temp file fails only if none of the dirs
> listed in azDirs is accessible. So just adding the env var $TMP should solve
> this problem. Do you agree? If so, I'd be happy to write a patch for this.

Set the global variable sqlite3_temp_directory to any directory you want and it tries that directory first. -- D. Richard Hipp <[EMAIL PROTECTED]>
[sqlite] Temp dir
I have a question about os_unix.c. On line 854, inside the function sqlite3UnixTempFileName, you declare: static const char *azDirs[] = { 0, "/var/tmp", "/usr/tmp", "/tmp", ".", }; I'm guessing this is where sqlite attempts to create a temp copy of the database it's opening. I'm having difficulties with this because I'm running apache & trac ( http://trac.edgewall.org/ ) as a non-privileged user (of course) and consequently it does not have read access to either of these directories. Is there a chance it could, at runtime, evaluate the environment variable $TMP and use that as one of the options? It looks like creating the temp file fails only if none of the dirs listed in azDirs is accessible. So just adding the env var $TMP should solve this problem. Do you agree? If so, I'd be happy to write a patch for this. Respectfully, Christopher
Re: [sqlite] A little question...
Cesar David Rodas Maldonado wrote: I don't have a substring; I have a list of words (stemmed words from several languages) and I just want to get the id. The word is unique. In that case the SQLite B-tree index is about as good as you will get. Just make sure that the word column is indexed.
Re: [sqlite] Using prepare, step, finalize and handling BUSY answers
> That's what I think as well. But, when you say 'all your suggestions', are you talking about sqlite3_interrupt too? That would be useful if you were doing queries where you don't know how long they will take. All my code is written in such a way that queries are short, so I would not have to abort long ones. Depending on your usage that could be a very nice feature. -- SqliteImporter and SqliteReplicator: Command line utilities for Sqlite http://www.reddawn.net/~jsprenkl/Sqlite Cthulhu Bucks! http://www.cthulhubucks.com
Re: [sqlite] A little question...
I know that, but I would like to know whether it would be better to first transform the word into a number (with a hash function), and then select on that number rather than searching with the index on the word... understand? I am sorry for my English... On 7/21/06, Daniel van Ham Colchete <[EMAIL PROTECTED]> wrote: Cesar David Rodas Maldonado wrote: > Hello everybody > > If I have a table with 100,000 unique words, I am wondering whether a SQLite SELECT is faster and cheaper (RAM, processor, etc.), or if I have to help SQLite using a hash function - and what could that hash function be? > > Thanks. > Cesar, you should consider using an index: http://www.sqlite.org/lang_createindex.html Best regards, Daniel Colchete
Re: [sqlite] A little question...
I don't have a substring; I have a list of words (stemmed words from several languages) and I just want to get the id. The word is unique.
Re: [sqlite] A little question...
Cesar David Rodas Maldonado wrote: > Hello everybody > > If I have a table with 100,000 unique words, I am wondering whether a SQLite SELECT is faster and cheaper (RAM, processor, etc.), or if I have to help SQLite using a hash function - and what could that hash function be? > > Thanks. > Cesar, you should consider using an index: http://www.sqlite.org/lang_createindex.html Best regards, Daniel Colchete
Re: [sqlite] A little question...
Cesar David Rodas Maldonado wrote: Hello everybody. If I have a table with 100,000 unique words, I am wondering whether a SQLite SELECT is faster and cheaper (RAM, processor, etc.), or if I have to help SQLite using a hash function - and what could that hash function be? Thanks. Do you want to select on whole words, the first few characters of a word, or on substrings?
Re: [sqlite] A little question...
Cesar David Rodas Maldonado said: > Hello everybody > > If I have a table with 100,000 unique words, I am wondering whether a SQLite SELECT is faster and cheaper (RAM, processor, etc.), or if I have to help SQLite using a hash function - and what could that hash function be? If you're going to the trouble of building your own hash table, why bother with SQLite? The hash table will provide faster access to the data, assuming you load the entire list of words into RAM, but it will be more memory intensive than using SQLite. This assumes the SQLite database lives on disk. If the SQLite database lives in memory, it is still going to take less RAM (B-trees are almost always more compact than hash tables), but the speed will depend on the efficiency of your hash implementation. Clay -- Simple Content Management http://www.ceamus.com
[sqlite] A little question...
Hello everybody. If I have a table with 100,000 unique words, I am wondering whether a SQLite SELECT is faster and cheaper (RAM, processor, etc.), or if I have to help SQLite using a hash function - and what could that hash function be? Thanks.
[sqlite] SQLiteSpy - new topic
Hello, is there a program similar to SQLiteSpy available for other platforms? I am especially interested in programs running on Mac OS X. Hartwig
Re: [sqlite] Using prepare, step, finalize and handling BUSY answers
Jay, thank you very much, man! That answers a lot. And it showed me that I was not checking the SQLITE_LOCKED case. But, from what I can see, if your database is busy or locked you just stop your program execution, or you end this function WITHOUT running either sqlite3_finalize or sqlite3_close. Either way you will have a memory leak, and this is not a good thing when you're running a daemon (my case). What if you put a Loop = false instead of the throw in the 'default' case? Do you have to use sqlite3_interrupt before sqlite3_finalize? This code was for a benchmark, so I wasn't careful to check for locking and do good error recovery. You're correct, and all your suggestions should be put in. You'll have to decide what's an appropriate error response for your daemon. I usually assume none of the applications using the database will keep it locked for long periods, so retrying for a short time after a lock return works well for me.
Re: [sqlite] Using prepare, step, finalize and handling BUSY answers
Jay Sprenkle wrote:
> Here's some example code:
>
> sqlite3 *db;
>
> // connect to database
> if ( sqlite3_open( "test.db", &db ) )
>     throw "Can't open database";
>
> char* sql;
>
> // two forms of the same sql
> sql = "SELECT one.test1, two.test2"
>       " FROM one"
>       " INNER JOIN two ON one.id = two.id"
>       ;
> sqlite3_stmt* pStmt;
>
> if ( sqlite3_prepare( db, sql, strlen(sql), &pStmt, NULL ) != SQLITE_OK )
> {
>     string str = "Cannot prepare sql: ";
>     str += sql;
>     str += ", Error: ";
>     str += sqlite3_errmsg(db);
>     throw str.c_str();
> }
>
> bool Loop = true;
> while ( Loop )
>     switch ( sqlite3_step( pStmt ) )
>     {
>     case SQLITE_ROW:
>     {
>         // retrieve the results
>         char* p = (char *) sqlite3_column_text( pStmt, 0 );
>         string test1 = string( p ? p : "" );
>
>         p = (char *) sqlite3_column_text( pStmt, 1 );
>         string test2 = string( p ? p : "" );
>         break;
>     }
>     case SQLITE_DONE:
>         Loop = false;
>         break;
>     case SQLITE_BUSY:
>     case SQLITE_LOCKED:
>     default:
>     {
>         string str = "Cannot execute sql: ";
>         str += sql;
>         str += ", Error: ";
>         str += sqlite3_errmsg(db);
>         throw str.c_str();
>     }
>     }
>
> // clean up when finished
> sqlite3_finalize( pStmt );
>
> sqlite3_close( db );

Jay, thank you very much, man! That answers a lot. And it showed me that I was not checking the SQLITE_LOCKED case. But, from what I can see, if your database is busy or locked you just stop your program execution, or you end this function WITHOUT running either sqlite3_finalize or sqlite3_close. Either way you will have a memory leak, and this is not a good thing when you're running a daemon (my case). What if you put a Loop = false instead of the throw in the 'default' case? Do you have to use sqlite3_interrupt before sqlite3_finalize? Best regards, Daniel Colchete
Re: [sqlite] SQLiteSpy vs. SQLite.dll speed comparison
It is possible to resolve the issue with a traditional C profiler. Compile the SQLite library with profiling on the different compilers and measure where the time is spent during execution. You can also compile some test programs and look at the assembler output to get an idea of the efficiency of the optimizer. A good optimizer can make big chunks of code disappear. We find that thoughtfully written C runs well with or without compiler optimisation. Carelessly written programs benefit strongly from optimisation to remove common sub-expressions, etc. We also notice that a compiler specialized for a particular architecture, so that it makes use of the full register file, produces the best code for that machine. An example is IBM's XLC, which produces code running 40% better than GCC on Power machines. Someone may have a similar comparison of the Intel compilers against GCC as a benchmark. For speed, unrolling loops and inlining functions creates a bigger but noticeably faster executable. Since function calls are expensive in execution time, inlining them can be a big win. Shields, Daniel wrote: Daniel: Thanks for the suggestion. I wasn't aware that the prepare statement gained you that much for one-table select queries. I use it for multi-100k inserts (along with transactions) and it saves quite a bit of time. This is my SQL for the present problem: select * from (select f1, f2, f3, f4, f5 from Table where f2 = 'abc' and f3 = 2563351070 and f4 >= '2004-01-01' and f4 <= '2006-01-01') order by f1 limit 32 offset 900; Do you think that prepare would be helpful here? Regards, Michael Michael, You're right: for a single query the pre-prepared statement will save no time. If performance is important you may get some mileage out of an optimising compiler. http://www.intel.com/cd/software/products/asmo-na/eng/compilers/284527.htm Daniel.
Re: [sqlite] Using prepare, step, finalize and handling BUSY answers
On 7/21/06, Daniel van Ham Colchete <[EMAIL PROTECTED]> wrote: I'm having problems understanding the SQLite docs. The 'C/C++ Interface for SQLite Version 3' page says that sqlite3_exec is a wrapper for 'prepare, finalize, reset' without a step, but a little further down the document says you should use prepare and step. But again, what if I don't want to wait out the busy state anymore? Should I use finalize, or interrupt and then finalize? Here's some example code:

sqlite3 *db;

// connect to database
if ( sqlite3_open( "test.db", &db ) )
    throw "Can't open database";

char* sql;

// two forms of the same sql
sql = "SELECT one.test1, two.test2"
      " FROM one"
      " INNER JOIN two ON one.id = two.id"
      ;
sqlite3_stmt* pStmt;

if ( sqlite3_prepare( db, sql, strlen(sql), &pStmt, NULL ) != SQLITE_OK )
{
    string str = "Cannot prepare sql: ";
    str += sql;
    str += ", Error: ";
    str += sqlite3_errmsg(db);
    throw str.c_str();
}

bool Loop = true;
while ( Loop )
    switch ( sqlite3_step( pStmt ) )
    {
    case SQLITE_ROW:
    {
        // retrieve the results
        char* p = (char *) sqlite3_column_text( pStmt, 0 );
        string test1 = string( p ? p : "" );

        p = (char *) sqlite3_column_text( pStmt, 1 );
        string test2 = string( p ? p : "" );
        break;
    }
    case SQLITE_DONE:
        Loop = false;
        break;
    case SQLITE_BUSY:
    case SQLITE_LOCKED:
    default:
    {
        string str = "Cannot execute sql: ";
        str += sql;
        str += ", Error: ";
        str += sqlite3_errmsg(db);
        throw str.c_str();
    }
    }

// clean up when finished
sqlite3_finalize( pStmt );

sqlite3_close( db );

-- SqliteImporter and SqliteReplicator: Command line utilities for Sqlite http://www.reddawn.net/~jsprenkl/Sqlite Cthulhu Bucks! http://www.cthulhubucks.com
Re: [sqlite] SQLite NFS Problem compared to Other Db's
Over NFS you are limited to the bandwidth of your network, probably 1-10 Mb/s. Compare that to the disk speed on your host server: one or two orders of magnitude better. The NFS link could be up to 50 times slower. If you want better distributed performance, use a DBMS server like PostgreSQL. Ritesh Kapoor wrote: Hi, most of us are aware that SQLite on Linux has problems with NFS-mounted volumes - basically the file-locking issue. If there is a single thread accessing the SQLite DB - and this is guaranteed - then there is no need for file locking, so I modified the SQLite code and removed the locking mechanism entirely. However, the performance of SQLite row inserts/deletes varies a lot between the DB file being local and being accessed over NFS. I've also removed the sync mechanism and increased the SQLite page size as well as the number of pages held in cache. I understand that all this can cause data loss if the system crashes, but that is tolerable. What I can't figure out is that the performance over NFS is still horrible. My application requires inserting one row at a time, many times in a single run. I can't use transactions when inserting, but I've used them for deletion. Can anyone give me some more ideas to work with? Does this performance problem happen with the other DBs available as well?
[sqlite] Using prepare, step, finalize and handling BUSY answers
Hello everyone, I'm new on this list and I have only a few months of experience with SQLite. I use SQLite 3.3.6 on Linux with C++. I'm having a few problems with locking, but before describing the problem itself, I would like to check whether I'm doing something wrong. This is what I do when I try to INSERT something into my database:

PREPARE
BIND
EC = STEP
while EC == BUSY {
    try again, up to 10 times   // SQLite is set to sleep 1 sec if BUSY
    if tries > 10 { break }
}

And then I do some other selects and inserts. Now I read that there is another function I should be using: sqlite3_finalize. From what I understood, I should be doing:

PREPARE
BIND
EC = STEP
while EC == BUSY {
    try again, up to 10 times   // SQLite is set to sleep 1 sec if BUSY
    if tries > 10 { break }
}
FINALIZE

Am I right? Question: what if PREPARE returns BUSY? What should I do? Why would a PREPARE return BUSY? I'm having problems understanding the SQLite docs. The 'C/C++ Interface for SQLite Version 3' page says that sqlite3_exec is a wrapper for 'prepare, finalize, reset' without a step, but a little further down the document says you should use prepare and step. But again, what if I don't want to wait out the busy state anymore? Should I use finalize, or interrupt and then finalize? Thank you very much for your help! Best regards, Daniel Colchete
Re: [sqlite] SQLite NFS Problem compared to Other Db's
On 7/21/06, Ritesh Kapoor <[EMAIL PROTECTED]> wrote: I've also removed the sync mechanism and increased the SQLite page size as well as the number of pages held in cache. I understand that all this can cause data loss if the system crashes, but that is tolerable. What I can't figure out is that the performance over NFS is still horrible. My application requires inserting one row at a time, many times in a single run. I can't use transactions when inserting, but I've used them for deletion. If you have a busy network, packet collisions can cause serious performance issues. Are you on an Ethernet network with other traffic on the same segment?
Re: [sqlite] Help me in SQL
Can you change the alias to a field name different from the ones in the source tables? On 7/20/06, blins <[EMAIL PROTECTED]> wrote: Hi sqlite-users@sqlite.org, I use sqliteODBC 0.68 + ADO and SQLite 3.3.6. I try executing this SQL: select t1.field1 as field1, t2.field2 as field2 from table1 t1 left join table2 t2 on (t1.id=t2.refid) and I receive the error message "no such column: t1.field1(1)"
Re: [sqlite] How to port the SQLite
You're better off posting this to the list, as I can't answer specifics for lack of experience with VxWorks. I'll answer what I can inline... Vivek R uttered: Hi Smith, I have the following doubts about SQLite. Could you please help me with these... 1. What resources does SQLite require - RAM/ROM, semaphores, mailboxes, task requirements? How many tasks does it use in VxWorks? How many semaphores, mailboxes, etc. does it require? Code footprint is something like 170KB-230KB, depending on which features are included. RAM footprint I'm not sure about, but it will be at least the size of the page cache, which is 1K * 2000 pages by default. The page-cache footprint can be reduced by caching fewer pages. SQLite doesn't need any extra tasks; it runs in the context of the calling application. Mailboxes are a VxWorks construct, I take it? It does not use those. All SQLite uses is the standard C library and (on UNIX/POSIX) the POSIX 1003.1 API with pthreads (if threading is required). 2. How do we have flow control? SQLite has only a SQL engine, without a stored-procedure language, if that's what you mean. 3. What lock mechanisms does the engine provide (row lock, table lock, ...)? Do we need to build any additional lock mechanism? The engine has database-level reader/writer locks. Multiple readers or a single writer can access the database at any time. A writer blocks readers. Locking is detailed at: http://www.sqlite.org/lockingv3.html Locking requires POSIX locking on Unix. 4. How do we create a service component that creates these tables on the HDD (hard disk on consumer products like a DVD player or set-top box) before it leaves the factory? Either have the application initialise the database if it is not already initialised, or create a master initialised database that is copied to new devices at manufacture. The database file format is platform independent, so the master image can be created on any machine with SQLite. 5.
Recovery mechanisms (in case of a DB crash, how do we recover/reconstruct data?) SQLite will recover automatically. Updates don't proceed until the overwritten data is backed up to a recovery journal. If SQLite detects that a transaction did not complete (because of a crash, say), then the contents of the database are rolled back using the recovery journal. 6. Where can I find the version which is compatible with VxWorks? If I'm not mistaken, the standard version should be compatible with VxWorks. Your best bet is to compile it and try it. Please guide me. Thanks and Regards, Vivek R On 7/14/06, Christian Smith <[EMAIL PROTECTED]> wrote: Vivek R uttered: > Hi, > I am a newbie to the group and to SQLite. Can anyone explain how to > port SQLite to a DVD player, other consumer products, or any other embedded > system? Where should I look? What should be considered > while porting? Which is the best version to port to a > consumer product which runs VxWorks? VxWorks comes with a POSIX-compliant API, IIRC, so it should be just a case of using the existing UNIX-based port. Have you tried compiling a default release? > > Thanks and Regards, > Vivek R > Christian -- /"\ \ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL X - AGAINST MS ATTACHMENTS / \
RE: [sqlite] SQLiteSpy vs. SQLite.dll speed comparison
michael cuthbertson uttered: Brannon: Thank you for your thoughts. To be clear, the 'optimize for speed' setting in MY release build is actually slower than MY debug version - I know nothing about Ralf's settings. That issue is separate from SQLiteSpy - I didn't mean to conflate them. And the issue is not which version of VS I'm using. If I simply put a 'return 0' in my callback, the time is within 2%. Thus, I am timing the DLL only, and I am using the pre-compiled version, not a VS 6-compiled version. Ralf compiles his own version, with Borland, and gets 2.5 times my speed. Therefore, I believe the speed difference lies in the differences between the two compiled versions. The precompiled version from sqlite.org is compiled with MinGW32, which is a Win32-targeted version of gcc. Being cross-platform, gcc cannot be as aggressive in optimisations as VC or Borland, which are mostly targeted at Intel-based processors and have large, paid development teams wringing every last drop of performance out of them. At a guess, as SQLite code is heavy in branching, the Borland compiler is making better use of your processor pipeline by reordering instructions, especially if you have a P4. Michael Christian -- /"\ \ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL X - AGAINST MS ATTACHMENTS / \
[sqlite] import in sqlite
Hi, is it possible to import files from the local file system and store them in a SQLite DB? And is there an export option, just to check whether the file loaded into a table contains the same data as the original file? Is this possible in SQLite? If so, how does it store files in tables, and in which format? Please explain how I can do it. Thank you, Sandhya
RE: [sqlite] SQLiteSpy vs. SQLite.dll speed comparison
> Daniel: > Thanks for the suggestion. > I wasn't aware that the prepared statement gained you that > much for one-table select queries. > I use it for multi-100k inserts (along with transactions) and it > saves quite a bit of time. > This is my SQL for the present problem: > > select * from (select f1, f2, f3, f4, f5 from Table where f2 = 'abc' and f3 = 2563351070 and f4 >= '2004-01-01' and f4 <= '2006-01-01') order by f1 limit 32 offset 900; > > Do you think that prepare would be helpful here? > Regards, > Michael > Michael, You're right: for a single query the pre-prepared statement will save no time. If performance is important you may get some mileage out of an optimising compiler. http://www.intel.com/cd/software/products/asmo-na/eng/compilers/284527.htm Daniel.
[sqlite] SQLite NFS Problem compared to Other Db's
Hi, most of us are aware that SQLite on Linux has problems with NFS-mounted volumes - basically the file-locking issue. If there is a single thread accessing the SQLite DB - and this is guaranteed - then there is no need for file locking, so I modified the SQLite code and removed the locking mechanism entirely. However, the performance of SQLite row inserts/deletes varies a lot between the DB file being local and being accessed over NFS. I've also removed the sync mechanism and increased the SQLite page size as well as the number of pages held in cache. I understand that all this can cause data loss if the system crashes, but that is tolerable. What I can't figure out is that the performance over NFS is still horrible. My application requires inserting one row at a time, many times in a single run. I can't use transactions when inserting, but I've used them for deletion. Can anyone give me some more ideas to work with? Does this performance problem happen with the other DBs available as well? -- Regards, Ritesh Kapoor "living in interesting times..."