[sqlite] for what reason :memory: is much slower than /dev/shm/dummy.db
Hi there, we are working on a challenging project with very high performance requirements. While doing some debugging we discovered that SQLite's method for creating a memory-based database is much slower than using e.g. /dev/shm on Linux or tmpfs on Solaris. (We have measured roughly a 20-minute advantage for the /dev/shm approach: a batch run takes 70 min with :memory: and just 49 min using /dev/shm.) Because our project needs to be ported to Windows, /dev/shm is not an option; Windows 2000 does not support a temporary memory-based file system. But apart from that, we suspect there is a way to tune :memory:, or that we are doing something wrong when using :memory: (for example pragma page_size ...). Can anybody give us some advice on tuning our :memory: database so that it becomes as fast as the /dev/shm alternative? Thanks, Roland - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] creating own pragmas
Hi all, I'm using sqlite3_create_function() to create my own functions in my database. This is a really nice way to extend the database with powerful functions, but I did not find a way to create my own pragmas, like "pragma foo=bar", to pass some global information to my application. I can do this by changing the SQLite sources, but then I have problems upgrading to new SQLite versions. How about an API function like "sqlite3_create_pragma()"? cu, gg
Re: [sqlite] Re: sqlite_open
Dave Dyer wrote: At 08:04 PM 11/30/2006, John Stanton wrote: Those are Macintosh issues, not Sqlite, and you need to handle them in your application. Yes indeed. I'm only suggesting that sqlite would be a better substrate if it provided a supported way to tell me "I can't open the database" rather than "there is no table named xx". Jon Postel said: "Be conservative in what you do; be liberal in what you accept from others." Checking the existence of a file is a mere stat() away and querying the master table list is simple: SELECT name FROM sqlite_master WHERE type='table' AND name='%q'; sqlite is lean and mean, feature creep detracts from that. -- Craig Morrison =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= http://pse.2cah.com Controlling pseudoephedrine purchases. http://www.mtsprofessional.com/ A Win32 email server that works for You.
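Craig's two checks can be combined into one guarded open. A hedged sketch using Python's stdlib sqlite3 wrapper rather than the C API; the helper name `open_checked`, the table name `xx`, and the temp path are illustrative, not from the thread:

```python
import os
import sqlite3
import tempfile

def open_checked(path, table):
    """Open a database only if the file already exists and contains the table."""
    if not os.path.exists(path):        # the "mere stat()" check
        raise FileNotFoundError(path)
    conn = sqlite3.connect(path)
    found = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' AND name=?",
        (table,),
    ).fetchone()
    if found is None:                   # file opened, but schema is not ours
        conn.close()
        raise RuntimeError("missing table: " + table)
    return conn

# Usage: create a database with the expected table, then open it via the check.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
init = sqlite3.connect(path)
init.execute("CREATE TABLE xx (id INTEGER)")
init.commit()
init.close()
db = open_checked(path, "xx")
```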
[sqlite] Re: creating own pragmas
Günter Greschenz <[EMAIL PROTECTED]> wrote: I'm using sqlite3_create_function() to create my own functions in my database. This is a really nice way to extend the database with powerful functions, but I did not find a way to create my own pragmas, like "pragma foo=bar", to pass some global information to my application. Pass global information from where? Why would your application use such a roundabout way to pass global information to itself? If you need to store some global settings in the database, why not just create a table for them? Igor Tandetnik
Re: [sqlite] Re: creating own pragmas
hi, I implemented encryption for values: insert into test (x, y, z) values(encrypt('this is a test', 'pwd'), 'foo', 'bar') insert into test (x, y, z) values(encrypt('this is test 2', 'pwd'), 'foo2', 'bar2') ... select decrypt(x, 'pwd') x, y, z from test where decrypt(x, 'pwd') like 'this%' But passing the password each time is quite expensive, so I want to set it only once: pragma password='pwd' insert into test values(encrypt('this is a test'), 'foo', 'bar') insert into test values(encrypt('this is test 2'), 'foo2', 'bar2') ... select decrypt(x) x, y, z from test where decrypt(x) like 'this%' cu, gg Igor Tandetnik schrieb: Pass global information from where? Why would your application use such a roundabout way to pass global information to itself? If you need to store some global settings in the database, why not just create a table for them? Igor Tandetnik
Re: [sqlite] Re: How safe is sqlite3_prepare compared to sqlite3_exec?
Try instead of "SELECT * FROM table WHERE name LIKE ?" as your sql query, "SELECT * FROM table WHERE name LIKE :comparison" Thomas Zangl wrote: Vitali Lovich schrieb: Regarding your code snippet: // SQL Statement is: "SELECT * FROM table WHERE name LIKE ?" search = '%test%'; sqlite3_bind_text(prepared_statement, 0,search , search , SQLITE_STATIC); First I'm not sure what language you're using - it seems Perl-like. Anyways, the documentation for http://www.sqlite.org/capi3ref.html#sqlite3_bind_text gives the 4th param as the number of bytes (not chars) while you're passing the original string. Since I'm assuming it's Perl, it won't generate an error on the type mismatch. You generally want to pass -1 for the fourth parameter (from what I understand, -1 is always safe for sqlite3_bind_text). Also, take care in using SQLITE_STATIC and make sure that the string you pass remains on the heap (i.e. delete isn't called, not sure if this is possible in Perl) or the stack (i.e. local variable in scope) when you execute the statement. Its C :-) Anyway, I tried your suggestion and free the char* after sqlite3_finalize. Does not help. char* sql_parameter_search = '%test%' rc = sqlite3_bind_text(prepared_statement, 1, sql_parameter_search, strlen(sql_parameter_search), SQLITE_STATIC); sql_check_result(rc); logDebug("Added search = %s", sql_parameter_search); the result is: Added search = %test% my_sqlite_logger-SQLITE said: (0) SELECT * FROM table WHERE name LIKE ? So - no variable substitution done? Somebody with a working LIKE example? TIA, Thomas - To unsubscribe, send email to [EMAIL PROTECTED] - - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Preallocating fixed disk space for database ...
I want to preallocate disk space for a database. I have only one table with ~1 million entries; each entry takes about 30 bytes. Entries are added/removed, but there is some maximum number of items that can be in the table at the same time. Is it possible to allocate a fixed amount of disk space for such a database? How large should it be? If yes, is there a better way to create large files than massive inserts/deletes? Thanks in advance, Kamil
[sqlite] strange last_insert_rowid()
Hello List, I encountered some strange behaviour with rowid and last_insert_rowid(). Here's the code:
BEGIN TRANSACTION;
DROP TABLE IF EXISTS tb_language;
CREATE TABLE tb_language ( tag INTEGER PRIMARY KEY AUTOINCREMENT ,lang_code VARCHAR(7) NOT NULL ,desc_german VARCHAR(25) NOT NULL ,desc_native VARCHAR(25) NULL );
CREATE TRIGGER trg_dont_change_language_tag BEFORE UPDATE OF tag ON tb_language BEGIN SELECT(RAISE(IGNORE)); END;
CREATE TRIGGER trg_dont_delete_master_language BEFORE DELETE ON tb_language WHEN(OLD.tag=(SELECT tb_master_language FROM tb_master_tag)) BEGIN SELECT(RAISE(IGNORE)); END;
DROP INDEX IF EXISTS idx_language;
CREATE UNIQUE INDEX idx_language ON tb_language(lang_code,desc_german,desc_native);
INSERT INTO tb_language(lang_code,desc_german,desc_native) VALUES('de_DE','Deutsch','Deutsch (Standard)');
INSERT INTO tb_language(lang_code,desc_german,desc_native) VALUES('en_UK','Englisch (UK)','english (UK)');
INSERT INTO tb_language(lang_code,desc_german,desc_native) VALUES('fr_FR','Französisch','francais');
INSERT INTO tb_language(lang_code,desc_german,desc_native) VALUES('it_IT','Italienisch','Italiano');
INSERT INTO tb_language(lang_code,desc_german,desc_native) VALUES('es_ES','Spanisch','espaniol');
DROP VIEW IF EXISTS vw_last_insert_language_tag;
CREATE VIEW vw_last_insert_language_tag AS SELECT CASE WHEN (SELECT tag FROM tb_language WHERE rowid=last_insert_rowID()) IS NULL THEN 0 ELSE(SELECT tag FROM tb_language WHERE rowid=last_insert_rowID())END;
COMMIT;
(I don't think there's a problem with the triggers, but who can tell...) When I do a "SELECT tag FROM vw_last_insert_language_tag" I think 5 should be returned (since there are 5 INSERTs with 5 auto-incremented tags), but I receive 1 as the result. Is there anything else I have to deal with? thx in advance Uwe
Re: [sqlite] strange last_insert_rowid()
Inoqulath <[EMAIL PROTECTED]> wrote: > > Here's the Code: > > BEGIN TRANSACTION; > [] > COMMIT; > > When I do a "SELECT tag FROM vw_last_insert_language_tag" I think 5 > should be returned (since there are 5 INSERTS with 5 autoinc'd), but I > receive 1 as result. Is there anything else I have to deal with? > I did a copy/paste and I got an answer of 5. Please note that last_insert_rowid() returns the rowid of the last row inserted in the same database connection. If you try it from a different database connection, you will get a different answer. -- D. Richard Hipp <[EMAIL PROTECTED]>
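The per-connection behaviour Dr. Hipp describes can be demonstrated directly. A Python sqlite3 sketch (table name and temp path are illustrative):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "rowid.db")
a = sqlite3.connect(path)
a.execute("CREATE TABLE tb (x TEXT)")
for _ in range(5):
    a.execute("INSERT INTO tb VALUES ('row')")
a.commit()

# On the connection that performed the inserts, the last rowid is 5.
same = a.execute("SELECT last_insert_rowid()").fetchone()[0]

# A second connection to the same file has done no inserts, so it reports 0.
b = sqlite3.connect(path)
other = b.execute("SELECT last_insert_rowid()").fetchone()[0]
```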
Re: [sqlite] Preallocating fixed disk space for database ...
"kamil" <[EMAIL PROTECTED]> wrote: > I want to preallocate disk space for database. I have only one table with ~1 > milion entries, each entry takes about 30 bytes. Entries are added/removed > but there is some maximum number of items, which can be put into the table > at the same time. Is it possible to allocate a fixed disk space for such > database ? Make sure autovacuum is turned off. Then insert 1M bogus entries and turn around and delete them. You are left with an empty database that is large enough to hold 1M entries without growing. -- D. Richard Hipp <[EMAIL PROTECTED]> - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] Re: sqlite_open
Dave Dyer wrote: At 08:04 PM 11/30/2006, John Stanton wrote: Those are Macintosh issues, not Sqlite, and you need to handle them in your application. Yes indeed. I'm only suggesting that sqlite would be a better substrate if it provided a supported way to tell me "I can't open the database" rather than "there is no table named xx". Dave, It is really fairly easy to do this yourself using the user_version pragma. Simply set user_version to a known value when you initialize a database schema, then add a check after you open a database to see whether user_version has the expected value. If not, you know you have either some other kind of file (if the file existed before you called sqlite3_open) or a newly created, empty database file that needs to be initialized. HTH Dennis Cote
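Dennis's user_version check might look like this (a Python sqlite3 sketch; the MAGIC value, helper name, and schema are invented for illustration):

```python
import os
import sqlite3
import tempfile

MAGIC = 0xC0DE  # any nonzero app-specific value; a fresh database reads 0

def open_db(path):
    conn = sqlite3.connect(path)
    (version,) = conn.execute("PRAGMA user_version").fetchone()
    if version == 0:
        # Fresh, empty database file: initialize the schema and stamp it.
        conn.execute("CREATE TABLE settings (k TEXT, v TEXT)")
        conn.execute("PRAGMA user_version = %d" % MAGIC)
        conn.commit()
    elif version != MAGIC:
        conn.close()
        raise RuntimeError("not one of our databases")
    return conn

path = os.path.join(tempfile.mkdtemp(), "app.db")
open_db(path).close()   # first open initializes and stamps the file
db = open_db(path)      # second open passes the user_version check
```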
Re: [sqlite] for what reason :memory: is much slower than /dev/shm/dummy.db
[EMAIL PROTECTED] wrote: Hi there, we are working on a challenging project with very high performance requirements. While doing some debugging we discovered that SQLite's method for creating a memory-based database is much slower than using e.g. /dev/shm on Linux or tmpfs on Solaris. (We have measured roughly a 20-minute advantage for the /dev/shm approach: a batch run takes 70 min with :memory: and just 49 min using /dev/shm.) Because our project needs to be ported to Windows, /dev/shm is not an option; Windows 2000 does not support a temporary memory-based file system. But apart from that, we suspect there is a way to tune :memory:, or that we are doing something wrong when using :memory: (for example pragma page_size ...). Can anybody give us some advice on tuning our :memory: database so that it becomes as fast as the /dev/shm alternative? Roland, There was a previous thread about the speed of :memory: databases that you may want to look for. Basically, SQLite3 is actually slightly slower using a :memory: database than an actual file on both Windows and Linux. This changed when SQLite started using the same pager for both memory- and file-based databases. If you want better in-memory performance and don't need any of the newest features, you might want to look at using SQLite version 2 instead. It was significantly faster when using a memory-based store, since it had a separate, purpose-designed memory pager. HTH Dennis Cote
Re: [sqlite] Preallocating fixed disk space for database ...
kamil wrote: I want to preallocate disk space for a database. I have only one table with ~1 million entries; each entry takes about 30 bytes. Entries are added/removed, but there is some maximum number of items that can be in the table at the same time. Is it possible to allocate a fixed amount of disk space for such a database? How large should it be? If yes, is there a better way to create large files than massive inserts/deletes? Kamil, See my previous post in the archives at http://www.mail-archive.com/sqlite-users@sqlite.org/msg12121.html This produces a database with a fixed number of records. You may want to investigate turning on autovacuum to eliminate unused space in the database (though it does add some overhead that makes the database slightly larger). Massive inserts are the best (only?) way to create a large database file. Just make sure you batch your inserts in a transaction. HTH Dennis Cote
[sqlite] Re: Re: creating own pragmas
Günter Greschenz <[EMAIL PROTECTED]> wrote: I implemented encryption for values: insert into test (x, y, z) values(encrypt('this is a test', 'pwd'), 'foo', 'bar') insert into test (x, y, z) values(encrypt('this is test 2', 'pwd'), 'foo2', 'bar2') ... select decrypt(x, 'pwd') x, y, z from test where decrypt(x, 'pwd') like 'this%' But passing the password each time is quite expensive, so I want to set it only once: pragma password='pwd' insert into test values(encrypt('this is a test'), 'foo', 'bar') insert into test values(encrypt('this is test 2'), 'foo2', 'bar2') Apparently these are custom functions defined in your application. You can just store a password in a global variable somewhere, and have the functions refer to it. Or, you can pass it in via the pUserData parameter of sqlite3_create_function. Or, if you want it to persist, you can create a table in the database just for this purpose, store the password there, and have these custom functions read it. Igor Tandetnik
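In Python's sqlite3 wrapper the pUserData idea maps onto a closure: the custom functions capture the password once, so the SQL never has to repeat it. A toy sketch (the XOR "encryption" is purely illustrative and is NOT real cryptography; names are made up):

```python
import sqlite3

password = "pwd"  # set once; in the C API this could be passed via pUserData

def _xor(text, key):
    # Toy reversible transform: applying it twice returns the original text.
    return "".join(chr(ord(c) ^ ord(key[i % len(key)])) for i, c in enumerate(text))

conn = sqlite3.connect(":memory:")
# Both functions close over `password`, mimicking a one-time "pragma password".
conn.create_function("encrypt", 1, lambda s: _xor(s, password))
conn.create_function("decrypt", 1, lambda s: _xor(s, password))

conn.execute("CREATE TABLE test (x TEXT, y TEXT, z TEXT)")
conn.execute("INSERT INTO test VALUES (encrypt('this is a test'), 'foo', 'bar')")
row = conn.execute(
    "SELECT decrypt(x) FROM test WHERE decrypt(x) LIKE 'this%'").fetchone()
```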
Re: [sqlite] Re: creating own pragmas
Günter Greschenz wrote: hi, I implemented encryption for values: insert into test (x, y, z) values(encrypt('this is a test', 'pwd'), 'foo', 'bar') insert into test (x, y, z) values(encrypt('this is test 2', 'pwd'), 'foo2', 'bar2') ... select decrypt(x, 'pwd') x, y, z from test where decrypt(x, 'pwd') like 'this%' But passing the password each time is quite expensive, so I want to set it only once: pragma password='pwd' insert into test values(encrypt('this is a test'), 'foo', 'bar') insert into test values(encrypt('this is test 2'), 'foo2', 'bar2') ... select decrypt(x) x, y, z from test where decrypt(x) like 'this%' Igor Tandetnik schrieb: Pass global information from where? Why would your application use such a roundabout way to pass global information to itself? If you need to store some global settings in the database, why not just create a table for them? Gunter, Igor is right: you should store your password in a table. If you are concerned about the password being saved in the database, then you could use a temporary table, or even attach a :memory: database to hold the table that contains the password. In either case the stored password is available to your encrypt and decrypt functions. HTH Dennis Cote
Re: [sqlite] Query on database operations.
You need to get help from Dr Hipp for that. I believe he has a special cut down version of Sqlite which omits the compiler and has a special way of storing prepared SQL statements. It is obviously intended for small footprint embedded applications with a specific functionality. Kalyani Tummala wrote: Hi John, Thanks for your help. I have another doubt? If I turn off the sql compiler, should I have to fix my insert,delete,select,update operations? My operations are almost fixed in nature. Can you tell me how exactly I do this? -Original Message- From: John Stanton [mailto:[EMAIL PROTECTED] Sent: Thursday, November 30, 2006 11:26 PM To: sqlite-users@sqlite.org Subject: Re: [sqlite] Query on database operations. You might find that a version of Sqlite which omits the compiler might fit your application. You get a smaller footprint, but at the cost of having to use pre-compiled SQL, which means that you cannot dynamically create SQL. Kalyani Tummala wrote: Hi, This mail is a bit lengthy. I thank you in advance for your patience and help:-). Here we go.. I need to port sqlite to a platform where the RAM size is too small about 512kb. For that I need to tune sqlite to make its footprint, heap and stack memory as small as possible. As said in the documentation on sqlite.org, I could get the least possible footprint of 170KB with all the optional features turned off. My Analysis of heap and stack usage is as follows. Stack memory is consistent about 9-10K Heap memory is highly variant on size and number of records. I tried modifying the page size(SQLITE_DEFAULT_PAGE_SIZE and SQLITE_MAX_PAGE_SIZE ) in pager.h from 512 to 2048 but found no reduction in heap size. With indexes on every column(searched), the following is the heap size for different database operations on a database with 100 records and 6 tables with an avg of 10 to 15 fields each. 
Operation                   Max heap (bytes)   Avg heap (bytes)
insertion                   85939              24166
deletion                    397834             76541
selection of all records    246973             79075
update of all records       249808             196334
It seems quite a huge heap consumption. Are these numbers as expected? Is there a way I can reduce them to the range of 40-50 kB by any means (including the access pattern)? I would also like to know the reason for the high heap consumption of the deletion operation. Thanks in advance. Kalyani
Re: [sqlite] Re: sqlite_open
On Thu, Nov 30, 2006 at 10:52:55PM -0600, John Stanton wrote: > Sqlite has a carefully thought through minimalism. Feature creep would > detract from its function as a small footprint, embedded DB. If you > want different features there is nothing to stop you adding your own > library extension, like "daves_sqlite3_open" which does just what you > want. You could also add the logic necessary to handle the different > features of legacy and current Apple OS's. I can think of one potentially good reason for having a boolean option to control whether the DB gets created if it didn't exist: to avoid TOCTTOU races. I suspect that no one using SQLite will put themselves in a situation where they have such a race. Nonetheless, it would be nice if the API had such a boolean option. (But can it be implemented portably?) And then it'd be nice if SQLite used the O_NOFOLLOW open(2) flag where available, and maybe O_NOLINKS. The latter wouldn't save SQLite the trouble of dealing with POSIX advisory lock braindeadness, even if universally available, I think (sigh). Or perhaps a sqlite3_open_fd() that takes a file descriptor instead of a filename, though that would expose the fact that the DB uses a single file, and it would be less portable still. (I.e., I don't advocate this one.) Nico --
Re: [sqlite] Preallocating fixed disk space for database ...
You could look at the Sqlite data structures and write a program to build the free pages list. I still don't understand why you need to pre-allocate space. If you are using Linux or Unix you can make a file system of a fixed size for the DB. On Windows you could partition the disk into a drive for the DB. If you are using something else you may have disk partitioning capabilities in the file system. You could modify Sqlite to check the file size whenever it allocates new file space and return a disk-full status when it exceeds the quota you have set. You could define a trigger on INSERT and DELETE to keep a running total of rows in the DB and check the total before you perform an INSERT. kamil wrote: I want to preallocate disk space for a database. I have only one table with ~1 million entries; each entry takes about 30 bytes. Entries are added/removed, but there is some maximum number of items that can be in the table at the same time. Is it possible to allocate a fixed amount of disk space for such a database? How large should it be? If yes, is there a better way to create large files than massive inserts/deletes? Thanks in advance, Kamil
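The row-counting trigger John suggests could look like this (a Python sqlite3 sketch; table and trigger names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (x TEXT);
CREATE TABLE row_count (n INTEGER);
INSERT INTO row_count VALUES (0);
CREATE TRIGGER items_ins AFTER INSERT ON items
BEGIN UPDATE row_count SET n = n + 1; END;
CREATE TRIGGER items_del AFTER DELETE ON items
BEGIN UPDATE row_count SET n = n - 1; END;
""")
conn.executemany("INSERT INTO items VALUES (?)", [("a",), ("b",), ("c",)])
conn.execute("DELETE FROM items WHERE x = 'b'")
# The application checks this total before inserting, enforcing the quota.
count = conn.execute("SELECT n FROM row_count").fetchone()[0]  # 3 inserts - 1 delete
```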
Re: [sqlite] Preallocating fixed disk space for database ...
I cannot see a reason for what you propose, but you could do it by brute force and ignorance - populate the DB with 1 million rows, then delete them all to add all the space to the free pages list. Then your insertions will use the freed pages, not fresh ones. kamil wrote: I want to preallocate disk space for a database. I have only one table with ~1 million entries; each entry takes about 30 bytes. Entries are added/removed, but there is some maximum number of items that can be in the table at the same time. Is it possible to allocate a fixed amount of disk space for such a database? How large should it be? If yes, is there a better way to create large files than massive inserts/deletes? Thanks in advance, Kamil
Re: [sqlite] Preallocating fixed disk space for database ...
On Fri, Dec 01, 2006 at 08:35:24AM +0100, kamil wrote: > I want to preallocate disk space for a database. I have only one table with ~1 > million entries; each entry takes about 30 bytes. Entries are added/removed > but there is some maximum number of items that can be in the table > at the same time. Is it possible to allocate a fixed disk space for such a > database? How large should it be? If yes, is there a better way to > create large files than massive inserts/deletes? It's not necessarily the case that pre-allocating space does what I suspect you want. Consider a filesystem like ZFS. Writing 1GB of garbage to a file that you will eventually overwrite does not guarantee that you'll have 1GB of space to write into that file. That's because ZFS uses a copy-on-write approach, and snapshots can hold references to the space that would be released on a CoW operation. And SQLite might use CoW too someday, for all I know, so that pre-creating 10^6 rows wouldn't necessarily guarantee that you have room for 10^6 UPDATEs no matter what filesystem you're using. What you want is a way to get a guarantee from the OS that there will be some amount of disk space that you can use to grow some file. You can't get that portably. Nico --
Re: [sqlite] for what reason :memory: is much slower than /dev/shm/dummy.db
You might discover that a memory-based database has few advantages over a disk-based one, since Sqlite uses caching and the OS uses virtual-memory file caching. The main difference might be that initial accesses are slower while the cache fills up. I understand that Sqlite uses a less efficient algorithm for :memory: databases than for the caching on regular files, and that means lower performance on those memory DBs. [EMAIL PROTECTED] wrote: Hi there, we are working on a challenging project with very high performance requirements. While doing some debugging we discovered that SQLite's method for creating a memory-based database is much slower than using e.g. /dev/shm on Linux or tmpfs on Solaris. (We have measured roughly a 20-minute advantage for the /dev/shm approach: a batch run takes 70 min with :memory: and just 49 min using /dev/shm.) Because our project needs to be ported to Windows, /dev/shm is not an option; Windows 2000 does not support a temporary memory-based file system. But apart from that, we suspect there is a way to tune :memory:, or that we are doing something wrong when using :memory: (for example pragma page_size ...). Can anybody give us some advice on tuning our :memory: database so that it becomes as fast as the /dev/shm alternative? Thanks roland
Re: [sqlite] Preallocating fixed disk space for database ...
On Fri, 1 Dec 2006, John Stanton wrote: I cannot see a reason for what you propose, but you could do it by brute force and ignorance - populate the DB with 1 million rows then delete them all to add all the space to the free pages list. Then your insertions will use the freed pages, not fresh ones. Actually, this harkens back to the dedicated transaction-oriented file systems (HP 3000/MPE), which were extent based. File "extents" could be pre-allocated to ensure a minimum number of transactions were captured before the disk ran out of space -- this also had the advantage of contiguous allocation, which ensured data proximity. The administrator could control the size and number of extents, as well as the number pre-allocated (if any) and, indeed, the maximum file size. I believe that some modern file system development work is looking back towards the 60's and 70's, and modern high-performance extent-based file systems are in development -- at least in the open systems (Linux?) area, but I have no data to back this up, and no clue as to whether they are ready for prime time (other than a fuzzily remembered magazine article). As for the OP, this type of operation is OS dependent and should not be relegated to SQLite, IMHO.
Re: [sqlite] for what reason :memory: is much slower than /dev/shm/dummy.db
John Stanton <[EMAIL PROTECTED]> wrote: > > I understand that Sqlite uses a less efficient algorithm for :memory: > databases than for the caching on regular files and that means lower > performance on those memory DBs. > It uses exactly the same algorithm. It is just that the hash tables are fixed sized and are designed for a cache, not for an entire database. -- D. Richard Hipp <[EMAIL PROTECTED]>
Re: [sqlite] Preallocating fixed disk space for database ...
My recollection of using MPE was performance measured in furlongs per fortnight, and the need to do that allocation strictly to get contiguous space to try to counter the dismal performance of the HP-IB disks, which were boat anchors on some of the HP 3000s. Fortunately we don't have to indulge in that nonsense any more. Rob Sciuk wrote: On Fri, 1 Dec 2006, John Stanton wrote: I cannot see a reason for what you propose, but you could do it by brute force and ignorance - populate the DB with 1 million rows then delete them all to add all the space to the free pages list. Then your insertions will use the freed pages, not fresh ones. Actually, this harkens back to the dedicated transaction-oriented file systems (HP 3000/MPE), which were extent based. File "extents" could be pre-allocated to ensure a minimum number of transactions were captured before the disk ran out of space -- this also had the advantage of contiguous allocation, which ensured data proximity. The administrator could control the size and number of extents, as well as the number pre-allocated (if any) and, indeed, the maximum file size. I believe that some modern file system development work is looking back towards the 60's and 70's, and modern high-performance extent-based file systems are in development -- at least in the open systems (Linux?) area, but I have no data to back this up, and no clue as to whether they are ready for prime time (other than a fuzzily remembered magazine article). As for the OP, this type of operation is OS dependent and should not be relegated to SQLite, IMHO.
Re: [sqlite] for what reason :memory: is much slower than /dev/shm/dummy.db
> -----Original Message----- > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] > Sent: Friday, 1 December 2006 18:06 > To: sqlite-users@sqlite.org > Subject: Re: [sqlite] for what reason :memory: is much slower than > /dev/shm/dummy.db > > John Stanton <[EMAIL PROTECTED]> wrote: > > I understand that Sqlite uses a less efficient algorithm for :memory: > > databases than for the caching on regular files and that means lower > > performance on those memory DBs. > > It uses exactly the same algorithm. It is just that the hash > tables are fixed sized and are designed for a cache, not for > an entire database. Does this mean that if I use :memory: databases, the cache layer of sqlite is disabled? thanks in advance roland > -- > D. Richard Hipp <[EMAIL PROTECTED]>
Re: [sqlite] for what reason :memory: is much slower than /dev/shm/dummy.db
[EMAIL PROTECTED] wrote: > > It uses exactly the same algorithm. It is just that the hash > > tables are fixed sized and are designed for a cache, not for > > an entire database. > > Does this mean that if I use :memory: databases, the cache layer of sqlite > is disabled? > No. It means the cache is the whole in-memory database, rather than just a cache of the most recently accessed pages of the disk database. When you specify a file named ":memory:" about all that happens is that write-to-disk is disabled. Everything that would have been written to disk stays permanently in cache. Thus it becomes an in-memory database. -- D. Richard Hipp <[EMAIL PROTECTED]>
Re: [sqlite] for what reason :memory: is much slower than /dev/shm/dummy.db
At 09:34 01/12/2006, you wrote: Hi there, we are working on a challenging project with very high performance requirements. While doing some debugging we discovered that SQLite's method for creating a memory-based database is much slower than using e.g. /dev/shm on Linux or tmpfs on Solaris. (We have measured roughly a 20-minute advantage for the /dev/shm approach: a batch run takes 70 min with :memory: and just 49 min using /dev/shm.) Because our project needs to be ported to Windows, /dev/shm is not an option; Windows 2000 does not support a temporary memory-based file system. But apart from that, we suspect there is a way to tune :memory:, or that we are doing something wrong when using :memory: (for example pragma page_size ...). Can anybody give us some advice on tuning our :memory: database so that it becomes as fast as the /dev/shm alternative? Thanks roland On our project we stopped using :memory: databases and use only a RAM-disk file system. From time to time we make a snapshot to hard disk or another persistent medium. On Windows I suppose you can make a RAM disk using malloc and copy your database file there, set the pragma for temporary files to memory, and disable journaling. Write new I/O routines (open/close, read/write, etc.) to access your malloc'd RAM disk. Again, from time to time stop reads/writes to the database and save it to disk. HTH --- Scientists have shown that the moon is moving away at a tiny yet measurable distance from the earth every year. If you do the math, you can calculate that 85 million years ago the moon was orbiting the earth at a distance of about 35 feet from the earth's surface. This would explain the death of the dinosaurs. The tallest ones, anyway.
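The periodic-snapshot idea can be sketched with Python's stdlib sqlite3 wrapper: iterdump serializes the in-memory database as SQL, which is replayed into a file on disk (paths and names are illustrative; newer Pythons also offer Connection.backup for the same purpose):

```python
import os
import sqlite3
import tempfile

mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE t (x TEXT)")
mem.execute("INSERT INTO t VALUES ('hello')")
mem.commit()

# Snapshot: dump the in-memory database as SQL and replay it into a file.
snapshot = os.path.join(tempfile.mkdtemp(), "snapshot.db")
disk = sqlite3.connect(snapshot)
disk.executescript("\n".join(mem.iterdump()))
```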
Re: [sqlite] for what reason :memory: is much slower than /dev/shm/dummy.db
Eduardo Morras wrote: > On our project we stopped using :memory: databases and instead keep the database file on a RAM-disk file system, making a snapshot to disk from time to time. [...] You might find you can get the same performance in a simpler way by just disabling synchronous writes. Read Dr Hipp's explanation of a memory database for the reason.
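The suggestion above — skip the RAM disk and just stop SQLite from syncing — can be sketched with a few pragmas. The sketch below uses Python's stdlib sqlite3 bindings for illustration; the pragma names are standard SQLite, but note all three trade crash-safety for speed, and whether they fully close the gap to /dev/shm depends on the workload:

```python
import os
import sqlite3
import tempfile

# Hypothetical scratch database on an ordinary filesystem.
path = os.path.join(tempfile.mkdtemp(), "scratch.db")
conn = sqlite3.connect(path)

conn.execute("PRAGMA synchronous = OFF")      # no fsync() after writes
conn.execute("PRAGMA journal_mode = MEMORY")  # rollback journal kept in RAM
conn.execute("PRAGMA temp_store = MEMORY")    # temp tables/indices in RAM

conn.execute("CREATE TABLE t (x INTEGER)")
with conn:  # one transaction for the whole batch
    conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 1000
```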
Re: [sqlite] for what reason :memory: is much slower than /dev/shm/dummy.db
Because our project needs to be ported to windows - the /dev/shm is not an option - because win2000 does not support any temporary memory based file system. Not so. "FILE: Ramdisk.sys sample driver for Windows 2000" http://support.microsoft.com/kb/257405 Even includes C code, along with a binary. A Google search reveals several other ones, some commercial, for 2K and XP.
Re: [sqlite] for what reason :memory: is much slower than /dev/shm/dummy.db
On 12/1/06, Isaac Raway <[EMAIL PROTECTED]> wrote: > Because our project needs to be ported to windows - the /dev/shm is not an > option - because win2000 does not support any temporary memory based file > system. Not so. "FILE: Ramdisk.sys sample driver for Windows 2000" http://support.microsoft.com/kb/257405 Even includes C code, along with a binary. A Google search reveals several other ones, some commercial, for 2K and XP. Try http://www.winsoft.sk/ramdisk.htm Just installed it and it works perfectly. Looks like it's $35 through SWREG.
[sqlite] compiler option SQLITE_OMIT_PARSER
Hello all, I've read the documentation, the wiki, and the pages at http://www.sqlite.org/compile.html However, I cannot find what is the purpose of the compiler option SQLITE_OMIT_PARSER Clearly it "Omits" the "Parser", but my real questions are: what is the purpose of the parser? What are the ramifications of omitting it from SQLite? Thank you, Noah Hart CONFIDENTIALITY NOTICE: This message may contain confidential and/or privileged information. If you are not the addressee or authorized to receive this for the addressee, you must not use, copy, disclose, or take any action based on this message or any information herein. If you have received this message in error, please advise the sender immediately by reply e-mail and delete this message. Thank you for your cooperation.
[sqlite] When to run analyze?
Having a reasonably big SQLite database, 21 tables, one table with some millions of rows, overall file size about 1.3 Gb. This table will only get SELECT statements once it has been created. I have now run analyze on this file and it does speed queries up indeed. Do I run analyze after the table has been completed (all tables created and populated) or should I run analyze before the creation of any tables as I read somewhere? RBS
Re: [sqlite] When to run analyze?
On 12/1/06, RB Smissaert <[EMAIL PROTECTED]> wrote: Having a reasonably big SQLite database, 21 tables, one table with some millions of rows, overall file size about 1.3 Gb. This table will only get SELECT statements once it has been created. I have now run analyze on this file and it does speed queries up indeed. Do I run analyze after the table has been completed (all tables created and populated) or should I run analyze before the creation of any tables as I read somewhere? Frankly, I don't know exactly what ANALYZE does, but I would think it should be run after all the tables have been populated, so ANALYZE can make sure all the indexes are complete and calculate whatever stats SQLite might be depending on internally. That is how I do it. -- Puneet Kishor http://punkish.eidesis.org/ Nelson Inst. for Env. Studies, UW-Madison http://www.nelson.wisc.edu/ Open Source Geospatial Foundation https://edu.osgeo.org/ - collaborate, communicate, compete -
Re: [sqlite] When to run analyze?
"RB Smissaert" <[EMAIL PROTECTED]> wrote: > Having a reasonably big SQLite database, 21 tables, one table with some > millions of rows, overall file size about 1.3 Gb. This table will only get > SELECT statements once it has been created. I have now run analyze on this > file and it does speed queries up indeed. > Do I run analyze after the table has been completed (all tables created and > populated) or should I run analyze before the creation of any tables as I > read somewhere? > Run ANALYZE after all data has been inserted into the table. -- D. Richard Hipp <[EMAIL PROTECTED]> - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] When to run analyze?
"RB Smissaert" <[EMAIL PROTECTED]> wrote: > Having a reasonably big SQLite database, 21 tables, one table with some > millions of rows, overall file size about 1.3 Gb. This table will only get > SELECT statements once it has been created. I have now run analyze on this > file and it does speed queries up indeed. > Do I run analyze after the table has been completed (all tables created and > populated) or should I run analyze before the creation of any tables as I > read somewhere? > BTW, I would be interested in seeing the output of SELECT * FROM sqlite_stat1; After you have run analyze. And also the output of running sqlite3_analyzer on your database, if you are willing to share it. This kind of data helps me to see what people are putting into SQLite database, which in turn helps me to tune it better. -- D. Richard Hipp <[EMAIL PROTECTED]> - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] compiler option SQLITE_OMIT_PARSER
You send it SQL statements. It has to parse and compile them for execution. The parser understands the grammar of SQL and applies it. Noah Hart wrote: > However, I cannot find what is the purpose of the compiler option > SQLITE_OMIT_PARSER. Clearly it "Omits" the "Parser", but my real questions are: > what is the purpose of the parser? What are the ramifications of omitting it from SQLite?
Re: [sqlite] compiler option SQLITE_OMIT_PARSER
"Noah Hart" <[EMAIL PROTECTED]> wrote: > > However, I cannot find what is the purpose of the compiler option > SSQLITE_OMIT_PARSER > > Clearly it "Omits" the "Parser", but my real questions are: > what is the purpose of the parser. > What are the ramification of omitting it from sqlite? > SQLite, like every other SQL database engine, can be divided into two logical components. The front-end or "compiler" reads SQL statements and compiles them into bytecode. The back-end interprets the bytecode in order to do whatever is the statement requests. (Note: SQLite really uses bytecode. Other SQL engines do different things - usually they build a tree of some kind and then the backend walks the tree. But the concept is the same.) The frontend of SQLite is the larger of the two components. The sqlite3_prepare() API is the interface to the front-end. The sqlite3_stmt object that sqlite3_prepare() returns is really a little computer program in byte code. sqlite3_step() is the interface to the backend. sqlite3_step is a virtual machine for interpreting the bytecode. There is a proprietary extension for SQLite that allows you to compile SQLite without the compiler frontend. This makes the library much smaller. Sometimes that is important for embedded devices. The downside, is that the parser-less SQLite does not understand SQL. You have to feed it bytecode that you generated on a workstation using a version of SQLite that does have compiler built in. Typically, the generated bytecode is put into a special table in the database and you then run statements by number. In other words, the embedded device says things like "I now what to run statement 43 with parameters 1.43e17, 'hello', and NULL." -- D. Richard Hipp <[EMAIL PROTECTED]> - To unsubscribe, send email to [EMAIL PROTECTED] -
RE: [sqlite] When to run analyze?
OK, thanks. I can see it makes sense to run analyze after the DB is completed. I take it I can run the analyze before the commit, just as any other action query. This is the output from SELECT * FROM sqlite_stat1;

tbl idx stat
---
ENTRY IDX14ENTRY 3121645 104
ENTRY IDX13ENTRY 3121645 208110
ENTRY IDX12ENTRY 3121645 124
ENTRY IDX11ENTRY 3121645 37611
ENTRY IDX10ENTRY 3121645 31
ENTRY IDX9ENTRY 3121645 251
ENTRY IDX8ENTRY 3121645 312165
ENTRY IDX7ENTRY 3121645 29
ENTRY IDX6ENTRY 3121645 433
ENTRY IDX5ENTRY 3121645 429
ENTRY IDX4ENTRY 3121645 6
ENTRY IDX3ENTRY 3121645 217
ENTRY IDX2ENTRY 3121645 51175
ENTRY IDX1ENTRY 3121645 51175
ENTRYLINK IDX3ENTRYLINK 151096 12
ENTRYLINK IDX2ENTRYLINK 151096 3
ENTRYLINK IDX1ENTRYLINK 151096 75548
ENTRYLINK IDX0ENTRYLINK 151096 1
PHONELINK IDX4PHONELINK 25817 5164
PHONELINK IDX3PHONELINK 25817 12909
PHONELINK IDX2PHONELINK 25817 2
PHONELINK IDX1PHONELINK 25817 2
PHONE IDX2PHONE 18438 28
PHONE IDX1PHONE 18438 3
REMINDER IDX6REMINDER 30746 156
REMINDER IDX5REMINDER 30746 2
REMINDER IDX4REMINDER 30746 12
REMINDER IDX3REMINDER 30746 7687
REMINDER IDX2REMINDER 30746 10
REMINDER IDX1REMINDER 30746 10249
ADDRESSLINK IDX2ADDRESSLINK 25793 2
ADDRESSLINK IDX1ADDRESSLINK 25793 2
ADDRESS IDX7ADDRESS 17399 153
ADDRESS IDX6ADDRESS 17399 61
ADDRESS IDX5ADDRESS 17399 28
ADDRESS IDX4ADDRESS 17399 2
ADDRESS IDX3ADDRESS 17399 27
ADDRESS IDX2ADDRESS 17399 11
ADDRESS IDX1ADDRESS 17399 13
PATIENT IDX8PATIENT 25324 9
PATIENT IDX7PATIENT 25324 5
PATIENT IDX6PATIENT 25324 37
PATIENT IDX5PATIENT 25324 2
PATIENT IDX4PATIENT 25324 2
PATIENT IDX3PATIENT 25324 6331
PATIENT IDX2PATIENT 25324 3618
PATIENT IDX1PATIENT 25324 3166
AUTHORISATION IDX9AUTHORISATION 259812 109
AUTHORISATION IDX8AUTHORISATION 259812 104
AUTHORISATION IDX7AUTHORISATION 259812 18
AUTHORISATION IDX6AUTHORISATION 259812 528
AUTHORISATION IDX5AUTHORISATION 259812 890
AUTHORISATION IDX4AUTHORISATION 259812 1556
AUTHORISATION IDX3AUTHORISATION 259812 62
AUTHORISATION IDX2AUTHORISATION 259812 2
AUTHORISATION IDX1AUTHORISATION 259812 2260
AUTHORISATION IDX0AUTHORISATION 259812 1
ENCOUNTER IDX4ENCOUNTER 639545 30
ENCOUNTER IDX3ENCOUNTER 639545 13
ENCOUNTER IDX2ENCOUNTER 639545 16399
ENCOUNTER IDX1ENCOUNTER 639545 10316
ENTRY_ATTRIBUTE IDX5ENTRY_ATTRIBUTE 289381 54
ENTRY_ATTRIBUTE IDX4ENTRY_ATTRIBUTE 289381 15
ENTRY_ATTRIBUTE IDX3ENTRY_ATTRIBUTE 289381 2
ENTRY_ATTRIBUTE IDX2ENTRY_ATTRIBUTE 289381 36173
ENTRY_ATTRIBUTE IDX1ENTRY_ATTRIBUTE 289381 20
DISCONTINUATION IDX3DISCONTINUATION 93204 7170
DISCONTINUATION IDX2DISCONTINUATION 93204 2
DISCONTINUATION IDX1DISCONTINUATION 93204 8
SENSITIVITY IDX4SENSITIVITY 441 2
SENSITIVITY IDX3SENSITIVITY 441 89
SENSITIVITY IDX2SENSITIVITY 441 89
SENSITIVITY IDX1SENSITIVITY 441 441
EXTENDED_TEXT IDX1EXTENDED_TEXT 15424 2
EXTENDED_TEXT IDX0EXTENDED_TEXT 15424 2571
ISSUE IDX6ISSUE 915819 61
ISSUE IDX5ISSUE 915819 5
ISSUE IDX4ISSUE 915819 4
ISSUE IDX3ISSUE 915819 2313
ISSUE IDX2ISSUE 915819 172
ISSUE IDX1ISSUE 915819 38160
REFERRAL IDX6REFERRAL 11046 16
REFERRAL IDX5REFERRAL 11046 16
REFERRAL IDX4REFERRAL 11046 16
REFERRAL IDX3REFERRAL 11046 1381
REFERRAL IDX2REFERRAL 11046 850
REFERRAL IDX1REFERRAL 11046 614

RBS

-Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: 01 December 2006 21:01 To: sqlite-users@sqlite.org Subject: Re: [sqlite] When to run analyze? "RB Smissaert" <[EMAIL PROTECTED]> wrote: > [...] BTW, I would be interested in seeing the output of SELECT * FROM sqlite_stat1; after you have run analyze. And also the output of running sqlite3_analyzer on your database, if you are willing to share it. This kind of data helps me to see what people are putting into SQLite databases, which in turn helps me to tune it better. -- D. Richard Hipp <[EMAIL PROTECTED]>
RE: [sqlite] When to run analyze?
> And also the output of running sqlite3_analyzer on your database Have run that now and the output looks impressive, although I have no idea yet how to use it. The output file is 72 Kb. Shall I send it as an attachment to this user group? RBS
Re: [sqlite] When to run analyze?
"RB Smissaert" <[EMAIL PROTECTED]> wrote: > > And also the output of running sqlite3_analyzer on your database > > Have run that now and the output looks impressive, although I have no idea > yet how to use it. Output file is 72 Kb. Shall I send it as an attachement > to this user-group? > Direct email to me will be fine. Unless somebody else on the group wants to see it. -- D. Richard Hipp <[EMAIL PROTECTED]> - To unsubscribe, send email to [EMAIL PROTECTED] -
RE: [sqlite] compiler option SQLITE_OMIT_PARSER
Thank you. So as I understand it, the option SQLITE_OMIT_PARSER means that I've already parsed the statements and am not supporting "ad-hoc" SQL. What about the compiler option SQLITE_OMIT_CHECK? Thanks, Noah Hart -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: Friday, December 01, 2006 2:10 PM To: sqlite-users@sqlite.org Subject: Re: [sqlite] compiler option SQLITE_OMIT_PARSER "Noah Hart" <[EMAIL PROTECTED]> wrote: > [...] -- D. Richard Hipp <[EMAIL PROTECTED]>
RE: [sqlite] When to run analyze?
OK, have sent the file to [EMAIL PROTECTED] RBS -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: 02 December 2006 00:00 To: sqlite-users@sqlite.org Subject: Re: [sqlite] When to run analyze? > [...]
Re: [sqlite] compiler option SQLITE_OMIT_PARSER
"Noah Hart" <[EMAIL PROTECTED]> wrote: > Thank you, so as I understand it, the option SQLITE_OMIT_PARSER mean > that I've already parsed the statements, and are not supporting "ad-hoc" > SQL. > > What about the compiler option SQLITE_OMIT_CHECK > CHECK constrants. -- D. Richard Hipp <[EMAIL PROTECTED]> - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] sqlite3_open (new Win32 thread)
I think this is a different question, unrelated to the previous sqlite_open thread. I'm in a WIN32 environment. I'm using: h = CreateFile(path, GENERIC_READ, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); to establish whether my DB exists before I open it. If it does exist, I close the handle "h" immediately. I then call sqlite3_open. If the file did not previously exist, I then create my tables. I have multiple processes accessing the resulting DB. One of those processes only queries the DB. I call: h = CreateFile(path, GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); to verify the file exists before declaring an error if it does not exist. The problem is that CreateFile is returning an error in this case and GetLastError() tells me that "The process cannot access the file because it is being used by another process." Is there any reason that the second CreateFile above would be incompatible with whatever SQLite uses to open the file?
Re: [sqlite] sqlite3_open (new Win32 thread)
On 12/1/06, Dixon Hutchinson <[EMAIL PROTECTED]> wrote: I have multiple processes accessing the resulting DB. One of those processes only queries the DB. I call: h = CreateFile(path, GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); to verify the file exists before declaring an error if it does not exist. The problem is that CreateFile is returning an error in this case and GetLastError() tells me that "The process cannot access the file because it is being used by another process." Is there any reason that the second CreateFile above would be incompatible with whatever SQLite uses to open the file? You do not specify FILE_SHARE_WRITE, so if another process has it open with GENERIC_WRITE (or equivalent) permissions, your call will fail.
[sqlite] Some comments on virtual tables
I am busy adding virtual table support to APSW and have some comments from my progress so far: The structure types and "subclassing" method of using them work well and the API is easy to use. (Admittedly I did spend a lot of quality time with the xBestIndex method documentation, but that is to be expected.) There is no documentation on exactly what the Begin/Sync/Commit/Rollback methods should do. I can take an educated guess, but the presence of Sync confuses things. E.g. is Commit expected to do a sync internally, or will SQLite call Sync as well? Does Begin mean a write will happen (e.g. should I fail it for a read-only database)? Will there always be a Begin? In the same territory, I would assume all changes made since a Begin should be held temporarily until a commit, or discarded in a rollback. Should those changes be visible in all cursors, or just the one Update is called in? What effect does Update have on the cursor (such as its position)? Do updated values become visible in the same iteration? How do you not get infinite loops with "update foo set bar=bar+1"? BestIndex is very hairy! One API change I suggest is in the output aConstraintUsage.argvIndex - don't make it off by one. Currently if you set it to 2 then it is argv[1] in xFilter. It would be way better if they corresponded directly to each other. A value < 0 (e.g. -1) could indicate not to send it to xFilter. When making the argument list for the Python code, I omit all aConstraint entries where usable is false, hiding the fact they were ever supplied. Is this okay? (I.e. would there ever be a situation where the Python code would want to see unusable constraints?) On a stylistic and consistency note, what is the ideal way to pass the constraint op to the Python code? I currently do it as an integer but could use a string instead (eg '>=' instead of 32).
I think strings look nicer, but if the list of constraints is going to get longer or somehow be combined (they are powers of two, implying they could be or'ed together) then strings definitely can't be used. Another stylistic and consistency question: I can make several methods optional in the Python code. One example is Sync. Another is Update, where I can return SQLITE_READONLY if the developer didn't supply Update. If Connect isn't present then call Create. BestIndex is another that doesn't need to be present. Any opinions on letting many methods be optional, which makes for far shorter "hello world" examples, vs requiring users to provide all methods so that they have to think through the consequences and interactions between methods? Roger
Re: [sqlite] sqlite_open
Cnichols wrote: > I disagree; I myself see this as a feature. I think the programmer should be > responsible for file checking, because if it is not a SQLite database SQLite > will inform you. The one fly in the ointment is all the Unicode/UTF-8 stuff. Your own code has to go through the exact same set of steps as SQLite and call the same OS API, otherwise the name you think a file has and the name SQLite thinks it has could be different. SQLite doesn't usually check that what you give it is valid UTF-8, so this can get very messed up. If you are using ASCII everywhere then it is no big deal. But with users elsewhere, and especially with files stored in their home directory/my documents/documents, you will end up with accented characters and other fun things in the file names. Roger
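Roger's warning boils down to: derive the name you hand the OS and the name you hand SQLite from the same string, encoded the same way. High-level bindings usually do this for you, as in this Python stdlib sqlite3 sketch with an accented filename (the path is invented for illustration):

```python
import os
import sqlite3
import tempfile

# An accented character in the database name; Python encodes the same
# text for both the filesystem call and the SQLite open, so the two
# views of the name cannot drift apart.
path = os.path.join(tempfile.mkdtemp(), "caf\u00e9.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.commit()
conn.close()
print(os.path.exists(path))  # existence check and open agree on the name
```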
Re: [sqlite] sqlite3_open (new Win32 thread)
On 12/1/06, Dixon Hutchinson <[EMAIL PROTECTED]> wrote: > [...] Is there any reason that the second CreateFile above would be incompatible with whatever SQLite uses to open the file? I'm afraid I don't have an answer to your problem, but is there a reason you can't use the syntax CREATE TABLE IF NOT EXISTS? -- Cory Nelson http://www.int64.org
Re: [sqlite] sqlite3_open (new Win32 thread)
Trevor, Just to be safe, I tried specifying FILE_SHARE_WRITE; that did not help. FILE_SHARE_WRITE would allow other "writers" to open the file shared, but should not have any effect on whether this open succeeds. If I ignore the failure and try to open the DB with sqlite3_open, the open succeeds just fine. All I want to do, really, is test for the existence of the file and whether or not I have permission to open it with sqlite. Trevor Talbot wrote: > You do not specify FILE_SHARE_WRITE, so if another process has it open with GENERIC_WRITE (or equivalent) permissions, your call will fail.
Re: [sqlite] sqlite3_open (new Win32 thread)
On 12/1/06, Dixon Hutchinson <[EMAIL PROTECTED]> wrote: Just to be safe, I tried specifying FILE_SHARE_WRITE; that did not help. FILE_SHARE_WRITE would allow other "writers" to open the file shared, but should not have any effect on whether this open succeeds. Sharing semantics are bidirectional and affect all opens. In order for two processes to have a particular file open at the same time, they must both agree to share the file for the relevant access rights. If another process has a file open for write access, and you then try to open it for read access but do not agree to allow other processes to write to it at the same time, your open cannot succeed. Otherwise the other process could write to the file when you are not prepared for it, causing you to read inconsistent data. If I ignore the failure and try to open the DB with sqlite3_open, the open succeeds just fine. All I want to do really is test for the existence of the file and whether or not I have permissions to open it with sqlite. sqlite's open calls are in os_win.c. The standard sqlite3WinOpenReadWrite() requests GENERIC_READ and GENERIC_WRITE access, with FILE_SHARE_READ|FILE_SHARE_WRITE sharing.