Re: [sqlite] stored procedures
One reason might be that SQLite does not usually include anything that can be just as well implemented externally. I could be missing something, because I'm not really sure of the advantages of stored procedures, but it seems to me that an implementation could be created without modifying SQLite itself.

--- Lloyd Dupont <[EMAIL PROTECTED]> wrote:
> I'm not sure that's a real justification. I believe that stored procedures
> are more than a convenience to avoid multiple client-server calls.
>
> For example, lately I wanted to create two related tables (kind of
> MASTER_TABLE, PROPERTY_TABLE); a property should be destroyed/created along
> with its master record. I used a trigger for that. But I read once that
> triggers are evil, and I do feel it: when I write my INSERT into
> MASTER_TABLE, it's quite easy to forget that a record is created in
> PROPERTY_TABLE as well (with link IDs updated in both tables). Whereas a
> stored procedure would have let me look at the whole thing as one single
> operation.
>
> ----- Original Message -----
> From: "Jay Sprenkle" <[EMAIL PROTECTED]>
> To:
> Sent: Wednesday, June 15, 2005 12:40 AM
> Subject: Re: [sqlite] stored procedures
>
> >> BTW, I wonder why SQLite doesn't support stored procedures.
> >> Through triggers it already supports some similar functionality.
> >> Certainly, having written the trigger code, it wouldn't have been that
> >> much more code to support stored procedures as well. That kind of
> >> puzzles me... is there any rationale for the lack of stored procedures?
> >
> > Stored procedures are code run by the server. There isn't a server with
> > SQLite. SQLite is a database file structure that uses SQL to access it.
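The trigger approach Lloyd describes can be sketched as follows. This is a hypothetical illustration (Python's `sqlite3` module is used only for convenience; the column names and the exact trigger bodies are assumptions, not from the thread):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE master_table (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE property_table (id INTEGER PRIMARY KEY,
                             master_id INTEGER REFERENCES master_table(id),
                             value TEXT);

-- Create/destroy the property row together with its master record,
-- so callers cannot forget the second INSERT/DELETE.
CREATE TRIGGER master_insert AFTER INSERT ON master_table
BEGIN
    INSERT INTO property_table (master_id, value) VALUES (NEW.id, '');
END;

CREATE TRIGGER master_delete AFTER DELETE ON master_table
BEGIN
    DELETE FROM property_table WHERE master_id = OLD.id;
END;
""")

con.execute("INSERT INTO master_table (name) VALUES ('m1')")
rows = con.execute("SELECT COUNT(*) FROM property_table").fetchone()[0]
print(rows)  # the trigger created the linked property row: 1
```

This is exactly the situation Lloyd dislikes: the second INSERT happens invisibly, which is convenient but easy to forget about when reading the calling code later.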
Re: [sqlite] Reads and Writes
> I can imagine a protocol that will improve performance by keeping track
> of whether the database was updated, which would allow a quick
> determination whether a program's cache was still valid. The idea is to
> increment a 'version number' in the database file as soon as a program
> obtains a write lock. A reader could then determine whether the file was
> written to, and flush the cache only when necessary, speeding things up a
> little. A 128-bit counter should never wrap around.

Such a counter (32 bits only) was included in the file format for version 3. Current versions of the library update the counter on every commit, but it is not yet being used to allow persistent caches. I think such a change would require a lot of testing before it would be safe enough to use. And as you say, sophisticated operating systems do a pretty good job of caching the file anyway.
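The 32-bit counter mentioned here is the "file change counter", stored big-endian at byte offset 24 of the database header. A small sketch (Python used purely for illustration) that watches it advance across commits:

```python
import os
import sqlite3
import struct
import tempfile

def change_counter(path):
    """Read the big-endian 32-bit file change counter at header offset 24."""
    with open(path, "rb") as f:
        f.seek(24)
        return struct.unpack(">I", f.read(4))[0]

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (x)")
con.commit()

before = change_counter(path)
con.execute("INSERT INTO t VALUES (1)")
con.commit()
after = change_counter(path)
print(after > before)  # each committing write bumps the counter
```

This is the mechanism a persistent-cache scheme would poll: if the counter has not changed since the cache was filled, the cached pages are still valid.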
Re: [sqlite] Reads and Writes
Sean Heber wrote:
> My database file is only around 4MB and I have set the
> default_cache_size to 5. From what I've read, that should
> translate to almost 50MB of cache size which would be more than
> enough to keep the entire database in memory, I'd think. Yet it
> doesn't seem to actually do that since it is reading from the file so
> often.

SQLite cannot know whether the database file has been modified by another process once it releases its file locks, so it needs to read everything again. If you wrap multiple queries in a transaction, the file stays locked, the cache remains valid, and the number of seeks/reads should go down.

After SQLite obtains a read lock it will have to perform at least the following:

- check the file header to check for corruption
- check for a hot transaction log, and roll back changes if necessary
- read the database schema from the sqlite_master table

This is the most likely cause of the repetitive behavior you're seeing. That said: the OS will of course cache the database file as well, and many of the 'read' calls will be very fast.

I can imagine a protocol that will improve performance by keeping track of whether the database was updated, which would allow a quick determination whether a program's cache was still valid. The idea is to increment a 'version number' in the database file as soon as a program obtains a write lock. A reader could then determine whether the file was written to and flush the cache only when necessary, speeding things up a little. A 128-bit counter should never wrap around.

Gé
-- 
Ge' Weijers e-mail: [EMAIL PROTECTED] tel: (520)623-8542
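The transaction-wrapping advice can be sketched like this (Python's `sqlite3` module used for illustration; the table and data are made up):

```python
import sqlite3

# Autocommit connection so BEGIN/COMMIT can be issued explicitly.
con = sqlite3.connect(":memory:", isolation_level=None)
con.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY, category TEXT);
CREATE INDEX idx_items_category ON items(category);
INSERT INTO items (category) VALUES ('a'), ('b'), ('a');
""")

# Outside a transaction, each SELECT re-acquires a read lock and may
# re-verify the header/journal/schema; inside one explicit transaction
# the read lock (and therefore the page cache) stays valid throughout.
con.execute("BEGIN")
count_a = con.execute(
    "SELECT COUNT(*) FROM items WHERE category = ?", ("a",)).fetchone()[0]
count_b = con.execute(
    "SELECT COUNT(*) FROM items WHERE category = ?", ("b",)).fetchone()[0]
con.execute("COMMIT")
print(count_a, count_b)  # 2 1
```

The effect only matters for a database file shared between processes; an in-memory database is just used here to keep the sketch self-contained.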
[sqlite] database disk image is malformed
I just recently switched from SQLite 2.8.14 to 3.2.2. Since switching, I've seen a few SQLITE_CORRUPT errors returned from sqlite3_exec. However, almost all queries work; this error seems to indicate only a transient condition. After receiving the error, my process closes the connection and opens a new one, and subsequent queries work. The db is in a ramdisk. I'm running on a Debian-based Linux system. I've read the material in http://sqlite.org/lockingv3.html on "How To Corrupt Your Database Files", but nothing seemed to apply. I don't think fsync has any effect when the db is in a ramdisk. I've also never seen this error using 2.8.14, with a daemon process running for weeks and logging every query. Any ideas?

Thanks,
Kevin
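One routine diagnostic for suspected corruption is SQLite's built-in `PRAGMA integrity_check`, which walks the whole file and reports problems as rows of text. A minimal sketch (Python used only for illustration):

```python
import sqlite3

def check_integrity(path):
    """Return True if PRAGMA integrity_check reports the database as clean."""
    con = sqlite3.connect(path)
    try:
        # integrity_check returns the single row ('ok',) for a healthy file,
        # or a list of human-readable problem descriptions otherwise.
        rows = con.execute("PRAGMA integrity_check").fetchall()
        return rows == [("ok",)]
    finally:
        con.close()

print(check_integrity(":memory:"))  # a fresh database is trivially clean: True
```

Running this at the moment the transient SQLITE_CORRUPT appears would help distinguish real on-disk damage from a stale in-memory state that a fresh connection clears.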
Re: [sqlite] preventing text to integer conversion of bind variables in perl
At 3:34 PM -0400 6/14/05, D. Richard Hipp wrote:
> On Tue, 2005-06-14 at 20:18 +, [EMAIL PROTECTED] wrote:
> > I have textual data that may look like integers (eg. "0325763213").
> > On insertion, any leading "0" will vanish. How do I prevent this and
> > make the data be inserted verbatim?
> >
> > Simple illustration:
> >
> > sqlite3 test 'create table t ( k text unique, v text);'
> > perl -e 'use DBI;
> >   $db = DBI->connect( "dbi:SQLite:dbname=test" );
> >   $db->do( qq[REPLACE INTO t VALUES(?,?);], undef, "key", "0325763213");'
> > sqlite3 test 'select * from t;'
> >
> > returns:
> >
> > key|325763213
>
> It looks like Perl is making this conversion for you. SQLite does not do
> this. As a work-around, consider prepending a single 'x' character to
> every "v" column entry, then strip off the 'x' before you use it.

Actually, Perl itself wouldn't be doing that. Perl only converts a string to a number when it is used in a numerical context, e.g. '$bar = $foo + 0'; otherwise it continues representing it as a string. Since the inserted value was string-quoted when it was defined, it started out as a string. I suspect that it is the DBD::SQLite module, or the DBI module, that is the problem. As I recall, DBD::SQLite was never updated to use the prepared-statements feature added to SQLite 3, and continues to emulate the feature which DBI defines (as it did for SQLite 2). It does this by substituting the values into the raw SQL and executing that as a SQL string without variables. Moreover, I think this emulation examines the variable, and if it looks like a number, inserts it into the SQL as a number rather than a character string, hence the loss of the zero. In that case, neither SQLite nor the Perl core is at fault, but the intermediary between them, and hence the best solution is to fix the intermediary so that it at least always string-quotes (or ask Matt to do it). I ruled out SQLite because you were using version 3 and explicitly defined the field as a character string. Meanwhile, you could follow the workaround that DRH mentioned.
-- Darren Duncan
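For comparison, a driver that uses real SQLite 3 parameter binding preserves the string unchanged. A quick sketch with Python's `sqlite3` module (chosen only because it binds parameters natively; it stands in for what a fixed DBD::SQLite would do):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (k TEXT UNIQUE, v TEXT)")

# Binding the value as a parameter hands SQLite a string, and the
# column's TEXT affinity keeps the leading zero intact.
con.execute("REPLACE INTO t VALUES (?, ?)", ("key", "0325763213"))

value = con.execute("SELECT v FROM t WHERE k = 'key'").fetchone()[0]
print(value)  # 0325763213
```

This supports Darren's diagnosis: when the bound value reaches SQLite as a string rather than being interpolated into the SQL text, no conversion occurs.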
[sqlite] Reads and Writes
My program has a lot of simple select queries. Most of them are of the "select count() from... " variety. These are very simple queries where there is a single WHERE clause and the columns referenced are either a primary key column or another indexed column. I would expect the database to keep the indexes in memory and thus have virtually no need to go to disk when doing these count() queries and such. Yet when I do an strace on the process, I see significant read() and seek() calls on the database file. What are the conditions for when it needs to read from the database file?

My database file is only around 4MB and I have set the default_cache_size to 5. From what I've read, that should translate to almost 50MB of cache size which would be more than enough to keep the entire database in memory, I'd think. Yet it doesn't seem to actually do that since it is reading from the file so often. I've seen this on OSX and Linux but this strace output is from Linux.

Snippet of strace output:

fcntl64(5, F_SETLK64, {type=F_RDLCK, whence=SEEK_SET, start=1073741824, len=1}, 0xbfffdb00) = 0
fcntl64(5, F_SETLK64, {type=F_RDLCK, whence=SEEK_SET, start=1073741826, len=510}, 0xbfffdb00) = 0
fcntl64(5, F_SETLK64, {type=F_UNLCK, whence=SEEK_SET, start=1073741824, len=1}, 0xbfffdb00) = 0
access("/db/data-journal", F_OK) = -1 ENOENT (No such file or directory)
fstat64(5, {st_mode=S_IFREG|0644, st_size=4402176, ...}) = 0
_llseek(5, 0, [0], SEEK_SET) = 0
read(5, "SQLite format 3\0\4\0\1\1\0@ \0\0\4\200\0\0\0\0"..., 1024) = 1024
_llseek(5, 3072, [3072], SEEK_SET) = 0
read(5, "\2\2]\0\22\1\214\6\0\0\16%\2\323\2\265\2z\2\361\3\245\3"..., 1024) = 1024
_llseek(5, 395264, [395264], SEEK_SET) = 0
read(5, "\n\0\0\0$\0\\\0\0\\\0v\0\217\0\251\0\303\0\335\0\367\1"..., 1024) = 1024
fcntl64(5, F_SETLK64, {type=F_UNLCK, whence=SEEK_SET, start=0, len=0}, 0xbfffdfd0) = 0
fcntl64(5, F_SETLK64, {type=F_RDLCK, whence=SEEK_SET, start=1073741824, len=1}, 0xbfffdb60) = 0
fcntl64(5, F_SETLK64, {type=F_RDLCK, whence=SEEK_SET, start=1073741826, len=510}, 0xbfffdb60) = 0
fcntl64(5, F_SETLK64, {type=F_UNLCK, whence=SEEK_SET, start=1073741824, len=1}, 0xbfffdb60) = 0
access("/db/data-journal", F_OK) = -1 ENOENT (No such file or directory)
fstat64(5, {st_mode=S_IFREG|0644, st_size=4402176, ...}) = 0
_llseek(5, 0, [0], SEEK_SET) = 0
read(5, "SQLite format 3\0\4\0\1\1\0@ \0\0\4\200\0\0\0\0"..., 1024) = 1024
_llseek(5, 2048, [2048], SEEK_SET) = 0
read(5, "\5\0\0\0\2\3\364\0\0\0\n\253\3\372\3\364\0\0\0\0\0\0\0"..., 1024) = 1024
_llseek(5, 5120, [5120], SEEK_SET) = 0
read(5, "\2\0\0\0\6\3\277\1\0\0\r\341\3\365\3\352\3\277\3\311\3"..., 1024) = 1024
_llseek(5, 1768448, [1768448], SEEK_SET) = 0
read(5, "\n\0\0\0q\0\370\0\0\370\0\377\1\6\1\f\1\22\1\30\1\36\1"..., 1024) = 1024
_llseek(5, 405504, [405504], SEEK_SET) = 0
read(5, "\5\0\0\0\207\1(\0\0\0\1\207\1(\1-\0012\0017\1<\1A\1F\1"..., 1024) = 1024
_llseek(5, 137216, [137216], SEEK_SET) = 0
read(5, "\r\0\0\0\1\1\35\0\1\35\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1024) = 1024
_llseek(5, 293888, [293888], SEEK_SET) = 0
read(5, "\r\0\0\0\2\0\217\0\0\217\3H\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1024) = 1024
_llseek(5, 406528, [406528], SEEK_SET) = 0
read(5, "\5\0\0\0|\1\30\0\0\0\n]\1\30\1\36\1$\1*\0010\0016\1<\1"..., 1024) = 1024
_llseek(5, 1178624, [1178624], SEEK_SET) = 0
read(5, "\r\0\0\0\2\0\353\0\0\353\3\221\0\0\0\0\0\0\0\0\0\0\0\0"..., 1024) = 1024
_llseek(5, 2127872, [2127872], SEEK_SET) = 0
read(5, "\r\0\0\0\3\0s\0\3\221\0s\0\342\0\0\0\0\0\0\0\0\0\0\0\0"..., 1024) = 1024
etc

There is more or less a pattern that is very similar to the snippet above that repeats over and over in the trace. It sort of seems like it is reading a header or something in the database file over and over again.
Additional notes: I set the following pragmas, in this order, when I create/open the database file (which only happens once when the app loads):

PRAGMA auto_vacuum = 1
PRAGMA temp_store = MEMORY
PRAGMA synchronous = OFF
PRAGMA default_cache_size = 5

Thanks,
Sean
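Note that the cache-size pragmas are measured in database pages, not kilobytes, so the cache footprint in bytes is the pragma value multiplied by the page size. A quick sketch of the arithmetic (Python for illustration; the value 5000 is an arbitrary example, not from the thread):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA cache_size = 5000")  # hypothetical value, in pages

page_size = con.execute("PRAGMA page_size").fetchone()[0]
cache_pages = con.execute("PRAGMA cache_size").fetchone()[0]

# Approximate cache footprint in bytes: pages * bytes-per-page.
print(cache_pages * page_size)
```

With the 1024-byte pages visible in the strace reads above, a cache_size of 5 would be on the order of 5KB, which may explain why the file is re-read so often.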
Re: [sqlite] preventing text to integer conversion of bind variables in perl
[EMAIL PROTECTED] wrote:
> I have textual data that may look like integers (eg. "0325763213").
> On insertion, any leading "0" will vanish. How do I prevent this and
> make the data be inserted verbatim?
>
> Simple illustration:
>
> sqlite3 test 'create table t ( k text unique, v text);'
> perl -e 'use DBI;
>   $db = DBI->connect( "dbi:SQLite:dbname=test" );
>   $db->do( qq[REPLACE INTO t VALUES(?,?);], undef, "key", "0325763213");'
> sqlite3 test 'select * from t;'
>
> returns:
>
> key|325763213
>
> I am using the sqlite3_3.2.1-1 and libdbd-sqlite3-perl_1.08-1 packages
> from Debian, in case it matters.

Perhaps it matters, because I don't get the results you get...

D:\testers>sqlite3 test
SQLite version 3.2.1
Enter ".help" for instructions
sqlite> create table t (k text unique, v text);
sqlite> .q

---test.pl
#!perl -w
use DBI;
$db = DBI->connect("dbi:SQLite:dbname=test");
$db->do(qq[REPLACE INTO t VALUES (?, ?);], undef, "key", "0325763213");
---

D:\testers>test.pl

D:\testers>sqlite3 test
SQLite version 3.2.1
Enter ".help" for instructions
sqlite> select * from t;
key|0325763213
sqlite>

I am using the latest DBI and DBD::SQLite.
[sqlite] preventing text to integer conversion of bind variables in perl
I have textual data that may look like integers (eg. "0325763213"). On insertion, any leading "0" will vanish. How do I prevent this and make the data be inserted verbatim?

Simple illustration:

sqlite3 test 'create table t ( k text unique, v text);'
perl -e 'use DBI;
  $db = DBI->connect( "dbi:SQLite:dbname=test" );
  $db->do( qq[REPLACE INTO t VALUES(?,?);], undef, "key", "0325763213");'
sqlite3 test 'select * from t;'

returns:

key|325763213

I am using the sqlite3_3.2.1-1 and libdbd-sqlite3-perl_1.08-1 packages from Debian, in case it matters.

-jonathan
-- 
Jonathan H N Chin, 2 dan | deputy computer | Newton Institute, Cambridge, UK
<[EMAIL PROTECTED]> | systems mangler | tel/fax: +44 1223 767091/330508
"respondeo etsi mutabor" --Rosenstock-Huessy
Re: [sqlite] Question about the LIMIT keyword
Thanks for the suggestions! I'm going to try to do the query without the LIMIT keyword, and then only use the rows I need. I'm not sure if you can get a "pointer" and then fetch only the rows you need in Tcl, but I'll see if that is possible. The program I'm writing must be able to run on rather old machines, so in this case speed is important. However, I do agree that SQLite is a very fast database :)

Johan

Puneet Kishor wrote:
> Don't know about Tcl, but make sure that you don't fetch all the
> records (analogy: fetchall... methods in Perl DBI), but fetch just a
> pointer to the cursor (fetchrow... methods), and then step through the
> cursor. If you fetch all the records then the whole point is defeated.
>
> An alternative method is to define a separate table that keeps the
> COUNT of the rows, and define a trigger that keeps that COUNT updated
> every time you DELETE/INSERT/UPDATE on the main table.
>
> All depends on what you mean by "database is very large." For most
> "large" SQLite still should be very fast, but if its largeness is
> indeed contributing to a slowdown then the above alternatives should
> work.
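Puneet's trigger-maintained COUNT table can be sketched like this (Python's `sqlite3` used only for illustration; the table and trigger names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE main_table (id INTEGER PRIMARY KEY, data TEXT);

-- Side table holding the row count, kept current by triggers so that
-- "how many rows?" never requires a full scan of main_table.
CREATE TABLE row_counts (tbl TEXT PRIMARY KEY, n INTEGER);
INSERT INTO row_counts VALUES ('main_table', 0);

CREATE TRIGGER count_ins AFTER INSERT ON main_table
BEGIN
    UPDATE row_counts SET n = n + 1 WHERE tbl = 'main_table';
END;

CREATE TRIGGER count_del AFTER DELETE ON main_table
BEGIN
    UPDATE row_counts SET n = n - 1 WHERE tbl = 'main_table';
END;
""")

con.executemany("INSERT INTO main_table (data) VALUES (?)",
                [("a",), ("b",), ("c",)])
con.execute("DELETE FROM main_table WHERE data = 'b'")

n = con.execute(
    "SELECT n FROM row_counts WHERE tbl = 'main_table'").fetchone()[0]
print(n)  # 2
```

The trade-off is a small cost on every write in exchange for a constant-time count, which matches the "very large database" scenario in the thread.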