Re: [sqlite] Trigger and Select
Hi,

Shall we use the auto rowid? As in SQL it will show the value in the result, but I don't think we can add a column into every select.

selvaraj

> do you mean that the result of an SQL query should contain more columns
> than defined in the SQL itself?
> I can not imagine a trigger doing this...
>
> Martin
>
> Marco Bambini schrieb:
>> Hello,
>>
>> is it possible to create a Trigger that adds a column to every select
>> statement?
>> I am searching a way to automatically add rowID to every sql query...
>>
>> Thanks,
>> Marco Bambini
Re: [sqlite] Using variables within trigger definitions
Philipp Knüsel wrote:
> Is there a possibility to define variables within trigger definitions?
> I would like to do something like this (simplified example):
>
>   CREATE TRIGGER Side_Insert AFTER INSERT ON Side
>   BEGIN
>     SET LOWDATE = SELECT MIN(Startdate) FROM Basis;
>     INSERT INTO Basis (Name, Startdate) VALUES ('Trigger', LOWDATE);
>   END;
>
> I know there is the possibility to do the same with subselects, but I'm
> looking for something easier/faster/more elegant than doing the same
> subselect several times (as my real example is more complicated).

Philipp,

You can't have variables in SQLite, but you can use another table to store the result of your complex query (which only executes once) and use simple subselects to retrieve that value. Something like this:

  CREATE TABLE LowDate (date integer);
  INSERT INTO LowDate VALUES (0);

  CREATE TRIGGER Side_Insert AFTER INSERT ON Side
  BEGIN
    UPDATE LowDate SET date = (SELECT MIN(Startdate) FROM Basis);
    INSERT INTO Basis (Name, Startdate)
      VALUES ('Trigger', (SELECT date FROM LowDate));
  END;

HTH
Dennis Cote
Re: [sqlite] Possible 1GB database limit on Windows from PENDING_BYTE
thank you a lot for the feedback. It is not only Windows that uses advisory locking and aborts other reads; the same happens on Mac OS X with AFP volumes, or with a Samba CIFS volume mounted on a Linux client. I understand the problem, and I am considering adding a ".backup" command to the sqlite shell to back up a running database.

I still believe it is a good idea to move the lock range outside of the usable file range, e.g. to a very high offset such as (u64)-10. Word/Excel also use locks in a high byte range for simple interprocess communication about the current Word/Excel file.

Helmut Tschemernjak

D. Richard Hipp wrote:
> On Mon, 2005-05-16 at 15:58 +0200, Helmut Tschemernjak wrote:
>> thank you for the info, another possible problem is that I cannot back
>> up the database, e.g. "begin exclusive" and copy the entire DB file
>> while the main server app is still running. The copy aborts at offset
>> 1GB.
>
> This is only a problem on windows. (Unix lets you make a copy of the
> file even though it is locked - unix got this part right.) An easy
> work-around would be to write a custom "copy" program that ignores
> bytes 1GiB through 1GiB+510.
Re: [sqlite] Possible 1GB database limit on Windows from PENDING_BYTE
D. Richard Hipp wrote:
> On Mon, 2005-05-16 at 15:58 +0200, Helmut Tschemernjak wrote:
>> thank you for the info, another possible problem is that I cannot back
>> up the database, e.g. "begin exclusive" and copy the entire DB file
>> while the main server app is still running. The copy aborts at offset
>> 1GB.
>
> This is only a problem on windows. (Unix lets you make a copy of the
> file even though it is locked - unix got this part right.) An easy
> work-around would be to write a custom "copy" program that ignores
> bytes 1GiB through 1GiB+510.

If you are a member of the Windows "Backup Operators" group, you can CreateFile (open the file) with the flag FILE_FLAG_BACKUP_SEMANTICS. I *think* you can then copy a locked file.

Gert
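Such a "custom copy" could be sketched as below. This is a hedged illustration of the work-around, not production code: the function name is invented, and the locking range is a parameter so the routine can be exercised on small test files. For a real database on Windows, the range starts at the PENDING_BYTE offset (1 GiB, 0x40000000) and the reserved locking region is about 512 bytes; SQLite never stores data there, so writing zeros in its place is harmless.

```c
#include <stdio.h>
#include <string.h>

/* Copy src to dst, but never read the byte range
 * [lock_start, lock_start + lock_len): emit zeros for those bytes and
 * seek past them, so the copy succeeds even if that range is locked
 * and all other offsets in the backup line up with the original. */
int copy_skipping_locks(const char *src, const char *dst,
                        long lock_start, long lock_len) {
    FILE *in = fopen(src, "rb");
    FILE *out = fopen(dst, "wb");
    char buf[8192];
    long pos = 0;
    int rc = 0;
    if (in == NULL || out == NULL) {
        if (in) fclose(in);
        if (out) fclose(out);
        return -1;
    }
    for (;;) {
        size_t want = sizeof buf, n;
        if (pos == lock_start) {
            /* Write zeros in place of the (possibly locked) bytes. */
            long left = lock_len;
            memset(buf, 0, sizeof buf);
            while (left > 0) {
                size_t chunk = left < (long)sizeof buf ? (size_t)left
                                                       : sizeof buf;
                if (fwrite(buf, 1, chunk, out) != chunk) { rc = -1; break; }
                left -= (long)chunk;
            }
            if (rc || fseek(in, lock_len, SEEK_CUR) != 0) { rc = -1; break; }
            pos += lock_len;
            continue;
        }
        /* Clamp the read so it stops exactly at the locked range. */
        if (pos < lock_start && pos + (long)want > lock_start)
            want = (size_t)(lock_start - pos);
        n = fread(buf, 1, want, in);
        if (n == 0) break;  /* EOF or read error */
        if (fwrite(buf, 1, n, out) != n) { rc = -1; break; }
        pos += (long)n;
    }
    fclose(in);
    fclose(out);
    return rc;
}
```

For a real SQLite file one would call this with `lock_start = 0x40000000L` and `lock_len = 512`; the zeros are safe in the backup because that range is used for locking only.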
RE: [sqlite] Convert and isnumeric function
Dan,

You are absolutely correct - I always get those two confused. I should have looked it up instead of relying on memory. Liberia - Nigeria - two letters different. My ECC must be faulty. It only catches single bit errors. :-)

Mike

> -----Original Message-----
> From: Dan Kennedy [mailto:[EMAIL PROTECTED]]
> Sent: Monday, May 16, 2005 10:34 AM
> To: sqlite-users@sqlite.org
> Subject: RE: [sqlite] Convert and isnumeric function
>
>> There's a former Nigeria dictator Charles Taylor that is even worse.
>
> Liberia! Not that I could tell one from the other :)
RE: [sqlite] Convert and isnumeric function
> There's a former Nigeria dictator Charles Taylor that is even worse.

Liberia! Not that I could tell one from the other :)
RE: [sqlite] Convert and isnumeric function
Cory,

Thanks, I'll give it a try. Speed is of the utmost importance for me here, because I am building an array of IDs in memory that satisfy a certain condition, and then feeding that array of IDs back into the database, using the keys in the array as the condition for the next select. I keep iterating like this until there are no more IDs to add to the list. The array can get to be over 10,000 rows at up to 30 levels. I also track the level at which each ID was entered into the table, and obviously I eliminate the IDs past each level that are already in the array.

I currently do this in MSSQL 2K, and the CircleOfFriends database alone (without the profile data) is over 9GB and has over 94 million rows. It's not distributable in this format, so I am using sqlite to ship the 289,000+ profiles and build the lists on the fly in a web application. We ship a new profile every single day to our clients, and the circle-of-friends list needs to be rebuilt every time they get a new list.

Basically what it is, is a database of bad guys and companies from various worldwide lists, each with a unique ID assigned to them. Part of the database is a table of links to other bad guys that are in the list. I am using this method to build a 'Circle Of Friends' of the bad guys. We then rank each member of the circle according to how many other bad guys he/she is linked to, and assign a severity based on the list or lists that they are on. The list currently contains about 289,000+ profiles of the world's top bad guys. By the way - Usama Bin Laden is in a circle that contains about 9,000+ other bad guys/companies. There's a former Nigeria dictator Charles Taylor that is even worse.
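The iterative expansion Mike describes is essentially a breadth-first traversal of the link table: seed with one ID, fetch the neighbours of the current frontier, skip IDs already seen, record the level at which each ID was added, and stop when the frontier is empty or the level cap is reached. A minimal in-memory sketch of the idea (all names invented; the link table is a hard-coded adjacency matrix here, whereas the real application fetches neighbours with a SELECT per level):

```c
#include <string.h>

#define MAXID 16  /* tiny toy universe; the real lists hold 289,000+ IDs */

/* Expand the "circle of friends" of `seed` up to `max_levels` hops.
 * level[i] receives the hop count at which ID i entered the circle.
 * Returns the number of IDs in the circle, including the seed. */
static int circle_of_friends(int edges[MAXID][MAXID], int seed,
                             int level[MAXID], int max_levels) {
    int seen[MAXID] = {0};
    int frontier[MAXID], nfrontier = 1;
    int count = 1, lvl;
    frontier[0] = seed;
    seen[seed] = 1;
    level[seed] = 0;
    for (lvl = 1; lvl <= max_levels && nfrontier > 0; lvl++) {
        int next[MAXID], nnext = 0, i, j;
        /* In the real application this inner loop is a SELECT that uses
         * the current frontier IDs as its condition. */
        for (i = 0; i < nfrontier; i++)
            for (j = 0; j < MAXID; j++)
                if (edges[frontier[i]][j] && !seen[j]) {
                    seen[j] = 1;       /* eliminate already-collected IDs */
                    level[j] = lvl;    /* level at which the ID was added */
                    next[nnext++] = j;
                    count++;
                }
        memcpy(frontier, next, nnext * sizeof(int));
        nfrontier = nnext;
    }
    return count;
}
```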
Mike

> -----Original Message-----
> From: Cory Nelson [mailto:[EMAIL PROTECTED]]
> Sent: Sunday, May 15, 2005 1:36 PM
> To: sqlite-users@sqlite.org
> Cc: [EMAIL PROTECTED]
> Subject: Re: [sqlite] Convert and isnumeric function
>
> for a nice speed increase (no strlen and only one pointer dereference)
> try changing
>
>     for (i = 0; i < strlen (z); i++) {
>
> to
>
>     const char *iter;
>     char ch;
>     for (iter = z; (ch = *iter); ++iter) {
>
> and
>
>     if (i == 0) {
>
> to
>
>     if (iter == z) {
>
> and all the z[i] to ch.
>
> On 5/15/05, Michael Evenson <[EMAIL PROTECTED]> wrote:
>> This fixes the problem of negative numeric values. It does not,
>> however, handle locale-specific numbers (like using , as decimal
>> separator) or, for that matter, commas in the numbers. As I understand
>> it, sqlite does not handle locale-specific numbers in general.
>>
>> Mike
>>
>> /*
>> ** Implementation of the isnumeric() function
>> */
>> static void isnumericFunc(sqlite3_context *context, int argc,
>>                           sqlite3_value **argv)
>> {
>>   int i;
>>   int nResult = 1;
>>
>>   assert( argc==1 );
>>   switch( sqlite3_value_type(argv[0]) ){
>>     case SQLITE_INTEGER: {
>>       sqlite3_result_int(context, 1);
>>       break;
>>     }
>>     case SQLITE_FLOAT: {
>>       sqlite3_result_int(context, 1);
>>       break;
>>     }
>>     case SQLITE_NULL: {
>>       sqlite3_result_int(context, 0);
>>       break;
>>     }
>>     case SQLITE_TEXT: {
>>       int d = 0;
>>       const char *z = sqlite3_value_text(argv[0]);
>>       for (i = 0; i < strlen(z); i++) {
>>         if (!isdigit(z[i])) {
>>           /* the character is not a digit */
>>           if (i == 0) {
>>             /* only +, - and . allowed as first non-digit character */
>>             if ((z[i] != '-') && (z[i] != '+') && (z[i] != '.')) {
>>               nResult = 0;
>>               break;
>>             }
>>             if (z[i] == '.')
>>               d++;
>>           }
>>           else if (z[i] == '.') {
>>             /* only . allowed as a non-digit character here,
>>             ** and only one of them in the string */
>>             if (d > 0) {
>>               nResult = 0;
>>               break;
>>             }
>>             d++;
>>           }
>>           else {
>>             nResult = 0;
>>             break;
>>           }
>>         }
>>       }
>>       sqlite3_result_int(context, nResult);
>>       break;
>>     }
>>     default: {
>>       sqlite3_result_int(context, 0);
>>       break;
>>     }
>>   }
>> }
>>
>>> -----Original Message-----
>>> From: Wolfgang Rohdewald [mailto:[EMAIL PROTECTED]]
>>> Sent: Saturday, May 14, 2005 8:39 AM
>>> To: sqlite-users@sqlite.org
>>> Subject: Re: [sqlite] Convert and isnumeric function
>>>
>>> On Samstag 14 Mai 2005 00:31, Michael Evenson wrote:
>>>> case SQLITE_TEXT: {
>>>>   const char *z = sqlite3_value_text(argv[0]);
>>>>   for (i = 0; i < strlen (z); i++) {
>>>>     if (!isdigit (z[i])) {
>>>>       nResult = 0;
>>>>       break;
>>>>     }
>>>
>>> this should return FALSE for -5, 1123.456.
>>> In some locales, that might be written
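Putting the two messages above together, the corrected validation logic with Cory's pointer-iteration suggestion applied might look like the standalone sketch below. The helper name is hypothetical (the real code runs inside the SQLite user-function shown above), and like the original it does not handle locale-specific separators, and a lone sign character still passes:

```c
#include <ctype.h>

/* Pointer-based version of the SQLITE_TEXT check from isnumericFunc():
 * no strlen() call per loop iteration and only one dereference.
 * Accepts an optional leading '-', '+' or '.', digits, and at most one
 * decimal point.  Returns 1 if z looks numeric, 0 otherwise. */
static int is_numeric_text(const char *z) {
    const char *iter;
    char ch;
    int d = 0;  /* number of decimal points seen so far */
    if (*z == '\0') return 0;  /* empty string is not numeric */
    for (iter = z; (ch = *iter) != '\0'; ++iter) {
        if (!isdigit((unsigned char)ch)) {
            if (iter == z) {
                /* only '-', '+' or '.' allowed as a leading non-digit */
                if (ch != '-' && ch != '+' && ch != '.') return 0;
            } else {
                /* past the first character, only a single '.' is legal */
                if (ch != '.' || d > 0) return 0;
            }
            if (ch == '.') d++;
        }
    }
    return 1;
}
```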
Re: [sqlite] Trigger and Select
Hi,

do you mean that the result of an SQL query should contain more columns than defined in the SQL itself? I can not imagine a trigger doing this...

Martin

Marco Bambini schrieb:
> Hello,
>
> is it possible to create a Trigger that adds a column to every select
> statement? I am searching a way to automatically add rowID to every
> sql query...
>
> Thanks,
> Marco Bambini
Re: [sqlite] Possible 1GB database limit on Windows from PENDING_BYTE
thank you for the info, another possible problem is that I cannot back up the database, e.g. "begin exclusive" and copy the entire DB file while the main server app is still running. The copy aborts at offset 1GB.

Helmut Tschemernjak

D. Richard Hipp wrote:
> On Mon, 2005-05-16 at 15:33 +0200, Helmut Tschemernjak wrote:
>> looking into the PENDING_BYTE 0x40000000 define from os.h, I believe
>> there is a problem when the database is larger than 1GB and a second
>> process reads at offset 1GB; the read may abort with SQLITE_IOERR if a
>> lock is active.
>
> This should not happen because no process ever reads data out of the
> range 1GiB through 1GiB+510. That range of bytes is used for locking
> only. No data is ever stored there. If you grep for PENDING_BYTE in
> pager.c, you will find the places in the code that automatically skip
> over the locking bytes.
[sqlite] Trigger and Select
Hello, is it possible to create a Trigger that adds a column to every select statement? I am searching a way to automatically add rowID to every sql query... Thanks, Marco Bambini
Re: [sqlite] Possible 1GB database limit on Windows from PENDING_BYTE
On Mon, 2005-05-16 at 15:33 +0200, Helmut Tschemernjak wrote:
> looking into the PENDING_BYTE 0x40000000 define from os.h, I believe
> there is a problem when the database is larger than 1GB and a second
> process reads at offset 1GB; the read may abort with SQLITE_IOERR if a
> lock is active.

This should not happen because no process ever reads data out of the range 1GiB through 1GiB+510. That range of bytes is used for locking only. No data is ever stored there. If you grep for PENDING_BYTE in pager.c, you will find the places in the code that automatically skip over the locking bytes.
--
D. Richard Hipp <[EMAIL PROTECTED]>
Re: [sqlite] database table is locked
> It's probably not a good idea, because it depends on some
> behaviour that is not specified, but I once used a trick
> like this to get good performance:
>
>   CREATE TABLE abc(a, b);
>   UPDATE abc SET b = user2(a, b) WHERE <expr> AND user1(a, b);
>
> SQLite loops through the rows where <expr> is true, and
> remembers those for which user1() returns true. It then runs
> a second loop through those rows and calls user2() for each
> of the remembered rows, setting 'b' to the return value.

That's an interesting trick. It has no way to retrieve the rows you want to operate on, though. I guess you could make every column you need a parameter to the user2() function and do your processing there.
[sqlite] Possible 1GB database limit on Windows from PENDING_BYTE
Hello,

looking into the PENDING_BYTE 0x40000000 define from os.h, I believe there is a problem when the database is larger than 1GB and a second process reads at offset 1GB; the read may abort with SQLITE_IOERR if a lock is active.

A "(u64)-10" PENDING_BYTE define and adjusting the low/high dword for LockFileEx should fix this. It is clear to me that all clients would need to update libsqlite to allow concurrent access with the new locking offset. Is this right?

Helmut Tschemernjak
Re: [sqlite] Is there a new sqlite specifically design for WinCE.
On May 16, 2005, at 2:24 PM, steven frierdich wrote:
> Is there a new Sqlite designed specifically for WinCE? I am using a
> Sqlite 3.0.7 version that was designed for WinCE. Is there a newer one?
> Thanks
> Steve

SQLite 3.1.5 here: http://sqlite-wince.sourceforge.net/

Regards,
Marco Bambini
SQLabs.net
Re: [sqlite] Is there a new sqlite specifically design for WinCE.
Don't reply to an existing message with a new topic. People won't see your message.

On Mon, 2005-05-16 at 08:24 -0400, steven frierdich wrote:
> Is there a new Sqlite designed specifically for WinCE? I am using a
> Sqlite 3.0.7 version that was designed for WinCE. Is there a newer one?
> Thanks
> Steve

I don't believe there is. If you found a 3.0.7 for WinCE, chances are it was distributed with patches (if there were any), which can probably be applied to later versions.
Re: [sqlite] InnoDB Transactions
On Sun, 2005-05-15 at 19:48 -0700, Dan Kennedy wrote:
> Anyone have a quick summary of InnoDB algorithm for comparison? I
> suspect whatever trick they use won't work with SQLite because
> there is no server process, but it could be interesting anyway.

There are several ways the extra fsync() might be avoided:

1. Using an internal journal, plus an operating-system guarantee that an earlier write() reaches disk before a later write() at fsync()-time. I do not know of any operating system that makes this guarantee [and keeps it!].

2. Writing changes to "changeblobs" instead of journals and having sqlite logically merge the changeblobs when loading (so the original db is modified less frequently). However, because multiple files have to be searched, this slows down reads quite a bit (unless the index were redesigned). This is a tradeoff: writes _might_ be faster, but reads certainly aren't. Interestingly enough, Lucene does this. Actually, many real databases do this because it's very easy and lets you use less-mutable structures in the database, which often simplifies implementation. This wouldn't require database-structure changes, but it would mean the database would have to be recovered/vacuumed before it would be a single file again, and thus usable on older sqlites... However, hooking into the changeblob system makes single-master replication very simple and efficient (without mucking about in the users' code).

3. Block-level versioning: use a buddy system to keep 2 copies of each block, and write changes to a block out to the other copy. Each block would have a chain to the next "changed block" known about [for recovery], and a block bitmap would be used to mask out the unused blocks. During recovery, the last changed block (in the header) is scanned and each change in that set is checked against the bitmap. In this way, space is traded; speed remains "about the same", although because the file is bigger, the kernel would need tricks to know when a block could be evicted safely. This is similar to the problem faced by systems like UML and might require kernel patches to avoid hurting the VMM. If you allow (say) 8 copies of each block, you can detect when two simultaneous writers are about to trample over each other VERY easily, although you limit yourself to 4 copies. It is NOT a simple modification of this to generalize to "n copies" of each block, as you then need a serialized index of how many copies of each block there are.
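The block-level versioning in idea (3) can be illustrated with a tiny in-memory sketch (all names and sizes invented): each block has two copies, writes go to the currently inactive copy, and a single bitmap update makes all staged versions visible at once, so a crash before that point leaves the old versions intact.

```c
#include <string.h>

#define NBLK  8   /* toy database of 8 blocks */
#define BLKSZ 16  /* bytes per block */

/* Two versions of every block; bit i of `current` selects which copy of
 * block i is live, and `pending` accumulates the bits to flip at commit. */
typedef struct {
    unsigned char copies[NBLK][2][BLKSZ];
    unsigned char current;
    unsigned char pending;
} ShadowDB;

void shadow_init(ShadowDB *db) { memset(db, 0, sizeof *db); }

/* Stage new contents for a block by writing the *other* copy. */
void shadow_write(ShadowDB *db, int blk, const unsigned char *data) {
    int cur = (db->current >> blk) & 1;
    memcpy(db->copies[blk][!cur], data, BLKSZ);
    db->pending |= (unsigned char)(1 << blk);
}

/* Read always follows the live-copy bitmap. */
const unsigned char *shadow_read(const ShadowDB *db, int blk) {
    return db->copies[blk][(db->current >> blk) & 1];
}

/* One (ideally atomic) bitmap update publishes every staged block. */
void shadow_commit(ShadowDB *db) {
    db->current ^= db->pending;
    db->pending = 0;
}
```

On disk, the bitmap would live in the header and the "chain of changed blocks" described above would let recovery verify which staged copies belong to the last committed set.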
[sqlite] Is there a new sqlite specifically design for WinCE.
Is there a new Sqlite designed specifically for WinCE? I am using a Sqlite 3.0.7 version that was designed for WinCE. Is there a newer one?
Thanks
Steve
Re: [sqlite] Sqlite2 vs Sqlite3 benchmark
--- Ludvig Strigeus <[EMAIL PROTECTED]> wrote:
> Dan Kennedy <[EMAIL PROTECTED]> wrote:
>> For SQLite 3, the default value of the 'synchronous' pragma
>> changed from "NORMAL" to "FULL". IIRC this means the disk is
>> synced 3 times instead of 2 for a small transaction. So this
>> might be what you're seeing.
>
> That is indeed the case. The sqlite FULL commit for the journal code
> works like this:
>
> 1) Write all journal pages
> 2) Fsync
> 3) Update counter of journal pages in the journal header
> 4) Fsync
>
> Why not do it like this instead:
>
> 1) Check if counter in header matches the file size
> 2) Write all journal pages
> 3) Update counter of journal pages in the journal header
> 4) Fsync

This is pretty much what SQLite does in synchronous=NORMAL mode. SQLite does not assume an fsync() is an atomic operation. If a failure occurs during an fsync(), some pages may be written to disk correctly, some may not be written, and some may have garbage data written to them. So if this happens in synchronous=NORMAL mode, and the journal header is written but some of the page data that follows is garbage, then garbage data can be copied into the main database file.