Re: [sqlite] specifying field type, any benefit?
On Wednesday 04 January 2006 02:54 pm, Mark Wyszomierski wrote:
> Hi all,
>
> I switched to sqlite from mysql awhile ago, I maintained the field types
> in my sqlite implementation such as:
>
> create table students (first_name TEXT, age INTEGER);
>
> I'm just wondering if there is any point to specifying the field type, as if
> I try adding a string type into the age field, it seems to be accepted ok:
>
> insert into students values('hello');
>
> Does sqlite have any problem regarding setting a field defined as INTEGER
> from a text string (any limits etc?), are there any performance gains to be
> had with specifying the field type?

sqlite does not care about types. You can insert BLOBs into INTEGER fields if it makes you happy; sqlite will not care. I recommend you put types in your definitions anyway, for two reasons. First, you might want to switch to a different database later. Second, it is sometimes helpful to tell your successors what you intend a field to be. Even though sqlite doesn't care, it is a good idea to be more strict yourself. As the other response said, sqlite uses NULL for the unspecified columns.
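The typeless behavior is easy to see from Python's sqlite3 module. A small sketch (against a modern SQLite 3 build, where declared types act as affinities rather than constraints; the table is the one from the question, the row values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (first_name TEXT, age INTEGER)")

# A string that cannot be coerced to a number is stored as-is,
# even in a column declared INTEGER.
conn.execute("INSERT INTO students VALUES ('Mark', 'hello')")

# A numeric-looking string IS coerced, because of INTEGER affinity.
conn.execute("INSERT INTO students VALUES ('Jan', '25')")

rows = conn.execute("SELECT age, typeof(age) FROM students").fetchall()
print(rows)  # [('hello', 'text'), (25, 'integer')]
```

So the declaration is not enforced, but it does influence how values are stored, which is one more reason to write the types down.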
Re: [sqlite] Crash - How can I tell if sqlite is compiled threadsafe?
On 12/6/2005 at 15:29 [EMAIL PROTECTED] wrote:
> This assertion failure inside of sqliteOsEnterMutex indicates
> that the program was probably not compiled THREADSAFE. That
> assert() statement was put there specifically to catch this
> kind of problem. (Good thing you didn't compile with -DNDEBUG
> to disable assert() statements - if the program had continued
> the consequences could have been a corrupted database rather
> than just a program crash.)

Thank you. Moving to THREADSAFE solved this problem. (Now I just get to correct the rest of the bugs in my code.)
[sqlite] Crash - How can I tell if sqlite is compiled threadsafe?
Greetings, I'm using sqlite 2.8.16 in a threaded program. We have been seeing crashes when opening databases, and it looks like two threads are trying to open databases at the same time. We know that whoever compiled sqlite was told to define THREADSAFE, but we do not know if that was done. So the first question is: how do we tell if sqlite was compiled with THREADSAFE (if there is a way to tell)? If the build is THREADSAFE, how can we figure out what is wrong (most of the time the program works)? If we are not threadsafe we will just upgrade the libraries, but patching production systems can be hard, so we would prefer not to until we know this is the problem. Stack traces follow in case they help anyone. There are other threads, but they have nothing interesting. Note that the databases being opened are definitely two different files, which are not even in the same directory, so there should be no conflicts in the file itself.

Thread 1: (C++ code)
#0 0x28280df7 in fcntl () from /lib/libc.so.5
#1 0x28218f6e in fcntl () from /usr/lib/libpthread.so.1
#2 0x2868e4a0 in sqliteOsUnlock () from /usr/local/lib/libsqlite.so.2
#3 0x2868eab8 in pager_reset () from /usr/local/lib/libsqlite.so.2
#4 0x2869080f in sqlitepager_unref () from /usr/local/lib/libsqlite.so.2
#5 0x2867a521 in unlockBtreeIfUnused () from /usr/local/lib/libsqlite.so.2
#6 0x2867aa9f in fileBtreeCloseCursor () from /usr/local/lib/libsqlite.so.2
#7 0x2868c763 in sqliteInitOne () from /usr/local/lib/libsqlite.so.2
#8 0x2868c8ac in sqliteInit () from /usr/local/lib/libsqlite.so.2
#9 0x2868cb8b in sqlite_open () from /usr/local/lib/libsqlite.so.2
...
Thread 0: (opens from python)
#0 0x2822cf17 in pthread_testcancel () from /usr/lib/libpthread.so.1
#1 0x2821e0b5 in sigaction () from /usr/lib/libpthread.so.1
#2 0x282181e1 in pthread_kill () from /usr/lib/libpthread.so.1
#3 0x28217bb0 in raise () from /usr/lib/libpthread.so.1
#4 0x282e8627 in abort () from /lib/libc.so.5
#5 0x282c3713 in __assert () from /lib/libc.so.5
#6 0x2868e627 in sqliteOsEnterMutex () from /usr/local/lib/libsqlite.so.2
#7 0x2868dcc6 in sqliteOsOpenReadWrite () from /usr/local/lib/libsqlite.so.2
#8 0x2868f63f in sqlitepager_open () from /usr/local/lib/libsqlite.so.2
#9 0x2867a306 in sqliteBtreeOpen () from /usr/local/lib/libsqlite.so.2
#10 0x2868d939 in sqliteBtreeFactory () from /usr/local/lib/libsqlite.so.2
#11 0x2868cb29 in sqlite_open () from /usr/local/lib/libsqlite.so.2
#12 0x2882cd01 in pysqlite_connect () ...
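For readers finding this thread later: sqlite 2 had no runtime check, but SQLite 3 later added sqlite3_threadsafe() for exactly this question. A sketch of the runtime check via Python's sqlite3 module, which exposes the compiled thread-safety level as the PEP 249 threadsafety attribute (this is an illustration of the later API, not something available in 2.8.16):

```python
import sqlite3

# 0: threads may not share the module at all; 1: threads may share the
# module but not connections; 3: connections may be shared across
# threads (a fully serialized SQLite build).
level = sqlite3.threadsafety
print("thread-safety level:", level)
```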
RE: [sqlite] How dangerous is PRAGMA Synchronous OFF?
On 11/17/2005 at 00:54 Preston Z wrote:
> Anybody have a better way to tell if a db is garbage?

You will know in the real world because things randomly don't work and the users complain. My experience is like yours: a corrupt database can be worked with much like a normal one, so long as you stay away from the parts that are corrupt. In addition to the list of causes, I would like to add the one that bit us: not noticing that sqlite requires that each instance be used in exactly one thread of your program. (Someone in our early design decided to wrap sqlite in a singleton class and then left. Since things worked, and he was normally a good programmer, we didn't find the problem until customers ran into corruption.)
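One mechanical check that beats waiting for user complaints is sqlite's built-in `PRAGMA integrity_check`, which scans for malformed records and inconsistent indices. A sketch using Python's sqlite3 module (an in-memory database here just to show the call; point the connect() at your real file — and note the pragma cannot catch every kind of damage):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # substitute your database path
conn.execute("CREATE TABLE t (x)")
conn.execute("INSERT INTO t VALUES (1)")

# Returns the single row ('ok',) on a healthy database; on a damaged
# one it returns human-readable descriptions of each problem found.
result = conn.execute("PRAGMA integrity_check").fetchone()[0]
print(result)  # ok
```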
Re: [sqlite] Proposed 3.3.0 changes. Was: 5/2==2
On 11/2/2005 at 16:13 Eric Bohlman wrote:
> Henry Miller wrote:
>> As for 5/2, my grade school teachers taught me that if I round it at
>> all, the answer is 3, NEVER 2. It is only later in life that I
>> learned about bankers rounding which sometimes is 2, sometimes 3. I
>> have never seen a justification for rounding to 2, except for the bogus
>> answer that it is easy for the computer to do. Thus I conclude
>> whatever the final answer is, sqlite would be wrong if 5/2 is always 2.
>
> The problem with the grade-school rule is that, assuming the last digit
> is uniformly distributed, you'll be rounding up 5 times out of 9 and
> rounding down 4 times out of 9. That means that if you add up a large
> number of rounded numbers, the result will always be larger than
> what you'd get if you added up the unrounded numbers and then rounded
> the sum. That introduces a systematic bias in financial and statistical
> calculations.

Hence bankers rounding, which I already mentioned. But this has nothing to do with the subject, which is not rounding per se, but division in general. That has nothing to do with my point though: 5/2 can never ALWAYS be correctly rounded down to 2. You would be correct to always round up to 3 - by my grade school math. You would be correct to use one of the bankers rounding schemes, which would round up or down. Maybe I should change the problem to 5/3. SQLite will currently round this down to 1. This is not rounding, this is a floor function. Floor functions have their use, but they are not rounding, and should not be the default when doing division and turning the result into an integer. The question is, since sqlite is clearly doing the wrong thing when dividing two integers, would we be ok to just turn the result into a real, or must we implement real rounding
(with some way to select between the different bankers rounding schemes, or simple rounding)? If a floor function is useful, sqlite could provide that too, but that is a separate discussion (which I would likely argue against unless someone has a good argument for it).
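The distinction among floor, grade-school rounding, and bankers rounding is easy to demonstrate in Python (a sketch; Python 3's built-in round() happens to implement bankers rounding, and the decimal module lets you pick a scheme explicitly):

```python
import math
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

# What sqlite does with integers today: a floor, not a rounding.
print(math.floor(5 / 3))   # 1
print(5 // 2)              # 2  (floor division)

# Grade-school rounding: halves always go up.
print(Decimal("2.5").quantize(Decimal("1"), rounding=ROUND_HALF_UP))    # 3

# Bankers rounding: halves go to the nearest even digit,
# removing the systematic upward bias described above.
print(Decimal("2.5").quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))  # 2
print(Decimal("3.5").quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))  # 4

# Python's built-in round() is also half-even.
print(round(5 / 2))        # 2
```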
Re: [sqlite] Proposed 3.3.0 changes. Was: 5/2==2
On 11/1/2005 at 21:59 [EMAIL PROTECTED] wrote:
> John Stanton <[EMAIL PROTECTED]> wrote:
>>
>> Users love such a number system because it is natural and works like the
>> Arithmetic they learned in Grade School.
>>
>> I find the idea of dividing two integers stored in binary form and
>> getting a gratuitous conversion to floating point ugly and potentially
>> very annoying.
>
> I admit that it has been 4 decades since I was in grade school,
> but back in the 60's we were taught that 5/2 is 2.5. Are they
> teaching something different now? Or have I misunderstood the
> comment?

You are missing the point: what did your grade school teacher tell you about 1/3? What about pi, or e, to name two common numbers that cannot be stored easily? As for 5/2, my grade school teachers taught me that if I round it at all, the answer is 3, NEVER 2. It is only later in life that I learned about bankers rounding, which gives sometimes 2, sometimes 3. I have never seen a justification for rounding to 2, except for the bogus answer that it is easy for the computer to do. Thus I conclude that whatever the final answer is, sqlite would be wrong if 5/2 is always 2. After reading all the comments I've concluded that the only correct answer is to make all the arithmetic operators replaceable. Let those who care define their own math. When I do math homework (though why you would use sqlite for homework problems I don't know), if the formula is pi*r^2 (area of a circle) and r is 4, my answer should be 16*pi. If I'm doing engineering calculations on the same thing, I want 50.3 (perhaps to a few more decimal places). Bankers may want 5/2 to be 2, or 3, depending on how they round it. I say do whatever you want - anyone who cares about this issue would not be doing math in a database anyway, because the SQL standard requires the wrong answer for their real-world application. It will be a consistent wrong answer, but it will still be wrong.
Re: Re[6]: [sqlite] Accessing Database on Network
On 8/3/2005 at 10:34 djm wrote:
> Wednesday, August 3, 2005, 4:41:24 AM, you wrote:
>
>> No, none of those things are guaranteed. If there's even a single
>> writer it can be unsafe (consider AFS putting a corrupt journal up
>> that clients notice, download, and thrash their local copy).
>
> But Im saying there -wont- be a single writer (except the rare update
> process on the server, during which all other clients wont be
> reading), and the question is: is it always safe then?

No. It might be on your particular systems, but we cannot make a general case. It is possible that it will always be safe on (for example) Windows 2000, while not safe on NT or XP! It depends on how the OS handles caching. Clients are supposed to check on each file access to see if the file changed, but networks are slow, so they might not. Thus you could start reading data that is in the cache, which is out of date. You may get different results from AFS, NFS, Coda, Samba, or Windows networking. Not to mention different versions of each protocol, and each implementation. (Windows file sharing uses the same protocol as Samba, but that doesn't mean they handle caching the same!)

>> Okay, that's what's important. That update procedure is completely
>> unsafe UNLESS you can guarantee that the sqlite database will be
>> overwritten atomically. Most operating systems don't have such an
>> operation - the closest thing being rename().
>
> And this is presumably only important if another client is reading the
> file while its being updated. If no client is reading the file on the
> server during update (and none are writing anyways), then the os and
> the filesystem should be irrelevant, right?

No, because we don't know what caching the OS is doing. It is possible for the remote OS not to look at the server at all, if it still remembers the part of the file you are asking for after the write! Even writing your own file system may not work if the OS is caching things above the file system.
That said, most OSes get this right these days. So if you correctly shut down all your readers, then do the write, then start them back up again, you might consider the problems rare enough to ignore. In fact they may never happen in practice, but in theory they could. What will work is to replace all the places where sqlite reads the file with calls to some server on the network that you write yourself. This is the same as writing a network file system, but you control ALL the layers, so you know what caching is going on. This is a lot of work to get right, and generally a poor use of time, but it is an option.

> In my opinion system time stamps etc are not a reliable means of
> comparing 2 files. Many things can change the timestamp of a file
> without changing the contents, and one (server) os/filesystem can
> report a different file size than another (local) for the same file
> (contents). As I said already, I think having a version number
> embedded in the database itself is much more reliable.

You should be running NTP (network time protocol) on all computers. This will keep all your system times within milliseconds of each other. Unix systems keep the last-modified timestamp separately. Microsoft Windows sets (resets? I can never remember) the archive bit, which could be abused to tell you when a file is modified - at the cost of breaking backups, so I can't recommend it.

>> you said "I need to access a database on the network (x), [how do I]
>> do this safely (y)".
>
> I didnt. Youre rephrasing my post, to suit your means, and I think
> the point is degenerating in the process.

Not really. We are just reading between the lines. Your questions look suspiciously like his rephrasing. Don't be offended if we are answering the question we think you should have asked, not the one you did. The correct answer to the question you should be asking is to use something like postgresql/Oracle/(I can think of half a dozen other choices). Sqlite was not designed for what you want to do.
Now there may be good reasons you cannot use a different database. That is a whole different issue. We cannot make decisions for you, but we can point out what looks like a bad decision from the parts of the problem we know (and we know that we don't know all your issues). Bottom line: what you want to do will probably work without problems. However, if you want a strong statement, you need to have your lawyers contact your OS vendor(s) and get a contract in writing guaranteeing that this will work. Salesmen will lie, and the documentation may not account for the latest changes, but with a contract in writing you can at least get relief if things do not work. Asking management for this forces them to place a value on the data if nothing else, which may help drive the decision on what to do.
Re: [sqlite] Data/time storing?
On 8/1/2005 at 08:41 Puneet Kishor wrote:
> I am curious about this as well... not about the various functions, but
> what is, if at all, a better way to store the values -- as 'YYYY-MM-DD
> HH:MM:SS' strings (are they not stored internally as just strings?) or
> as unixepoch time (which would likely be stored as an int)?
>
> Is not the latter (unixepoch) faster than the former ('YYYY-MM-DD
> HH:MM:SS' strings)?
>
> Is one more malleable than the other for conversion into various other
> display forms as well as for DATETIME calculations?

This depends on what you want. There is no best for all situations. Either form is convertible to the other. 'YYYY-MM-DD HH:MM:SS' tends to be readable by anyone, while epoch time is not. This often makes for simpler code in simple projects. However, when you need to manipulate time in your code, it is much easier to do arithmetic on epoch time. Common programming languages tend to have good libraries for turning epoch time into something readable, but it is more code than a simple print of the 'YYYY-MM-DD HH:MM:SS' form. 32-bit epoch time will run out in 2038, which is creeping up fast - many current programmers will still be working then! (But 64-bit platforms are coming fast, and that will solve this problem for our lifetimes, while introducing many other problems.) If your field techs will use some tool to dump the database, the 'YYYY-MM-DD' format is much better, as they can understand it. This is a large win in many cases. Field debugging is often more expensive than programmer coding, so if dates are useful in field debugging it can be worth the pain of using this format in code in the long run. However, this method fails on daylight savings time if you are in the repeated hour and need to know whether it is the first or second.
Epoch time is based on UTC, and the built-in libraries handle time zones, leap years, daylight savings time, and sometimes leap seconds (there may be more factors I can't recall). This is a hard problem to solve, and the libraries were written by smarter people than you and are well debugged by now. Governments change the exact dates of daylight savings time fairly often; with epoch time you don't have to worry about updating your program for these new dates. Remember, it is easy to convert between the two (so long as daylight savings time isn't involved). There is no one best for everyone, so quit looking for it! Remember that business considerations are often bigger than technical considerations. Sometimes a critical issue will stay subtle for years. (Daylight savings time, for instance, may force use of epoch time despite the cost in field debugging time.) Make a choice and move on. This is one of those issues where it is fairly easy to understand all the concerns, so people like to debate it in depth to prove they are paying attention. Doing so is a waste of time.
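The two-way conversion taken for granted above is only a few lines in most languages. A sketch in Python (pinned to UTC, exactly to sidestep the daylight-savings ambiguity mentioned above; the epoch value is an arbitrary example):

```python
from datetime import datetime, timezone

epoch = 1122886860  # arbitrary example value

# epoch -> 'YYYY-MM-DD HH:MM:SS' (in UTC)
readable = datetime.fromtimestamp(epoch, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

# 'YYYY-MM-DD HH:MM:SS' -> epoch (declaring the string to be UTC)
back = int(datetime.strptime(readable, "%Y-%m-%d %H:%M:%S")
           .replace(tzinfo=timezone.utc).timestamp())

print(readable)
print(back == epoch)  # round-trips exactly, since no local zone is involved
```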
Re: [sqlite] sqlite3_compile missing?
On 7/27/2005 at 16:24 djm wrote:
> Hello,
>
> It seems that the functions required for reading data without a
> callback are missing in SQLite 3? I cant find "sqlite3_compile",
> or "sqlite3_step". Am I missing something?
>
> The documentation on the website describes "sqlite_compile" and
> "sqlite_step" which were presumably meant to be "upgraded" to "3"
> versions but are now just missing. The main.c file also contains the
> comments "..The following routine destroys a virtual machine that is
> created by
> ** the sqlite3_compile() routine.."

I think you are looking at the sqlite2 documentation, not sqlite3! sqlite_compile was changed to sqlite3_prepare in sqlite 3 (with very different semantics, so read closely). sqlite3_step still exists, but it takes different arguments.
Re: [sqlite] Does SQLite have a fulltext search like MySQL?
On 7/5/2005 at 17:48 Michael Grice wrote:
> If not, are there plans to add this?
>
> I'm just a NOOB, looking to see what all SQLite can do for me, before
> I go too far down this road.

I was on vacation last week... For full text search I find Lucene/CLucene (http://lucene.apache.org/ and http://sourceforge.net/projects/clucene/) to be good solutions. The licenses aren't quite as easy as sqlite's, but they should be good enough for everyone. They don't do SQL, but most people who want full text search want a Google-like interface, which is easy (almost trivial) to provide. For structured data sqlite is much better. For unstructured text Lucene is the best I have found.
Re: [sqlite] philosophy behind public domain?
On 5/25/2005 at 11:36 Darren Duncan wrote:
> Moreover, in a discussion about open source software licenses I was
> part of a few weeks ago, it was brought up that making a work public
> domain was a very bad thing to do, because it opened up the author to
> a whole bunch of legal liability that they had no recourse from,
> which they wouldn't have if they retained their copyright but used a
> permissive license. I think the gist was that the software couldn't
> have a disclaimer of liability if it is public domain, and so anyone
> could sue the author if something went wrong when using it. I don't
> know how true this is or not, but would like to see it addressed in
> the answer.

Courts can hold any part of a contract invalid if they want to. Courts have struck down disclaimers of liability in several cases, particularly where the product causes loss of life and the licenser (creator) should have known it could happen. If you die bungee jumping and it is discovered the rope was beyond its rated end of life, the disclaimer of liability will not protect them. If the rope was new, it would protect them. (See a lawyer for how this applies to your case.) Generally your liability is limited based on what was paid. If you pay a lot of money for something, the court is likely to conclude that you expected that value from it. When you get something from the public domain, the courts are likely to conclude that you got what you paid for - in fact you, rather than the creator of the software, could be liable for using public domain software that fails. Only a court can say what will happen, and then only after examining the case. Only a lawyer can give you specific advice.
Re: [sqlite] write invoices.... 2 which way??
On 3/9/2005 at 20:39 Jan Ekström wrote:
> Clay!
> Thank you for your answer.
> Still. I have been programming IBM system 3 - AS 400 thirty years ago or so.
> I didn't like the IBM prison. I'am struggeling for fresh air.
> I like SQLite very much.
> I want to put data - preferably validated data - in my sqlite database.
> Then work with the data and finaly get the result out on paper.
> Later perhaps I want to try the same with MYSQL and php.
> All in open source inviroment.
> But for now. Where can I find C and C compiler - if C is right - or another
> suitable tool for my efforts.

There are far too many correct answers to this question for me to start listing them. You need to evaluate the options and decide. Nearly all open source OSes come with gcc installed, which is a plenty good C/C++ compiler. If yours doesn't have it, or doesn't make it really easy to install, then you are likely looking at the wrong distribution. (I haven't used any that don't have it.) See the documentation for your package manager. C is great for low-level, performance-at-any-price work. It is a pain for quick and dirty code, and leaves a lot of room to shoot yourself in the foot if you are not careful. This is the price you pay for power; it may or may not be worth it. The way you ask the question, I'm going to guess that you are not a C expert, so I would recommend you avoid C when you can. However, remember that when the going gets tough you might need to drop into C, where everything is hard, but at least if it can be done, C can do it. I recommend you use Python (www.python.org) as your programming language. However, sqlite comes with good Tcl bindings, and Tcl is nice enough. I can't stand Perl, but this is a personal thing; you need to make your own decisions. Ruby is sometimes mentioned as a good general purpose scripting language, but I know nothing about it. Those 4 are the main open source scripting languages, take your pick. I would strongly recommend PostgreSQL over MySQL where you have a choice.
Where you need a simple database, sqlite is better than MySQL; where you need a powerful database, PostgreSQL is better. In the little middle ground left, you will find that MySQL still isn't enough better to bother with. There is one major exception though: MySQL is used all over and is worth knowing because you might encounter it again.
RE: [sqlite] Is there any way to enable recursive triggers?
On 1/4/2005 at 12:13 Sandy Ganz wrote:
> What OS does not return NULL or an exception when malloc() fails or is out
> of memory?

OS/2. Linux kernels < 2.6. I think Windows does this too, but I'm not sure. It's harder than you might think to implement, because most OSes don't allocate memory until you WRITE to it. There are programs that take advantage of this by mallocing lots of memory but only using a small amount. Think of large sparse arrays: you allocate a lot of memory, but only use a few bytes scattered over it. In order to implement a malloc that fails, the system needs to refuse to give you memory even though it knows about unused memory (memory that some other process has reserved but not touched yet). Of course, when malloc does not fail there are a host of other problems; eventually the OS just kills your process.
Re: [sqlite] Is there any way to enable recursive triggers?
On 1/4/2005 at 15:54 Paolo Vernazza wrote:
>> It seems to me that recursion that never touches the same row twice is
>> less of an issue. That is, a trigger that just updates all other rows in
>> the table once should be fine. So one (I suspect hard to implement)
>> idea would be to keep track of which rows have been touched as part of
>> a trigger. Any row touched N times breaks out. This works for both
>> your delete case, and update cases.
>
> Mmmm... let's suppose that when deleting a row in the linked list table
> we update another table, incrementing a counter so we can know how many
> rows we have just deleted...
>
> CREATE TABLE list(
>   id INTEGER PRIMARY KEY,
>   next INTEGER REFERENCES list,
>   data BLOB
> );
>
> CREATE TABLE counter (value TEXT, count INTEGER);
> INSERT INTO counter (value, count) VALUES ('deleted', 0);
>
> CREATE TRIGGER deep BEFORE DELETE ON list BEGIN
>   DELETE FROM list WHERE id=old.next;
>   UPDATE counter SET count=count+1 WHERE value='deleted';
> END;
>
> It will reveal a loop on the second trigger because we are updating the
> same row in the table "counter"...

Good point. There is no way to solve the halting problem. No matter what limit we set, we need some way for the user to change the limits. In the end, limits are a convenience for the programmer who isn't thinking his triggers through (easy to do in a complex database where you are not the only one writing triggers). I present all my ideas as special cases where we can suspect something. We have already determined that a trigger that only deletes can never loop forever. On UPDATE, if a row is never touched more than once, we know there cannot be an infinite loop. In this case we can add the special case: a trigger on DELETE can never cause an infinite loop itself (though of course it can fire other triggers, which could loop themselves). I contend that we can keep adding special cases for a long time. I'm worried that checking all the special cases could easily make sqlite one of the slowest databases!
Therefore I propose that all recursion checking be in code that can be excluded at compile time, in addition to run-time changes to whatever limits we add. I could also see some embedded programmers wanting limits in their debug builds while release builds have none. (It saves a few bytes, something embedded programmers will like.) Note that it is not enough to detect out-of-memory situations. Several OSes will never fail malloc (except when you are asking for more memory than the hardware can address), since they don't give you memory until you use it. Attempting to use memory when the system is out of it will cause your program to crash without warning! (Linux 2.2 was this way; I think 2.6 will fail malloc.)
Re: [sqlite] Is there any way to enable recursive triggers?
On 1/4/2005 at 08:48 D. Richard Hipp wrote:
> Peter Bartholdsson wrote:
>>
>> [H]ow would [limiting recursion depth] not be enough?
>>
>
> Limiting the recursion depth would be sufficient to prevent
> infinite loops. But it seems overly restrictive.
>
> Consider the following case:
>
> CREATE TABLE list(
>   id INTEGER PRIMARY KEY,
>   next INTEGER REFERENCES list,
>   data BLOB
> );
>
> The table above lets you look up BLOB data given an integer
> ID. It also keeps all of the entries on a linked list.
> (Never mind why you would want to do this - it comes up.)
> Suppose that when you delete an entry you also want to
> delete all subsequent entries in the list. We have:
>
> CREATE TRIGGER deep BEFORE DELETE ON list BEGIN
>   DELETE FROM list WHERE id=old.next;
> END;
>
> This trigger is guaranteed to terminate because it will
> eventually run out of entries to delete. But given any
> recursion limit, we can always construct a case where
> the trigger will want to run longer than the limit.
>
> So do we put recursion limits on UPDATE and INSERT triggers
> only and let DELETE triggers run as long as they like?
> That works as far as I know, but it seems kind of arbitrary.
>
> Surely somebody else has been down this road before me and
> can offer a little guidance?

I haven't been down this road, but I have some ideas. First, we cannot solve the halting problem in the general case. All we can do is various combinations of special cases where we can solve it, plus recursion limits to deal with the rest. It seems to me that recursion that never touches the same row twice is less of an issue. That is, a trigger that just updates all other rows in the table once should be fine. So one (I suspect hard to implement) idea would be to keep track of which rows have been touched as part of a trigger. Any row touched N times breaks out. This works for both your delete case and update cases.
It doesn't solve infinite inserts though:

CREATE TRIGGER infinite BEFORE INSERT ON foo BEGIN
  INSERT INTO foo VALUES (...);
END;

With recursion limits we need some way to specify the limit as part of the SQL. Something like:

CREATE TRIGGER deep BEFORE DELETE ON list BEGIN
  DELETE FROM list WHERE id=old.next RECURSION LIMIT (SELECT COUNT(*) FROM list);
END;

I don't like the above syntax, but you get the idea. It requires evaluating a subquery in a position where I don't believe sqlite allows one. (Something similar is part of MSSQL.) In general a recursion limit should default either to infinite (put the burden on the programmer not to write runaway triggers, but let the limit be set when he must) or to very low (so even a 2-row test table will hit the recursion limit and cause the programmer to re-think what is happening). This all ignores what should happen if the limit is reached. There needs to be some way to specify ON CONFLICT behavior for when recursion limits are reached.
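For readers finding this thread later: SQLite eventually settled the question with a runtime switch (PRAGMA recursive_triggers, off by default) backed by a compile-time depth limit (SQLITE_MAX_TRIGGER_DEPTH). A sketch of the linked-list trigger from the thread, run through Python's sqlite3 module against a modern SQLite 3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE list(
  id INTEGER PRIMARY KEY,
  next INTEGER REFERENCES list,
  data BLOB
);
CREATE TRIGGER deep BEFORE DELETE ON list BEGIN
  DELETE FROM list WHERE id = old.next;
END;
INSERT INTO list VALUES (1, 2, NULL);
INSERT INTO list VALUES (2, 3, NULL);
INSERT INTO list VALUES (3, NULL, NULL);
""")

# Without this pragma the nested DELETE does not re-fire the trigger,
# so row 3 would survive; with it, the whole chain is walked.
conn.execute("PRAGMA recursive_triggers = ON")
conn.execute("DELETE FROM list WHERE id = 1")

remaining = conn.execute("SELECT COUNT(*) FROM list").fetchone()[0]
print(remaining)  # 0
```

So the shipped design is close to what the thread converges on: recursion is opt-in, and a depth limit (rather than a per-statement SQL clause) catches the truly infinite cases.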