Re: [sqlite] Adding index to table makes SELECT much slower. Why?
Have you tried running VACUUM and ANALYZE on the indexed table? Not that I think it will matter, but who knows :) Daniel James Pringle wrote: > Hi- > > I am new to sqlite, and am having a puzzling problem. I have read > that adding an INDEX to a sqlite table can never make a SELECT > statement slower. However, in my case, it seems to be making the > select statement an order of magnitude slower, which is not what I > wanted! What could be going on? > > I am calling SQLite from python 2.5 from fink using the sqlite3 > module on OS X 10.5 with 2Gb of memory. My database contains 32 > million lines of ocean hydrographic data, stored on disk, with the > table created with the following statement: > > CREATE TABLE hydro (lat REAL, lon REAL, year INTEGER, > month INTEGER, day INTEGER, time REAL, cast_id INTEGER, > depth REAL, T REAL, S REAL, water_depth REAL) > > When I perform a SELECT that returns about 0.6% of the data in the > database (about 200,000 records) > > SELECT lat, lon, depth FROM hydro WHERE depth >= 49.01 AND depth <= 50.99 > > it selects and returns the records in 82 seconds. I wished to improve > on this time, so I created an index with the following command: > > CREATE INDEX hydro_indx ON hydro (depth) > > I then tried the SELECT command again, and read the results into > memory, and it took 717 seconds!?! > > The "depth" field contains many distinct numeric values, with no > specific value matching more than 0.5% of the database. When I DROP > the index with the command > > DROP INDEX hydro_indx > > the SELECT time returns to about 80 seconds, confirming that it is the > index which is slowing things down. What is going on? I have > repeated and confirmed these timings. > > I have listened for disk chatter and monitored the system, and it does > not seem to be thrashing swap, or otherwise becoming unresponsive. > > I have two questions: > > 1) Why is the index making things slower? > 2) How can I make my SELECT statement faster? 
> The primary selection will be done on the "depth" and "water_depth" keys. > > I thank you for your time. > > Cheers, > Jamie Pringle > ___ > sqlite-users mailing list > sqlite-users@sqlite.org > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
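A likely explanation, sketched in Python's sqlite3 module: with the plain depth index, each of the ~200,000 matches costs one random seek back into the table, which on disk can be far slower than one sequential scan. A covering index that also carries lat and lon lets SQLite answer the query from the index alone. This is a hedged sketch using a small synthetic table with the post's schema; the index name hydro_cover is made up, and the real speedup depends on the 32-million-row on-disk layout.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE hydro (lat REAL, lon REAL, year INTEGER,
    month INTEGER, day INTEGER, time REAL, cast_id INTEGER,
    depth REAL, T REAL, S REAL, water_depth REAL)""")
con.executemany("INSERT INTO hydro (lat, lon, depth) VALUES (?, ?, ?)",
                ((i % 90, i % 180, i % 100 + 0.5) for i in range(10000)))

# A covering index contains every column the query touches, so SQLite can
# answer from the index alone, avoiding one random table lookup per match.
con.execute("CREATE INDEX hydro_cover ON hydro (depth, lat, lon)")
con.execute("ANALYZE")

plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT lat, lon, depth FROM hydro "
    "WHERE depth BETWEEN 49.01 AND 50.99").fetchall()
for row in plan:
    print(row)  # the plan should mention the COVERING INDEX
```

If the plan still shows a plain table scan or a non-covering index search, the random-lookup cost described above is back in play.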
Re: [sqlite] concurrent users?
I'm using SQLite in a commercial application where the db is used as a document, often located on a network share (Windows only). The db is used as a multiuser document where many users can connect and read/write to the db simultaneously, and it's working perfectly. Mostly, though, the users will read from the db, and there is very little chance that 2 users will write simultaneously. The only thing to keep in mind is to set the sqlite3_busy_timeout and handle errors correctly. Best regards Daniel Igor Tandetnik wrote: > "Shawn Anderson" <[EMAIL PROTECTED]> wrote > in message news:[EMAIL PROTECTED] > >> I've read many things, some conflicting, about concurrent users on a >> Sqlite database. >> The feeling I get is that Sqlite is good for a standalone desktop >> database, but should not be used in a concurrent user situation (like >> a website with many users). Is there a definitive answer? >> > > http://sqlite.org/whentouse.html > > Igor Tandetnik
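For readers using Python rather than the C API: `sqlite3.connect`'s `timeout` parameter maps to the same busy-timeout mechanism mentioned above. A minimal sketch (the file name is a stand-in for the shared document; error handling is the "handle errors correctly" part):

```python
import sqlite3, tempfile, os

# Stand-in path for the shared document on a network share.
db_path = os.path.join(tempfile.gettempdir(), "shared_demo.db")

# timeout=10.0 corresponds to sqlite3_busy_timeout(db, 10000): if another
# connection holds a write lock, SQLite retries for up to 10 seconds
# before raising "database is locked".
con = sqlite3.connect(db_path, timeout=10.0)
try:
    with con:  # transaction: commits on success, rolls back on error
        con.execute("CREATE TABLE IF NOT EXISTS docs "
                    "(id INTEGER PRIMARY KEY, body TEXT)")
        con.execute("INSERT INTO docs (body) VALUES (?)", ("hello",))
except sqlite3.OperationalError as e:
    # Raised if the lock could not be obtained within the timeout.
    print("write failed:", e)
```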
[sqlite] busy_timeout and shared_cache
Hi all! I usually set the sqlite3_busy_timeout to 10 seconds or something like that to make sure that my db isn't locked by any other connection at the same time. This way I usually do not need to check for SQLITE_BUSY. Now I just tried out sqlite3_enable_shared_cache and have enabled shared cache on 3 different threads connected to the same db. The funny thing is that now the busy_timeout seems to fail. Instead sqlite3_step will now return SQLITE_LOCKED every now and then (and I can assure you that the timeout has not been reached). Is this a bug, or is this an undocumented expected behavior? BTW: I'm using SQLite 3.6.0 compiled from the amalgamation with VS2005. Best regards Daniel
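Background that explains the observation: the busy handler installed by sqlite3_busy_timeout applies to SQLITE_BUSY (file-level locks). Conflicts between connections sharing a cache surface as SQLITE_LOCKED (table-level locks), which the busy handler does not retry, so the application must retry itself (or use sqlite3_unlock_notify in C). A hedged Python sketch of an application-level retry loop; the attempt/delay numbers are arbitrary:

```python
import sqlite3, time

# A shared-cache in-memory database, opened via URI.
con = sqlite3.connect("file:demo?mode=memory&cache=shared",
                      uri=True, timeout=10.0)

def step_with_retry(sql, params=(), attempts=50, delay=0.05):
    """Retry a statement that fails with a table-lock ("locked") error,
    since the busy timeout does not cover SQLITE_LOCKED."""
    for _ in range(attempts):
        try:
            return con.execute(sql, params)
        except sqlite3.OperationalError as e:
            if "locked" not in str(e):
                raise
            time.sleep(delay)
    raise sqlite3.OperationalError("table still locked after retries")

step_with_retry("CREATE TABLE IF NOT EXISTS t (x)")
step_with_retry("INSERT INTO t VALUES (1)")
```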
Re: [sqlite] interrupting sqlite3_prepare_v2
I have investigated further and noticed that it will break when preparing statements like creating tables from a select. I filed a bug report with example code at: http://www.sqlite.org/cvstrac/tktview?tn=3155 Dennis Cote wrote: > Daniel Önnerby wrote: > >> Sometimes this interrupt occurs in the middle of a >> sqlite3_prepare_v2 and in some cases this will cause my application to >> break in the SQLite code somewhere. >> >> Please let me know if you want me to investigate this further. >> >> > > Yes, please do so if you have the time. If you locate a bug it will help > all users of SQLite. > > Dennis Cote
Re: [sqlite] interrupting sqlite3_prepare_v2
Forgot to say that I'm using SQLite 3.5.9 compiled from the amalgamation with threadsafety on. Daniel Önnerby wrote: > Hi all! > > My application is running all my SQLite stuff on one thread and > sometimes I run a sqlite3_interrupt from another thread to cancel the > current execution. Sometimes this interrupt occurs in the middle of a > sqlite3_prepare_v2 and in some cases this will cause my application to > break in the SQLite code somewhere. > Luckily I have made a SQLite C++ wrapper, so I have solved this by > putting a mutex lock around all prepare and interrupt calls, but I'm just > curious if this is a known issue or if I should investigate this further. > > I'm running on Windows using VS2005 and from what I remember the break > comes when hitting the va_end inside sqlite3SetString. From what I > can see, both "pz" and "ap" are NULL at this point. > > Please let me know if you want me to investigate this further. > > > Best regards > Daniel
[sqlite] interrupting sqlite3_prepare_v2
Hi all! My application is running all my SQLite stuff on one thread and sometimes I run a sqlite3_interrupt from another thread to cancel the current execution. Sometimes this interrupt occurs in the middle of a sqlite3_prepare_v2 and in some cases this will cause my application to break in the SQLite code somewhere. Luckily I have made a SQLite C++ wrapper, so I have solved this by putting a mutex lock around all prepare and interrupt calls, but I'm just curious if this is a known issue or if I should investigate this further. I'm running on Windows using VS2005 and from what I remember the break comes when hitting the va_end inside sqlite3SetString. From what I can see, both "pz" and "ap" are NULL at this point. Please let me know if you want me to investigate this further. Best regards Daniel
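A rough Python sketch of the workaround described above (a mutex serializing interrupt against statement preparation). Caveat: the poster's C++ wrapper locks only prepare and interrupt, letting sqlite3_step run unlocked so the interrupt can still cancel it; Python's execute() fuses prepare and step, so this sketch is cruder and will make interrupt() wait for the whole statement. The class name is made up for illustration.

```python
import sqlite3, threading

class GuardedConnection:
    """Serialize interrupt() against execute() with one lock, so an
    interrupt can never land in the middle of statement preparation."""
    def __init__(self, path):
        self._con = sqlite3.connect(path, check_same_thread=False)
        self._lock = threading.Lock()

    def execute(self, sql, params=()):
        with self._lock:
            return self._con.execute(sql, params)

    def interrupt(self):
        # Safe to call from another thread; blocks while a statement runs.
        with self._lock:
            self._con.interrupt()

con = GuardedConnection(":memory:")
con.execute("CREATE TABLE t (x)")
con.execute("INSERT INTO t VALUES (42)")
```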
Re: [sqlite] 3.5.8 a lot slower than 3.5.7
Thank you! This helped. I didn't find this before. Best regards Daniel Eric Minbiole wrote: >> This works great but when I upgraded from 3.5.7 to 3.5.8 the speed of >> this select went from 0.2s to around 1 minute. And 3.5.8 is stealing >> A LOT more memory. >> > > D. Richard Hipp had a very helpful work-around for this issue, by simply > rearranging the terms of your join's ON clause. Take a look at this > thread for details: > > http://www.mail-archive.com/sqlite-users%40sqlite.org/msg33267.html > > ~Eric
[sqlite] 3.5.8 a lot slower than 3.5.7
Hi all! I have a db looking like this: http://onnerby.se/~daniel/mc2db.png and in my tracks table I have a column named sort_order1. When my application is running I try to optimize the database by doing a pretty massive SELECT to set this sort order, looking like this: SELECT t.id FROM tracks t LEFT OUTER JOIN genres g ON t.visual_genre_id=g.id LEFT OUTER JOIN albums al ON t.album_id=al.id LEFT OUTER JOIN artists ar ON t.visual_artist_id=ar.id LEFT OUTER JOIN folders f ON t.folder_id=f.id ORDER BY g.sort_order,ar.sort_order,al.sort_order,t.track,f.fullpath,t.filename This works great, but when I upgraded from 3.5.7 to 3.5.8 the speed of this select went from 0.2s to around 1 minute. And 3.5.8 is stealing A LOT more memory. I tried this both in my own application and using sqlite.exe (both for 3.5.7 and 3.5.8) and they show the same results. I tried to locate anything regarding this both on the mailing list and by searching tickets. Is this a known issue or is it a bug? Best regards Daniel
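When an upgrade changes a query's speed like this, the first diagnostic step is to compare the plans the two versions pick with EXPLAIN QUERY PLAN. A hedged sketch using a cut-down, hypothetical subset of the schema above (only tracks and albums, made-up columns):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tracks (id INTEGER PRIMARY KEY, album_id INTEGER, track INTEGER);
CREATE TABLE albums (id INTEGER PRIMARY KEY, sort_order INTEGER);
""")

# Run this against both SQLite versions: a changed join order or a new
# temporary b-tree in the plan usually explains the time/memory jump.
plan = list(con.execute("""EXPLAIN QUERY PLAN
    SELECT t.id FROM tracks t
    LEFT OUTER JOIN albums al ON t.album_id=al.id
    ORDER BY al.sort_order, t.track"""))
for row in plan:
    print(row)
```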
Re: [sqlite] Using sqlite3 from multiple threads
[EMAIL PROTECTED] wrote: I see this idea expressed often, Andreas. Please help me to understand how I can improve the SQLite website or documentation to make it clear that SQLite does *not* need to be "installed"? I think sqlite.org makes this very clear, but people just can't believe it: it's too good to be true :) Best regards Daniel - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] For Better Performance
Please read the following pages: http://www.sqlite.org/cvstrac/wiki?p=PerformanceTuning http://www.sqlite.org/cvstrac/wiki?p=PerformanceTuningWindows Sreedhar.a wrote: Hi, I am using sqlite for meta data storage of audio files. I am storing the sqlite database on a hard disk. The sector size of the FAT file system is 512 bytes. Hard disk rpm is 4200. Page size = 1K, cache size = 2K. The processor speed is 600 MHz. I am using the joins method in sqlite. The records are the meta data information of the audio files. How can I improve my search and insertion speeds? Will any change in the above parameters help? Thanks and Regards, A.Sreedhar.
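The knobs those wiki pages discuss are mostly PRAGMAs plus transaction batching. A hedged sketch of the usual starting points; the values are illustrative, not recommendations for this particular hardware:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# page_size must be set before the database content is created;
# matching the filesystem cluster size is a common starting point.
con.execute("PRAGMA page_size = 4096")
con.execute("PRAGMA cache_size = 10000")    # measured in pages
con.execute("PRAGMA synchronous = NORMAL")  # fewer fsyncs than FULL

con.execute("CREATE TABLE meta (path TEXT, title TEXT)")
# Batch inserts in one transaction: one commit instead of one per row,
# which is usually the single largest insertion-speed win.
with con:
    con.executemany("INSERT INTO meta VALUES (?, ?)",
                    (("/audio/%d.mp3" % i, "title-%d" % i)
                     for i in range(1000)))
```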
Re: [sqlite] Default Encoding In Sqlite
Sorry.. I meant English is NOT my primary language :) Daniel Önnerby wrote: I figure I'll keep it short since it's only for the FAQ. English is my primary language, but here is my suggestion: Question: Does SQLite handle unicode? Short answer: Yes! Answer: SQLite handles unicode very well. SQLite stores text in either UTF-16 or UTF-8 format depending on how the database is created (sqlite3_open or sqlite3_open16). SQLite will also seamlessly convert between the different formats depending on how you retrieve the text (sqlite3_column_text or sqlite3_column_text16) regardless of what format it has been saved in. There are some cases -like using case insensitive LIKE- where SQLite needs to be extended with the ICU extension to fully work with unicode strings. [EMAIL PROTECTED] wrote: Daniel Önnerby <[EMAIL PROTECTED]> wrote: Unicode questions seem to come up at least once a week on the mailing list. Maybe there should be something about this in the FAQ or the features page? I will happily accept suggested text for such entries. -- D. Richard Hipp <[EMAIL PROTECTED]>
Re: [sqlite] Default Encoding In Sqlite
I figure I'll keep it short since it's only for the FAQ. English is my primary language, but here is my suggestion: Question: Does SQLite handle unicode? Short answer: Yes! Answer: SQLite handles unicode very well. SQLite stores text in either UTF-16 or UTF-8 format depending on how the database is created (sqlite3_open or sqlite3_open16). SQLite will also seamlessly convert between the different formats depending on how you retrieve the text (sqlite3_column_text or sqlite3_column_text16) regardless of what format it has been saved in. There are some cases -like using case insensitive LIKE- where SQLite needs to be extended with the ICU extension to fully work with unicode strings. [EMAIL PROTECTED] wrote: Daniel Önnerby <[EMAIL PROTECTED]> wrote: Unicode questions seem to come up at least once a week on the mailing list. Maybe there should be something about this in the FAQ or the features page? I will happily accept suggested text for such entries. -- D. Richard Hipp <[EMAIL PROTECTED]>
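The case-insensitive LIKE caveat in the suggested FAQ text is easy to demonstrate: the built-in LIKE only case-folds ASCII, so non-ASCII letters need the ICU extension. A small check with Python's sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Built-in LIKE understands upper/lower case for ASCII only, so the
# ASCII pair matches but the non-ASCII pair does not (without ICU).
ascii_match = con.execute("SELECT 'A' LIKE 'a'").fetchone()[0]
unicode_match = con.execute("SELECT 'Ä' LIKE 'ä'").fetchone()[0]
print(ascii_match, unicode_match)  # 1 0
```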
Re: [sqlite] Default Encoding In Sqlite
Unicode questions seem to come up at least once a week on the mailing list. Maybe there should be something about this in the FAQ or the features page? Trevor Talbot wrote: I wrote: The default storage encoding on disk is UTF-8, but it can be changed to UTF-16 with a PRAGMA. As Igor reminds me, if you create the database file using sqlite3_open16() the default will be UTF-16 instead. You can still set it explicitly via a PRAGMA if you wish.
Re: [sqlite] Unicode support for Sqlite?
UTF-8 and UTF-16 ARE unicode formats. But there are some things that sqlite does not handle without the ICU extension. The ICU extension extends SQLite with the following functionality: 1.1 SQL Scalars upper() and lower() 1.2 Unicode Aware LIKE Operator 1.3 ICU Collation Sequences 1.4 SQL REGEXP Operator Download the SQLite source and have a look in the ext/icu directory. Sreedhar.a wrote: Hi, Does Sqlite support unicode? I have seen that it supports utf-8 and utf-16. I want to know whether it supports unicode character formats. Thanks and Best Regards, A.Sreedhar.
[sqlite] benchmarking UTF8 vs UTF16 encoded databases
When I started using SQLite I found it natural to use sqlite3_open16 and UTF-16 encoding on strings, since my applications always use wchar_t when handling strings. I never questioned this until now, when I decided to do some benchmarking, and I found it interesting enough to share with you. In my benchmark I used a database with several tables and indexes, and the table I decided to benchmark contains 10 columns and 14000 rows with different types. It's a well normalized database that is used in a real life application. The benchmark is made on 2 different databases that are identical except for the fact that one is UTF-8 encoded and the other is UTF-16 encoded. I always get the 2 columns using sqlite3_column_text16 - so when getting the string from the UTF-8 database a conversion is made, but the output strings from both databases are always the same. The benchmark is looped 10 times for better average results. Benchmark 1: Selecting 2 columns from the table without any WHERE or ORDER BY. UTF8.db 0.38s UTF16.db 0.33s As expected the UTF-16 encoded database is a little bit faster since no conversion is made. The difference is: 15% slower using UTF-8 encoding. Benchmark 2: Selecting 2 columns from the table without any WHERE, but with ORDER BY on a text column without any index (slow). UTF8.db 4.34s UTF16.db 11.19s Well, this is a slow query. Sorting a UTF-8 encoded string is obviously a lot faster than sorting a UTF-16 encoded string. The conversion done by sqlite3_column_text16 is not noticeable in this benchmark. Difference: 66% faster using UTF-8 encoding. Benchmark 3: Selecting 2 columns from the table without any WHERE, but with ORDER BY on a text column WITH an index. UTF8.db 0.58s UTF16.db 0.63s Interesting. I guess the conversion done by sqlite3_column_text16 is not noticeable compared to the extra disk/mem IO for the extra data using UTF-16. Difference: 8% faster using UTF-8 encoding. 
In the future I will use UTF-8 encoded databases, since the conversion of strings is a small cost for the system. The advantages of using UTF-8 are many: 1. Faster in most cases 2. Smaller databases (30% smaller for the benchmark test database) 3. Less memory usage OR more information will fit in memory. I forgot to tell you that the benchmark was made on Windows XP. The conversion done in sqlite3_column_text16 may be a lot slower/faster on other platforms. Best regards Daniel
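The benchmark setup can be reproduced in miniature with Python's sqlite3 module: the storage encoding is chosen per database with PRAGMA encoding, which must be set before the first table is created. A hedged sketch (synthetic single-column table, not the poster's real schema; absolute numbers depend entirely on the machine):

```python
import sqlite3, time

def make_db(encoding):
    # PRAGMA encoding only takes effect on a brand-new database,
    # before any tables exist.
    con = sqlite3.connect(":memory:")
    con.execute("PRAGMA encoding = '%s'" % encoding)
    con.execute("CREATE TABLE t (name TEXT)")
    with con:
        con.executemany("INSERT INTO t VALUES (?)",
                        (("row-%04d" % (i % 997),) for i in range(14000)))
    return con

results = {}
for enc in ("UTF-8", "UTF-16"):
    con = make_db(enc)
    t0 = time.perf_counter()
    for _ in range(10):  # looped, as in the post, for a better average
        rows = con.execute("SELECT name FROM t ORDER BY name").fetchall()
    results[enc] = time.perf_counter() - t0

print(results)
```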
Re: [sqlite] String manipulation with pure sqlite3?
This should be a simple task for any programming language to do once the results have been retrieved. With the current expressions in SQLite I believe there is no way to do this unless you extend SQLite with your own "string_find_last" or "replace_last" function. Alexander Skwar wrote: Hello. Suppose I've got tables like this: sqlite> .schema t1 CREATE TABLE t1 (id integer primary key not null, name); sqlite> .schema t2 CREATE TABLE t2 (t1id integer, txt STRING NOT NULL); Filled with: sqlite> select * from t1; 1|foo.bar.boing 2|bumm.krach.klong.schepper 3|just.a.test.entry sqlite> select * from t2; 1|kurz 2|etwas laenger Now I'd like to have a SELECT statement which would return: 1|foo.bar (kurz).boing 2|bumm.krach.klong (etwas laenger).schepper I.e., before the LAST ".", add what's in t2 but put it in brackets (). It is so that there are more values in t1 than there are in t2. I only want to get those rows which are listed in t2. sqlite> select * from t1, t2 where t2.t1id = t1.id; 1|foo.bar.boing|1|kurz 2|bumm.krach.klong.schepper|2|etwas laenger Is this doable in pure sqlite3, or would I need to "massage" the returned data in a programming language? Thanks, Alexander Skwar
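The "massage it in the host language" route is indeed short. A sketch in Python using the sample data from the question (rpartition splits on the last "."):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t1 (id INTEGER PRIMARY KEY NOT NULL, name);
CREATE TABLE t2 (t1id INTEGER, txt TEXT NOT NULL);
INSERT INTO t1 VALUES (1, 'foo.bar.boing'), (2, 'bumm.krach.klong.schepper'),
                      (3, 'just.a.test.entry');
INSERT INTO t2 VALUES (1, 'kurz'), (2, 'etwas laenger');
""")

# Join in SQL, then insert the t2 text in brackets before the last "."
# of each name in Python.
out = []
for name, txt in con.execute(
        "SELECT t1.name, t2.txt FROM t1 JOIN t2 ON t2.t1id = t1.id "
        "ORDER BY t1.id"):
    head, _, tail = name.rpartition(".")
    out.append("%s (%s).%s" % (head, txt, tail))

print(out)  # ['foo.bar (kurz).boing', 'bumm.krach.klong (etwas laenger).schepper']
```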
Re: [sqlite] Request for help with the SQLite Website
I guess letting developers have opinions about the website is like too many chefs making a soup. Everyone has different opinions. I like the new design but have a small comment about the rounded corners in the menu. If there are rounded corners in the menu, the content below the menu should have an additional margin equal to the radius of the rounded corner. Like this: http://onnerby.se/~daniel/sqlite_example/ Best regards Daniel
Re: [sqlite] Request for help with the SQLite Website
I have to agree about the amount of text on the front page. What happened to the download page? I only see the "Direct Access To The Sources Via Anonymous CVS". Samuel R. Neff wrote: Limiting the width is good, but the pixel-based limit can cause variations on different resolutions and font settings. I would suggest this instead: max-width: 60em; which will cause the max width to adjust based on text size settings. With the most recent change, I feel overwhelmed by the amount of text on the front page. I liked the previous version with just a paragraph a lot more. Also I'd like to see the language bindings added to "Common Links". HTH, Sam --- We're Hiring! Seeking a passionate developer to join our team building Flex based products. Position is in the Washington D.C. metro area. If interested contact [EMAIL PROTECTED] -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: Wednesday, November 14, 2007 3:59 PM To: sqlite-users@sqlite.org Subject: Re: [sqlite] Request for help with the SQLite Website "Scott Hess" <[EMAIL PROTECTED]> wrote: I probably am misunderstanding something. The box scales down to narrower windows just fine, so why can't the box scale until it hits the width of my browser, and _then_ start doing the vertical-wrapping thing? There is a CSS parameter that limits the width. max-width: 800px; People who know tell me this is a good thing. -- D. Richard Hipp <[EMAIL PROTECTED]>
Re: [sqlite] Converting date from d/m/yy format
Sorry for this off topic comment: John, about the dates, could you please fix your system clock? Your emails all appear at the top when sorting by date ;) Again, sorry for the off topic. Hope I'm not intruding :) Best regards Daniel John Stanton wrote: Our business for many years was producing compilers and database software to transport legacy software onto new platforms. We saw literally thousands of custom application software implementations and got to see the good, the bad and the ugly. As I said before, dates were a blind spot with most developers and they got themselves into a good deal of quite avoidable trouble by jumping in without doing some research. Those who implemented the classic date/time system with an epoch in 4712 BC (from memory) did the best. Lee Crain wrote: John, There are an almost infinite number of applications and operations that a person can think of regarding the use of dates and times. My employer's applications on that particular server were specific and well defined. I'm not going to go into all the details of the requirement; there were many. I picked the best, low-tech solution that satisfied every existing requirement and every foreseeable requirement. It was a very successful implementation that solved all date-time related problems on that server. And (this is a very important point) if additional, unforeseen, date-time functionality must be implemented in the future, like adding or subtracting time intervals, those solutions can and should be implemented in the server software, not the server database. In my solution, the database is the repository of data, not logic. Lee Crain -Original Message- From: John Stanton [mailto:[EMAIL PROTECTED] Sent: Monday, December 03, 2007 12:55 PM To: sqlite-users@sqlite.org Subject: Re: [sqlite] Converting date from d/m/yy format I didn't mean to nitpick, but my experience has been that date and time processing is a blind spot. 
Over the years we have come across the most unholy kludges and nasty errors as people try to process dates without using the core theory. The Y2K situation was just one of the side effects in an ongoing litany of difficulties. Lee Crain wrote: John, None of the functionalities you mentioned were requirements on the project I worked on. Clumsy? My approach simplified everything on that server and is in production worldwide, and unmodified, today. Lee Crain ___ -Original Message- From: John Stanton [mailto:[EMAIL PROTECTED] Sent: Monday, December 03, 2007 12:18 PM To: sqlite-users@sqlite.org Subject: Re: [sqlite] Converting date from d/m/yy format That approach makes date processing clumsy. Distributing dates across time zones and into different calendars is difficult. Date arithmetic is awkward. When you use dates and times based on the magic astronomical epoch, everything is simplified. Time zones and calendars are handled fairly transparently. Lee Crain wrote: Several years ago when I worked for a Fortune 70 company, we had a server whose source code and database were complicated by timestamps. I say complicated because there were different timestamp datatypes used for different fields (inherited from the data sources), the data could be stored in different formats, timestamp precision varied, and it was a problem to translate back and forth between the different representations and compare them. All of this added up to the occasional exercise of some obscure bugs in the server. I successfully undertook a project to fix this for all time. My solution was very simple: all timestamps were represented as strings in the following format: "MMDD:HHmmSS.nn" This format, no matter what the original data source or format, became the standard format for timestamps on this particular server. Precision was to the microsecond for all data, even if represented by zeroes. This had several virtues: When debugging software, all timestamps were readable when using Debug. 
Instead of looking at some binary number, the timestamp was easily human readable. When using administrative tools to access the database, it was easy to examine, modify, and compare timestamps, since they were all human readable and in exactly the same format. When comparing timestamps in the software to determine the most current, a simple string comparison always produced the correct result. The only feature that might have been needed (but wasn't on this particular server) was the ability to add or subtract time intervals from the timestamp. You may wish to consider a similar approach to managing your date and time information. Lee Crain
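The string-comparison property described above holds for any zero-padded, fixed-width, most-significant-field-first layout. A small Python sketch; the exact format used here (YYYYMMDD:HHMMSS.ffffff) is an assumption, since the format string in the post appears truncated:

```python
from datetime import datetime

def ts(dt):
    # Zero-padded, fixed-width, year first: lexicographic order on the
    # resulting strings equals chronological order on the datetimes.
    return dt.strftime("%Y%m%d:%H%M%S.%f")

a = ts(datetime(2007, 12, 3, 12, 55, 0))
b = ts(datetime(2007, 12, 3, 12, 55, 1))
c = ts(datetime(2008, 1, 1, 0, 0, 0))

assert a < b < c  # plain string comparison, correct chronological result
print(a, "<", b, "<", c)
```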
Re: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid()
Sorry, I didn't read the whole story before answering. You are right, the documentation on sqlite3_last_insert_rowid should contain some comment about the conflict clauses. I guess that, after working with SQLite for a long time, obvious things are not obvious to everyone and are easily forgotten in documentation :) Michael Ruck wrote: Daniel, My use case is the following: I maintain a lot of tables which are simply catalogs of predefined or user entered values. I wanted to reduce the amount of code and memory to maintain those tables, as the user can enter values as free text in a lot of web forms and also choose from previously entered values via ajax autocompletion. To simplify my code I wanted to use INSERT OR IGNORE on those catalog tables, as I don't care if the value is already there or not. I just need the rowids as foreign keys for other table(s). An example would be: - CREATE TABLE song (id INTEGER PRIMARY KEY, artist INTEGER, writer INTEGER, title INTEGER) - CREATE TABLE artists (id INTEGER PRIMARY KEY, value TEXT UNIQUE) - CREATE TABLE writers (id INTEGER PRIMARY KEY, value TEXT UNIQUE) - CREATE TABLE titles (id INTEGER PRIMARY KEY, value TEXT UNIQUE) For an insert of a new song I wanted to do (pseudo code) INSERT OR IGNORE INTO artists VALUES (NULL, 'user entered or chosen artist name'); rowid-of-first-insert = sqlite3_last_insert_rowid(); INSERT OR IGNORE INTO writers VALUES (NULL, 'user entered or chosen writer name'); rowid-of-second-insert = sqlite3_last_insert_rowid(); INSERT OR IGNORE INTO titles VALUES (NULL, 'user entered or chosen title'); rowid-of-third-insert = sqlite3_last_insert_rowid(); INSERT INTO song VALUES (NULL, rowid-of-first-insert, rowid-of-second-insert, rowid-of-third-insert); This is just a trivial example of what I want to do. My understanding of INSERT OR IGNORE was that it always succeeds, even though the record is already there (which is exactly what I want.) 
In contrast to INSERT OR REPLACE it doesn't remove the old row and thus keeps the same rowid. In conjunction with the documentation for sqlite3_last_insert_rowid(), which states: "This routine returns the rowid of the most recent INSERT into the database from the database connection given in the first argument. If no inserts have ever occurred on this database connection, zero is returned." I was assuming that I'd receive the rowid even in the case where the conflict clause from INSERT OR IGNORE caused the insert not to happen. I was just a bit surprised about this and that's why I asked if this was expected behavior. Dr. Hipp's answer cleared up the reason for this and I have already started looking for alternatives. The only thing I was asking for is that this behavior (sqlite3_last_insert_rowid() and its results with CONFLICT clauses) be explicitly mentioned in the documentation of sqlite3_last_insert_rowid(). Unfortunately the solution proposed by Shawn Odekirk fails too, as I don't have the rowids for the insert statements. If I did, I wouldn't INSERT OR IGNORE or INSERT OR REPLACE in the first place. The id column is automatically maintained by SQLite and I don't want to mess with it. So I'll just do blind inserts, check the return value, and do a select if the insert fails. A little more code, but it works. Mike -Original Message- From: Daniel Önnerby [mailto:[EMAIL PROTECTED] Sent: Tuesday, October 30, 2007 12:03 To: sqlite-users@sqlite.org Subject: Re: AW: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid() Why are you using INSERT OR IGNORE? If you read http://www.sqlite.org/lang_conflict.html it states that IGNORE will not return any errors. Isn't the default behavior INSERT OR ABORT (or just plain INSERT) what you are looking for? The default INSERT will return an error (SQLITE_CONSTRAINT) if there is a conflict. 
If you get a conflict then do not trust sqlite3_last_insert_rowid, since (I guess) it will return the rowid of the last successful insert. Best regards Daniel Michael Ruck wrote: I'm not blaming anyone. I just think it should be mentioned in the docs. Mike -Original Message- From: John Stanton [mailto:[EMAIL PROTECTED] Sent: Thursday, November 29, 2007 20:12 To: sqlite-users@sqlite.org Subject: Re: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid() As has been carefully explained by several people, it is reliable. You just did not think through your application. You could make an extension to Sqlite and implement an sqlite3_last_insert_or_ignore_id function, but to blithely assume that you can use last_insert_id with INSERT OR IGNORE is not logical, and to blame others for your oversight is not helpful. Michael Ruck wrote: I don't get an error code. So how should I decide if I should call sqlite3_last_insert_rowid() or not? :) That's the problem - I don't have any indication if an i
Re: AW: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid()
Why are you using INSERT OR IGNORE? If you read http://www.sqlite.org/lang_conflict.html it states that IGNORE will not return any errors. Isn't the default behavior INSERT OR ABORT (or just plain INSERT) what you are looking for? The default INSERT will return an error (SQLITE_CONSTRAINT) if there is a conflict. If you get a conflict then do not trust sqlite3_last_insert_rowid, since (I guess) it will return the rowid of the last successful insert. Best regards Daniel Michael Ruck wrote: I'm not blaming anyone. I just think it should be mentioned in the docs. Mike -Original Message- From: John Stanton [mailto:[EMAIL PROTECTED] Sent: Thursday, November 29, 2007 20:12 To: sqlite-users@sqlite.org Subject: Re: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid() As has been carefully explained by several people, it is reliable. You just did not think through your application. You could make an extension to Sqlite and implement an sqlite3_last_insert_or_ignore_id function, but to blithely assume that you can use last_insert_id with INSERT OR IGNORE is not logical, and to blame others for your oversight is not helpful. Michael Ruck wrote: I don't get an error code. So how should I decide if I should call sqlite3_last_insert_rowid() or not? :) That's the problem - I don't have any indication if an insert was actually performed or if it was simply ignored - thus I don't have any possibility to decide if the call is valid or not. This makes the OR IGNORE clause or the sqlite3_last_insert_rowid() function useless for *my purposes*. I would never have pursued this path in tests if I had known beforehand that it is not reliable when used with ON CONFLICT clauses. Mike -Original Message- From: Odekirk, Shawn [mailto:[EMAIL PROTECTED] Sent: Monday, October 29, 2007 14:04 To: sqlite-users@sqlite.org Subject: RE: AW: AW: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid() The sqlite3_last_insert_rowid function is completely, 100% reliable in your scenario. The problem is that in your scenario you shouldn't be calling that function. The function is called sqlite3_last_insert_rowid, not sqlite3_last_insert_or_ignore_rowid, and not sqlite3_last_insert_or_fail_rowid. It makes perfect sense that it returns the row id of the last row inserted successfully. This function should only be called after a successful insert. In your scenario you have not performed a successful insert. There is no reason to think that the function will return a meaningful row id after a failed insert attempt. I hope my response was not too harsh. You seem so adamant that there is a problem with the function or documentation, and I completely disagree. Shawn -Original Message- From: Michael Ruck [mailto:[EMAIL PROTECTED] Sent: Sunday, October 28, 2007 12:55 PM To: sqlite-users@sqlite.org Subject: AW: AW: AW: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid() I'd suggest putting this into the documentation of sqlite3_last_insert_rowid(), that the call is not reliable in scenarios such as this one. -Original Message- From: D. Richard Hipp [mailto:[EMAIL PROTECTED] Sent: Sunday, October 28, 2007 17:48 To: sqlite-users@sqlite.org Subject: Re: AW: AW: [sqlite] INSERT OR IGNORE and sqlite3_last_insert_rowid() On Oct 28, 2007, at 10:59 AM, Michael Ruck wrote: Yes, I am well aware of this possibility as I've written in my initial mail. It just doesn't fit with the description of sqlite3_last_insert_rowid() in my understanding. I think this is a bug - either in the documentation or in the implementation. sqlite3_last_insert_rowid() should return the correct id, no matter what, and it doesn't. 
Consider this scenario: CREATE TABLE ex1(id INTEGER PRIMARY KEY, b UNIQUE, c UNIQUE); INSERT INTO ex1 VALUES(1,1,1); INSERT INTO ex1 VALUES(2,2,2); INSERT INTO ex1 VALUES(3,3,3); Now you do your INSERT OR IGNORE: INSERT OR IGNORE INTO ex1 VALUES(1,2,3); Three different constraints fail, one for each of three different rows. So if sqlite3_last_insert_rowid() were to operate as you suggest and return the rowid of the failed insert, which rowid would it return? 1, 2, or 3? D. Richard Hipp [EMAIL PROTECTED] - To unsubscribe, send email to [EMAIL PROTECTED] -
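The ambiguity above is exactly why the reliable pattern is to check whether the statement actually changed anything before trusting the rowid. A minimal sketch in Python's sqlite3 module (the helper name is invented), using the connection's total change counter:

```python
import sqlite3

# DRH's scenario from the message above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ex1(id INTEGER PRIMARY KEY, b UNIQUE, c UNIQUE);
    INSERT INTO ex1 VALUES(1,1,1);
    INSERT INTO ex1 VALUES(2,2,2);
    INSERT INTO ex1 VALUES(3,3,3);
""")

def insert_or_ignore(conn, vals):
    # total_changes wraps sqlite3_total_changes(); if it did not move,
    # the INSERT was ignored and lastrowid is stale.
    before = conn.total_changes
    cur = conn.execute("INSERT OR IGNORE INTO ex1 VALUES(?,?,?)", vals)
    if conn.total_changes == before:
        return None
    return cur.lastrowid

assert insert_or_ignore(conn, (1, 2, 3)) is None  # the three-way conflict above
assert insert_or_ignore(conn, (4, 4, 4)) == 4     # a real insert
```

The same idea works in C by comparing sqlite3_changes() (or checking it is nonzero) immediately after the step.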
Re: [sqlite] what extension do you recommend
Well... From the lack of responses I gather that you actually don't give a damn what file extension I use :) I think this is fair enough, and maybe a SQLite database shouldn't be tied to a single file extension. Anyway... Thanks for listening and thanks for the best database ever! Daniel Önnerby wrote: Hi all! I have developed a small windows application that uses a SQLite database as its main format. The file is NOT a kind of document; it's more of a settings file stored in the user's "Application data" directory, and the extension will not be associated with the application. I know I can use any kind of extension on this file, but what file extension is the most common for SQLite files: .db, .db3, .SQLite? What file extensions do the SQLite management tools associate? What do you recommend? Best regards! Daniel
[sqlite] what extension do you recommend
Hi all! I have developed a small windows application that uses a SQLite database as its main format. The file is NOT a kind of document; it's more of a settings file stored in the user's "Application data" directory, and the extension will not be associated with the application. I know I can use any kind of extension on this file, but what file extension is the most common for SQLite files: .db, .db3, .SQLite? What file extensions do the SQLite management tools associate? What do you recommend? Best regards! Daniel
Re: [sqlite] Re: Re: C/C++ API
Your SQL-strings seem a little strange. You are trying to select from a table named as the content of nick; this would assume that you have a table for each "nick", which seems unlikely. Note also that your switch has no break statements, so every case falls through to the last sprintf. How is your database modeled? Severin Müller wrote: Hey Sorry for my unclearness. I want to access information in a sqlite database and store it in a std::string or const char*, so I can return it to the calling function. I have the following code: std::string get_user_info(const char* nick,int mode) { #ifdef _WIN32 const char *filename = "db\\users.db"; #else const char *filename = "db/users.db"; #endif services serv; Config conf; sqlite3 *db; char *zErrMsg = 0; int rc; char *sql = (char*) malloc(sizeof(char)*64); rc = sqlite3_open(filename,&db); if(rc!=SQLITE_OK) { std::cout << "S_ERR_USERDATABASE" << std::endl; globops(conf.get_s_name(),S_MSG_SRVGOINGDOWN); sqlite3_close(db); return NULL; } const unsigned char *userinfo = (unsigned char*) malloc(sizeof(unsigned char)*1024); std::string sUsername; sqlite3_stmt *oStmt; switch(mode) { case 1: sprintf(sql,"SELECT username FROM '%s';",nick); case 2: sprintf(sql,"SELECT hostname FROM '%s';",nick); case 3: sprintf(sql,"SELECT realname FROM '%s';",nick); case 4: sprintf(sql,"SELECT operline FROM '%s';",nick); } if(sqlite3_prepare(db,sql,-1,&oStmt,NULL)==SQLITE_OK ) { sqlite3_bind_text(oStmt,1,nick,-1,SQLITE_STATIC); if(sqlite3_step(oStmt)==SQLITE_ROW) { sUsername.assign((const char*) sqlite3_column_text(oStmt,0)); } } std::cout << sUsername << std::endl; sqlite3_reset(oStmt); sqlite3_finalize(oStmt); return sUsername; } So the std::cout << sUsername << std::endl; is supposed to put out the data from case 1 (I'm calling case 1 only). But it just puts out a blank line... Do you have a suggestion? Thanks a lot...
Re: [sqlite] C/C++ API
This code is totally untested, but something like this (note that sqlite3_open returns SQLITE_OK on success, and that column indexes start at 0): std::string sUsername; sqlite3 *db; if( sqlite3_open("db/users.db",&db)==SQLITE_OK ){ // prepare the statement sqlite3_stmt *oStmt; if( sqlite3_prepare_v2(db,"SELECT username FROM users WHERE nick=?",-1,&oStmt,NULL)==SQLITE_OK ){ // Bind your nickname to the ?-parameter in SQL. sqlite3_bind_text(oStmt,1,nick,-1,SQLITE_STATIC); // Execute the statement if( sqlite3_step(oStmt)==SQLITE_ROW){ sUsername.assign( (const char*)sqlite3_column_text(oStmt,0) ); } sqlite3_finalize(oStmt); } sqlite3_close(db); } return sUsername; Severin Müller wrote: Hi I'm new to sqlite3, and I have some problems with my code. I'm trying to select data from a table and to save the result in a string, but I have no clue how to accomplish this. Here is a part of my code: std::string get_user_info(const char* nick,int mode) { #ifdef _WIN32 const char *filename = "db\\users.db"; #else const char *filename = "db/users.db"; #endif services serv; Config conf; sqlite3 *db; char *zErrMsg = 0; int rc; char *sql = (char*) malloc(sizeof(char)*64); rc = sqlite3_open(filename,&db); if(rc) { std::cout << "S_ERR_USERDATABASE" << std::endl; globops(conf.get_s_name(),S_MSG_SRVGOINGDOWN); sqlite3_close(db); return NULL; } sprintf(sql,"SELECT * FROM '%s';",nick); if(rc!=SQLITE_OK) { fprintf(stderr,"SQL error: %s\n",zErrMsg); sqlite3_free(zErrMsg); } user = ...// i'd like to store the db data in a string here... return user; } Can anybody help me here? Thanks Regards Sevi
Re: [sqlite] 3.5.0 alpha TryEnterCriticalSection linker error
[EMAIL PROTECTED] wrote: - Original Message From: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> To: sqlite-users@sqlite.org Sent: Tuesday, September 4, 2007 3:32:38 PM Subject: Re: [sqlite] 3.5.0 alpha TryEnterCriticalSection linker error Daniel Önnerby <[EMAIL PROTECTED]> wrote: Hi all! Tried upgrading to the 3.5.0 alpha from 3.4.2 using VS 2005 on XP. I downloaded the ZIP with preprocessed C code. Compiling SQLite to a .lib was no problem, but when linking it to an .exe I got the following: sqlite.lib(mutex_w32.obj) : error LNK2019: unresolved external symbol _TryEnterCriticalSection referenced in function _sqlite3_mutex_try I made some attempts to fix it, checking all kinds of possible errors on my side (defines, compiler/linker settings, etc.) without any luck. Anyone got the same error? BTW, SQLite 3.4.2 works just fine. This is not a big deal for me to solve, just thought I'd share with the development team. http://www.sqlite.org/cvstrac/chngview?cn=4399 -- D. Richard Hipp [EMAIL PROTECTED] Isn't it time to drop the Win9X support from the default build? I'm thinking that any optimization should be enabled for the majority of users. Or if it's not really an optimization, why keep it in the code then? If some people still need to compile for legacy OSes, they can always grab the source and compile without those optimizations. An alternative is to call this function when available using "GetProcAddress" (this is the case for a lot of other modern calls that cannot be done right now). I agree, or at least make one OS-layer for winnt and one for win9x. I also found out what I was doing wrong. The _WIN32_WINNT should be set like: _WIN32_WINNT=0x0400 (or something higher) Just my 2c Nicolas
[sqlite] 3.5.0 alpha TryEnterCriticalSection linker error
Hi all! Tried upgrading to the 3.5.0 alpha from 3.4.2 using VS 2005 on XP. I downloaded the ZIP with preprocessed C code. Compiling SQLite to a .lib was no problem, but when linking it to an .exe I got the following: sqlite.lib(mutex_w32.obj) : error LNK2019: unresolved external symbol _TryEnterCriticalSection referenced in function _sqlite3_mutex_try I made some attempts to fix it, checking all kinds of possible errors on my side (defines, compiler/linker settings, etc.) without any luck. Anyone got the same error? BTW, SQLite 3.4.2 works just fine. This is not a big deal for me to solve, just thought I'd share with the development team. Best regards Daniel
Re: [sqlite] Towards SQLite version 3.5.0
Klemens Friedl wrote: 2007/8/29, [EMAIL PROTECTED] <[EMAIL PROTECTED]>: I think you will be much much better off to put every thread in its own private address space. In other words, turn each thread into a separate process. Threads are an invention of the devil. Stay as far away from these fiendish abominations as you can get. In Unix, processes are rather cheap ("fork", etc.), and since threads were added late (mid-'90s) to the Unix (POSIX) standard, several competing threading implementations spread around; pthreads is just one of them. The famous book "Advanced Programming in the UNIX Environment" by Stevens (1992) doesn't cover threads at all, only processes. Butenhof's "Programming with POSIX Threads" (1997) is one of the first and best books that explain the then-new standard Unix threads. In Win32, processes are more expensive (more management overhead), and threads, "fibers" or even "I/O completion ports" are the preferred way to go. Well, I'm already using boost::thread for this application on win32. Since the application will not be finished before sqlite 3.5, I just thought I could develop using 3.5 and help find SQLite's multithreading bugs at the same time. This is why I originally asked the question about the best practice when going multithread on SQLite 3.5.
Re: [sqlite] Towards SQLite version 3.5.0
Hi! The new multithread features will be great. Do you think it will be better to share one connection between all threads in an application, or is it better to have each thread open a new connection and use sqlite3_enable_shared_cache? Best regards Daniel [EMAIL PROTECTED] wrote: The transition from 3.4.2 to 3.5.0 will perhaps be the largest single change to SQLite since 2.8->3.0. There will not be that many visible changes, but a lot is changing behind the scenes. Some less frequently used interfaces will be changing in slightly incompatible ways. Users who have built customized OS interface layers or backends for SQLite will find that they are going to need to do some rework. SQLite version 3.5.0 is not close to being ready yet. But it is to the point where the source code will compile and pass many tests. And so I would like to take this opportunity to encourage people in the community to download the CVS HEAD and give it a whirl in their applications. Please let me know about any serious issues you run across. I have *started* to prepare documentation describing the changes in 3.5.0. This is draft documentation. But for those who are interested, please visit http://www.sqlite.org/34to35.html http://www.sqlite.org/capi350ref.html In particular, if your application uses a customized OS interface for SQLite, you should read the 34to35.html document to see exactly what will be involved in porting your application to run with version 3.5.0. The SQLite code currently in CVS HEAD is not ready for production use. We know that. We know what many of the problems are and Dan and I are working long hours to fix them. It's the problems that we *do not* know about that are scary. So that is why I am inviting the larger community to have an early look and perhaps bring our attention to issues sooner rather than later. -- D. Richard Hipp <[EMAIL PROTECTED]>
[sqlite] synchronous=OFF on multithreads
Hi all! I have a multithreaded application where (almost) every thread has its own sqlite connection to the same database. The database is only accessed by this application, and the application always runs on the same computer as the database. My question: Is it "safe" to set "PRAGMA synchronous=OFF"? The database is not critical, so I'm not that worried if the database gets corrupted on a power failure. But the important thing is that, when INSERTing in one thread, the data should be readable by another thread when the transaction is done. Best regards Daniel
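For what it's worth, PRAGMA synchronous only controls how aggressively SQLite flushes to disk, so it affects durability across an OS crash or power failure, not visibility between connections: a committed transaction is seen by other connections regardless of the setting. Note also that PRAGMAs are per-connection, so each thread's connection must set it itself. A minimal sketch (the file name and schema are invented):

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "app.db")  # hypothetical shared DB file

def open_conn(path):
    # PRAGMAs apply per connection, so every thread's connection
    # must set synchronous=OFF for itself.
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA synchronous=OFF")
    return conn

writer = open_conn(db)
writer.execute("CREATE TABLE samples(id INTEGER PRIMARY KEY, v TEXT)")
writer.execute("INSERT INTO samples(v) VALUES ('hello')")
writer.commit()  # once committed, the row is visible to other connections

reader = open_conn(db)
assert reader.execute("SELECT v FROM samples").fetchone()[0] == "hello"
```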
Re: [sqlite] Saving binary files
I agree! My statement was meant "in general", hence my proposal of the sqlite3_bind_file functions that I think would be a nice feature in SQLite. Dimitris Servis wrote: Hello Daniel, Personally I think that files should be saved like files on the filesystem. Personally I think that each tool should be used for the purpose it was created for, just to generalize what you said above. Nevertheless, there are situations like mine, where you need the good old untouchable legacy software that once ran on a standalone platform to work over a network in a parallel computing scheme. So you either develop a full transaction/communication/locking etc. system yourself, or you try to use what's already there and robust to do it... BR dimitris
Re: [sqlite] Saving binary files
The question about saving files in a database is a recurring subject on this mailing list. How about adding a feature request for something like a sqlite3_bind_file() to load a file into a statement, and maybe a sqlite3_column_savefile()? I guess this could solve some things, like not loading the whole file into memory and instead streaming the file on commit. Oracle has something similar in the PHP implementation ( http://se2.php.net/manual/en/function.oci-lob-import.php ). Personally I think that files should be saved like files on the filesystem. Best regards Daniel Eduardo Morras wrote: At 19:00 18/03/2007, you wrote: Hello John, thanks for the valuable piece of advice. The idea is that either 1) I store data in tabular form and work with them 2) I create a table of blobs and each blob is the binary content of a file (2) is my method in question, for (1) we all know it works. So I turned to SQLite just because it seems that it is a lightweight single-file database. So, even if I don't like (2), I can set up an implementation where I have a file system inside a fully portable file. BR dimitris You can use zlib to do what you want. It has functions for adding and deleting files, it's a flat file, and it provides medium/good compression. You can store your file metadata in SQLite: the zip filename, the name of the binary file, an abstract, or even a password for the zip file. HTH
Re: [sqlite] Fastest way of UPDATE'ing
If you attach both databases I guess you can use the REPLACE command, since the unique ID exists in both databases. REPLACE INTO server.table SELECT * FROM client.table; It would also be helpful to keep local track of which IDs have not been updated on the server yet, so you do not need to update all records: REPLACE INTO server.table SELECT * FROM client.table WHERE id>[update from ID]; regards Daniel jose isaias cabrera wrote: Greetings! I have this scenario... 6 users with local dbs, 1 network db to back up all those DBs and to share info. Every local DB gets its unique record ids from the network DB. So, before each user creates a new record, the tool asks the network DB for the next available sequential record id and creates a new record on the local DB using that unique id from the network DB. The question is, what is the fastest way to UPDATE the main DB? Right now, what I am doing is a loop over each record, UPDATEing all the values where id=. Is there a faster way? Thanks, josé
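The ATTACH-plus-REPLACE suggestion can be sketched end to end. This assumes a hypothetical two-column schema with the shared unique id as the primary key; REPLACE deletes any row with the same key before inserting, so rows keep their server-assigned ids:

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
server_db = os.path.join(tmp, "server.db")
client_db = os.path.join(tmp, "client.db")

# Build the two databases; ids are globally unique per the scheme above.
for path, rows in ((server_db, [(1, "old")]),
                   (client_db, [(1, "new"), (2, "fresh")])):
    c = sqlite3.connect(path)
    c.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, val TEXT)")
    c.executemany("INSERT INTO t VALUES(?,?)", rows)
    c.commit()
    c.close()

conn = sqlite3.connect(server_db)
conn.execute("ATTACH DATABASE ? AS client", (client_db,))
# REPLACE overwrites rows whose id already exists and inserts the rest.
last_synced_id = 0  # hypothetical bookmark of what the server already has
conn.execute("REPLACE INTO main.t SELECT * FROM client.t WHERE id > ?",
             (last_synced_id,))
conn.commit()

assert conn.execute("SELECT val FROM main.t WHERE id=1").fetchone()[0] == "new"
assert conn.execute("SELECT COUNT(*) FROM main.t").fetchone()[0] == 2
```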
[sqlite] Why no sqlite3_exec16?
Why isn't there a sqlite3_exec16 function? I've seen this question asked before, but haven't found an answer. Best regards Daniel
Re: [sqlite] SQLite in Adobe Lightroom
This is great! I can imagine that there will be several opensource galleries taking advantage of this in the future. [EMAIL PROTECTED] wrote: "Jeff Godfrey" <[EMAIL PROTECTED]> wrote: Though it's only a matter of curiosity, I wonder if anyone here knows how/where Adobe employed SQLite in the Lightroom product? Adobe stores just about all of your Lightroom state in an SQLite database. Find the database (on your Mac) at ~/Pictures/Lightroom/Lightroom\ Database.lrdb You won't be able to view the database with the version of SQLite that comes on your Mac (3.1.3) because Lightroom uses version 3.3.4 and thus creates database files with the descending-index feature and the more efficient boolean format - databases that version 3.1.3 cannot read. So if you want to peruse the data, download the SQLite 3.3.14 and use it. If you are using the windows version of Lightroom, I don't know where you might find the database, but one can imagine that it probably has a similar or identical name. -- D. Richard Hipp <[EMAIL PROTECTED]> - To unsubscribe, send email to [EMAIL PROTECTED] - - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] multiuser DB on network share
So what you are saying is that opening a SQLite DB on a shared network drive SHOULD work with multiple clients (if all servers and NFS versions are updated to the most recent version)? Roger Binns wrote: Jay Sprenkle wrote: This sounds exactly like what causes the trashed shared MS Access databases I've seen and network locking issues I see warnings about here. No it isn't. How is this supposed to work correctly without the client being notified? Err, the client OS is notified with an oplock break. If I write an application that requires serialized access to a file, and I rely on operating system locking, if the operating system drops my locks without notifying me my design breaks. You are totally confused. The application talks to the client OS (kernel). The abstraction is a kernel interface and file handles. The kernel then has a provider for the relevant filesystem. That provider then talks to a block layer or over the network. Oplocks etc are happening in that second part. They are not application visible. (1)Application <= (2)kernel => (3)redirector <= (4) network server Oplocks are something available in the SMB protocol (and recent NFS versions) between 3 and 4. They allow the redirector to optimize in the case when it is the only client with the file open. For example if oplocks are not available, then every read/write/lock request from the application (1) gets to the redirector (3) which has to contact the server (4) in order to satisfy the request. If the server (4) grants an oplock to the redirector (3) then that redirector knows it is the only client of the server with the file open. Therefore file contents cannot change so it can cache file content information. Similarly it can also do all byte range locking internally since no other client has the file open.
If another client requests to open the file, then the server (4) sends the oplock break to the first redirector which will discard cached file contents and apply byte range locks it was handling internally, before acknowledging the oplock break. From that point on it has to send all read/write/lock requests to the server since the other client(s) could be modifying the file. At no point in all of this is the application(1) aware of oplocks, nor does it have any control over them. Heck even the kernel (2) doesn't know. It just hands all requests to the redirector (3) which can use oplocks to optimize performance. Roger
Re: [sqlite] multiuser DB on network share
Oplocks seem interesting. I will need to do some reading/testing. Thanks for all the comments and hints. /Daniel Jay Sprenkle wrote: I've heard this too. Windows networking has some issues with locking. You might research 'oplocks' or 'opportunistic locking' (or opportunistic caching) if you're interested in understanding what it's doing. I was reading about it the other day and thought it might be the key to making it work correctly if you could turn oplocks off in windows. On 1/9/07, Daniel Önnerby <[EMAIL PROTECTED]> wrote: I thought I read somewhere in the docs that this was not reliable (maybe I dreamed it)??? This is great if this works, although I might still make the socketserver for notifying when updates have been made. Thank you for your replies. John Stanton wrote: > Why not just use the SMB file locks if you are using the SMB networking? -- The PixAddixImage Collector suite: http://groups-beta.google.com/group/pixaddix SqliteImporter and SqliteReplicator: Command line utilities for Sqlite http://www.reddawn.net/~jsprenkl/Sqlite Cthulhu Bucks! http://www.cthulhubucks.com
Re: [sqlite] multiuser DB on network share
I thought I read somewhere in the docs that this was not reliable (maybe I dreamed it)??? This is great if this works, although I might still make the socketserver for notifying when updates has been made. Thank you for your replies. John Stanton wrote: Why not just use the SMB file locks if you are using the SMB networking? Daniel Önnerby wrote: Well.. I do not mean that I will use the socketserver to run queries against it. What I mean is that the database is opened by the applications from a windows share. The socketserver is only used to ask permission to write to the database and notifying the other applications that an update has been made. My thought was that this will not require much implementation on the application. Just adding a call to the socketserver before every INSERT or UPDATE, and that call will wait until the permission has been granted by the socketserver. Once the INSERT/UPDATE has been made, another call is made to the socketserver to unlock the database. The socketserver will then notify the other clients that an update has been made. Best regards Daniel John Stanton wrote: That should work quite well. We use such a strategy to implement remote, multi user access to Sqlite databases. the user is unconcerned about locking or contentions. In our case we made the server run on port 80 (HTTP) and use regular HTTP protocol so that it easily penetrates firewalls. The server in our case can either be a CGI process on a regular WWW server or use a purpose developed multi-threaded daemon which gives better performance. We make the data transport format XML for uniformity. For example if the usage requirement were to become too intensive for sqlite we can switch the shared database to being PostgreSQL without affecting the clients. Daniel Önnerby wrote: Hi all! At the company I work we have a windows application that use sqlite for the document format and this works great. 
We are now thinking about if it would be possible to have multiple users to access the db simultaneously from different computers (like a enterprise edition :) ). I have read everything about the multithreading issues and I know that sqlite is not designed to work like this. But I have an idea on how I might solve this in our case and would like to ask the community if you think this is a god idea (or if it would work at all): My idea is to create a small socketserver on the local network that the application holds an open connection to. When someone wants to write (lock) to the DB you always need to ask the socketserver if this is ok. The server will not keep any track of the database itself. The only purpose of the server is so that no one tries to write simultaneously. The server will also notify the applications when a modification has been made (on unlock). So.. could this work??? Best regards and thanks for the best (and smallest) SQL database ever made. Daniel - To unsubscribe, send email to [EMAIL PROTECTED] - - To unsubscribe, send email to [EMAIL PROTECTED] - - To unsubscribe, send email to [EMAIL PROTECTED] - - To unsubscribe, send email to [EMAIL PROTECTED] - - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] multiuser DB on network share
Well.. I do not mean that I will use the socketserver to run queries against it. What I mean is that the database is opened by the applications from a windows share. The socketserver is only used to ask permission to write to the database and notifying the other applications that an update has been made. My thought was that this will not require much implementation on the application. Just adding a call to the socketserver before every INSERT or UPDATE, and that call will wait until the permission has been granted by the socketserver. Once the INSERT/UPDATE has been made, another call is made to the socketserver to unlock the database. The socketserver will then notify the other clients that an update has been made. Best regards Daniel John Stanton wrote: That should work quite well. We use such a strategy to implement remote, multi user access to Sqlite databases. the user is unconcerned about locking or contentions. In our case we made the server run on port 80 (HTTP) and use regular HTTP protocol so that it easily penetrates firewalls. The server in our case can either be a CGI process on a regular WWW server or use a purpose developed multi-threaded daemon which gives better performance. We make the data transport format XML for uniformity. For example if the usage requirement were to become too intensive for sqlite we can switch the shared database to being PostgreSQL without affecting the clients. Daniel Önnerby wrote: Hi all! At the company I work we have a windows application that use sqlite for the document format and this works great. We are now thinking about if it would be possible to have multiple users to access the db simultaneously from different computers (like a enterprise edition :) ). I have read everything about the multithreading issues and I know that sqlite is not designed to work like this. 
But I have an idea on how I might solve this in our case and would like to ask the community if you think this is a good idea (or if it would work at all): My idea is to create a small socketserver on the local network that the application holds an open connection to. When someone wants to write (lock) to the DB you always need to ask the socketserver if this is ok. The server will not keep any track of the database itself. The only purpose of the server is so that no one tries to write simultaneously. The server will also notify the applications when a modification has been made (on unlock). So.. could this work??? Best regards and thanks for the best (and smallest) SQL database ever made. Daniel
[sqlite] multiuser DB on network share
Hi all! At the company where I work we have a windows application that uses sqlite for the document format, and this works great. We are now wondering whether it would be possible to have multiple users access the db simultaneously from different computers (like an enterprise edition :) ). I have read everything about the multithreading issues and I know that sqlite is not designed to work like this. But I have an idea on how I might solve this in our case and would like to ask the community if you think this is a good idea (or if it would work at all): My idea is to create a small socketserver on the local network that the application holds an open connection to. When someone wants to write (lock) to the DB you always need to ask the socketserver if this is ok. The server will not keep any track of the database itself. The only purpose of the server is so that no one tries to write simultaneously. The server will also notify the applications when a modification has been made (on unlock). So.. could this work??? Best regards and thanks for the best (and smallest) SQL database ever made. Daniel
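A minimal sketch of the lock-server idea, with an invented wire protocol (LOCK/UNLOCK messages, one thread per client). A real implementation would need timeouts, recovery for clients that die while holding the lock, and the update notifications described above:

```python
import socket
import threading

class LockServer:
    """Hypothetical single-writer lock server: a client sends b"LOCK" and
    blocks until the write lock is granted, then sends b"UNLOCK" when its
    INSERT/UPDATE transaction is done."""

    def __init__(self, host="127.0.0.1", port=0):
        self._lock = threading.Lock()          # the one global write lock
        self._srv = socket.socket()
        self._srv.bind((host, port))
        self._srv.listen(5)
        self.port = self._srv.getsockname()[1]
        threading.Thread(target=self._accept, daemon=True).start()

    def _accept(self):
        while True:
            conn, _ = self._srv.accept()
            # One thread per client, so a waiting client simply blocks
            # in acquire() until the current writer unlocks.
            threading.Thread(target=self._serve, args=(conn,), daemon=True).start()

    def _serve(self, conn):
        with conn:
            while True:
                msg = conn.recv(16)
                if not msg:
                    return
                if msg.strip() == b"LOCK":
                    self._lock.acquire()
                    conn.sendall(b"GRANTED")
                elif msg.strip() == b"UNLOCK":
                    self._lock.release()
                    conn.sendall(b"OK")
```

A client would send LOCK, wait for GRANTED, run its write against the shared database file, then send UNLOCK; the notify-on-unlock broadcast is left out of this sketch.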
Re: [sqlite] is blob compression worth it
Thanks for pointing out the obvious :) Seriously though, there are times when probably all of us have made "just a simple database" that was not normalized in the correct way and that later turns out to be used a lot more than intended. Normalizing the database at a later stage requires a lot more reprogramming and rewriting of a lot of SQL. I could see a use for this kind of functionality, but the best way would always be to normalize. But then again, I was just curious to see if anyone had tried or thought about something like this before. I'm not even sure I would like this type of functionality implemented in SQLite. Best regards Daniel John Stanton wrote: Your solution here is to normalize your database. Third normal form will do it for you. Daniel Önnerby wrote: Just out of curiosity. If I for instance have 1000 rows in a table with a lot of blobs and a lot of them have the same data in them, is there any way to make a plugin to sqlite that in this case would just save a reference to another blob if it's identical? I guess this could save a lot of space without any fancy decompression algorithm, and if the blob field is already indexed there would be no extra time to locate the other identical blobs :) Just a thought :) John Stanton wrote: What are you using for compression? Have you checked that you get a useful degree of compression on that numeric data? You might find that it is not particularly amenable to compression. Hickey, Larry wrote: I have a blob structure which is primarily doubles. Is there anyone with some experience with doing data compression to make the blobs smaller? Tests I have run so far indicate that compression is too slow on blobs of a few meg to be practical. I get at least 20 to 40 inserts per second now, but if a single compression takes over a second, it's clearly not worth the trouble. Does anybody have experience with a compression scheme with blobs that consist of mostly arrays of doubles?
Some schemes (ibsen) offer lightning-speed decompression, so if the database is primarily used for reading this would be a good choice, but the compression required to create it is very expensive.
Re: [sqlite] is blob compression worth it
Just out of curiosity. If I for instance have 1000 rows in a table with a lot of blobs and a lot of them have the same data in them, is there any way to make a plugin to sqlite that in this case would just save a reference to another blob if it's identical? I guess this could save a lot of space without any fancy decompression algorithm, and if the blob field is already indexed there would be no extra time to locate the other identical blobs :) Just a thought :) John Stanton wrote: What are you using for compression? Have you checked that you get a useful degree of compression on that numeric data? You might find that it is not particularly amenable to compression. Hickey, Larry wrote: I have a blob structure which is primarily doubles. Is there anyone with some experience with doing data compression to make the blobs smaller? Tests I have run so far indicate that compression is too slow on blobs of a few meg to be practical. I get at least 20 to 40 inserts per second now, but if a single compression takes over a second, it's clearly not worth the trouble. Does anybody have experience with a compression scheme with blobs that consist of mostly arrays of doubles? Some schemes (ibsen) offer lightning-speed decompression, so if the database is primarily used for reading this would be a good choice, but the compression required to create it is very expensive.
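The reference-counting idea can be approximated today at the application level, without any SQLite plugin, by normalizing blobs into a side table keyed by a content hash (the schema and helper below are invented for illustration):

```python
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE blobs(hash TEXT PRIMARY KEY, data BLOB);
    CREATE TABLE rows(id INTEGER PRIMARY KEY, name TEXT,
                      blob_hash TEXT REFERENCES blobs(hash));
""")

def insert_row(conn, name, payload):
    # Identical payloads collapse to a single stored blob; the row
    # just keeps a reference (the hash) to it.
    h = hashlib.sha256(payload).hexdigest()
    conn.execute("INSERT OR IGNORE INTO blobs VALUES(?,?)", (h, payload))
    conn.execute("INSERT INTO rows(name, blob_hash) VALUES(?,?)", (name, h))

data = b"\x00" * 1000
for i in range(1000):
    insert_row(conn, "row%d" % i, data)

assert conn.execute("SELECT COUNT(*) FROM rows").fetchone()[0] == 1000
assert conn.execute("SELECT COUNT(*) FROM blobs").fetchone()[0] == 1
```

Deleting rows would need reference counting (or a periodic sweep of unreferenced hashes), which is where a built-in mechanism would be more convenient.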
Re: [sqlite] ip2long
I haven't followed the whole thread, but from what I understand you store the data as numbers and not IP strings. The most common way of checking a range is to use an IP mask, and this should work fine in this case as well. Something like: SELECT * FROM myiptable WHERE (ip & "maskvalue") = ("subnet IP" & "maskvalue") The maskvalue in this case could be something like 255.255.255.0 (decimal value 4294967040) to check the subnet. Kevin Waterson wrote: How can I SELECT an IP within a range? All my IPs are stored as 1.2.3.4, but I need to check if each is within a range of long IPs. Can this be done with SQLite, or do I need some sort of external ip2long() function to throw at it? Kind regards kevin
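The mask test above can be sketched end to end, with the ip2long conversion done in the application (here via Python's ipaddress module) and the bitwise AND done in SQL; the table name follows the example in the message:

```python
import sqlite3
from ipaddress import IPv4Address

def ip2long(s):
    # dotted-quad string -> 32-bit integer
    return int(IPv4Address(s))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myiptable(ip INTEGER)")
conn.executemany("INSERT INTO myiptable VALUES(?)",
                 [(ip2long(a),) for a in
                  ("192.168.1.7", "192.168.1.200", "10.0.0.1")])

mask = ip2long("255.255.255.0")    # 4294967040, as noted above
subnet = ip2long("192.168.1.0")
# SQLite's & operator applies the netmask to both sides of the comparison.
rows = conn.execute("SELECT ip FROM myiptable WHERE (ip & ?) = (? & ?)",
                    (mask, subnet, mask)).fetchall()
assert len(rows) == 2  # both 192.168.1.x addresses match, 10.0.0.1 does not
```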
[sqlite] C precompiler to bytecode
Hi everyone! I'm just a bit curios if it would be possible to make like a C precompiler or a macro of some kind that compiles/interpret the SQL-statements to bytecode just like the sqlite_prepare does but does this when compiling/precompiling your application instead of at runtime. Since most application written in C/C++ use static SQL-statements (unless you are building your SQL-strings on the fly) and then bind the values, I guess there would be several benefits for your compiled application: * Faster: Since the SQL-statement is already interpreted. * Smaller (not that sqlite needs to be smaller): The executable does not need to contain the part of sqlite that interprets the SQL-statements since this was made at compile time. Just a thought :) Best regards Daniel Önnerby