Re: [sqlite] System function with Sqlite
Hello Chris, It looks like you've been dealing with this for a while now. 1. Try these extra gcc flags in your build: "-Wall -Wconversion -Wshadow". Also try renaming your database handle to something more unique. 2. Have you tried 'strace', as someone else suggested? 3. Strip all unnecessary includes and check your include directory order. 4. Run 'ldd' on your binary and make sure you have the libraries you expected. I'd guess that you're most likely dealing with a path issue (runtime or compile time, library or maybe include). I hope that helps. Best regards, Kervin - Original Message > From: Chris Brown <[EMAIL PROTECTED]> > To: General Discussion of SQLite Database > Sent: Monday, August 18, 2008 7:06:15 AM > Subject: Re: [sqlite] System function with Sqlite > > I have tried to trace the problem further through the Sqlite source by checking at which point I could no longer successfully call the system function. I got as far as xOpen through sqlite3_open > opendatabase > sqlite3BtreeFactory > sqlite3BtreeOpen > sqlite3PagerOpen > sqlite3OsOpen > xOpen. Immediately prior to the xOpen call I could successfully make system calls but not after. > > I'm not sure if this additional information may be of help. > > Thanks > Chris ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] ANN: SQLite .NET provider updated
I have to say SQLite.NET, like SQLite itself is an incredible piece of software. - Original Message > From: Harold Wood <[EMAIL PROTECTED]> > To: General Discussion of SQLite Database > Sent: Wednesday, July 16, 2008 11:46:29 PM > Subject: Re: [sqlite] ANN: SQLite .NET provider updated > > wow, can i nominate you for sainthood? i mean really! i was trying to use > esql > and its a mess...ug! > > thanks! > > Woody > > > --- On Wed, 7/16/08, Robert Simpson wrote: > > From: Robert Simpson > Subject: Re: [sqlite] ANN: SQLite .NET provider updated > To: "'General Discussion of SQLite Database'" > Date: Wednesday, July 16, 2008, 10:27 PM > > Sure does! > > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] On Behalf Of Harold Wood > Sent: Wednesday, July 16, 2008 7:20 PM > To: General Discussion of SQLite Database > Subject: Re: [sqlite] ANN: SQLite .NET provider updated > > very very cool! now does it work with teh compact framework? > > --- On Wed, 7/16/08, Robert Simpson wrote: > > From: Robert Simpson > Subject: [sqlite] ANN: SQLite .NET provider updated > To: "'General Discussion of SQLite Database'" > > Date: Wednesday, July 16, 2008, 8:22 PM > > I don't normally announce releases here, but this one's got some great > stuff > in it. Those of you using the SQLite ADO.NET provider will want to check > out. > > Some highlights in the 52 release: > 3.6.0 code merge > Table and View designers - you can now create and design tables and views, > indexes and foreign keys from the Visual Studio Server Explorer with a nice > interactive GUI. This is still in beta, but it's looking really good. > Trigger designer is coming up soon. > Entity Framework support much improved from the 51 release. Still in beta > while Visual Studio 2008 SP1 is in beta. > > It's public domain, it's open source, and 3.5 years stable. Why buy a > SQLite ADO.NET provider from someone else? 
> > http://sqlite.phxsoftware.com > > Robert ___ sqlite-users mailing list sqlite-users@sqlite.org http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] Re: Shouldn't this query work?
Hello Igor, Thanks. That was the problem. I had been doing 32 bit math on SQLite's 64 bit integers. Best regards, Kervin --- Igor Tandetnik <[EMAIL PROTECTED]> wrote: > Kervin L. Pierre > > wrote: > > I expected this script to work... > > > > create table testtable ( testcol int ); > > insert into testtable ( testcol ) values ( > -2146369472 > > ) > > select * from testtable where ( testcol & > 4294967295 ) > > = -2146369472 > > > > Note that 4294967295 in binary is > > all '1's. > > No, it's 32 zeros followed by 32 ones. SQLite deals > with 64-bit > integers. > > Igor Tandetnik > > > - > To unsubscribe, send email to > [EMAIL PROTECTED] > - > > - To unsubscribe, send email to [EMAIL PROTECTED] -
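The 32-bit/64-bit mismatch is easy to reproduce. A sketch using Python's sqlite3 module (illustrative; the table and values are taken from the thread):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE testtable (testcol INT)")
con.execute("INSERT INTO testtable (testcol) VALUES (-2146369472)")

# SQLite stores integers as 64-bit, so -2146369472 has its upper 32 bits set.
# AND-ing with 4294967295 (0x00000000FFFFFFFF) clears those upper bits and
# yields a *positive* number, which no longer equals -2146369472.
masked = con.execute("SELECT testcol & 4294967295 FROM testtable").fetchone()[0]

# Masking both sides of the comparison makes them agree again:
row = con.execute(
    "SELECT * FROM testtable "
    "WHERE (testcol & 4294967295) = (-2146369472 & 4294967295)"
).fetchone()
```

Here `masked` comes back as 2148597824, the unsigned low 32 bits of the stored value, which is exactly why the original comparison found no rows.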
[sqlite] Shouldn't this query work?
Hello, I've been wrestling with this issue for a long while now, so I am hoping someone could give some indication of what I am doing wrong. I expected this script to work... create table testtable ( testcol int ); insert into testtable ( testcol ) values ( -2146369472 ); select * from testtable where ( testcol & 4294967295 ) = -2146369472; Note that 4294967295 in binary is all '1's. Also, if -2146369472 is changed to 2146369472, then the select query works. Could someone explain my error to me? Best regards, Kervin
[sqlite] Bitwise 'AND' issue with bound variables
Hello, I'd been looking into a bug in my application which I narrowed down to an issue with bitwise AND and bound variables in prepared statements. The query... SELECT * FROM example WHERE (intColumn & 4294901760) = ? where 'intColumn' is an integer column and the parameter is bound using sqlite3_bind_int() always returned zero rows, even when that exact query returned multiple rows from management tools. I realized that AND'ing the parameter with any integer value fixed this. Eg... SELECT * FROM example WHERE (intColumn & 4294901760) = (? & 4294967295) Note that 4294967295 is equal to 0xFFFFFFFF and the parameter's actual value is always the same width, so (? & 4294967295) should not change the parameter's value. The second query produces the results I expected, but I'd like to know why the first query did not work. Does anyone have any ideas? Best regards, Kervin
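The later threads identify the cause: SQLite computes with 64-bit integers, so a 32-bit bind with the high bit set arrives sign-extended as a negative number. A sketch in Python's sqlite3 module with invented values, mimicking what sqlite3_bind_int() does:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE example (intColumn INTEGER)")
con.execute("INSERT INTO example VALUES (2148597760)")   # 0x80110000 as unsigned

# sqlite3_bind_int() takes a signed 32-bit int, so a value with the high bit
# set arrives sign-extended as a negative 64-bit integer:
param = 2148597760 - (1 << 32)    # -2146369536

# The column expression is positive, the parameter is negative: no match.
missed = con.execute(
    "SELECT * FROM example WHERE (intColumn & 4294901760) = ?", (param,)
).fetchall()

# Masking the parameter strips the sign extension and restores the match.
found = con.execute(
    "SELECT * FROM example WHERE (intColumn & 4294901760) = (? & 4294967295)",
    (param,)
).fetchall()
```

In C, binding with sqlite3_bind_int64() after casting the value through an unsigned 32-bit type avoids the sign extension without the extra mask in the SQL.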
Re: [sqlite] Q about new SQLite API
Hello, Thanks for the improvements! Q1 Any name would be fine, though the 'ex' suffix seems to be popular for that sort of thing. Q2 A non-blocking resultset API? :) Sorry, had to try. Q3 I think this should be a new error, for caution's sake. But overall I prefer more, finer-grained errors over fewer. The API user can always deal with them en masse with a switch. Best regards, Kervin --- [EMAIL PROTECTED] wrote: > I'm working on a new API routine for SQLite and I have questions for the community. > > The working name of the new api is sqlite3_prepare_v2(). sqlite3_prepare_v2() works like sqlite3_prepare() in that it generates a prepared statement in an sqlite3_stmt structure. The difference is in the behavior of the resulting sqlite3_stmt, and in particular a difference in the way sqlite3_step() responds to the sqlite3_stmt. The differences are these: > > * You never get an SQLITE_SCHEMA error. sqlite3_prepare_v2 retains the original SQL and automatically reprepares and rebinds it following a schema change. > > * sqlite3_step() returns the correct error code right away, rather than just returning SQLITE_ERROR and making you call sqlite3_reset() to find the true reason for the error. > > In this way, I am hoping that sqlite3_prepare_v2() will work around two of the most visible warts in the current API. > > QUESTION 1: sqlite3_prepare_v2 is merely the working name for the new function. What should the official name be? Some possibilities include: > > sqlite3_prepare_ex1 > sqlite3_prepare_ng > sqlite3_new_prepare > sqlite3_compile > > QUESTION 2: Are there any other API warts that need to be worked around that can be fixed by this same change? > > QUESTION 3: Suppose there is a schema change and the SQL statement is automatically reprepared. But the schema change is such that the SQL is no longer valid. (Perhaps one of the tables mentioned in a SELECT statement was dropped.)
> What > error code should sqlite3_step() return in that > case? > > -- > D. Richard Hipp <[EMAIL PROTECTED]> > > > - > To unsubscribe, send email to > [EMAIL PROTECTED] > - > > - To unsubscribe, send email to [EMAIL PROTECTED] -
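For reference, this routine ultimately shipped under its working name, sqlite3_prepare_v2(), and wrappers such as Python's sqlite3 module are built on it. A small sketch of the reprepare-on-schema-change behavior it enables (the automatic re-preparation happens inside the library, so the caller just sees the statement keep working):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a)")
con.execute("INSERT INTO t VALUES (42)")

cur = con.cursor()
cur.execute("SELECT a FROM t")
first = cur.fetchone()[0]

# Schema change: a statement compiled with plain sqlite3_prepare() would now
# fail with SQLITE_SCHEMA; prepare_v2-style handling re-prepares silently.
con.execute("CREATE INDEX t_a ON t (a)")

cur.execute("SELECT a FROM t")
second = cur.fetchone()[0]
```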
Re: [sqlite] Please test check-ins [3445] and [3446]
Hello, I just tried. I am getting build errors for CVS. 3.3.7 builds though. Error 101 error C2065: 'SQLITE_FUNCTION' : undeclared identifier c:\src\sqlite\sqlite\src\expr.c 1185 Error 121 error C2065: 'sqlite3_overload_function' : undeclared identifier c:\src\sqlite\sqlite\src\loadext.c 217 Error 122 error C2099: initializer is not a constant c:\src\sqlite\sqlite\src\loadext.c 217 --- [EMAIL PROTECTED] wrote: > Windows users with the ability to build from source, > please help me > out by testing the changes in check-ins [3445] and > [3446] at > http://www.sqlite.org/cvstrac/chngview?cn=3445 and > http://www.sqlite.org/cvstrac/chngview?cn=3446 > verifying that > they do not break anything. Tnx. > -- > D. Richard Hipp <[EMAIL PROTECTED]> > > > - > To unsubscribe, send email to > [EMAIL PROTECTED] > - > > - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] How long can I keep a prepared statement around?
Hello Michael, Thanks. Best regards, Kervin --- Michael Ruck <[EMAIL PROTECTED]> wrote: > Use the function sqlite3_expired to determine, when > you need to > recompile a prepared statement. That's the approach > I use. > > Mike > > Am 24.09.2006 um 20:48 schrieb Kervin L. Pierre: > > > Hello, > > > > I have a few queries that are executed very > > often. I would like to keep them around as > > much as possible. > > > > The problem is, I don't know what > > 'invalidates' a prepared statement. In > > other words, when can I expect to have to > > 're-'prepare a statement? > > > > How long can I keep a prepared statement? > > Can they be passed between threads? I take > > it they are tied to a specific sqlite3_db* > > handle? Do starting new transactions, or > > transaction rollbacks, etc. affect them? > > > > Ideally, for instance, for simple queries > > such as 'BEGIN' and 'COMMIT', I'd like to > > keep those prepared statements for the > > lifetime of the application if possible. > > > > Any information would be appreciated. > > > > Best regards, > > Kervin > > > > > -- > > > --- > > To unsubscribe, send email to > [EMAIL PROTECTED] > > > -- > > > --- > > > > > - > To unsubscribe, send email to > [EMAIL PROTECTED] > - > > - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] Re: How long can I keep a prepared statement around?
Hello Igor, Thanks for the info. Best regards, Kervin --- Igor Tandetnik <[EMAIL PROTECTED]> wrote: > Kervin L. Pierre > > wrote: > > The problem is, I don't know what > > 'invalidates' a prepared statement. In > > other words, when can I expect to have to > > 're-'prepare a statement? > > > > How long can I keep a prepared statement? > > Can they be passed between threads? I take > > it they are tied to a specific sqlite3_db* > > handle? Do starting new transactions, or > > transaction rollbacks, etc. affect them? > > Prepared statements are tied to a connection > (sqlite* handle). Since > SQLite connection cannot be shared between threads, > prepared statements > cannot either. A prepared statement is valid for the > lifetime of a > connection, with one exception: it becomes invalid > when database schema > changes, that is, when tables are created, altered > or dropped, when > triggers are created or dropped, and so on. > > Igor Tandetnik > > > - > To unsubscribe, send email to > [EMAIL PROTECTED] > - > > - To unsubscribe, send email to [EMAIL PROTECTED] -
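Igor's first rule, that a statement belongs to one connection and therefore one thread, is something Python's sqlite3 wrapper enforces explicitly; a sketch of what happens if you try to cross threads:

```python
import sqlite3
import threading

# check_same_thread=True (the default) enforces the rule described above:
# a connection -- and any statement prepared on it -- belongs to one thread.
con = sqlite3.connect(":memory:")
errors = []

def use_from_other_thread():
    try:
        con.execute("SELECT 1")
    except sqlite3.ProgrammingError as exc:
        errors.append(type(exc).__name__)

t = threading.Thread(target=use_from_other_thread)
t.start()
t.join()
```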
[sqlite] How long can I keep a prepared statement around?
Hello, I have a few queries that are executed very often. I would like to keep them around as much as possible. The problem is, I don't know what 'invalidates' a prepared statement. In other words, when can I expect to have to 're-'prepare a statement? How long can I keep a prepared statement? Can they be passed between threads? I take it they are tied to a specific sqlite3_db* handle? Do starting new transactions, or transaction rollbacks, etc. affect them? Ideally, for instance, for simple queries such as 'BEGIN' and 'COMMIT', I'd like to keep those prepared statements for the lifetime of the application if possible. Any information would be appreciated. Best regards, Kervin - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] sqlite_exec query string maximum length?
Hello, Thanks for the explanation. Looks like this isn't going to help me after all. Thought sqlite3_exec() would compile the query only once. Best regards, Kervin --- Dennis Cote <[EMAIL PROTECTED]> wrote: > Kervin L. Pierre wrote: > > and for performance, I would like to execute as few sqlite_exec() calls as possible. > > Kervin, > While you can pass several SQL statements to sqlite3_exec in one SQL string, each statement is compiled and executed separately, so the performance increase over separate calls to sqlite3_exec is not substantial. If you add 1000 inserts into a string and pass that to sqlite3_exec, it will repeat the following steps 1000 times: parse insert statement SQL, generate insert statement, execute insert statement, and destroy insert statement. > However, if you use a prepared statement, and then simply bind new values to it for each insert, you eliminate the parse, generate, and destroy steps for 999 out of the 1000 statements. This will provide a much more substantial performance boost. > Note, it is also extremely important to wrap your block of inserts with "begin transaction" and "end transaction" commands. > HTH > Dennis Cote
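Dennis's advice — one prepared statement, many binds, all inside a single transaction — looks like this in Python's sqlite3 module (a sketch; table name invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (n INTEGER)")

# One INSERT statement is prepared once and re-bound 1000 times, and the
# whole batch runs inside a single transaction (the `with` block commits).
with con:
    con.executemany("INSERT INTO items VALUES (?)", ((i,) for i in range(1000)))

count = con.execute("SELECT count(*) FROM items").fetchone()[0]
```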
Re: [sqlite] sqlite_exec query string maximum length?
Hello, Thanks. Best regards, Kervin --- [EMAIL PROTECTED] wrote: > "Kervin L. Pierre" <[EMAIL PROTECTED]> wrote: > > what is the maximumm number > > of characters there can be in a query > > string sent to sqlite_exec()? > > 2147483647 bytes > > -- > D. Richard Hipp <[EMAIL PROTECTED]> > > > - > To unsubscribe, send email to > [EMAIL PROTECTED] > - > > - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] sqlite_exec query string maximum length?
Hello, My application is building a query string with multiple queries. The number of queries depends on the user, and for performance I would like to execute as few sqlite_exec() calls as possible. So my question is: what is the maximum number of characters there can be in a query string sent to sqlite_exec() (if there is a maximum defined)? I estimate that the query string would get to 65K characters on occasion, and there is no guarantee that it won't get larger once in a while. Best regards, Kervin
Re: [sqlite] Need help with query optimization
Hello Dennis, Jay, Thanks for your suggestions. The ids are simply returned to the calling application in a very large array. It's the API we're writing against, so we can't deviate from that unfortunately. Thanks for clarifying the INSERT trigger behavior. I'll try your suggestions and see what happens. Best regards, Kervin --- Dennis Cote <[EMAIL PROTECTED]> wrote: > Kervin L. Pierre wrote: > > I'd appreciate any help or pointers optimizing the SQL in the main loop. [pseudo-code quoted in full in the original message below] > Kervin, > sqlite3_step will never return SQLITE_ROW from an INSERT query. Your insert is executing the select query inside your trigger and then throwing the result away because your trigger doesn't use the result for anything. > What do you want to do with the id returned by the select in the trigger?
You will need to add SQL to do whatever that is inside the trigger to get this to work. > I suspect that your speed issue is primarily caused by your "large number" of executions of the select you call query 1 without suitable indexes, which causes you to execute a large number of table scans through a table that contains a large number of rows. If this is the case, adding an index on attr1 and attr2 should help immensely. > create index table_attr1_attr2 on table(attr1, attr2); > Also, your pseudo-code doesn't say if you are preparing your statements once before your loop or if you are preparing the statements before each execution. If you are not preparing your statements outside the loop and binding the parameters, you could change that to eliminate a "large number" of unnecessary recompilations of the same SQL statements. > It's not clear to me what you are trying to do in this loop, but if you can elaborate on your explanation, we may be able to suggest a much faster way to accomplish your goal. > HTH > Dennis Cote
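Dennis's suggestions — an index on attr1/attr2, statements prepared once, and last_insert_rowid for the new id — combine into a get-or-create routine. A sketch in Python with invented table and column names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, attr1, attr2)")
# The index turns query1 from a table scan into an index lookup.
con.execute("CREATE INDEX t_attr1_attr2 ON t (attr1, attr2)")

def get_or_create_id(attr1, attr2, create=True):
    row = con.execute("SELECT id FROM t WHERE attr1 = ? AND attr2 = ?",
                      (attr1, attr2)).fetchone()
    if row is not None:
        return row[0]
    if not create:
        return None
    cur = con.execute("INSERT INTO t (attr1, attr2) VALUES (?, ?)",
                      (attr1, attr2))
    return cur.lastrowid        # equivalent to SELECT last_insert_rowid()

first = get_or_create_id("a", "b")
second = get_or_create_id("a", "b")   # finds the row inserted above
```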
[sqlite] Need help with query optimization
Hello, I am working on a function in our open-source Outlook plugin [ http://openconnector.org ] and I've run into some speed issues. We've been avoiding optimizing for speed till later, but this function is currently operating at 3 orders of magnitude worse than acceptable and is slowing down testing. I'd appreciate any help or pointers optimizing the SQL in the main loop. I would like to get the maximum number of queries down from 3 to 1, using triggers if necessary. The pseudo code and SQL.. for( 'large number' ){ // query1 "SELECT id FROM table WHERE attr1 = 'a', attr2 = 'b', ..." if( 'query1 returns no rows' ){ if( 'create flag is on' ){ // query2 "INSERT OR REPLACE values ( attr1 = 'a', attr2 = 'b', ..." // query3 "SELECT last_insert_rowid()" // use returned id ... } } else{ // use returned id ... } } Basically, for every iteration we check that an id exists ( ie. 'query1' ); if it does, we use it. But if the id does not exist, we insert a row ( ie. 'query2' ), then get the 'last_insert_rowid' ( ie. 'query3' ) and use that. Can this be done using 1 query and possibly an insert trigger? I've tried adding... "CREATE TRIGGER mytrigger AFTER INSERT ON mytable BEGIN SELECT id from mytable WHERE attr1 = new.attr1 AND attr2 = new.attr2; END" ...and then planned on changing 'query2' to "INSERT OR IGNORE..." But this does not work, as sqlite3_step() doesn't seem to ever return SQLITE_ROW on the INSERT, even after the trigger executes the SELECT. Any pointers would be appreciated. Best regards, Kervin
Re: [sqlite] sqlite3_free()
Hello, --- Dennis Cote <[EMAIL PROTECTED]> wrote: > This really has nothing to do with the Windows DLL > system. It is simply Thanks for the explanation. Wondered what that bug reporter was talking about :) There's a lot Windows does wrong; we don't have to go around making up stuff :) SQLite could export its memory management routines as function pointers to the host app, and have the host app provide its own implementations if desired. That would solve this issue amongst others. Best regards, Kervin
[sqlite] function pointers? - Re: [sqlite] DLLs containing user-defined SQL functions
Hello, Regardless of program loading design, wouldn't this feature be better coded using function pointers? I.e., have a "register/load" function that maps functions in the exe? PS. It would be helpful to have sqlite3OSMalloc() and sqlite3OSFree() as function pointers as well, so an application can do its own memory management without recompiling the SQLite library. Best regards, Kervin --- [EMAIL PROTECTED] wrote: > "Igor Tandetnik" <[EMAIL PROTECTED]> wrote: > > Note an inherent chicken-and-egg problem: you can't build two DLLs (or an EXE and a DLL) using this approach where a circular dependency exists, that is, where DLL A needs a function exported from DLL B, and at the same time DLL B needs a function exported from DLL A. To successfully link DLL A, you need an import library from DLL B, but an import library is produced as a side effect of the link process, and to link DLL B you need an import library from DLL A, which you can't build until you've built B, ... There is a way to break this circle with the use of so-called export files (.exp), but the technique is rather cumbersome. You don't want to go that way unless there's a gun to your head. > > It's official then: The lack of a sensible shared library loader is yet another reason to avoid windows at all costs. In fact, I'm thinking this reason will go near the top of the list. > > Thanks, everybody, for your help. > > -- > D. Richard Hipp <[EMAIL PROTECTED]>
Re: [sqlite] Problems with multiple threads?
Hello, I was under the impression that we could never get an SQLITE_BUSY, not even on COMMIT, if we use BEGIN EXCLUSIVE. But this seems to say that COMMITs on exclusive transactions can throw SQLITE_BUSY?... --- [EMAIL PROTECTED] wrote: > then start the transaction initially with BEGIN EXCLUSIVE. This will acquire the reserved lock immediately (instead of waiting until the first write occurs) and so you will either get an SQLITE_BUSY right away (when it is a simple matter to just rerun the BEGIN EXCLUSIVE statement until it works) or you can be assured of never getting another SQLITE_BUSY again until you try to COMMIT (and there too, you can simply rerun COMMIT repeatedly until it works.) How is that, since the process at that point has exclusive access to the database file? Best regards, Kervin
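Whatever the answer on COMMIT, the retry pattern from the quoted post is straightforward. A sketch in Python (SQLITE_BUSY surfaces as OperationalError in this wrapper; function name and retry counts are invented):

```python
import sqlite3
import time

def run_exclusive(con, work, retries=50, delay=0.05):
    """Retry BEGIN EXCLUSIVE until the lock is acquired, then commit."""
    for _ in range(retries):
        try:
            con.execute("BEGIN EXCLUSIVE")
            break
        except sqlite3.OperationalError:    # SQLITE_BUSY surfaces here
            time.sleep(delay)
    else:
        raise sqlite3.OperationalError("database is locked")
    try:
        work(con)
        con.execute("COMMIT")
    except Exception:
        con.execute("ROLLBACK")
        raise

# Demo on an in-memory database (a shared file would contend for real).
con = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
con.execute("CREATE TABLE t (x)")
run_exclusive(con, lambda c: c.execute("INSERT INTO t VALUES (1)"))
value = con.execute("SELECT x FROM t").fetchone()[0]
```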
Re: [sqlite] Enabling Memory Management in 3.3.4
Alexander Roston wrote: > I asked the compiler to show me a list of the library routines and "sqlite3_release_memory" was not present. Shouldn't that give you a link error rather than a segfault? If your program linked properly, the linker found the function somewhere. Maybe it's finding the wrong version? Best Regards, Kervin
Re: [sqlite] concers about database size
Hello Daniel, Daniel Franke wrote: Was sqlite designed for those numbers? The docs state that sqlite supports "databases up to 2 terabytes in size". OTOH, "supports" is not the same as "works-well-with"?! Any suggestions whether my decision to use sqlite was appropriate for this table design? http://www.sqlite.org/faq.html#q10 From this page... A database is limited in size to 2 tebibytes (2^41 bytes). That is a theoretical limitation. In practice, you should try to keep your SQLite databases below 100 gigabytes to avoid performance problems. If you need to store 100 gigabytes or more in a database, consider using an enterprise database engine which is designed for that purpose. Best Regards, Kervin
[sqlite] memory database, ATTACH, and threads
Hello, I am not sure if that is exactly the problem, but it seems that sqlite3_prepare() deadlocks when called to attach an in-memory database that is already attached. The second ATTACH occurs on a separate thread, with a separate handle, to a database that is already opened in the process. Is that known behavior? If a thread is attached on a database handle, and the database is opened on a separate handle, should the memory database be attached to the second handle as well? Best Regards, Kervin
Re: [sqlite] Shared Memory Question
Hello, We are currently looking at this very issue ourselves. I just put in an enhancement request... http://www.sqlite.org/cvstrac/tktview?tn=1679 ...for allowing the runtime replacement of memory functions by using function pointers instead of C mallocs. We could then replace SQLite's allocators with a memory-pool-based allocator for speed. The application's memory pool would be allocated in shared memory for debugging the in-memory database. Finally, SQLiteBrowser [ http://sqlitebrowser.sourceforge.net/ ] would be modified to look to the shared-memory segment for the in-memory database ( that should be easy ). Best Regards, Kervin
Re: [sqlite] :memory: and sessions with PHP
Hello, I think the problem is that PHP uses file-based session serialization. Therefore anything that cannot be saved to a file and read back ( eg. you can't do this with file handles, etc. ) cannot be saved in session scope in PHP as it is implemented by default. There is the 'mm' extension ( search for references on http://us3.php.net/session ) that is supposed to fix this, I've heard. Also, there is word that there will be memory-based sessions in future versions of the PHP engine by default. I have never used 'mm'. So, your problem is that you have no place to put your SQLite handle after a script has finished executing, so that the next instance of the script can get it. PHP has no such scope by default. Best Regards, Kervin CrazyChris wrote: We may be at crossed paths... I'm wanting to save the :memory: database to the session, not the other way round, so that when the 2nd page loads, the :memory: database can be recreated and available as it was on the last page load. The advantage is that after some time, the session is deleted automatically by the server and the database goes with it, so short-term, high-intensity data can be stored and queried quickly in :memory: and the adds/edits remain through the entire user experience. An alternative is to use a file-based database per user, but this would require a tidy-up routine to be manually coded, and makes the code less portable. An alternative is to create the :memory: database and populate it from session data each time, then save back to session on script close. Not as swift or elegant, but if it's the only way then that may be that! --- Hi there, I have a need to create a :memory: sqlite database, but save it into the user session (PHP) but can't see a way to access the data to save. Looking for a sqlite version of serialize() I guess. Has anyone managed to do this? Is it even possible?
Wanting to be able to maintain a large chunk of data across a users session on a website, and the array's are getting tedious to manage and search through! The PHP session information has to be persistent, so it's not going to be easy to use a :memory: database. There is lots of information about how to save session information to a database, though, on the PHP web site. I haven't looked at it in a couple of years, but I'd guess that you'll get some good pointers if you look at the documentation for session_set_save_handler(). Also, IIRC, PHP provides functions to do serialization. You won't need them if you go the session_set_save_handler() route, but if you want to serialize data yourself, those functions should be available. Derrell
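One way to approximate "serialize() for SQLite" is to dump the in-memory database to a SQL script, store the script in the session, and replay it on the next request. A Python sketch of the round trip (the PHP code would need an equivalent dump/restore step; table contents invented):

```python
import sqlite3

# Request 1: build state in an in-memory database...
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE cart (item TEXT)")
mem.execute("INSERT INTO cart VALUES ('book')")

# ...then flatten it to a SQL script, which *can* live in a session store.
script = "\n".join(mem.iterdump())

# Request 2: rebuild the database from the stored script.
mem2 = sqlite3.connect(":memory:")
mem2.executescript(script)
item = mem2.execute("SELECT item FROM cart").fetchone()[0]
```

For large databases the rebuild cost grows with the data, which is the same trade-off the poster notes for the populate-from-session-data approach.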
Re: [sqlite] Simple ATTACH/memory database question
[EMAIL PROTECTED] wrote: Is this right? You are doing a separate CREATE TABLE for each message? That's going to be the source of your problem. I think Yes, we are. I tried a message/row design early on but I could not count on it behaving well. The problem was fitting Outlook's internal API ( MAPI ) onto a SQL database. Every message has a one-to-many relationship with properties, which have a one-to-many with values. Combined with some of the reporting requirements of the API, I thought that a simple message/table would work, at least for version 1. I was hoping that the CREATE hit would not be significant, since it only occurs when a message is created. But there doesn't seem to be much we can do about the reads. PS. One approach seems to be to replace the default 'sqlite3OsMalloc' with a pool-based malloc for speed. It would be nice if this was a function pointer instead of a define. That way we could replace the function without having to modify the SQLite source. Best Regards, Kervin
Re: [sqlite] Simple ATTACH/memory database question
[EMAIL PROTECTED] wrote: I'm reading and replying to this message using an SQLite-backed email client See That's very interesting... All incoming and archival emails are stored as BLOBs in a table. The full text of messages is indexed. It is all very fast and I haven't had to do anything special to make it so. How did you implement full-text indexing? I thought sqlite did not support full-text indexes, am I wrong? Of course, all this assumes a modern workstation. Perhaps your MUA is designed to work on a handheld with the cheapest (read: slowest) flash memory available and a 50 MHz ARM processor or something? Lol, I wish. It's the direct opposite. The project is essentially an alternative to PST in Microsoft Outlook :) http://openconnector.org/ We are stuck with Outlook's internal API. Our design has an SQLite table for each message. Message objects are transacted. Hence the plan is to mirror the message SQLite table in an in-memory database until the SaveChanges() API is called, which would copy the changed rows to disk. Currently message saves and retrieval are very slow because Outlook makes about 250 calls to our 'sqlite-backed' generic property retrieval function when it opens a single message. So that SQLite-based function, GetProps(), needs to be very, very fast. Any ideas? Best Regards, Kervin
[sqlite] Simple ATTACH/memory database question
Hello, We are getting ready to start using an in-memory database to cache sqlite reads/writes in an effort to improve speed. For background, the application is an email client. The way we envision the caching working is that we have a mirror copy of any table in use in memory. The tables are small and not many are in use at any time, so we are ok with space. Since we read much more than we write, reading from memory should improve performance. Also, this should simplify our transaction model for transacted objects. Has anyone done this? Comments? Best Regards, Kervin
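A minimal sketch of the mirroring idea using ATTACH, in Python (table names invented; the real plugin would open a file rather than a second :memory: database):

```python
import sqlite3

disk = sqlite3.connect(":memory:")      # stands in for the on-disk store
disk.execute("CREATE TABLE msg (id INTEGER PRIMARY KEY, body TEXT)")
disk.execute("INSERT INTO msg VALUES (1, 'hello')")
disk.commit()                           # ATTACH cannot run inside a transaction

# Mirror the hot table into an attached in-memory database for fast reads.
disk.execute("ATTACH DATABASE ':memory:' AS cache")
disk.execute("CREATE TABLE cache.msg AS SELECT * FROM main.msg")

body = disk.execute("SELECT body FROM cache.msg WHERE id = 1").fetchone()[0]
# On SaveChanges(), changed rows would be copied back into main.msg.
```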
Re: [sqlite] Sqlite and Java
Cloudscape, which was given to the Apache Foundation and is now the Apache Derby project. http://db.apache.org/derby/ Jonathan Ballet wrote: I think you're talking of http://hsqldb.org/, used among other projects by OpenOffice ...
Re: [sqlite] built-in function suggestion: size of blob
From my understanding of Length(), it still reads the entire blob into memory before getting its size ( please correct me if I am wrong ). Many have suggested that the best approach is to have your application restrict the size of the blobs that it writes to a size that works for you. You can then read your blob back in chunks as needed. Nuno Lucas wrote: > > Now the only thing left is a way to get part of a blob without having to > read the entire row into memory, but that probably would not be > easy... > Would be nice :) It looks to me like the blob would probably have to be stored out-of-row, with the first 'x' bytes kept in-row, 'x' being a user-definable number. Regards, Kervin
Re: [sqlite] built-in function suggestion: size of blob
Lloyd Dupont wrote: But to my disbelief there is (apparently) no way to get the size of a blob (other than loading it :-() I'd like to be corrected if I am wrong, but I don't think there is any way to do this in SQLite, besides simply storing the size of the blob alongside it when you write it into the database. SQLite stores BLOBs 'in-row', so it has to read the entire BLOB into memory before it figures out the size. I believe even the 'column_bytes' function 'suffers' from this. I tried to find out the feasibility of 'out-of-row' BLOBs in SQLite once, but I don't think there was much interest in it. Regards, Kervin
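The workaround suggested here — storing the size next to the blob at write time — might look like this. An illustrative sketch using Python's stdlib sqlite3; the table and column names are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The "size" column is declared before the blob column, per the column-order
# note elsewhere in this archive.
con.execute(
    "CREATE TABLE attachments(id INTEGER PRIMARY KEY, size INTEGER, data BLOB)"
)

payload = b"x" * 100000
# Record the length at write time so it can be read back cheaply later.
con.execute(
    "INSERT INTO attachments(size, data) VALUES (?, ?)",
    (len(payload), payload),
)

size = con.execute("SELECT size FROM attachments WHERE id = 1").fetchone()[0]
total = con.execute("SELECT SUM(size) FROM attachments").fetchone()[0]
```

SUM(size) over the stored lengths also gives the approximate-table-size answer asked about later in this archive, without touching any blob data.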
Re: [sqlite] sqlite with java + hibernate
SQLite is a great database, but if you're using Java, why not use something like Apache Derby ( http://db.apache.org/derby/ )? Regards, Kervin Christoph Langewisch wrote: Hello, I'm looking for an embedded SQL database and found SQLite. Now I need some information I did not find on the page or in the documentation. Is it possible to use this database with Java, and in particular with Hibernate? To integrate a database in Hibernate the following is needed: hibernate.dialect hibernate.connection.driver_class hibernate.connection.url Best regards Christoph Langewisch - - - - - Dipl.-Inform. Christoph Langewisch Tel: +49-89-747377-67 TESIS DYNAware GmbH Fax: +49-89-747377-99 Baierbrunner Str. 15 http://www.tesis.de/dynaware D-81379 München [EMAIL PROTECTED]
Re: [sqlite] query problem
D. Richard Hipp wrote: Hence, the result set contains no rows. A COUNT() of an empty result set gives NULL. I thought, per the last discussion on "Sum and NULL", that the count of an empty set would return zero. Regards, Kervin
Re: [sqlite] FAQ clarification
Christian Smith wrote: I went through the link you had sent. This page mentions 5 different types of locks which are provided by the pager module in SQLite. Could you please clarify these 2 doubts - 1. My application uses our own mutex variables to allow single reader/writer operation - this is no longer required. I'd suggest you keep a wrapper between SQLite and your application though, with the option of locking out other instances of itself. Correct. SQLite handles locking and concurrency. But you must handle the case where you cannot execute because of a lock. Check out: http://www.sqlite.org/capi3ref.html#sqlite3_busy_handler http://www.sqlite.org/capi3ref.html#sqlite3_busy_timeout Alternatively, handle SQLITE_BUSY in your code to retry a failed query some time in the future. According to past discussions on the list there are instances where you'd get SQLITE_BUSY even if you have set the sqlite3_busy_timeout or handler, so you always have to check for that return value. If you're using threads you need to check for SQLITE_SCHEMA in that same loop ( I am assuming you would retry on SCHEMA and BUSY errors ). This is partly why a wrapper between SQLite and the application seems useful. 2. The 5 lock types mentioned on the documentation page are acquired by processes/threads on their own and as a programmer I can leave all these details for the pager to handle. You can manipulate the locking using the different transaction levels. http://www.sqlite.org/lang_transaction.html Regards, Kervin
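The SQLITE_BUSY situation described above can be demonstrated with two connections. A sketch using Python's stdlib sqlite3, where the connect timeout parameter plays the role of sqlite3_busy_timeout() and SQLITE_BUSY surfaces as an OperationalError:

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "busy.db")
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE t(x)")
writer.execute("BEGIN IMMEDIATE")        # take and hold the write lock
writer.execute("INSERT INTO t VALUES (1)")

# A second connection; timeout= is the Python analogue of sqlite3_busy_timeout().
other = sqlite3.connect(path, timeout=0.1, isolation_level=None)
try:
    other.execute("BEGIN IMMEDIATE")     # blocked by the writer's lock
    locked = False
except sqlite3.OperationalError:         # SQLITE_BUSY surfaces as this error
    locked = True

writer.execute("COMMIT")                 # release the lock...
other.execute("BEGIN IMMEDIATE")         # ...and the retry now succeeds
other.execute("ROLLBACK")
```

As the thread notes, a timeout only bounds the wait; the caller still has to handle the busy error when the wait expires.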
Re: [sqlite] any plans for out-of-row blobs?
Kurt Welgehausen wrote: The common way to handle this in SQLite is to store the blob in a file and store the file name in the db. Right, but then we lose the many benefits of keeping the data in a database. What I was really getting at is support for retrieving chunks of data using the SUBSTR() function as with Oracle and SQL Server. Regards, Kervin
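For what it's worth, SQLite's substr() does accept a BLOB argument and returns a byte slice, though (per the discussion above) the engine may still materialize the whole value internally rather than seeking into it. A quick illustration via Python's stdlib sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
blob = bytes(range(16))
# substr() indexes BLOBs by byte and is 1-based: bytes 5..8 of the value.
chunk = con.execute("SELECT substr(?, 5, 4)", (blob,)).fetchone()[0]
```

So the SUBSTR() syntax exists; what the thread is asking for is the storage layout that would make it cheap.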
[sqlite] any plans for out-of-row blobs?
I need to retrieve parts of a BLOB column value. The blob may simply be too large for memory at runtime. There is no way to retrieve part of a blob value in SQLite. From my understanding, most databases store blob values separately from the rest of the row. Would that be a possibility for SQLite? This would also solve the issue of blobs being read entirely into memory whenever a column defined after the blob column is retrieved. Regards, Kervin
Re: [sqlite] CROSS keyword disables certain join optimizations
Darren Duncan wrote: At 7:25 PM -0400 9/10/05, D. Richard Hipp wrote: Well, that's fine as long as CROSS still continues to mean and do what it has always meant, which is that you explicitly want the result set of If I understand the issue correctly, it does. "FROM a, b" is usually equivalent to "FROM a CROSS JOIN b" in most databases. With the new fix, the first form gives you the optimized query, whilst the second form turns it off. But you should get the same results. ...I think :) Regards, Kervin
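The claimed equivalence is easy to check. A small sketch using Python's stdlib sqlite3, showing that the comma form and CROSS JOIN return the same rows (only the optimizer's freedom to reorder tables differs):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a(x)")
con.execute("CREATE TABLE b(y)")
con.executemany("INSERT INTO a VALUES (?)", [(1,), (2,)])
con.executemany("INSERT INTO b VALUES (?)", [(10,), (20,)])

# Both forms produce the full Cartesian product of a and b.
comma = con.execute("SELECT x, y FROM a, b ORDER BY x, y").fetchall()
cross = con.execute("SELECT x, y FROM a CROSS JOIN b ORDER BY x, y").fetchall()
```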
Re: [sqlite] All transactions are Database locks?
Hello Matt, Matt Froncek wrote: Are all transactions database locks (not table locks)? SQLite doesn't support table- or row-level locking. Locking is implemented via file locks. Hence if you separate your table space into different databases and use the ATTACH keyword, you can effectively get finer-grained locking ( but I guess you know that ). If I prepare a select statement and start stepping on Table A in process one, pause before finalizing, and do an update on Table B in process two or even the same process, will I get a database is locked error? Do I understand that this is how SQLite works or am I doing something wrong? You're correct, I believe. SQLite does not support concurrent writers on the same database. Also, any readers will get an SQLITE_SCHEMA error, that is, "the database schema has changed". Also, those of us new to SQLite are used to a "DECLARE...INSENSITIVE...CURSOR" or resultant-set type paradigm where the results of our queries are *separated* from the actual table data. SQLite doesn't copy the resultant set if you use the step() API. It seems one has to get the data needed and close the transaction as soon as possible. Does everyone make databases with one table each or what? Yup, or do your own database marshalling logic. My wrapper is ugly; I "TRANSACTION EXCLUSIVE" almost everything. I tried many things, but that's what finally worked for me. When you have separate threads and processes doing mixed reads and writes in the same transaction, even "TRANSACTION IMMEDIATE" can leave you with unexpected results. Regards, Kervin
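The ATTACH trick mentioned above — one database file per table, so file locks don't collide — can be sketched like this (Python stdlib sqlite3; the file and table names are invented):

```python
import sqlite3, tempfile, os

d = tempfile.mkdtemp()
# One database file per table: a writer holding the lock on a.db
# no longer blocks writers to b.db.
for name in ("a", "b"):
    c = sqlite3.connect(os.path.join(d, name + ".db"))
    c.execute("CREATE TABLE %s(v)" % name)
    c.commit()
    c.close()

con = sqlite3.connect(os.path.join(d, "a.db"), isolation_level=None)
con.execute("ATTACH DATABASE ? AS bdb", (os.path.join(d, "b.db"),))
con.execute("INSERT INTO a VALUES (1)")
con.execute("INSERT INTO bdb.b VALUES (2)")
rows = con.execute("SELECT (SELECT v FROM a), (SELECT v FROM bdb.b)").fetchone()
```

One connection can still query across both files transparently; the win is that independent writers contend per file rather than per database.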
Re: [sqlite] What's the safest, most elegant way to copy a live db.
Hello, Seems to me if your backup program does a "BEGIN EXCLUSIVE" before doing the file copy then you should be fine. Just my guess though. I believe an exclusive transaction should ensure that you are the only writer to the database. Regards, Kervin John Duprey wrote: I'd like to copy a database that may or may not be in use. Doing a filesystem copy will not ensure a stable copy. Can I use the sqlite3 CLI and some SQL to do this, such that I can wrap it up into a script, or do I need to write my own program that gets a lock and re-creates the DB in a new file? I'd like to avoid dumping the database and importing it into a new one. My DB is 400M and growing, so I'd like to avoid dumping such a large amount of data. What I'd like is a safe binary copy. From a previous post, I have seen someone suggest this: attach 'foo.db' as bar; create table baz as select * from bar.baz; detach bar; If I wrapped this in a loop for all tables it would probably do the trick. Can this be done from the sqlite3 CLI or will it have to be done in code? I appreciate any suggestions. Thank you, -John
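The BEGIN EXCLUSIVE approach suggested here can be sketched as follows (Python stdlib sqlite3; this follows the guess in the message, not a vetted backup tool — SQLite later gained an online backup API for exactly this job):

```python
import sqlite3, shutil, tempfile, os

d = tempfile.mkdtemp()
src, dst = os.path.join(d, "live.db"), os.path.join(d, "copy.db")
con = sqlite3.connect(src, isolation_level=None)
con.execute("CREATE TABLE t(v)")
con.execute("INSERT INTO t VALUES (42)")

# Hold an exclusive lock so no other connection can write mid-copy.
con.execute("BEGIN EXCLUSIVE")
shutil.copyfile(src, dst)
con.execute("COMMIT")

value = sqlite3.connect(dst).execute("SELECT v FROM t").fetchone()[0]
```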
Re: [sqlite] Problems with threadsafe opt correction #2623
Hello, Is this the only reason for the... "database handle can only be used on the same thread that opened it" ...rule? Does Windows have that issue? If so, can we convert this to a compile time option? Eg. OS_SUPPORTS_THREADSAFE_FILE_LOCKS or similar flag which can be set to true for OSes which do not have this issue? My application is Windows only. It's also a component that can be passed around between threads. Hence I have to do a 'thread-id check' on *every* method call into the component. This rule also forces us to pass around the database filename and path rather than the handle. Since on any call, if the 'thread-id check' fails, the database has to be re-opened. Regards, Kervin D. Richard Hipp wrote: On Wed, 2005-08-31 at 12:53 +0200, Guillaume Fougnies wrote: My code is sharing a pool of SQLite connections on multiple databases between a bunch of treatment threads. Each thread pops a connection from the pool safely and push it back once finished. This works on some systems but not on others. On some versions of Linux, a thread is not able to override locks created by a different thread in the same process. When that happens, a database connection created on one thread will not be usable by a different thread. Additional information: http://www.sqlite.org/cvstrac/tktview?tn=1272 http://www.sqlite.org/cvstrac/chngview?cn=2521
Re: [sqlite] Unlucky number for the ROUND function
Do numbers with a final digit of five and rounding precision greater than zero ever round up? Bob Dankert wrote: According to that, it rounds to the nearest even number. Shouldn't 9.95 go to 10 then, and 9.85 go to 9.8? After additional testing with SQLite 3.2.2, I have the following results:

Round(9.95,1)  -> 9.9  *Rounded Down*
Round(9.85,1)  -> 9.8  *Rounded Down*
Round(9.5,0)   -> 10   *Rounded Up*
Round(9.995,2) -> 9.99 *Rounded Down*

I really see no pattern or sense to the results. Bob Envision Information Technologies Associate [EMAIL PROTECTED] v. 608.256.5680 f. 608.256.3780 -Original Message- From: Kervin L. Pierre [mailto:[EMAIL PROTECTED] Sent: Tuesday, August 30, 2005 12:18 PM To: sqlite-users@sqlite.org Subject: Re: [sqlite] Unlucky number for the ROUND function Interesting... Bankers' Rounding http://blogs.msdn.com/ericlippert/archive/2003/09/26/53107.aspx Brass Tilde wrote: From: "Bob Dankert" Using an older version which I compiled, I get 9.9, though it seems it should round up to 10.0? This may be dependent upon the math library linked into the application by the compiler used to build SQLite. Some libraries appear to now be using so-called "banker's rounding" (though there are other names), where a "5" digit is rounded up or down depending upon the digit immediately to its left. If that digit is odd, it rounds one way; if even, the other. It looks like in this case, 9.95 would round to 9.9, while 9.85 would likely round to 9.9 as well. Try rounding 9.85 and see what you get. Brad
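A likely explanation for the "no pattern" results: the values that "rounded down" are not exact halves at all once stored as binary doubles — 9.95 and 9.85 both sit slightly below their printed values — while exact halves like 9.5 follow round-half-to-even. Python's round() operates on the same IEEE doubles and reproduces the table above:

```python
from decimal import Decimal

# 9.95 has no exact binary representation; the stored double sits a bit
# below the printed value, so it rounds down under *any* tie-breaking rule.
below = Decimal(9.95) < Decimal("9.95")

r1 = round(9.95, 1)   # matches SQLite's "Rounded Down" result above
r2 = round(9.85, 1)   # same reason
r3 = round(9.5)       # an exact half: half-to-even picks the even 10
r4 = round(8.5)       # ties go to the even neighbor, i.e. 8
```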
Re: [sqlite] Unlucky number for the ROUND function
Interesting... Bankers' Rounding http://blogs.msdn.com/ericlippert/archive/2003/09/26/53107.aspx Brass Tilde wrote: From: "Bob Dankert" Using an older version which I compiled, I get 9.9, though it seems it should round up to 10.0? This may be dependent upon the math library linked into the application by the compiler used to build SQLite. Some libraries appear to now be using so-called "banker's rounding" (though there are other names), where a "5" digit is rounded up or down depending upon the digit immediately to its left. If that digit is odd, it rounds one way; if even, the other. It looks like in this case, 9.95 would round to 9.9, while 9.85 would likely round to 9.9 as well. Try rounding 9.85 and see what you get. Brad
Re: [sqlite] table or record size without reading the entire table/record?
D. Richard Hipp wrote: Are you trying to estimate the size of a table, or a single row in that table? Your words say the table but the context suggests you really want the size of a row. I need the table. But I thought that if I could get the row size, calculating the total for the table wouldn't be difficult. Then when you need to know the size, just do a query for the size. Thanks. That was my backup plan if there was no other way. Note: This will only work if the "size" column comes before the "data" column in the table definition. Thanks for the heads-up. Out of curiosity, why does the size column have to come before the data column? Regards, Kervin
[sqlite] table or record size without reading the entire table/record?
Hello, I am trying to figure out a way to return the actual or approximate size of a table in a database. Basically, in my application a table relates to an object, and I need to calculate the approximate size of an object. There is sqlite3_column_bytes(), but I understand that that function reads the value into memory before counting the bytes. Is there another way of estimating a table's size on disk? Thanks, Kervin
Re: [sqlite] How do I attach a database from C++? -- RESOLVED
Greg Stark wrote: On a related issue, does anyone know if nested transactions are planned for SQLite? We most probably won't be seeing nested transactions in the near future. The issue has come up a few times before :) It seems to be common practice to wrap SQLite with a wrapper library before using it. If this is something you did, you can just count the transaction levels yourself. That's what I've done. The catch with doing simple transaction counting is that a rollback rolls back your topmost transaction. But according to SQL Server documentation, that's how MS does it too ( I don't know how Oracle does it ). Regards, Kervin
Re: [sqlite] Richard Hipp Awarded Google-O'Reilly Open Source Award at OSCON 2005
Me too! :) Congratulations to Dr. Hipp. Open-source or commercial, all things considered, SQLite is definitely the best at what it sets out to do. Regards, Kervin Nuno Lucas wrote: Just want to add my congratulations here too. The project well deserves it! :) Best Regards, ~Nuno Lucas
Re: [sqlite] Segmentation fault on large selects
Sorry, I read your trace wrong; I thought the debugger was complaining. What does the 'where' command say? scunacc wrote: Dear Kervin, Can you run sqlite3 under dbx? You may have better luck getting a backtrace that way instead of reading the core file after the crash. eg. 'dbx -r sqlite3' or something similar. Thanks for the suggestion, but I already tried that with exactly the same results. Kind regards Derek
Re: [sqlite] Segmentation fault on large selects
scunacc wrote: I have built with debugging on, and can't do anything with the core dump: dbx Type 'help' for help. enter object file name (default is `a.out', ^D to exit): sqlite3 reading symbolic information ... [using memory image in core] Illegal instruction (reserved addressing fault) in . at 0x0 ($t1) warning: Unable to access address 0x0 from core 0x warning: Unable to access address 0x0 from core Can you run sqlite3 under dbx? You may have better luck getting a backtrace that way instead of reading the core file after the crash. eg. 'dbx -r sqlite3' or something similar. Regards, Kervin
Re: [sqlite] Multi-threading.
Mrs. Brisby wrote: chances are you can't use threads correctly either. This mailing list is an excellent example of how many "professional programmers" simply can't deal with threads- every problem they run into, it's "how do I make sqlite work with threads". If you have to ask that question, you simply have no idea what you're doing. I wasn't going to drag this thread on any longer, but just in case this view is popular... SQLite is a great library, but calling it multi-threaded is stretching the common definition of the term. I consider SQLite multi-thread "tolerant" :) It detects and allows you to deal with potential threading-related issues, as opposed to dealing with those issues for the developer ( real thread "support", in my view ). Given the good doctor's stance on threads, I am grateful for what thread tolerance is in the library :) We get all those thread-related questions because SQLite is difficult to use with threads and has sparse, sometimes conflicting documentation on the subject. Don't get me wrong, again I am not complaining; it is *way* better than the other options out there. But you can't expect someone to magically figure out an API based on one- or two-sentence definitions and comments in a header file without asking a few questions and making a few mistakes. Regards, Kervin
Re: [sqlite] Multi-threading.
Paul G wrote: richard's advice is solid. use async io/event loops if possible, separate processes if possible, threads as a last resort, in that order. the grey area is the 'if possible' test, since it's a multi-way tradeoff between performance, simplicity and provable (to an extent) correctness. i fully expect that a lot of folks *do* need to use threads and the probability of that being the case on windows is much higher than on posixish platforms. I agree with you, but it doesn't seem like you're exactly concurring with what DRH said. I'm guessing that that 'if possible' test is huge, and will very frequently fail. Why suffer the context switch when you don't have to? Would you write a non-trivial GUI program today without using threads? Why subject the users to the added latency IPC is going to introduce? The funny thing is that eventually multi-process apps turn to mmap for IPC, for performance, and then run into a lot of the same issues they'd have had to deal with if they were threaded. And as for 'threads introduce difficult-to-debug errors': isn't that the age-old performance versus complexity trade-off? Processes are easier to use, but very often perform worse than the more complex threaded application under the same conditions. That is a fact many of us cannot look past easily. PS. It's funny, this discussion seems like it's been taken right from an early-'90s newsgroup :) Regards, Kervin
Re: [sqlite] BLOB read/write in chunks?
D. Richard Hipp wrote: On Wed, 2005-05-04 at 23:31 -0400, Henrik Bruun wrote: Does Sqlite support reading/writing BLOB data in chunks? I'm considering storing very large items (200MB+) in BLOB fields. With SQLite, the entire BLOB must be read or written all at once. There is no limit on the size of BLOBs. (The I was under the impression that there was a row size limit of 1 MB. Saw this in documentation somewhere. Was that limit removed? Regards, Kervin
Re: [sqlite] sqlite wrapper to prevent locking
Ben Clewett wrote: I do not unfortunately have any place to host this work. There may be somebody on this list who knows of a place where this can be published for any members who may wish to make use of it. http://sourceforge.net/ ?
Re: [sqlite] Locking Methods
That's part of the problem I had been having... 1) You have to loop through the resultset twice: once to put it in memory, and again to process it in your application. 2) The sqlite3_prepare() and sqlite3_finalize() calls should both be in a loop as well, right? Since they can return SQLITE_BUSY and SQLITE_SCHEMA themselves. So that should be a nested loop. We now have about 25-30 lines of code, including a nested loop, just to step through a *relatively* simple resultset. I'm not complaining, just wanted to share my observation :) PS. What if sqlite *optionally* placed the resultset in memory for the user? Then they wouldn't have to worry about BUSY or SCHEMA errors whilst stepping through it. - Kervin Jay Sprenkle wrote: I prepared the statement and executed it. Preparing doesn't use callbacks and prevents SQL injection attacks. I used a C++ class to store the query results in an STL container. Brief C++ pseudocode:

string sql = "select * from blah";
sqlite3_open();
sqlite3_prepare( sql );
bool loop = true;
int retry = 0;
while ( loop && retry < 10 ) {
    switch ( sqlite3_step() ) {
        // if the database is busy, wait a short time
        // to see if it becomes available
        case SQLITE_BUSY:
        case SQLITE_LOCKED:
            retry++;
            break;
        case SQLITE_ROW:
            // get results
            retry = 0;
            p = sqlite3_column_text();
            in_memory_list.push_back( p );
            break;
        case SQLITE_DONE:
            loop = false;
            break;
        default:
            string str = "Cannot execute sql: " + sql
                       + ", Error: " + sqlite3_errmsg(db);
            throw std::runtime_error( str );
            break;
    }
}
// clean up when finished
sqlite3_finalize();
// process your list here
sqlite3_close( db );

On 4/20/05, Thomas Briggs <[EMAIL PROTECTED]> wrote: What APIs are you guys using to retrieve results and execute the subsequent updates? Are you using prepare/step or sqlite3_exec with a callback to retrieve the results? Would it be possible for you to post more detailed pseudo-code? Conceptually I think we're all on the same page; I think what we're unclear about is exactly how you're trying to accomplish this.
Thanks -Tom -Original Message- From: Ben Clewett [mailto:[EMAIL PROTECTED] Sent: Wednesday, April 20, 2005 3:32 AM To: sqlite-users@sqlite.org Subject: Re: [sqlite] Locking Methods This is exactly my problem. My version is 3.1.6. The error is SQLITE_LOCKED. Ben Jay Sprenkle wrote: I had the same trouble he did. Here's what I did that doesn't work:

select * from a into result;
foreach row in result
    ' this fails:
    update b set col = a.value;
next

But based on what I read here it's supposed to do this. On 4/19/05, Gerry Blanchette <[EMAIL PROTECTED]> wrote: Could you please elaborate your scenario? I tried a test myself but am afraid I may not have interpreted your test case properly. I have 2 tables, fred and bob, each with 1 rows. I select a column from fred and bind the value obtained from sqlite3_column_int to an update statement that operates on bob. I loop over fred via sqlite3_step, where each iteration successfully updates the row in bob. Both tables exist in the same DB, accessed via the same sqlite3 *. Have I misinterpreted your scenario somehow, as this works for me? Thank you for helping clear this up for me. -- Gerry Blanchette -Original Message- From: Ben Clewett [mailto:[EMAIL PROTECTED] Sent: Monday, April 18, 2005 4:50 AM To: sqlite-users@sqlite.org Subject: [sqlite] Locking Methods I am experiencing problems with the locking. Because SQLite uses database-level locking this causes two major problems: - I can't read through a record set and use the data to execute updates. For instance, some parsing exercise which cannot be completed using a single SQL command. I have to store all the data locally, get to the end of the query, then execute the update statements. Ben Clewett.
Re: [sqlite] Locking Methods
I think that's an excellent idea, and I'd like to help however possible if work starts on a patch. My wishlist: 1. Finer-grained locking ( row/table ) 2. Memory-resident resultsets 3. Reduction or elimination of SQLITE_SCHEMA I think memory-resident resultsets would be an excellent feature to start with; at a glance it seems simple enough ( famous last words, right? :) ). - Kervin Ben Clewett wrote: This is true - selecting * from a table of size exceeding the memory of the machine would not work in any way. Any large table would be slow. A solution used by MySQL is to have two cursors. The standard cursor returns a memory-resident table containing the result of the query. Ideal for small queries. No locking problems, easy to program with, good for concurrency. You can move forward and back through the result set, and know the dimensions of the result set. This is still very fast. Then they provide a second cursor which works like SQLite, returning a row at a time. This is faster and better for large tables. However, neither cursor locks more than a row at a time, and then only for as long as it takes to generate a copy of the row. So concurrency still works very well. I would hate to see SQLite become MySQL, there is one too many of those already :) But they may have some good ideas. Regards, Ben. Ken & Deb Allen wrote: The largest drawback of this approach is scalability -- in my case I often deal with databases with 20-30 closely related tables, many of which can have 1,000,000 or more records in them, and containing multiple relations. Copying this into memory before each query would be very time consuming. Having two copies of the database (one in memory and one on disk) and ensuring that changes are made to both as part of a transaction would be more effective, but this would require large amounts of memory! -Ken On 19-Apr-05, at 3:36 AM, Ben Clewett wrote: There are two things I am working on.
Firstly, just to loop around the execution:

do {
    rc = execute(sql);
} while ( rc == SQLITE_BUSY || rc == SQLITE_LOCKED );

This does the job, but is not very nice. It is also impossible to tell a genuine lock caused by bad programming from a lock caused by heavy load. Secondly, an alternate simple API to complete a SELECT query. My version will load the table into memory, then complete the query. That way no locking is maintained, and I can use the data for as long as I want. (This is the way MySQL and PostgreSQL work.) I can also stream this table down a TCP/IP connection, although this will be only a table of text, and require freeing after use. (Has any person done this already, to save me the work?) If my version works, I'll try and offer a patch. But I would still like to offer encouragement to the great developers of SQLite to get row locking working. For me it will transform SQLite into a very professional system. Kind regards, Ben Clewett. Will Leshner wrote: On 4/18/05, Ben Clewett <[EMAIL PROTECTED]> wrote: I see in the manual there are plans to use row locking. This would solve my problems, allowing me to load the database from multiple processes to a far greater extent without fear of SQLITE_BUSY, as well as simplifying my programs. I am currently experimenting with using triggers to do row locking and it seems to work quite well, though I don't know if it will help solve your particular problem.
Re: [sqlite] multi_thread for writing
As far as I know, SQLite does not support multiple simultaneous writers. I believe you need to be ready to handle SQLITE_BUSY and SQLITE_SCHEMA errors on every call in a multi-threaded application. SQLITE_BUSY... http://www.sqlite.org/cvstrac/wiki?p=MultiThreading SQLITE_SCHEMA... http://www.sqlite.org/faq.html#q17 The wiki page for multithreaded applications suggests that you use a loop to keep retrying on SQLITE_BUSY, but that does not work well, especially since that loop should also check for SQLITE_SCHEMA and have some count to prevent infinite loops. Also, backing out of multiple successful calls to sqlite3_step() after a single call in a nested loop fails further complicates things, I think. Try using sqlite3_busy_timeout() or sqlite3_busy_handler() instead. - Kervin RexChan(TP/HK) wrote:
> Hi all,
>
> I met the problem of multi-threaded writing in version 3.2.1. First I create
> two threads A and B, and each thread has its own db structure. Each thread
> uses the following SQL commands to do the insert action.
>
> BEGIN;
> .
> insert record 200 times
> .
> .
> END;
>
> Thread A does the insert first and Thread B inserts records while Thread A
> is doing the insert action.
>
> Then the SQLITE_BUSY error is returned to Thread A when doing the "END;" SQL
> statement. And Thread B also gets SQLITE_BUSY.
> My questions are:
>
> 1. Is it normal that the error is returned to Thread A when doing the
> "END;" SQL statement?
>
> It seems Thread A is locking the db although the insert-200-records
> action has been done and it wants to do the "END" SQL statement.
>
> Because I am using ver 2.8.16 with the same code, and the SQLITE_BUSY does not
> happen with its own db structure in each thread.
> Do I miss something to do the locking or setting some parameters? thanks!
>
> Best regards,
> Rex
Re: [sqlite] create table question
Ken & Deb Allen wrote: So, would this problem still exist if the code were to open two 'connections' to the database, issue the SELECT on one open connection and then issue the CREATE TABLE via the other open connection? I'm new to this too, but I think it depends on whether that 'db.execute()' function you are using deals with SQLITE_SCHEMA errors as suggested in... http://www.sqlite.org/faq.html#q17 or whether it uses sqlite3_exec() instead. Is that a wrapper object you wrote? Your running 'CREATE TABLE' statement, *I think*, invalidates all currently compiled statements so that they have to be recompiled. It does not matter whether it is across threads or across processes. - Kervin
Re: [sqlite] Re: using triggers to lock records
Will Leshner wrote: On Apr 6, 2005 10:43 AM, Will Leshner <[EMAIL PROTECTED]> wrote: After thinking about this a bit more, I realized that writing a trigger that didn't trigger for the locker is pretty simple. So I'm thinking that triggers would work in this situation, but I'm wondering if anybody has ever done anything like this before. I know your pain :) Finer-grained locking would be great. The only problem I see is that you'd have to fire that trigger on every write, at least to check for a lock. Would that be an acceptable performance hit for you? Another solution, if it suits your environment, would be to create a 'record' object or struct and give each object a mutex. If you maintain that that object is the only way to modify a row, and that your application keeps track of all objects ( so as not to create multiples for the same row ), then that would effectively lock the record. It should be faster too, since you don't have to hit the database to find out the row state. Good luck, Kervin
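The per-record mutex idea suggested here can be sketched with a lock registry. The names are hypothetical, and Python's threading module stands in for whatever mutex type the application actually uses:

```python
import threading

# Hypothetical registry: one mutex per record id, so every writer that
# wants a row must go through the same lock object.
class RecordRegistry:
    def __init__(self):
        self._guard = threading.Lock()
        self._locks = {}

    def lock_for(self, row_id):
        # The registry's own lock guards the dict of per-record locks.
        with self._guard:
            return self._locks.setdefault(row_id, threading.Lock())

registry = RecordRegistry()
with registry.lock_for(42):
    pass  # read-modify-write row 42 here; other threads block on this mutex

# The same id always yields the same lock object.
same = registry.lock_for(42) is registry.lock_for(42)
```

As the message notes, this only works if the registry is the single gateway to row modifications; any code path that bypasses it defeats the lock.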
Re: [sqlite] idea: sqlite3_begin() and sqlite3_end() ?
John LeSueur wrote: Maybe it's the principle of least surprise. As a new developer coming to sqlite, if there was this reference counting and loosely emulated nested transactions, and one day I made a mistake and called sqlite3_begin() twice while calling sqlite3_end() just once, nothing would be committed. But since calling sqlite3_begin() twice didn't throw an error, I would be surprised and confused. I don't know. The current behavior of the BEGIN/END SQL statements could be the default behavior of the functions, with a means supplied to turn on reference counting. At any rate, I'm just going to update my SQLite wrapper object like everyone else. It's just that... AFAIK, the lower the level at which we lock, the safer and faster the code ( in general ). SQLite counting references in the sqlite3 db object should fare better than a user doing it in their application. Right? Plus, I don't want to worry about deadlocks with yet another mutex. SQLite is a very capable library, but it almost feels like the library equivalent of assembly sometimes; it pushes so much boilerplate stuff to the user. - Kervin
[sqlite] most stable/supported C++ wrapper library?
In users' experience, what is the most stable, supported LGPL ( or freer ) SQLite wrapper library out there? I've seen http://www.sqlite.org/cvstrac/wiki?p=SqliteWrappers LiteSQL seems like the strongest contender. http://litesql.sourceforge.net/ Any preferences? Input? Thanks, Kervin
Re: [sqlite] idea: sqlite3_begin() and sqlite3_end() ?
Hello, These functions are not supposed to replace nested transactions. They just use the tools the API *already* has to make it easier to work with transactions in recursive and inter-dependent functions/methods. sqlite3_rollback() would roll back the transaction. It does nothing else. By calling sqlite3_rollback() you have requested that the transaction be cancelled. Most likely, if you rolled back the transaction, you're in your error-handling code/logic at that point, right? I.e. you'd be propagating that error to your outer functions, which would not bother continuing with their part of the transaction, for example. Even if that assumption is incorrect, you are responsible for letting the rest of your code know that you rolled back the transaction. Does this seem fair? - Kervin John LeSueur wrote: Ned Batchelder wrote: I went through this same issue in my application, and realized that rollbacks throw off the whole scheme. This proposal doesn't account for rollbacks: how would they work? If only the outermost "transaction" would truly perform a rollback, then what would an inner one do? Consider this scenario:

1. begin()
2. do_db_work()
3. begin()
4. do_db_work()
5. rollback()
6. do_db_work()
7. end()

What does line 5 do? What does line 6 do? What does line 7 do? I decided for my own work that magic nested transactions are a bad idea. To properly handle errors and roll back the database, you need to know where the edges of the transaction really are. Fully-supported nested transactions may be a good thing (I've never used them, so I don't know). But pretending that you have nested transactions when you don't is just waving your hands over some very important issues. --Ned. http://nedbatchelder.com Line 5 marks that the whole transaction that ends on line 7 should be rolled back. Sure, you lose some work, but generally speaking (depending on your application's needs), losing work is better than leaving your database in an inconsistent state.
So I agree that there are some difficult choices to be made, and that they're better left to the application code than the library, but they are choices that can be made. John
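One way to implement the "mark for rollback" behavior John describes is a counter plus a doomed-flag. The sketch below is a hypothetical illustration, not SQLite API: the actual SQL execution is stubbed out so the control flow stands alone. An inner rollback() only sets the flag; the outermost end() then issues ROLLBACK instead of COMMIT.

```c
#include <string.h>

typedef struct {
    int depth;            /* transaction nesting level */
    int rollback_flag;    /* set if any inner level requested rollback */
    char issued[64];      /* last top-level statement "executed" (stub) */
} txn_t;

/* Stub standing in for sqlite3_exec(db, sql, ...) */
static void exec2(txn_t *t, const char *sql) {
    strncpy(t->issued, sql, sizeof t->issued - 1);
}

void txn_begin(txn_t *t) {
    if (t->depth++ == 0) {       /* only the outermost call issues BEGIN */
        t->rollback_flag = 0;
        exec2(t, "BEGIN");
    }
}

/* Inner levels merely mark the whole transaction as doomed. */
void txn_rollback(txn_t *t) {
    t->rollback_flag = 1;
}

/* The outermost end() resolves the transaction one way or the other. */
void txn_end(txn_t *t) {
    if (--t->depth == 0)
        exec2(t, t->rollback_flag ? "ROLLBACK" : "COMMIT");
}
```

In Ned's seven-step scenario, line 5 sets the flag, line 6 does work that will be discarded, and line 7 rolls everything back — exactly the "lose some work, keep consistency" trade-off described above.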
[sqlite] idea: sqlite3_begin() and sqlite3_end() ?
Hello, [ Sorry for the second email, but I wanted to separate this ] Would a...

sqlite3_begin(sqlite3 *db, int type)
sqlite3_end(sqlite3 *db)

...be helpful for people counting transactions in a threaded environment? The only difference from doing the 'transaction level' counting yourself is that these functions would use the structure 'sqlite3 *db' to keep track of whether a transaction is in progress or not. The begin and end functions could be configured to begin/end the transaction *if* there isn't/is one. ie.

func1() { sqlite3_begin(...); func2(); DoSQLStuff...; sqlite3_end(...); }
func2() { sqlite3_begin(...); DoSQLStuff...; sqlite3_end(...); }

...would work in a threaded environment since each thread has its own copy of 'sqlite3 *db' anyway. func2() would also work as expected whether called inside or outside func1(). Dumb idea? Do I get to wear the dunce hat? :) - Kervin
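A minimal sketch of the counting scheme proposed above. The db_t struct and exec_stub() are hypothetical stand-ins (in the proposal the counter would live inside the sqlite3* handle and exec_stub() would be sqlite3_exec()); the stub records statements so the control flow is visible without linking SQLite.

```c
#include <string.h>

/* Hypothetical per-connection state; the proposal would put this
 * counter inside the sqlite3* handle itself. */
typedef struct {
    int txn_depth;    /* nesting level of begin/end calls */
    char log[256];    /* records which statements were "executed" */
} db_t;

/* Stub standing in for sqlite3_exec(db, sql, ...) */
static void exec_stub(db_t *db, const char *sql) {
    strcat(db->log, sql);
    strcat(db->log, ";");
}

/* Only the outermost call actually issues BEGIN. */
void demo_begin(db_t *db) {
    if (db->txn_depth++ == 0)
        exec_stub(db, "BEGIN");
}

/* Only the outermost call actually issues COMMIT. */
void demo_end(db_t *db) {
    if (--db->txn_depth == 0)
        exec_stub(db, "COMMIT");
}
```

With this shape, the func1()/func2() nesting above produces exactly one BEGIN...COMMIT pair no matter how deeply the calls recurse, and func2() still works when called on its own.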
Re: [sqlite] nested transactions?
D. Richard Hipp wrote: On Thu, 2005-03-31 at 17:08 -0500, Kervin L. Pierre wrote: Are there plans for supporting nested transactions in the future? No. Shucks. :) Instead of calling sqlite3_exec("BEGIN") and sqlite3_exec("END") directly, put them in a wrapper function that counts the number of nested invocations. Only execute the SQL at the top level. Thanks for that workaround. I will try to make that work but... It gets hairy when the API needs to be thread safe and the function prototypes are dictated to you. Eg. a plugin for a multithreaded application. One alternative seems to be to serialize access to the counter variable. But I'd like to avoid that for performance and debugging reasons. Having an internal version of each function that takes those SQLite parameters, and having the external version call those only with the sqlite helper object, seems like a fix as well. Please let me know what you think...

internal_func1( sqliteHelper *sql, int param, ... ) {
    some_Other_Object_That_Also_Uses_SQLite *otherObj;
    otherObj->internal_func1(sql, ...);
    [...]
}

exposed_func1( int param, ... ) {
    sqliteHelper *sql;
    sql->Begin();
    internal_func1(sql, param, ...);
    sql->End();
}

But that requires a lot of code for a single problem. Is there a more elegant solution? - Kervin
[sqlite] nested transactions?
Hello again, Are there plans for supporting nested transactions in the future? Would that be a difficult extension to code ( eg. if one thought they could give it a try :) )? The current restriction makes it hard to use SQLite when developing an API. Eg.

exposed_func1() {
    sqlite3_exec("BEGIN");
    [...do stuff...]
    exposed_func2();
    sqlite3_exec("END");
}

exposed_func2() {
    sqlite3_exec("BEGIN");
    [...do stuff...]
    sqlite3_exec("END");
    [...do more stuff...]
}

I could commit early, eg. before calling exposed_func2(), but on error the entire function needs to be rolled back, both inner and outer functions. Any information and/or insight would be appreciated. - Kervin
Re: [sqlite] Concurrency tutorial ( small bounty )
Marco Bambini wrote: Don't know if my mail can help you but my advice is to create a custom solution if you really need to have many simultaneous accesses to your sqlite database. Hello Marco, Thanks for your reply. Would you mind describing your concurrency solution further? I guess you have some sort of marshalling thread that holds the FIFO queue? I am interested in how you solved this problem. Thanks, Kervin
[sqlite] Concurrency tutorial ( small bounty )
Hello, I've been pulling my hair out over concurrency problems with SQLite 3. SQLITE_BUSY, SQLITE_SCHEMA, SQLITE_MISUSE, I get them all. Over the last few weeks, I've been strictly debugging SQLite problems in my application [ http://openconnector.org ]. No program logic besides fixing ( and apparently creating in the process ) SQLite errors. http://www.sqlite.org/faq.html#q17 , http://www.sqlite.org/lockingv3.html , and http://www.sqlite.org/cvstrac/wiki?p=MultiThreading are good starts but similarly do not provide enough ( any? ) real code examples. Simply, I think it would be a _great_ service to the SQLite user community if someone, or a few people, _experienced_ in working with SQLite in a massively multithreaded environment would write a short tutorial with *real code* examples explaining best practices around using the SQLite 3 API in a multithreaded environment. Just the basics really. The tutorial, and its short examples, should cover issues like: do we use a do(){...}while() loop or a busy handler to handle SQLITE_BUSY? When do you use either? How do we use the different transaction types properly? It could give real code implementations of the pseudo code examples in the concurrency documents listed above, for example. Basically it should be something of a 'mini-cookbook'-style approach to SQLite 3 concurrency. I'm willing to put up $100 US for a helpful tutorial with sufficient tested real code snippets. Not asking for something that would take more than a day or two to write. If anyone else thinks this is a good idea and would like to contribute to this bounty, that would be great. - Kervin
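As a starting point for the SQLITE_BUSY question raised above, one common shape is a bounded do/while retry loop around sqlite3_step(). In this sketch the step call is a hypothetical stub (it reports busy a fixed number of times and then succeeds) so the retry logic can be shown self-contained; real code would call sqlite3_step() and back off between attempts, or register sqlite3_busy_handler() instead.

```c
/* Result codes mirroring SQLite's (values match those in sqlite3.h). */
#define DEMO_BUSY 5
#define DEMO_DONE 101

/* Stub: pretend the database is locked for the first few attempts. */
static int busy_countdown = 3;
static int step_stub(void) {
    return busy_countdown-- > 0 ? DEMO_BUSY : DEMO_DONE;
}

/* Retry the step while the database reports busy, up to max_retries
 * attempts. Returns the final result code. */
int step_with_retry(int max_retries) {
    int rc, attempts = 0;
    do {
        rc = step_stub();   /* real code: rc = sqlite3_step(stmt); */
        /* real code would sleep/back off here before retrying */
    } while (rc == DEMO_BUSY && ++attempts < max_retries);
    return rc;
}
```

The bound matters: an unbounded loop can spin forever on a deadlock between two connections, which is one of the trade-offs such a tutorial would need to cover.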
Re: [sqlite] still struggling with "Database schema has changed" errors
Wow, that's a lot different than what I read in some of the messages in the archive. I read that we're not supposed to see SQLITE_SCHEMA errors in SQLite3 unless something very wrong happened. Thanks, Kervin Dan Kennedy wrote: There's a little bit written about SQLITE_SCHEMA errors here: http://www.sqlite.org/faq.html#q17 --- "Kervin L. Pierre" <[EMAIL PROTECTED]> wrote: Hi, Thanks for your response. I'm at wit's end with this thing. I do open the database multiple times, but exactly once per thread. I then keep the sqlite3* in thread local storage and only use that pointer on the thread it was created on. This is how I interpreted the documentation. Is that correct? I wrote a function to get a sqlite3* pointer. Any method that requires SQLite API calls GetDatabaseHandle() first...

EnterCriticalSection(&otlkcon_sqlite_cs);
databaseHandle = (sqlite3 *)TlsGetValue(otlkcon_tlsIndex);
if( databaseHandle == NULL ) {
    // Thread does not have a db handle yet
    sqlRes = sqlite3_open(databaseFilename, &databaseHandle);
    if ( sqlRes != SQLITE_OK ) {
        // error code and exit...
    }
    sqlRes = TlsSetValue( otlkcon_tlsIndex, databaseHandle );
}
LeaveCriticalSection(&otlkcon_sqlite_cs);

http://cvs.sourceforge.net/viewcvs.py/otlkcon/otlkcon0/mstore/O_IProp.cpp?view=markup Tls* functions provide thread local storage. By my reckoning, this should guarantee a strict sqlite3*<->thread relationship. Do you mind sharing or explaining your changes? Thanks, Kervin Randall Fox wrote: On Thu, 03 Feb 2005 14:15:52 -0500, you wrote: Hello, I am using SQLite 3.0.8 in a Win32 threaded environment. I keep getting random "Database schema has changed" errors even though I am using thread local storage to make sure sqlite3_open() gets called on each thread and there is a sqlite3* per thread. Has anyone had any luck with resolving SQLITE_SCHEMA errors in a threaded environment? How are you accessing the database? Do you open it multiple times and write to it? I had the same problem with the schema errors.
I was opening the database twice, and when I would create a table with one of the open instances, the other would get the schema error the next time I started a write operation. I did end up fixing it by rewriting part of the SQLITE code. Randall Fox
Re: [sqlite] compiled features at runtime? - Re: [sqlite] still struggling with "Database schema has changed" errors
Jeff Thompson wrote: The way THREADSAFE works currently is that it must be defined, but must also have a value assigned to it. The compiler switch /D THREADSAFE won't cause sqlite to have thread safe code included, while /D THREADSAFE=1 will. Ensure you're using the latter format, or... That was at least part of my problem, thanks a lot for that tip. I was using '/D THREADSAFE' to turn on multi-thread support. - Kervin
[sqlite] compiled features at runtime? - Re: [sqlite] still struggling with "Database schema has changed" errors
Thanks for your response Chris. Is there a way to find out what components are compiled in at runtime? I built sqlite myself and I did turn on that macro, but I want to double check everything at runtime. Maybe throw an error if multi-threading support is not available in the DLL. Is there a way to detect multi-threading support at runtime? Thanks again, Kervin Chris Schirlinger wrote: I am using SQLite 3.0.8 in a Win32 threaded environment. I keep getting random "Database schema has changed" errors even though I am using thread local storage to make sure sqlite3_open() gets called on each thread and a there is a sqlite3* per thread. Has anyone had any luck with resolving SQLITE_SCHEMA errors in a threaded environment? The docs for SQLite mention that you need to specifically compile the libs (DLL) with the THREADSAFE preprocessor macro set to 1 for you to be able to use SQLite in threads Not sure what the DLL's on the web site are compiled with, perhaps that's the issue?
Re: [sqlite] still struggling with "Database schema has changed" errors
Hi, Thanks for your response. I'm at wit's end with this thing. I do open the database multiple times, but exactly once per thread. I then keep the sqlite3* in thread local storage and only use that pointer on the thread it was created on. This is how I interpreted the documentation. Is that correct? I wrote a function to get a sqlite3* pointer. Any method that requires SQLite API calls GetDatabaseHandle() first...

EnterCriticalSection(&otlkcon_sqlite_cs);
databaseHandle = (sqlite3 *)TlsGetValue(otlkcon_tlsIndex);
if( databaseHandle == NULL ) {
    // Thread does not have a db handle yet
    sqlRes = sqlite3_open(databaseFilename, &databaseHandle);
    if ( sqlRes != SQLITE_OK ) {
        // error code and exit...
    }
    sqlRes = TlsSetValue( otlkcon_tlsIndex, databaseHandle );
}
LeaveCriticalSection(&otlkcon_sqlite_cs);

http://cvs.sourceforge.net/viewcvs.py/otlkcon/otlkcon0/mstore/O_IProp.cpp?view=markup Tls* functions provide thread local storage. By my reckoning, this should guarantee a strict sqlite3*<->thread relationship. Do you mind sharing or explaining your changes? Thanks, Kervin Randall Fox wrote: On Thu, 03 Feb 2005 14:15:52 -0500, you wrote: Hello, I am using SQLite 3.0.8 in a Win32 threaded environment. I keep getting random "Database schema has changed" errors even though I am using thread local storage to make sure sqlite3_open() gets called on each thread and there is a sqlite3* per thread. Has anyone had any luck with resolving SQLITE_SCHEMA errors in a threaded environment? How are you accessing the database? Do you open it multiple times and write to it? I had the same problem with the schema errors. I was opening the database twice, and when I would create a table with one of the open instances, the other would get the schema error the next time I started a write operation. I did end up fixing it by rewriting part of the SQLITE code. Randall Fox
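The pattern in the code above — lazily opening one handle per thread and caching it in thread-local storage — can be sketched portably with C11 _Thread_local in place of the Win32 TlsGetValue/TlsSetValue calls. The fake_db type and open_stub() below are hypothetical stand-ins for sqlite3* and sqlite3_open(), so the sketch compiles without SQLite; real code would also keep the critical-section guard from the original around the open.

```c
#include <stddef.h>

/* Stand-in for sqlite3*; real code would use the SQLite handle type. */
typedef struct { int id; } fake_db;

static int open_count = 0;   /* how many times "sqlite3_open" ran */
static fake_db pool[16];     /* backing storage for stub handles (demo only) */

/* Stands in for sqlite3_open(); real code would guard this with a
 * critical section, as in the original GetDatabaseHandle(). */
static fake_db *open_stub(void) {
    pool[open_count].id = open_count;
    return &pool[open_count++];
}

/* One cached handle per thread; C11 replacement for the Tls* calls. */
static _Thread_local fake_db *tls_handle = NULL;

fake_db *get_database_handle(void) {
    if (tls_handle == NULL)        /* first call on this thread: open once */
        tls_handle = open_stub();
    return tls_handle;             /* later calls reuse the same handle */
}
```

Because the cached pointer is thread-local, each thread opens exactly once and never sees another thread's handle — the strict sqlite3*<->thread relationship the message describes.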
[sqlite] still struggling with "Database schema has changed" errors
Hello, I am using SQLite 3.0.8 in a Win32 threaded environment. I keep getting random "Database schema has changed" errors even though I am using thread local storage to make sure sqlite3_open() gets called on each thread and there is a sqlite3* per thread. Has anyone had any luck with resolving SQLITE_SCHEMA errors in a threaded environment? Thanks, Kervin
[sqlite] how are people dealing with threading?
Hello, I am writing an application that uses sqlite in a multithreaded environment. The application is a plugin for Microsoft Outlook ( http://openconnector.org/ ) and hence I do not have full control over how my objects are passed amongst threads. I am trying to keep a hash map of thread-id/sqlite3 pointers per object, and then check that map before any database access. But I am getting lots of "sql schema has changed" and "file not a database" errors randomly. I have read the wiki topic on multithreading. Can a single thread call sqlite3_open() multiple times? Also, how are people getting sqlite to work with threading? Is there a simple, efficient pattern someone can recommend? Has anyone come up against the seemingly random "sql schema has changed" and "file not a database" errors in sqlite3 and fixed them? If so, how? Thanks, Kervin
RE: [sqlite] Saving bitmap image to SQLite3 record?
GetObject() returns a BITMAP struct which contains a 'LPVOID bmBits' member. I thought that would do it. I have never tried it though. GetDIBits() might also work... http://msdn.microsoft.com/library/default.asp?url=/library/en-us/gdi/bitmaps_7gms.asp - Kervin Quoting support <[EMAIL PROTECTED]>: > The GetObject() call gets basic information about the bitmap (size, etc.) > but doesn't provide anything else. I'd like to get the bytes contained in > the HBITMAP handle and possibly bind that to a record field. If anyone has > done this before, please let me know! > > Thanks, > > Dave > > > Hello, > > > > A bit on the offtopic side but would GDI function > > 'GetObject()' do? > > http://msdn.microsoft.com/library/default.asp?url=/library/en- > us/gdi/devcons_912s.asp > > Kervin > > Quoting support <[EMAIL PROTECTED]>: > > > I'd like to store bitmap images into an SQLite database, and I can't > > seem to figure out how to do it. I'm programming for Windows using > > Visual C++, and I have the bitmap image stored in a HBITMAP handle in > > my C program. > > > > My question is, how can I get the data from the HBITMAP handle and > > store it in a SQL record field? > > > > I understand how to bind blob data (which I think would apply in this > > case), however, I don't know what data to bind using the HBITMAP > > handle. > > > > Any advice would be appreciated! > > > > Thanks, > > > > Dave > > > > > > > > >
Re: [sqlite] Saving bitmap image to SQLite3 record?
Hello, A bit on the offtopic side but would GDI function 'GetObject()' do? http://msdn.microsoft.com/library/default.asp?url=/library/en-us/gdi/devcons_912s.asp Kervin Quoting support <[EMAIL PROTECTED]>: > I'd like to store bitmap images into an SQLite database, and I can't seem to > figure out how to do it. I'm programming for Windows using Visual C++, and I > have the bitmap image stored in a HBITMAP handle in my C program. > > My question is, how can I get the data from the HBITMAP handle and store it > in a SQL record field? > > I understand how to bind blob data (which I think would apply in this case), > however, I don't know what data to bind using the HBITMAP handle. > > Any advice would be appreciated! > > Thanks, > > Dave > >
Re: [sqlite] sqlite3_trace args?
Clears things up. Thanks! - Kervin Quoting Christian Smith <[EMAIL PROTECTED]>: > On Wed, 6 Oct 2004 [EMAIL PROTECTED] wrote: > > > > >Thanks, > > > >Question, why do transactions report the error whilst CREATE > >on its own returns success but secretly fails? > > Because the actual CREATE succeeded. The COMMIT couldn't happen because of > the ongoing VM from the non-finalized VM, hence the error. But within that > transaction, the CREATE'd table exists. > > > > > >Kervin > > > > Christian > > > > > >Quoting Christian Smith <[EMAIL PROTECTED]>: > > > >> On Tue, 5 Oct 2004 [EMAIL PROTECTED] wrote: > >> > >> > > >> > > >> >Hello, > >> > > >> >I am trying to debug my application ( Sqlite reports no errors > >> >after a CREATE query, but the table does not exist ). > >> > >> > >> Have you done this in a transaction? Check you haven't issued a BEGIN > >> statement, and/or COMMIT the current transaction. > >> > >> > >> > > >> >I'd like to use the sqlite3_trace() function, but the documentation > >> >does not describe the expected args, neither do the mail archives. > >> > > >> >Can someone explain to me briefly what the xTrace() args are at > >> >runtime, and the args for sqlite3_trace() as well? > >> > >> > >> "Use the source, Luke..." > >> > >> xTrace is a callback function called for every SQL block executed. Args to > >> xTrace are: > >>void * pArg : User pointer > >>const char * sql : SQL being executed > >> > >> Args to sqlite3_trace are: > >>sqlite3 * db : SQLite database > >>void (*xTrace)(void*,const char*) : Callback function described > >> above. > >> void * pArg : User pointer passed to xTrace > >> > >> > >> > > >> >Thanks, > >> >Kervin > >> > > >> > >> -- > >> /"\ > >> \ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL > >> X - AGAINST MS ATTACHMENTS > >> / \ > >> > > > > > > > > -- > /"\ > \ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL > X - AGAINST MS ATTACHMENTS > / \ >
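The two signatures Christian describes can be made concrete with a small self-contained mock. The mock_db struct, mock_trace(), and mock_exec() below are hypothetical stand-ins for sqlite3, sqlite3_trace(), and sqlite3_exec(); only the xTrace contract itself — a user pointer plus the SQL text — is exercised, so nothing here needs the SQLite headers.

```c
#include <string.h>

/* The xTrace callback type, as described above: user pointer + SQL text. */
typedef void (*trace_fn)(void *pArg, const char *sql);

/* Mock connection holding the registered callback. */
typedef struct {
    trace_fn xTrace;
    void *pArg;
} mock_db;

/* Stand-in for sqlite3_trace(db, xTrace, pArg). */
void mock_trace(mock_db *db, trace_fn fn, void *pArg) {
    db->xTrace = fn;
    db->pArg = pArg;
}

/* Stand-in for sqlite3_exec(): invokes the trace callback with the SQL
 * before "running" it, which is when the real callback fires too. */
void mock_exec(mock_db *db, const char *sql) {
    if (db->xTrace)
        db->xTrace(db->pArg, sql);
}

/* Example callback: copy the traced SQL into a caller-supplied buffer. */
void capture_sql(void *pArg, const char *sql) {
    strncpy((char *)pArg, sql, 127);
}
```

A callback like capture_sql() is exactly the sort of thing that would have exposed the "CREATE succeeds but the table never appears" problem: the traced SQL shows which statements actually reached the engine.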
Re: [sqlite] sqlite3_trace args?
Thanks, Eventually I did get into the source. I was just hoping there was more authoritative documentation out there, other than my interpretation of what the source may be doing. Transactions solved the problem, thanks. Funny, it didn't help till I put the transaction BEGIN, CREATE, and COMMIT statements in separate sqlite3_step() calls. Then the COMMIT warned about an ongoing statement. Turned out my code, after error conditions, left the function before sqlite3_finalize(). Question, why do transactions report the error whilst CREATE on its own returns success but secretly fails? Kervin Quoting Christian Smith <[EMAIL PROTECTED]>: > On Tue, 5 Oct 2004 [EMAIL PROTECTED] wrote: > > > > > >Hello, > > > >I am trying to debug my application ( Sqlite reports no errors > >after a CREATE query, but the table does not exist ). > > > Have you done this in a transaction? Check you haven't issued a BEGIN > statement, and/or COMMIT the current transaction. > > > > > >I'd like to use the sqlite3_trace() function, but the documentation > >does not describe the expected args, neither do the mail archives. > > > >Can someone explain to me briefly what the xTrace() args are at > >runtime, and the args for sqlite3_trace() as well? > > > "Use the source, Luke..." > > xTrace is a callback function called for every SQL block executed. Args to > xTrace are: > void * pArg : User pointer > const char * sql : SQL being executed > > Args to sqlite3_trace are: > sqlite3 * db : SQLite database > void (*xTrace)(void*,const char*) : Callback function described > above. > void * pArg : User pointer passed to xTrace > > > > > >Thanks, > >Kervin > > > > -- > /"\ > \ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL > X - AGAINST MS ATTACHMENTS > / \ >
[sqlite] free memory from sqlite3_column_blob?
Hello again, How do I free memory returned from sqlite3_column_blob()? sqlite3_free() segfaults on that memory, and I don't see anything in the documentation about this. Thanks, Kervin
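For reference, the answer the SQLite documentation gives: the pointer returned by sqlite3_column_blob() is owned by SQLite and must never be freed by the caller (hence the segfault); it stays valid only until the next sqlite3_step(), sqlite3_reset(), or sqlite3_finalize() on that statement, so callers who need the data longer copy it out. A sketch of that copy-out pattern, with the two column accessors replaced by hypothetical stubs so it stands alone:

```c
#include <stdlib.h>
#include <string.h>

/* Stubs standing in for sqlite3_column_blob()/sqlite3_column_bytes(). */
static const unsigned char fake_blob[] = {0xDE, 0xAD, 0xBE, 0xEF};
static const void *column_blob_stub(void)  { return fake_blob; }
static int column_bytes_stub(void)         { return (int)sizeof fake_blob; }

/* Copy the blob into caller-owned memory. The caller frees THIS copy,
 * never the pointer SQLite handed back. */
unsigned char *copy_blob(int *n_out) {
    const void *p = column_blob_stub();  /* real: sqlite3_column_blob(stmt, i) */
    int n = column_bytes_stub();         /* real: sqlite3_column_bytes(stmt, i) */
    unsigned char *copy = malloc((size_t)n);
    if (copy)
        memcpy(copy, p, (size_t)n);
    *n_out = n;
    return copy;                         /* free() this when done */
}
```

Note the read order: sqlite3_column_blob() first, then sqlite3_column_bytes(), since the blob call can trigger a type conversion that changes the byte count.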
[sqlite] sqlite3_trace args?
Hello, I am trying to debug my application ( Sqlite reports no errors after a CREATE query, but the table does not exist ). I'd like to use the sqlite3_trace() function, but the documentation does not describe the expected args, and neither do the mail archives. Can someone explain to me briefly what the xTrace() args are at runtime, and the args for sqlite3_trace() as well? Thanks, Kervin
Re: [sqlite] why remove sqlite_encode() ?
Thanks for both responses. I have converted to the new API. - Kervin Quoting "D. Richard Hipp" <[EMAIL PROTECTED]>: > [EMAIL PROTECTED] wrote: > > > > Question, why have the encode and decode functions been removed? > > Did they perform badly? Are there bugs? Caveats? > > > > SQLite version 2.8 and earlier could not (easily) store binary > data - data with embedded \000 characters. Thus the encode/decode > routines were provided to transform data so that it contained no > \000 characters. > > SQLite version 3.0 can store binary data without difficulty. > There is no longer a need to encode binary data prior to storing > it. The encode/decode routines became superfluous and were removed. > > -- > D. Richard Hipp -- [EMAIL PROTECTED] -- 704.948.4565 > >
[sqlite] why remove sqlite_encode() ?
Hello, I am new to sqlite. So far I've been very impressed by it. We are using it as the backend of an open-source MAPI message store. Still struggling with locking issues, but looking alright. Question, why have the encode and decode functions been removed? Did they perform badly? Are there bugs? Caveats? We have incorporated the old source into our project, though I hate to rely on dead code. Should we switch to base64? One very attractive feature of sqlite_encode was that it did minimal encoding, hence it was more efficient than base64. Thanks, Kervin