Re: [sqlite] NFS--how to test?
On Nov 8, 2008, at 11:06 AM, Joshua Paine wrote:

> Joshua Paine wrote:
>> Some web hosts, like DreamHost and Mosso, provide only NFS-based
>> storage. I would very much like to know if it's possible to run SQLite
>> safely on these (Mosso, actually), so I need some way of testing if the
>> NFS implementation is adequate.
>
> A quickie port of my app to MySQL only took ~30 min, so I worried about
> that a bit more than necessary. But I found to my surprise that for my
> smallish data set, without much attempt at optimization either way,
> MySQL-over-internal-network doesn't seem to have any substantial speed
> advantage over SQLite-over-NFS on Mosso. SQLite may even be faster.
>
> So now that I know performance is a toss-up, it would be really nice to
> know if I'm safe using SQLite on NFS. Anyone have any thoughts? What
> kind of race conditions should I be simulating? Is it enough, do you
> think, to make a simple page that runs a few inserts and try to load it
> with N clients simultaneously a few [dozen? hundred? thousand?] times?

Simultaneous access over NFS will *not* work. Lack of Unix file
semantics, caching effects, and locking issues prevent it. lockd cannot
be depended on in a busy environment. Access will have to be serialized
somehow. Even with serialized access there are no guarantees, since
behavior varies between NFS server implementations.

Play around with it, and see how it goes. Find out if they are using
NFSv2 or v3, as v2 has some nasty races with exclusive creates.

Peter

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
Re: [sqlite] The APress SQLite Book
On Jun 18, 2008, at 8:01 AM, Stefan Arentz wrote:

> Sorry if this is a little offtopic, but I am curious what other
> people think.
>
> Is it just me, or is the APress book 'The Definitive Guide to SQLite'
> not very high quality?
>
> First, the index is completely unusable. The index is grouped by
> subject, so if you want to, for example, look up where the book talks
> about sqlite3_reset, you first have to know that it is grouped under
> 'Prepared Query'. This is completely useless for looking things up,
> and I keep having to do a full index scan (heh) to find things. My
> copy of the book mentions 'Find it faster at
> http://superindex.apress.com !' but that site gives a 404.

I found the book useful as an introduction. It is probably an even
better introduction for folks who haven't fiddled with databases
before. Some of the internals discussion is interesting. Beyond that it
isn't useful as a reference, in large part because of the very poor
index you mention.

Peter
Re: [sqlite] Writing double into a socket file
Alex Katebi wrote:

> Peter,
>    I am using my own implementation. I found RPC and others too
> complicated to use, and they did not give me enough control over the
> transport layer. I need my socket to be non-blocking, and I am using
> epoll(), which is very efficient.

XDR is a separate specification and can be used independently of RPC.
Python has a library for dealing with it. XDR isn't simple, but it does
provide the ability to pass complex structures and types between
different kinds of machines. You can even create things like linked
lists that can be sent over the wire.

If that doesn't work for you, there are other options like XML, which
comes with its own complexity and cost.

Cheers,
Peter
Re: [sqlite] Writing double into a socket file
Alex Katebi wrote:

> I am trying to implement remote procedure calls (RPC) for the SQLite
> API to be used in my application.
> In particular, sqlite3_column_double() returns a floating point
> double. How can I write this double value into a TCP socket?
> I have tried writing 8 bytes as integer values, but the received
> value at the other end of the socket is incorrect.
> I don't have a lot of experience with real numbers. Can someone help?

Are you using something like ONC or Sun RPC, or are you rolling your
own? XDR already handles these types of problems for you.

Peter
Re: [sqlite] Question on Blobs
On Feb 27, 2008, at 7:35 PM, Mike McGonagle wrote:

> Wow, Peter, didn't expect that anyone would go to the trouble of
> writing a program on the spot

I didn't. :-) That was just a snippet of something I wrote for myself
when I first started playing with SQLite.

> Just curious, but from those few things that I have seen, it appears
> that you can only put a Blob into the DB if it is already on disc,
> right? All three examples I have seen passed the filename to the
> database, and one of them was working within a server context, so I
> wasn't sure how the local filename would be of any use to a machine
> that is in another part of the room (or anywhere else...).

The insert statement has two parameters. One of them happens to be the
filename and is bound as a text parameter. That's just there as a means
to identify the images and allow for wildcard searches. The actual file
data is loaded into memory by the code, not by SQLite itself. You just
bind the binary blob to the appropriate parameter with the address of
the loaded data. Whether the image data was loaded from a file or
fetched over a TCP connection doesn't matter; the method of storing the
data is the same.

Peter
Re: [sqlite] Question on Blobs
On Feb 27, 2008, at 4:48 PM, Mike McGonagle wrote:

> Hello all,
> I was hoping that someone might share some tips on working with
> Blobs? I would like to be able to store some images and sound files
> in a database, but never having dealt with them, I am kind of at a
> loss for some examples. I have looked on the web, and there are few
> examples that were of use.

Well, I wrote a quick and dirty program for stuffing image files into a
database. You just provide a directory and it stat()s each file,
allocates enough space for the image data, then loads it from disk. The
SQL statement is something like:

char* sql = "insert into i (name, data) values (?, ?);";

Of course if your images are huge this method could be problematic. I
believe SQLite supports an incremental way to do this but I haven't
looked at those calls yet.

while ( (dentry = readdir(dir)) != NULL) {
   if (dentry->d_name[0] == '.')
      continue;

   /* clean up from the previous iteration */
   if (fd != -1) {
      close(fd);
      fd = -1;
   }
   if (data != NULL) {
      free(data);
      data = NULL;
   }

   snprintf(fname, sizeof(fname), "%s/%s", newdir, dentry->d_name);
   stat(fname, &sb);

   if ( (data = malloc(sb.st_size)) == NULL) {
      fprintf(stderr, "malloc() failed\n");
      sqlite3_finalize(stmt);
      sqlite3_close(db);
      exit(1);
   }
   if ( (fd = open(fname, O_RDONLY)) == -1) {
      fprintf(stderr, "open() failed\n");
      sqlite3_finalize(stmt);
      sqlite3_close(db);
      exit(1);
   }
   if ( (retval = read(fd, data, sb.st_size)) == -1) {
      fprintf(stderr, "read() failed\n");
      sqlite3_finalize(stmt);
      sqlite3_close(db);
      exit(1);
   }
   if (retval != sb.st_size) {
      fprintf(stderr, "short read()\n");
      sqlite3_finalize(stmt);
      sqlite3_close(db);
      exit(1);
   }

   rc = sqlite3_bind_text(stmt, 1, dentry->d_name, dentry->d_namlen,
                          SQLITE_STATIC);
   if (rc != SQLITE_OK) {
      fprintf(stderr, "sqlite3_bind_text() %s\n", sqlite3_errmsg(db));
      sqlite3_finalize(stmt);
      sqlite3_close(db);
      exit(1);
   }
   rc = sqlite3_bind_blob(stmt, 2, data, sb.st_size, SQLITE_STATIC);
   if (rc != SQLITE_OK) {
      fprintf(stderr, "sqlite3_bind_blob() %s\n", sqlite3_errmsg(db));
      sqlite3_finalize(stmt);
      sqlite3_close(db);
      exit(1);
   }
   rc = sqlite3_step(stmt);
   if (rc != SQLITE_DONE) {
      fprintf(stderr, "sqlite3_step() %s\n", sqlite3_errmsg(db));
      sqlite3_finalize(stmt);
      sqlite3_close(db);
      exit(1);
   }
   sqlite3_reset(stmt);
}
Re: [sqlite] LIKE clauses with prepared statements
On Feb 12, 2008, at 3:51 PM, Igor Tandetnik wrote:

> Diego Souza <[EMAIL PROTECTED]> wrote:
>> I wasn't able to figure this out. I'm trying to execute a query like
>> this:
>>
>> SELECT columns FROM table WHERE column LIKE ?
>>
>> However, I'd like to use % or _ placeholders. For instance in the
>> following code:
>>
>> sqlite3_prepare_v2(db, "SELECT columns FROM table WHERE column LIKE
>> ?", -1, &stmt, 0);
>> sqlite3_bind_text16(stmt, 1, "myutf16txt", bytes, SQLITE_STATIC);
>>
>> How do I insert the % stuff?
>
> You can make them part of the parameter value:
>
> sqlite3_bind_text16(stmt, 1, "%myutf16txt%", bytes, SQLITE_STATIC);
>
> Or, you can change the statement to something like this:
>
> SELECT columns FROM table WHERE column LIKE '%' || ? || '%';
>
> Igor Tandetnik

Also, don't forget to carefully read the notes about the optimizer. In
particular:

   The right-hand side of the GLOB or LIKE operator must be a literal
   string value that does not begin with a wildcard. If the right-hand
   side is a parameter that is bound to a string, then no optimization
   is attempted.

So if you bind the parameter then the optimization is disabled, which
may be important depending on what you are doing. In my case I needed
fast lookups from a large db, so I had to make the LIKE/GLOB pattern
part of the SQL statement I was preparing.

Peter
Re: [sqlite] odd behavior difference between LIKE and GLOB
On Oct 10, 2007, at 2:12 PM, [EMAIL PROTECTED] wrote:

> "Peter A. Friend" <[EMAIL PROTECTED]> wrote:
>> As near as I can tell from the documentation, the only difference
>> between LIKE and GLOB is the wildcard character and that LIKE is
>> case-insensitive (unless configuration overrides that). Is there
>> some detail about the behavior of LIKE that I have missed?
>
> See http://www.sqlite.org/optoverview.html#like_opt

Ah, I had read that, but obviously not closely enough. So basically,
since the SQL standard requires LIKE to be case-insensitive and the
default collating sequence is BINARY, the LIKE optimization gets
disabled. The only way to allow LIKE to be optimized *and* provide a
case-insensitive match is to use the NOCASE collation.

Many thanks,
Peter