Re: [sqlite] .dump/.load not workin in 3.4.0 or later for "large" rows?

2007-12-11 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote: > I think those exceedingly rare programs that need a larger > SQL statement length limit can include their own copy of > sqlite3.c. It does not take up that much space, after all. It's easy enough to recompile with the new setting once you're aware of it. But

Re: [sqlite] .dump/.load not workin in 3.4.0 or later for "large" rows?

2007-12-11 Thread drh
Joe Wilson <[EMAIL PROTECTED]> wrote: > --- [EMAIL PROTECTED] wrote: > > Yes, this does create problems for .dump/.load in the shell. > > But, as has been pointed out, you can work around it using > > a compile-time switch: > > > > gcc -DSQLITE_MAX_SQL_LENGTH=10 shell.c sqlite3.c -o

Re: [sqlite] .dump/.load not workin in 3.4.0 or later for "large" rows?

2007-12-11 Thread Jim Correia
On Dec 11, 2007, at 2:10 PM, [EMAIL PROTECTED] wrote: The limits in SQLite (introduced in version 3.4.0) were added at the request of the Google Gears developers. Consider the situation that Gears and similar applications (such as Adobe AIR) are in. They have to accept generic SQL from

Re: [sqlite] .dump/.load not workin in 3.4.0 or later for "large" rows?

2007-12-11 Thread Joe Wilson
--- [EMAIL PROTECTED] wrote: > Yes, this does create problems for .dump/.load in the shell. > But, as has been pointed out, you can work around it using > a compile-time switch: > > gcc -DSQLITE_MAX_SQL_LENGTH=10 shell.c sqlite3.c -o sqlite3 > > I should probably modify the makefile

Re: [sqlite] .dump/.load not workin in 3.4.0 or later for "large" rows?

2007-12-11 Thread Jim Correia
On Dec 11, 2007, at 2:10 PM, [EMAIL PROTECTED] wrote: You should normally not be inserting megabyte-sized blobs and strings using raw SQL. Instead, use bound parameters: sqlite3_prepare("INSERT INTO tablexyz VALUES(:blobcontent)"); sqlite3_bind_blob(pStmt, 1, pBlobContent,
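The snippet above shows the C API (`sqlite3_prepare` / `sqlite3_bind_blob`). The same idea, using Python's built-in `sqlite3` module rather than the C API, is sketched below: binding the blob as a parameter keeps it out of the SQL text entirely, so the statement stays tiny no matter how large the blob is and SQLITE_MAX_SQL_LENGTH never comes into play. The table and column names are just illustrative.

```python
import sqlite3

# In-memory database; a bound parameter carries the blob, so the SQL
# text itself is only a few dozen bytes regardless of blob size.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tablexyz (blobcontent BLOB)")

blob = b"\x00\x01\x02" * (1024 * 1024)  # ~3 MB payload
conn.execute("INSERT INTO tablexyz VALUES (?)", (blob,))

# The blob round-trips intact.
(roundtrip,) = conn.execute("SELECT blobcontent FROM tablexyz").fetchone()
print(roundtrip == blob)
```

This is exactly why the advice in the thread is to use bound parameters for megabyte-sized values: the length limit applies to the SQL string, not to the bound data.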

Re: [sqlite] .dump/.load not workin in 3.4.0 or later for "large" rows?

2007-12-11 Thread drh
Jim Correia <[EMAIL PROTECTED]> wrote: > > Is a 1MB limit on the SQL intentional? > > Per my previous message, the comment in the source disagrees with the > value. > > Also, at the default value, .dump/.load will only support rows of > about 1/2 MB (to account for hex expansion), while the

Re: [sqlite] .dump/.load not workin in 3.4.0 or later for "large" rows?

2007-12-11 Thread Jim Correia
On Dec 11, 2007, at 11:03 AM, Joe Wilson wrote: If this is intentional, what is the recommended replacement for .dump/.load for large rows? You have to recompile with a large value for SQLITE_MAX_SQL_LENGTH via a compiler -D flag or other means. Monotone encountered this issue as well for

Re: [sqlite] .dump/.load not workin in 3.4.0 or later for "large" rows?

2007-12-11 Thread Joe Wilson
--- Jim Correia <[EMAIL PROTECTED]> wrote: > I notice that SQLite 3.4.0 and later impose hard limits on some > sizes. I'm running into a problem where a .dump/.load cycle fails on > a database with columns that have blobs which are about 2MB in size. > > Looking at the source for 3.5.3 (I

[sqlite] .dump/.load not workin in 3.4.0 or later for "large" rows?

2007-12-11 Thread Jim Correia
I notice that SQLite 3.4.0 and later impose hard limits on some sizes. I'm running into a problem where a .dump/.load cycle fails on a database with columns that have blobs which are about 2MB in size. Looking at the source for 3.5.3 (I can't find a tarball of 3.4 on the web site, but I'm
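The hex-expansion effect described in the thread can be demonstrated directly. A minimal sketch using Python's built-in `sqlite3` module (not the shell's `.dump`, but its dump output has the same shape): the dump renders a blob as an `X'....'` hex literal, so the generated INSERT statement is a bit more than twice the blob's size. That is why, with a 1 MB SQLITE_MAX_SQL_LENGTH, rows with blobs of only about 0.5 MB already fail to reload.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (b BLOB)")
blob = bytes(range(256)) * 4096  # 1 MB blob
conn.execute("INSERT INTO t VALUES (?)", (blob,))

# iterdump() emits the blob as an X'....' hex literal: two hex digits
# per byte, so the INSERT statement exceeds twice the blob's length.
insert = next(s for s in conn.iterdump() if s.startswith("INSERT"))
print(len(insert) > 2 * len(blob))
```

So a 2 MB blob, as in the report above, produces a dump statement of over 4 MB, well past the default statement length limit.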