On Dec 11, 2007, at 11:03 AM, Joe Wilson wrote:

If this is intentional, what is the recommended replacement
for .dump/.load for large rows?

You have to recompile with a large value for SQLITE_MAX_SQL_LENGTH
via a compiler -D flag or other means.

Monotone encountered this issue as well for dumping/restoring databases
with large BLOBs:

http://lists.gnu.org/archive/html/monotone-devel/2007-09/msg00246.html

I think the default value is too small, but as long as you're able to
compile/use your own library, it's not too much trouble.
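Joe's recompile suggestion above can be sketched as a build command. This is only an illustrative sketch: the amalgamation file names, the chosen limit value, and the linker flags are assumptions on my part, not something from the original message.

```shell
# Sketch: rebuild the sqlite3 shell from the amalgamation sources with a
# larger compile-time SQL statement length limit (default is 1,000,000 bytes).
# File names and the 1e9 value here are assumed, not prescribed.
gcc -DSQLITE_MAX_SQL_LENGTH=1000000000 -o sqlite3 shell.c sqlite3.c -lpthread -ldl
```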

Joe,

Thanks for the response and sorry for not being 100% clear in my initial inquiry.

I do realize that one possible solution is to recompile SQLite (even if that isn't practical for my current problem).

What I was really asking is this:

Is the 1 MB limit on SQL statement length intentional?

Per my previous message, the comment in the source disagrees with the value.

Also, at the default value, .dump/.load will only support rows of about 1/2 MB, since .dump writes BLOBs as hex literals, doubling their size, while the default limit for BLOB columns is 1 GB.
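The hex-expansion arithmetic above can be checked directly. This is a minimal sketch, assuming .dump emits BLOBs as X'...' hex literals (two characters per byte), which is where the factor of two comes from:

```python
# Compile-time default for SQLITE_MAX_SQL_LENGTH, in bytes (1 MB).
SQLITE_MAX_SQL_LENGTH = 1_000_000

# Each BLOB byte becomes two hex characters in a .dump X'...' literal.
HEX_EXPANSION = 2

# Largest BLOB whose INSERT statement fits under the default SQL length limit
# (ignoring the small overhead of the surrounding INSERT syntax).
max_dumpable_blob = SQLITE_MAX_SQL_LENGTH // HEX_EXPANSION
print(max_dumpable_blob)  # about 500,000 bytes, i.e. roughly 1/2 MB
```

So the practical .dump/.load row limit is roughly three orders of magnitude below the 1 GB default BLOB limit.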

In other words, independent of the solution to my current problem, should the default value be changed in the trunk version of SQLite?

Jim


