I notice that SQLite 3.4.0 and later impose compile-time limits on some sizes. I'm running into a problem where a .dump/.load cycle fails on a database whose columns contain BLOBs of roughly 2 MB.

Looking at the source for 3.5.3 (I can't find a tarball of 3.4 on the web site, but I'm using 3.4 since that is what ships with Mac OS X 10.5), I see:

        /*
        ** The maximum length of a TEXT or BLOB in bytes.   This also
        ** limits the size of a row in a table or index.
        **
        ** The hard limit is the ability of a 32-bit signed integer
        ** to count the size: 2^31-1 or 2147483647.
        */
        #ifndef SQLITE_MAX_LENGTH
        # define SQLITE_MAX_LENGTH 1000000000
        #endif

and more importantly:

        /*
        ** The maximum length of a single SQL statement in bytes.
        ** The hard limit here is the same as SQLITE_MAX_LENGTH.
        */
        #ifndef SQLITE_MAX_SQL_LENGTH
        # define SQLITE_MAX_SQL_LENGTH 1000000
        #endif

Is the comment wrong, or the source? The default value here is not the same as SQLITE_MAX_LENGTH's default; it is a thousand times smaller (1,000,000 vs. 1,000,000,000). That would explain the failure I'm seeing: a 2 MB blob dumped as a hex literal becomes roughly 4 MB of text inside a single INSERT statement, well over the 1,000,000-byte statement limit.

If this is intentional, what is the recommended replacement for .dump/.load for large rows?
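
For what it's worth, here is a rough sketch of the sort of replacement I have in mind: copying the oversized rows with a small C program and binding the blob via sqlite3_bind_blob(), on the assumption that a bound value is only subject to SQLITE_MAX_LENGTH and never has to fit inside the statement text. The table and column names below are made up for illustration:

        /* Hypothetical schema: CREATE TABLE t(id INTEGER PRIMARY KEY, data BLOB) */
        #include <sqlite3.h>

        static int copy_blob_row(sqlite3 *dst, sqlite3_int64 id,
                                 const void *blob, int nBytes){
          sqlite3_stmt *pStmt = 0;
          int rc = sqlite3_prepare_v2(dst,
              "INSERT INTO t(id, data) VALUES(?1, ?2)", -1, &pStmt, 0);
          if( rc!=SQLITE_OK ) return rc;

          sqlite3_bind_int64(pStmt, 1, id);
          /* SQLITE_TRANSIENT tells SQLite to make its own copy of the bytes. */
          sqlite3_bind_blob(pStmt, 2, blob, nBytes, SQLITE_TRANSIENT);

          rc = sqlite3_step(pStmt);     /* SQLITE_DONE on success */
          sqlite3_finalize(pStmt);
          return rc==SQLITE_DONE ? SQLITE_OK : rc;
        }

If I understand the limits correctly, binding sidesteps the statement-length limit because the blob bytes travel through the bind API rather than through the SQL text.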

Thanks,
Jim
