On Dec 11, 2007, at 2:10 PM, [EMAIL PROTECTED] wrote:
The limits in SQLite (introduced in version 3.4.0) were added
at the request of the Google Gears developers. Consider the
situation that Gears and similar applications (such as Adobe AIR)
are in: they have to accept generic SQL from untrusted sources.
On Dec 11, 2007, at 2:10 PM, [EMAIL PROTECTED] wrote:
You should normally not be inserting megabyte-sized blobs and
strings using raw SQL. Instead, use bound parameters:
    sqlite3_prepare_v2(db, "INSERT INTO tablexyz VALUES(:blobcontent)", -1, &pStmt, 0);
    sqlite3_bind_blob(pStmt, 1, pBlobContent, nBlobContent, SQLITE_STATIC);
On Dec 11, 2007, at 11:03 AM, Joe Wilson wrote:
If this is intentional, what is the recommended replacement
for .dump/.load for large rows?
You have to recompile with a large value for SQLITE_MAX_SQL_LENGTH
via a compiler -D flag or other means.
Monotone encountered this issue as well for dumps of large rows.
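Concretely, one way to apply that -D flag is when compiling the amalgamation; the 50000000 (50 MB) figure here is only an illustrative value, and the extra link libraries are the usual ones for a Linux build:

```shell
# Rebuild the sqlite3 shell with a raised SQL-length cap so that
# .dump/.load can round-trip rows holding multi-megabyte blobs.
gcc -DSQLITE_MAX_SQL_LENGTH=50000000 -o sqlite3 shell.c sqlite3.c -lpthread -ldl
```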
I notice that SQLite 3.4.0 and later impose hard limits on some
sizes. I'm running into a problem where a .dump/.load cycle fails on
a database with columns that have blobs which are about 2MB in size.
Looking at the source for 3.5.3 (I can't find a tarball of 3.4 on the
web site, but I'm u…