On 8/2/16, 梨田 <[email protected]> wrote:
> Dear friend:
>
> Hi, I am a SQLite (3.7.7.1) user, and I have a question for you. I find
> that when the data in the database is larger than tens of megabytes, it
> takes 5~10 s to read it. Is that much time reasonable? Situation: one
> history table, one connection, ARM Cortex-A8 at 700 MHz, embedded Linux,
> database stored in flash storage.
See https://www.sqlite.org/intern-v-extern-blob.html

Many people like to store very large BLOBs in external files, then just
store the filename in the database. As you can see from the chart on the
page above, for BLOBs of about 100K or smaller, it is generally faster to
store the BLOB in the database file. But for BLOBs larger than about
100K, it can be faster to store the BLOB as a separate file outside the
database.

To optimize reading a BLOB from the database, set your page size to 4096
or 8192.

Also consider using the incremental BLOB I/O interfaces
(https://www.sqlite.org/c3ref/blob_open.html) rather than trying to read
the whole BLOB all at once. Reading the BLOB incrementally uses much less
memory, and can therefore be faster on memory-constrained devices.

Another factor: can you upgrade to the latest version of SQLite? As you
can see in the graph at
https://www.sqlite.org/graphs/cpucycles-20160801.jpg, newer versions of
SQLite use about half the number of CPU cycles compared to version
3.7.7.1 from 2011.

--
D. Richard Hipp
[email protected]
_______________________________________________
sqlite-users mailing list
[email protected]
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users

