On Mon, 24 Apr 2006, Tony Fenleish wrote:

>I've been developing an app on an embedded Linux device with 128 MB of RAM,
>although the available RAM for queries is less than that.  Now that the
>databases are getting a little larger (120 MB), I'm having problems with
>some queries that have large results.  I watch my memory drop completely,
>and then my sqlite client is killed.  As I understand it, the entire
>database is not loaded into memory, just the pages that are needed.
>
>According to the documentation, 256 bytes of memory are needed for every 1 MB
>of data.  Does anyone have insights into the memory limitations of SQLite?
>

IIRC, the 256 bytes of memory per 1MB of data is used to track dirty pages
for rollback. For your 120 MB database that is only about 30 KB, so it is
not what is exhausting your RAM.

The memory footprint of SQLite is controlled primarily by:
- The page cache. It defaults to up to 2000 pages; at the default 1K page
  size, that is up to 2MB of cached page data. This number can be reduced
  as needed with "PRAGMA cache_size=<N>".
- The temporary store, which can be memory based or disk based. If memory
  based temporary store is used, then any temporary tables used for
  sorting, for example, will be memory resident. Temporary tables can
  instead be moved to temporary disk files, where they are paged through
  the cache just like the main database. This is set using
  "PRAGMA temp_store=<value>". See http://www.sqlite.org/pragma.html for
  details. (Both pragmas are shown in the sketch after this list.)
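
For example, here is a rough sketch (untested; the database path is made
up) of applying both pragmas at open time to keep the footprint small:

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *db;

        if (sqlite3_open("/data/app.db", &db) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            sqlite3_close(db);
            return 1;
        }

        /* Default cache is up to 2000 pages; at 1K pages, 200 pages
         * caps the cache near 200KB. */
        sqlite3_exec(db, "PRAGMA cache_size=200;", 0, 0, 0);

        /* 1 = FILE: temporary tables (e.g. for sorting) spill to
         * disk instead of staying memory resident. */
        sqlite3_exec(db, "PRAGMA temp_store=1;", 0, 0, 0);

        /* ... run queries here ... */

        sqlite3_close(db);
        return 0;
    }

Shrinking the cache trades query speed for memory, so pick the smallest
value that still performs acceptably for your working set.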

Otherwise, avoid loading the full result set into memory as you process
it. I.e. avoid sqlite3_get_table(), which allocates the entire result set
at once; step through rows with sqlite3_prepare()/sqlite3_step() instead.
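
A minimal sketch of the prepare/step pattern (the table and column names
are invented for illustration):

    #include <stdio.h>
    #include <sqlite3.h>

    /* Walk a result set one row at a time; only the current row is
     * ever held in memory, unlike sqlite3_get_table(), which
     * allocates the whole result. */
    static int dump_rows(sqlite3 *db)
    {
        sqlite3_stmt *stmt;
        int rc = sqlite3_prepare(db, "SELECT id, name FROM samples;",
                                 -1, &stmt, 0);
        if (rc != SQLITE_OK)
            return rc;

        while ((rc = sqlite3_step(stmt)) == SQLITE_ROW) {
            printf("%d %s\n",
                   sqlite3_column_int(stmt, 0),
                   (const char *)sqlite3_column_text(stmt, 1));
        }

        sqlite3_finalize(stmt);
        return rc == SQLITE_DONE ? SQLITE_OK : rc;
    }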

If your queries sort large result sets, switch to disk based temporary
store.

Christian


-- 
    /"\
    \ /    ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
     X                           - AGAINST MS ATTACHMENTS
    / \
