If the majority of your queries don't need the BLOBs and ask only for
meta information, then you should definitely go with 2 or 3, because
separating the BLOBs compacts your metadata and SQLite will need to
read fewer file pages to reach it. It also means more of your metadata
can be served from the database cache, so the disk isn't touched unless
you actually need to read BLOB data. That will only hold, though, if
your metadata is smaller than the total cache allowed. I'm not sure 3
would be better than 2 in any way, except maybe somewhere at the OS
level. But you can try it.
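For option 2, a minimal sketch of what the split schema could look like
(the table and column names here are hypothetical, just for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Metadata and BLOBs live in separate tables, linked by the same id.
# Queries against `items` never load BLOB pages.
con.executescript("""
    CREATE TABLE items (
        id    INTEGER PRIMARY KEY,
        name  TEXT,
        size  INTEGER
    );
    CREATE TABLE item_blobs (
        item_id INTEGER PRIMARY KEY,  -- same value as items.id
        data    BLOB
    );
""")
con.execute("INSERT INTO items VALUES (1, 'example', 4)")
con.execute("INSERT INTO item_blobs VALUES (1, ?)", (b"\x00\x01\x02\x03",))

# Typical query: metadata only; the BLOB table is never touched.
row = con.execute("SELECT name, size FROM items WHERE id = 1").fetchone()

# On-demand BLOB fetch by the already-known id, so no JOIN is needed.
blob = con.execute(
    "SELECT data FROM item_blobs WHERE item_id = 1"
).fetchone()[0]
```

Since the id is already known from the metadata query, the BLOB lookup is
a plain keyed select, which matches the "no JOINs" requirement.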

Pavel

On Wed, Sep 16, 2009 at 12:13 PM, Itamar Syn-Hershko
<[email protected]> wrote:
> Hi all,
>
> I'm in the design phase of an application with an SQLite backend. The SQLite
> file will hold a table of about 20K records initially, plus several other
> small tables. About 75% of the records in the large table will have binary
> data associated with them. My main question is which one of the following
> options I'm better off with for storing those BLOBs, in terms of DB
> efficiency, memory usage, media seeks (since this will most likely reside
> on a CD) and file size. The storage options I see as relevant are:
>
> 1. BLOBs in the original table on a per-record basis (records with no BLOBs
> NULLed). If separating the BLOBs from this table will help performance in
> any way, I see two further options:
> 2. BLOBs in a separate table, with the unique ID of the record in the
> large table pointing to it. No indices necessary, and queries will never
> use JOINs since that table will be accessed explicitly on demand only.
> 3. Same as #2 above, except in a separate, attached SQLite file (to aid
> file seeks).
>
> As mentioned, the binaries I'll be storing will only be pulled on demand
> (most queries to the large table will return the accompanying metadata
> WITHOUT the binary data); no JOINs or foreign indices necessary. The average
> BLOB size is a few tens of KBs; in any case I do not expect any BLOB over
> 1-2 MB. In the shipped version, writes to the DB (particularly to the large
> table) will occur very rarely; it will be mostly read operations, so I'm
> willing to accept any cost on write operations.
>
> Also, looking into compression support for SQLite, I found two solutions:
> CEROD [1], and per-field compression using zlib and extension functions to
> compress/decompress. Are there more options I might have missed?
>
> Thanks in advance for any advice on this.
>
> Itamar.
>
> [1] http://www.hwaci.com/sw/sqlite/cerod.html
> _______________________________________________
> sqlite-users mailing list
> [email protected]
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
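On the per-field zlib compression mentioned in the question, a minimal
sketch using application-defined SQL functions (the compress/uncompress
names here are hypothetical helpers registered from the application, not
SQLite built-ins and not the CEROD API):

```python
import sqlite3
import zlib

con = sqlite3.connect(":memory:")

# Register per-field compression helpers as SQL functions.
con.create_function(
    "compress", 1, lambda b: zlib.compress(b) if b is not None else None
)
con.create_function(
    "uncompress", 1, lambda b: zlib.decompress(b) if b is not None else None
)

con.execute("CREATE TABLE item_blobs (item_id INTEGER PRIMARY KEY, data BLOB)")

# Compress on write, decompress on read; NULLs pass through untouched.
payload = b"some repetitive payload " * 100
con.execute("INSERT INTO item_blobs VALUES (1, compress(?))", (payload,))

restored = con.execute(
    "SELECT uncompress(data) FROM item_blobs WHERE item_id = 1"
).fetchone()[0]
```

For read-mostly data on a CD this trades a little CPU on reads for fewer
pages to seek, which fits the stated workload.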