Thanks. We are running the analyzer that Mr Hipp recommended.

Merging tables is an option, but the concern there is that lookup speed
may be compromised. I know, we cannot have it all :-)
But if there is a way we can have "close to all", please let us know.

Kavita
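A back-of-envelope sketch of where the extra space in the numbers quoted below can go. The constants here are assumptions, not measurements: a default 1 KiB page size and roughly 16 bytes on disk per 8-byte row (record header, cell pointer, rowid varint), with one index b-tree per table:

```python
import math

# Assumed constants -- not measured from the actual database.
PAGE = 1024          # assumed default SQLite page size, in bytes
TABLES = 672
ROWS = 186
ROW_ON_DISK = 16     # 8 bytes of data plus rough per-cell b-tree overhead

# Leaf pages needed per b-tree, rounded up to whole pages.
pages_per_btree = math.ceil(ROWS * ROW_ON_DISK / PAGE)

# Each table contributes two b-trees: the table itself and one index.
total = TABLES * 2 * pages_per_btree * PAGE
print(total / 1e6)   # roughly 4.1 MB, in the ballpark of the observed 4.2 MB
```

This is only a plausibility check, but it suggests per-table b-tree and page overhead alone can account for most of the gap between 1.0 MB of raw data and 4.2 MB of actual consumption.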


On 12/17/09 10:24 AM, "Simon Slavin" <slav...@bigfraud.org> wrote:

> 
> On 17 Dec 2009, at 3:44pm, Kavita Raghunathan wrote:
> 
>> We have a pressing need to reduce memory consumption. Your help is
>> appreciated.
>> 
>> We have a database with 672 tables, all with the same layout: 2 columns of 4
>> bytes each, 186 rows per table.
>> Theoretically, the memory consumed should be:
>> 
>> 672 * (2 * 4 * 186) = 999,936 bytes ~= 1.0 MB
>> 
>> However, the actual memory consumed is
>> 
>> 4.2MB
>> 
>> Is the difference due to overhead of using Sqlite3 ? Can you recommend ways
>> for us to cut down?
> 
> There is a big overhead for each table.  You have 672 tables, so you have a
> lot of overhead.  Since these tables all share the same schema (same columns),
> merge them all into one table, using an extra column to record which source
> each row comes from.
> 
> Also, each table has at least one index: the one backing the PRIMARY KEY.
> That index stores a copy of the data in every primary-key column.  So if, for
> example, one of your 4-byte columns is the primary key in each table, you
> have at least another 0.5 MB of data in your indexes.
> 
> Simon.
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
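The single-table merge Simon describes could be sketched as follows with Python's `sqlite3` module. Everything here is illustrative: the table and column names are hypothetical, and 3 tables stand in for 672. The point is that a composite primary key on (src, key) keeps a lookup down to a single index probe, the same as a per-table lookup:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Before: one table per source, e.g. t000, t001, ... (names hypothetical).
for i in range(3):  # 3 stands in for 672
    cur.execute(f"CREATE TABLE t{i:03d} (key INTEGER, value INTEGER)")
    cur.executemany(f"INSERT INTO t{i:03d} VALUES (?, ?)",
                    [(k, k * 10) for k in range(186)])

# After: a single merged table; src records which original table a row
# came from, and the composite primary key indexes (src, key).
cur.execute("""CREATE TABLE merged (
                   src   INTEGER,
                   key   INTEGER,
                   value INTEGER,
                   PRIMARY KEY (src, key))""")
for i in range(3):
    cur.execute(f"INSERT INTO merged SELECT {i}, key, value FROM t{i:03d}")

# Lookup by (src, key) is one probe of the primary-key index.
row = cur.execute(
    "SELECT value FROM merged WHERE src = 1 AND key = 42").fetchone()
print(row)  # (420,)
```

One table with 672 × 186 rows pays the per-table b-tree and page overhead once instead of 672 times, which is where the savings come from.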
