Morning,

 

I have been experimenting with SQLite as an alternative to one of our
proprietary file formats, which we use to read large amounts of data. The
proprietary format performs very poorly, i.e. it takes a long time to load
some data; as expected, SQLite is lightning quick in comparison - great!

 

One considerable stumbling block is the footprint (size) of the database
file on disk. It turns out that the SQLite file is roughly 7x larger than
our proprietary format, which is prohibitive for us. The data is quite
simple, really: just two tables.

 

Table 1

BIGINT (indexed), VARCHAR(30), VARCHAR(10)

 

Table 2

BIGINT (indexed), FLOAT
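
For reference, the schema is roughly along these lines (just a sketch - the
real table and column names are different, and I have simplified things):

  CREATE TABLE table1 (
    id    BIGINT,        -- this is the indexed column
    name  VARCHAR(30),
    code  VARCHAR(10)
  );
  CREATE INDEX table1_id_idx ON table1(id);

  CREATE TABLE table2 (
    id    BIGINT,        -- this is the indexed column
    value FLOAT
  );
  CREATE INDEX table2_id_idx ON table2(id);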

 

For one particular data set, Table 1 has 1165 rows and Table 2 has 323 rows,
although Table 2 typically grows larger for bigger models. The size of this
file on disk is 11.8 MB (compared to 1.7 MB for our proprietary format). I
have noticed that if I drop the indexes the size shrinks dramatically, but
then query performance suffers to an unacceptable level.

 

For a larger model, the database footprint is 2.2 GB compared to 267 MB for
the proprietary format.

 

Does anybody have any comments on this? Are there any configuration options
or other ideas I could use to reduce the footprint of the database file?
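
For example, is the sort of thing below likely to help? (This is just to
illustrate the kind of option I mean - I have not worked out the right
values.)

  PRAGMA page_size = 1024;  -- smaller page size; as I understand it this has
                            -- to be set before the database content is created
  VACUUM;                   -- rebuild the file to reclaim unused pages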

 

Many thanks,

Simon

 

 

--

Simon Bulman

Petrel Reservoir Engineering Architect

Schlumberger

Lambourn Court, Wyndyke Furlong,

Abingdon Business Park, Abingdon,

Oxfordshire, OX14 1UJ, UK

Tel: +44 (0)1235 543 401

 

Registered Name: Schlumberger Oilfield UK PLC

Registered Office: 8th Floor, South Quay Plaza 2, 183 Marsh Wall, London.
E14 9SH

Registered in England No. 4157867

 
