Hello!

In a message from Monday 16 February 2009 17:42:25, danjenkins wrote:
> I fully understand that performance will depend on the coding, database
> structure and indexing (& hardware) but, assuming these are taken care of,
> should a 100 million record table perform loosely in the same performance
> class as other popular databases?

I have tested SQLite databases up to 100 GB in size, with 1,000,000 blobs or
100,000,000 strings in a single table. Of course, write operations must be
grouped, because the memory allocated for a write transaction is proportional
to the database size (see the official site). I did not find any explicit
non-linear size effects.
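
For illustration, a minimal sketch of grouping many writes into one transaction
(the table and column names here are hypothetical, not from my test schema):

BEGIN TRANSACTION;
INSERT INTO blobs(id, data) VALUES (1, x'DEADBEEF');
INSERT INTO blobs(id, data) VALUES (2, x'CAFEBABE');
-- ... many more inserts inside the same transaction ...
COMMIT;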

P.S. In real projects I prefer to split databases into pieces of a few
gigabytes each and attach them when needed; see the sketch below. For example,
many datasets can be split by month.
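
A minimal sketch of that approach (the file, schema, and table names are
hypothetical):

-- the main connection is open on the current month's piece
ATTACH DATABASE 'data_2009_01.db' AS jan;
SELECT count(*) FROM jan.events;
DETACH DATABASE jan;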

On Debian etch/lenny hosts with an ext3 filesystem and more than 1 GB of RAM,
these options may be useful:

PRAGMA page_size=4096;             -- must be set before the database is populated
PRAGMA default_cache_size=200000;  -- cache sizes are measured in pages
PRAGMA cache_size=200000;          -- 200000 pages * 4096 bytes = roughly 800 MB of cache

P.P.S. SQLite works very effectively with huge databases even when the database
size is much larger than RAM, and its databases are compact. For example, a
4.5 GB SQLite database works fine, while an 18 GB PostgreSQL database holding
the same data (yes, PostgreSQL databases are not compact) performs very badly
on a Linux host with 1 GB of RAM.

Best regards, Alexey.