So you are definitely thrashing the disk then.


An SSD might help, since there's no head to seek and random access time is
essentially constant.



But if you're grokking 288G in 5m22s, that is 894MB/sec relative to the
database size (288,000 MB / 322 s).



With the default 2MB cache_size, you're flushing the cache about 450 times
per second (894 MB/s / 2 MB).



What happens if you bump up your cache_size to the maximum you can?
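
For instance, from Python's sqlite3 module (a minimal sketch; the file
name, table name, and 4GB figure are just placeholders -- use whatever RAM
you can spare):

    import sqlite3

    conn = sqlite3.connect("big.db")   # hypothetical database file
    # cache_size is measured in pages; with the default 1KB page_size,
    # 4,000,000 pages is roughly 4GB of cache instead of the ~2MB default.
    conn.execute("PRAGMA cache_size = 4000000")
    n = conn.execute("SELECT count(*) FROM t1").fetchone()[0]  # hypothetical table
    print(n)
    conn.close()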



And have you tried a different page_size?
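
Note that page_size only takes effect on a brand-new database, or after the
file is rebuilt with VACUUM. A hypothetical sketch (the file name is a
placeholder, and vacuuming a 288G file needs roughly that much free disk
space and considerable time):

    import sqlite3

    conn = sqlite3.connect("big.db")          # placeholder file name
    conn.execute("PRAGMA page_size = 4096")   # e.g. match the filesystem block size
    conn.execute("VACUUM")                    # rewrites the file at the new page size
    conn.close()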



I also wonder if it would be faster to do your own count -- since your 
data is interleaved, just 'select rowid' and walk through all the tables one 
row at a time to count them.  That should emulate the format on the 
disk and take max advantage of the cache.
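
Something like this Python sketch, stepping one cursor per table
round-robin (untested; assumes one connection can hold an open cursor for
every table, and that the tables are exactly those listed in sqlite_master):

    import sqlite3

    conn = sqlite3.connect("big.db")   # placeholder file name
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]

    # One cursor per table; stepping them in round-robin order should
    # roughly follow the interleaved order the rows were inserted in on disk.
    live = [(t, conn.execute(f'SELECT rowid FROM "{t}"')) for t in tables]
    counts = dict.fromkeys(tables, 0)
    while live:
        remaining = []
        for name, cur in live:
            if cur.fetchone() is not None:
                counts[name] += 1
                remaining.append((name, cur))
        live = remaining
    print(counts)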



Michael D. Black

Senior Scientist

Advanced Analytics Directorate

Advanced GEOINT Solutions Operating Unit

Northrop Grumman Information Systems

________________________________
From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Bo Peng [ben....@gmail.com]
Sent: Sunday, October 23, 2011 8:47 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] EXT :Re: Concurrent readonly access to a large database.

On Sun, Oct 23, 2011 at 8:12 AM, Black, Michael (IS)
<michael.bla...@ngc.com> wrote:
> #1 What's the size of your database?

288G, 5000 tables, each with ~1.4 million records

> #2 What's your cache_size setting?

default

> #3 How are you loading the data?  Are your table inserts interleaved or by 
> table?  Your best bet would be by interleaving during insert so cache hits 
> would be better.

The tables were created all at once, with records inserted evenly, so
the content of each table is spread all over the 288G file. I believe
this is the reason why cache_size did not help.

> Looks to me like you're getting disk thrashing in test3 and test4 which 
> cache_size could affect also.

I am now thinking that if I VACUUM the database, all tables will be
copied one by one, and performance could increase dramatically
because the content of each table could be read into memory more easily.
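
A quick way to test that theory might be something like this (a
hypothetical sketch; file and table names are placeholders, and VACUUM
needs free disk space roughly equal to the database size):

    import sqlite3, time

    conn = sqlite3.connect("big.db")   # placeholder file name
    t0 = time.perf_counter()
    n = conn.execute("SELECT count(*) FROM t1").fetchone()[0]  # uncached table
    print(n, "rows in", time.perf_counter() - t0, "s before VACUUM")

    conn.execute("VACUUM")   # rewrites the file, laying each table out contiguously
    t0 = time.perf_counter()
    n = conn.execute("SELECT count(*) FROM t2").fetchone()[0]  # another uncached table
    print(n, "rows in", time.perf_counter() - t0, "s after VACUUM")
    conn.close()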

> And are you running your test twice to bypass the initial cache filling of 
> sqlite?

I ran all the tests on tables that had not yet been processed (and thus
were not cached).

Bo
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
