I'll try this.
Unfortunately, reading from disk does not appear to be the problem. Even at 32
threads the I/O appears to be minimal. Our inability to scale appears to be
caused by a mutex in the caching. Watching the performance monitor, my CPU
usage sits at about 30% and my disk is nearly silent. This is on a 10 GB
database executing 32 different queries against 7 different tables.
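
In case the bottleneck really is the shared page cache or serialized
threading mode (that is only a guess on my part), here is a rough sketch of
configuring the library for multithreaded use with the shared cache disabled
before any connections are opened. The helper name is made up:

#include <sqlite3.h>

/* Hypothetical helper: configure the library once, before any other
 * SQLite call in the process, assuming the contention comes from
 * serialized mode or the shared page cache. */
int configure_sqlite_for_threads(void)
{
    int rc = sqlite3_config(SQLITE_CONFIG_MULTITHREAD);
    if (rc != SQLITE_OK) return rc;
    sqlite3_enable_shared_cache(0);  /* avoid a cross-connection cache mutex */
    return sqlite3_initialize();
}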


> You
> may try increasing page size - bigger block means less near-random
> reads from the disc.

That's a good approach. With a page size of 8K instead of the default 1K,
SELECT performance may improve roughly 3x. Note: PostgreSQL uses 8K disk pages.
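
For reference, a minimal sketch of rebuilding an existing database with 8K
pages. PRAGMA page_size only takes effect on a brand-new database or when an
existing one is rebuilt by VACUUM, and the file name here is just a
placeholder:

#include <sqlite3.h>
#include <stdio.h>

int main(void)
{
    sqlite3 *db = NULL;
    char *err = NULL;
    if (sqlite3_open("mydata.db", &db) != SQLITE_OK) {  /* placeholder name */
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 1;
    }
    /* Set the new page size, then rebuild the file so it takes effect. */
    if (sqlite3_exec(db, "PRAGMA page_size = 8192; VACUUM;",
                     NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "rebuild failed: %s\n", err);
        sqlite3_free(err);
    }
    sqlite3_close(db);
    return 0;
}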

Thank you all for your responses.
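
For the one-connection-per-thread suggestion quoted below, here is a minimal
sketch of what each worker thread might do. The file name, SQL, and open
flags are assumptions on my part, and thread creation itself is omitted:

#include <sqlite3.h>
#include <stdio.h>

/* Sketch of a per-thread worker: each thread opens its own handle,
 * prepares its own statement, and never shares the connection. */
void worker(const char *db_path, const char *sql)
{
    sqlite3 *db = NULL;
    sqlite3_stmt *stmt = NULL;
    int flags = SQLITE_OPEN_READONLY       /* SELECT-only workload       */
              | SQLITE_OPEN_NOMUTEX        /* handle used by one thread  */
              | SQLITE_OPEN_PRIVATECACHE;  /* no shared-cache locking    */

    if (sqlite3_open_v2(db_path, &db, flags, NULL) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return;
    }
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            /* read columns with sqlite3_column_*() */
        }
        sqlite3_finalize(stmt);
    }
    sqlite3_close(db);
}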

On Wed, Aug 10, 2011 at 9:23 AM, Drew Kozicki <drewkozi...@gmail.com> wrote:

> To answer several questions at once.
>
> Simon,
> Just checking: by 'queries' you mean 'SELECT', right? You're not making
> changes, just searching?
>
> Yes. To optimize, we average about 5-6 indexes per table.
>
> D. Richard Hipp,
> Open a separate database connection for each thread.  Don't try to use the
> same database connection on all threads because access to the database
> connection is serialized.
>
> I'll look into this. Thank you
>
> Teg,
> Why multiple threads? What kind of performance do you get if you only
> use a single thread?
>
> Is it one thread per database perhaps?
>
> This program is run on massive servers, and the people who use it are
> talking about running hundreds of millions of records through it. We're
> trying to let them scale so that they can benefit from the new servers. We
> seem to have peaked out single-thread performance at approximately 2-10
> million records/hour.
>
> Thank you once again in advance,
> Drew
>