On Wed, 2005-11-02 at 21:16 +1100, Gavin Sherry wrote:
> connections are updating the branches table heavily. As an aside, did you
> initialise with a scaling factor of 10 to match your level of concurrency?
Yep, I did.
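For anyone following along: pgbench creates one row in the branches table per unit of scale, so matching the scale factor to the client count keeps the per-transaction branch update from serializing on a single row. A minimal sketch (the database name `bench` is my own placeholder; the pgbench calls need a running server, so they are shown commented out):

```shell
# pgbench creates SCALE rows in branches; with CLIENTS <= SCALE the
# branch updates are spread over enough rows to avoid every client
# contending on one row.
SCALE=10
CLIENTS=10
echo "branches rows: $SCALE, concurrent clients: $CLIENTS"
# Actual runs (require a live PostgreSQL server; 'bench' is a made-up name):
# pgbench -i -s $SCALE bench          # initialize at scale 10
# pgbench -c $CLIENTS -t 1000 bench   # 10 clients, 1000 transactions each
```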
> that. The hackers list archive also contains links to the testing Mark
> Wong has been doing at OSDL with TPC-C and TPC-H. Taking a look at the
> configuration file he is using, along with the annotated postgresql.conf,
> would be useful, depending on the load you're anticipating and your
I will look into that project.
> Well, two things may be at play. First, if you are using write caching on your
> controller/disks, then at the point at which that cache fills up,
> performance will degrade to roughly what you can expect if a write-through
> cache were being used. Secondly, we checkpoint the system periodically to
> ensure that recovery won't be too long a job. Running pgbench for a few
> seconds, you will not see the effect of checkpointing, which usually runs
> once every 5 minutes.
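For longer runs the checkpoint interval becomes visible, and on 8.x-era PostgreSQL it is controlled by a few postgresql.conf knobs. The values below are illustrative, not tuning advice:

```ini
# postgresql.conf excerpt (8.x era; values illustrative, not recommendations)
checkpoint_segments = 16     # default is 3; more WAL segments between checkpoints
checkpoint_timeout = 300     # seconds; the "once every 5 minutes" mentioned above
checkpoint_warning = 30      # log a warning if checkpoints come closer than this
```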
I still find it strange. Simple tests with tar suggest that I could
easily do 600-700 tps at 50,000 KB/s (as measured by iostat), and a
bonnie++ run sustained throughputs above 40,000 KB/s over long
stretches, at 1723-2121 operations per second. These numbers suggest
that PostgreSQL is not using all the hardware has to offer. Processor
load, however, is negligible during the pgbench tests.
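A back-of-the-envelope check of those figures (the tps and KB/s numbers are from the tar test above; the per-operation size is my own derived estimate):

```shell
# Rough arithmetic on the tar figures: ~650 tps (midpoint of 600-700)
# at 50,000 KB/s implies roughly 76 KB written per operation.
TPS=650
KBPS=50000
echo "approx KB per op: $((KBPS / TPS))"
# To watch disk throughput while pgbench runs (needs a live server;
# 'bench' is a made-up database name):
# iostat -k 2 &
# pgbench -c 10 -t 10000 bench
```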
As written before, I will look into the OSDL benchmarks. Maybe they are
better suited to my needs: *understanding* what determines performance.
> Hope this helps.
You certainly did, thanks.
tel: 024-3888063 / 06-51855277
e-mail: [EMAIL PROTECTED]