> Ok... simple tests have completed.  Here are some numbers.
> FreeBSD 4.8
> PG 7.4b2
> 4GB Ram
> Dual Xeon 2.4GHz processors
> 14 U320 SCSI disks attached to Dell PERC3/DC RAID controller in RAID 5
>  config with 32k stripe size
> Then I took the suggestion to update PG's page size to 16k and did the
> same increase on sort_mem and checkpoint_segments as above.  I also
> halved the shared_buffers and max_fsm_pages (probably should have
> halved the effective_cache_size too...)
> restore time: 11322 seconds
> vacuum analyze time: 27 minutes
> select count(*) from user_list where owner_id=315;   48267.66 ms
> Granted, given this simple test it is hard to say whether the 16k
> blocks will make an improvement under live load, but I'm gonna give it
> a shot.  The 16k block size shows me roughly 2-6% improvement on these
> tests.
> So throw in my vote for 16k blocks on FreeBSD (and annotate the docs
> to tell which parameters need to be halved to account for it).
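The halving described above falls out of the fact that shared_buffers and
max_fsm_pages are counted in pages (or page slots), not bytes, so doubling
the block size means halving the counts to keep the same memory footprint.
A quick sketch of that arithmetic (the setting values here are hypothetical
examples, not Vivek's actual configuration):

```python
# Sketch of the "halve the page-counted settings" arithmetic when moving
# from 8 kB to 16 kB blocks.  The example values are hypothetical.

OLD_BLCKSZ = 8 * 1024   # default PostgreSQL block size
NEW_BLCKSZ = 16 * 1024  # rebuilt block size

def rescale(pages, old_blcksz=OLD_BLCKSZ, new_blcksz=NEW_BLCKSZ):
    """Return the page count that keeps the same byte footprint."""
    return pages * old_blcksz // new_blcksz

shared_buffers = 16384                      # pages @ 8 kB -> 128 MB
print(rescale(shared_buffers))              # 8192 pages @ 16 kB, still 128 MB
print(rescale(shared_buffers) * NEW_BLCKSZ == shared_buffers * OLD_BLCKSZ)
```

The same rescaling applies to any parameter measured in pages; parameters
measured in bytes or kilobytes (like sort_mem) don't need it.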

I haven't had a chance to run any tests yet (ELIFE), but there was a
suggestion that 32K blocks were better performers than 16K blocks
(!!??!!??).  I'm not sure why that would be; my only guess is that
larger blocks lean more heavily on the disk cache to ease IO.  Since
you have the hardware set up, Vivek, would it be possible for you to
run a test with 32K blocks?

I've started writing a threaded benchmarking program called pg_crush
that I hope to post here in a few days.  It times connection startup,
INSERTs, DELETEs, UPDATEs, and both sequential scans and index scans
for randomly and sequentially ordered tuples.  It's similar to
pgbench, except that it generates its own data, uses pthreads (cheers
on KSE!), and returns more fine-grained timing information for the
various activities.
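For anyone curious about the general shape of per-operation timing in a
threaded harness, here's a rough sketch of the idea (this is purely
illustrative, not pg_crush's actual code; run_op() and the sleep-based
"work" are stand-ins for real libpq calls):

```python
# Rough sketch of per-operation timing with worker threads.  run_op()
# wraps a single operation (a stand-in for connect/INSERT/scan work)
# and records its elapsed time under that operation's name.
import threading
import time
from collections import defaultdict

timings = defaultdict(list)   # op name -> list of elapsed seconds
lock = threading.Lock()

def run_op(name, work):
    start = time.perf_counter()
    work()                             # the operation being benchmarked
    elapsed = time.perf_counter() - start
    with lock:                         # per-op, fine-grained samples
        timings[name].append(elapsed)

def worker(n_iters):
    for _ in range(n_iters):
        run_op("insert", lambda: time.sleep(0.001))  # stand-in for an INSERT
        run_op("select", lambda: time.sleep(0.001))  # stand-in for a scan

threads = [threading.Thread(target=worker, args=(5,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

for op, samples in sorted(timings.items()):
    avg_ms = sum(samples) / len(samples) * 1000
    print(f"{op}: {len(samples)} ops, avg {avg_ms:.2f} ms")
```

The win over a single aggregate number (a la pgbench's tps) is that each
operation class gets its own distribution of timings.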


Sean Chittenden

---------------------------(end of broadcast)---------------------------
TIP 4: Don't 'kill -9' the postmaster