...
Is the PERC 5/i dual channel?  If so, are 1/2 the drives on one channel and the 
other half on the other channel?  I find this helps RAID10 performance when the 
mirrored pairs are on separate channels.
...

With the SAS controller (PERC 5/i), every drive gets its own 3 Gb/s port. 

...
Your transfer rate seems pretty good for Dell hardware, but I'm not experienced 
enough with SAS drives to know if those numbers are good in an absolute sense.

Also, which driver picked up the SAS controller?  amr(4) or aac(4) or some 
other?  That makes a big difference too.  I think the amr driver is "better" 
than the aac driver.
...

The internals of the current SAS drives are similar to the U320s they replaced 
in terms of read/write/seek performance; the benefit is the SAS bus, which 
eliminates some of the U320 limitations (e.g. with the Perc4, you only get 
160 MB/s per channel, as you mentioned). It's using the mfi driver... 

Here are some simplistic performance numbers:
time bash -c "(dd if=/dev/zero of=bigfile count=125000 bs=8k && sync)"

Raid0 x 2 (2 spindles) ~138 MB/s on BSD
Raid5 x 4 ~160 MB/s BSD, ~274 MB/s Knoppix (ext2)
Raid5 x 6 ~255 MB/s BSD, 265 MB/s Knoppix (ext3)
Raid10 x 4 ~25 MB/s BSD
Raid50 x 6 ~144 MB/s BSD, 271 MB/s Knoppix

* BSD is 6.1-RELEASE amd64 with UFS + Soft updates, Knoppix is 5.1 (ext2 didn't 
like the > 1TB partition for the 6 disk RAID 5, hence ext3)

It seems the PERC 5 has issues with layered RAID (10, 50), which, as others on 
this list have suggested, is a common problem with lower-end RAID cards. For 
now I'm going with the RAID 5 option, but if I have time I'd like to test 
having the hardware do RAID 0 and doing RAID 1 in the OS, or vice versa, as 
proposed in other posts.
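For the record, a minimal sketch of the OS-side half of that experiment on 
FreeBSD 6.x with gmirror might look like the following. The device names 
mfid0/mfid1 are assumptions (check dmesg for what the mfi driver actually 
attaches); this is untested on that hardware:

```shell
# Sketch (assumptions noted above): mirror two hardware RAID0 volumes
# in the OS with gmirror, then build UFS2 with soft updates on top.
MIRROR=gm0
PROVIDERS="/dev/mfid0 /dev/mfid1"   # hypothetical device names

if command -v gmirror >/dev/null 2>&1; then
    gmirror load                           # load the geom_mirror module
    gmirror label -v "$MIRROR" $PROVIDERS  # create the OS-level mirror
    newfs -U "/dev/mirror/$MIRROR"         # UFS2 + soft updates
else
    echo "gmirror unavailable (FreeBSD-only); commands are illustrative"
fi
```

The inverse arrangement (hardware mirrors, OS stripe) would use gstripe 
across the mirrored LUNs instead.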

Also, I ran pgbench -s 50 -c 10 -t 1000 on a completely default BSD 6.1 and 
PG 8.1.4 install with the RAID5 x 6 disks, and got 442 tps on a fresh run. 
(The numbers climb very rapidly due to caching when running successive tests 
without reinitializing the test db; I'm guessing this is OS caching, since the 
default postgresql.conf is pretty limited in terms of resource use.) I 
probably need to raise the scaling factor significantly so the whole data set 
doesn't get cached in RAM if I want realistic results from successive tests, 
but for now it seems quicker to just reinit each time.
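The reinit-between-runs routine can be sketched like this; the database name 
"bench" and the scale of 400 are assumptions (pick a scale whose data set 
exceeds RAM on your box):

```shell
# Sketch: rebuild the pgbench tables between runs so a previous run's
# OS cache doesn't inflate the tps numbers.
DB=bench     # hypothetical database name
SCALE=400    # assumption: large enough to exceed RAM

if command -v pgbench >/dev/null 2>&1; then
    dropdb "$DB" 2>/dev/null || true
    createdb "$DB"
    pgbench -i -s "$SCALE" "$DB"   # -i rebuilds the tables from scratch
    pgbench -c 10 -t 1000 "$DB"    # then run the actual test
else
    echo "pgbench not installed; commands shown for illustration"
fi
```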

On to some kernel tweaks and some adjustments to postgresql.conf... 
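As a rough sketch of where I'd start (these values are guesses for a machine 
with a few GB of RAM, not recommendations -- 8.1 takes raw page/kB counts, 
not the "MB" units of later releases):

```
# postgresql.conf (PG 8.1) -- hypothetical starting points:
shared_buffers = 50000          # ~400 MB, in 8 kB buffers
effective_cache_size = 262144   # ~2 GB, in 8 kB pages
work_mem = 16384                # 16 MB per sort, in kB
checkpoint_segments = 16

# /etc/sysctl.conf (FreeBSD) -- raise SysV shared memory limits
# so the larger shared_buffers can actually be allocated:
kern.ipc.shmmax=536870912
kern.ipc.shmall=131072
```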

- Bucky


---------------------------(end of broadcast)---------------------------
TIP 2: Don't 'kill -9' the postmaster
