Jeremiah Jahn wrote:
> On Sat, 2005-08-20 at 21:32 -0500, John A Meinel wrote:
>> Ron wrote:
>>> At 02:53 PM 8/20/2005, Jeremiah Jahn wrote:
>> Well, since you can get a read of the RAID at 150MB/s, that means that
>> it is actual I/O speed. It may not be cached in RAM. Perhaps you could
>> try the same test, only using say 1G, which should be cached.
>
> [EMAIL PROTECTED] pgsql]# time dd if=/dev/zero of=testfile bs=1024 count=1000000
> 1000000+0 records in
> 1000000+0 records out
>
> real    0m8.885s
> user    0m0.299s
> sys     0m6.998s
>
> [EMAIL PROTECTED] pgsql]# time dd of=/dev/null if=testfile bs=1024 count=1000000
> 1000000+0 records in
> 1000000+0 records out
>
> real    0m1.654s
> user    0m0.232s
> sys     0m1.415s
The write time seems about the same (but you only have 128MB of write cache), but your read jumped up to 620MB/s. So your drives do seem to be giving you 150MB/s.
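For reference, those figures follow directly from the dd runs quoted above (1,000,000 blocks of 1024 bytes, elapsed time from `time`, decimal megabytes). A quick awk sketch:

```shell
# Recompute throughput from the dd runs above: 1,000,000 x 1024-byte blocks,
# divided by elapsed seconds, reported in decimal MB/s.
bytes=$((1000000 * 1024))
awk -v b="$bytes" 'BEGIN { printf "write: %.0f MB/s\n", b / 8.885 / 1000000 }'
awk -v b="$bytes" 'BEGIN { printf "read:  %.0f MB/s\n", b / 1.654 / 1000000 }'
```

That gives ~115MB/s for the write pass and ~619MB/s for the read pass, matching the numbers discussed in this thread.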
...
>> I'm actually curious about PCI bus saturation at this point. Old 32-bit
>> 33MHz PCI could only push 1Gbit = 100MB/s. Now, I'm guessing that this
>> is a higher performance system. But I'm really surprised that your write
>> speed is that close to your read speed. (100MB/s write, 150MB/s read.)
>
> The raid array I have is currently set up to use a single channel. But I
> have dual controllers in the array, and dual external slots on the card.
> The machine is brand new and has a PCI-e backplane.
>
>>> Assuming these are U320 15Krpm 147GB HDs, a RAID 10 array of 14 of
>>> them doing raw sequential IO like this should be capable of
>>> ~7*75MB/s = 525MB/s; using Seagate Cheetah 15K.4's, ~7*79MB/s = 553MB/s.
>
> BTW I'm using Seagate Cheetah 15K.4's
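Ron's estimate is just spindle arithmetic: in a 14-drive RAID 10, the drives form 7 mirrored pairs, so only 7 spindles' worth of unique sequential bandwidth is available. A quick sketch (the 75 and 79 MB/s per-drive figures are Ron's assumptions):

```shell
# RAID 10 over 14 drives: data is striped across 7 mirrored pairs, so
# sequential throughput is bounded by 7 spindles of unique data.
drives=14
stripes=$((drives / 2))
echo "generic U320 15K drive (75 MB/s): $((stripes * 75)) MB/s"
echo "Seagate Cheetah 15K.4 (79 MB/s):  $((stripes * 79)) MB/s"
```

This reproduces the 525MB/s and 553MB/s ceilings quoted above, well beyond the ~150MB/s actually being observed.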
Now, are the numbers that Ron is quoting in megabytes or megabits? I'm guessing he knows what he is talking about, and is doing megabytes. 80MB/s sustained seems rather high for a hard-disk.
Though this page: http://www.storagereview.com/articles/200411/20041116ST3146754LW_2.html
does seem to agree with that statement. (Between 56 and 93MB/s.)

And since U320 is a 320MB/s bus, it doesn't seem like anything there should be saturating. So why the low performance?
>>> _IF_ the controller setup is high powered enough to keep that kind of
>>> IO rate up. This will require a controller or controllers providing
>>> dual channel U320 bandwidth externally and quad channel U320 bandwidth
>>> internally. IOW, it needs a controller or controllers talking 64b
>>> 133MHz PCI-X, reasonably fast DSP/CPU units, and probably a decent
>>> sized IO buffer as well.
>>>
>>> AFAICT, the Dell PERC4 controllers use various flavors of the LSI
>>> Logic MegaRAID controllers. What I don't know is which exact one yours
>>> is, nor do I know if it (or any of the MegaRAID controllers) are high
>>> powered enough.
>
> PERC4eDC-PCI Express, 128MB Cache, 2-External Channels
Do you know which card it is? Does it look like this one:
http://www.lsilogic.com/products/megaraid/megaraid_320_2e.html
Judging by the 320 speed, and 2 external controllers, that is my guess. They at least claim a theoretical max of 2GB/s.

Which makes you wonder why reading from RAM is only able to get throughput of 600MB/s. Did you run it multiple times? On my Windows system, I get just under 550MB/s for what should be cached; copying from /dev/zero to /dev/null I get 2.4GB/s (though that might be a no-op).
On a similar linux machine, I'm able to get 1200MB/s for a cached file. (And 3GB/s for a zero=>null copy).
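One way to check whether a read is really being served from the page cache is simply to repeat it: the first pass may hit the disk, while later passes should come from RAM and run noticeably faster. A sketch, using a smaller (~100MB, arbitrarily chosen) scratch file so it comfortably fits in cache:

```shell
# Sketch: re-read the same file several times. The first pass may come
# from disk; subsequent passes should be served from the page cache.
dd if=/dev/zero of=testfile bs=1024 count=100000 2>/dev/null  # ~100 MB scratch file
for pass in 1 2 3; do
    time dd if=testfile of=/dev/null bs=1024 count=100000
done
rm -f testfile
```

If the second and third passes aren't substantially faster than the first, the file isn't being cached, and the numbers reflect raw disk/controller throughput.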
John =:->
>>> Talk to your HW supplier to make sure you have controllers adequate to
>>> your HD's.
>>>
>>> ...and yes, your average access time will be in the 5.5ms - 6ms range
>>> when doing a physical seek. Even with RAID, you want to minimize seeks
>>> and maximize sequential IO when accessing them. Best to not go to HD
>>> at all ;-)
>>
>> Well, certainly, if you can get more into RAM, you're always better
>> off. For writing, a battery-backed write cache, and for reading, lots
>> of system RAM.
>
> I'm not really worried about the writing, it's the reading that needs
> to be faster.
>
>>> Hope this helps, Ron Peacetree
>>
>> John =:->