Hi,

Brandon High <bhigh <at> freaks.com> writes:
> 
> I only looked at the Megaraid 8888 that he mentioned, which has a PCIe
> 1.0 4x interface, or 1000MB/s.

You mean an x8 interface (which could theoretically be plugged into that
x4 slot discussed below...)

> The board also has a PCIe 1.0 4x electrical slot, which is 8x
> physical. If the card was in the PCIe slot furthest from the CPUs,
> then it was only running 4x.

If Giovanni had put the Megaraid 8888 in this slot, he would have seen
an even lower throughput, around 600MB/s:

This slot is provided by the ICH10R which, as you can see in
http://www.supermicro.com/manuals/motherboard/5500/MNL-1062.pdf
is connected to the northbridge through a DMI link (essentially an
Intel-proprietary PCIe 1.0 x4 link). The ICH10R supports a
Max_Payload_Size of only 128 bytes on the DMI link:
http://www.intel.com/Assets/PDF/datasheet/320838.pdf
And in my experience:
http://opensolaris.org/jive/thread.jspa?threadID=54481&tstart=45
a 128-byte MPS allows using only about 60% of the theoretical PCIe
throughput, that is, for the DMI link: 250MB/s/lane * 4 lanes * 60% =
600MB/s. Note that the PCIe x4 slot itself supports a larger, 256-byte
MPS, but this is irrelevant as the DMI link will be the bottleneck
anyway due to its smaller MPS.
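
To make the arithmetic explicit, here is a back-of-the-envelope sketch
in Python (the 60% efficiency figure is the empirical number from my
thread above, not a value taken from the PCIe or DMI specs):

    # Rough estimate of usable DMI throughput with a 128-byte Max_Payload_Size.
    # The 0.60 efficiency factor is empirical, not from the specs.
    PCIE1_LANE_MBPS = 250       # PCIe 1.0: 250 MB/s per lane, theoretical
    DMI_LANES = 4               # DMI is electrically a PCIe 1.0 x4 link
    MPS_128_EFFICIENCY = 0.60   # observed efficiency with a 128-byte MPS

    dmi_usable = PCIE1_LANE_MBPS * DMI_LANES * MPS_128_EFFICIENCY
    print(dmi_usable)           # -> 600.0 MB/s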

> > A single 3Gbps link provides in theory 300MB/s usable after 8b-10b encoding,
> > but practical throughput numbers are closer to 90% of this figure, or 270MB/s.
> > 6 disks per link means that each disk gets allocated 270/6 = 45MB/s.
> 
> ... except that a SFF-8087 connector contains four 3Gbps connections.

Yes, four 3Gbps links, but 24 disks per SFF-8087 connector. That's
still 6 disks per 3Gbps link (according to Giovanni, his LSI HBA was
connected to the backplane with a single SFF-8087 cable).
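
For what it's worth, the per-disk arithmetic looks like this (a rough
sketch; the 90% practical efficiency is the figure quoted above):

    # Per-disk bandwidth when 24 disks sit behind a single SFF-8087 cable
    # (4 x 3Gbps SAS links) through the expander.
    SAS_LINK_MBPS = 300          # 3Gbps after 8b-10b encoding
    PRACTICAL_EFFICIENCY = 0.9   # ~90% of theoretical, as quoted above
    LINKS_PER_SFF8087 = 4
    DISKS_ON_EXPANDER = 24

    disks_per_link = DISKS_ON_EXPANDER / LINKS_PER_SFF8087   # 6
    per_disk = SAS_LINK_MBPS * PRACTICAL_EFFICIENCY / disks_per_link
    print(per_disk)              # -> 45.0 MB/s per disk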

> It may depend on how the drives were connected to the expander. You're
> assuming that all 18 are on 3 channels, in which case moving drives
> around could help performance a bit.

True, I assumed this, and frankly this is probably what he did by
using adjacent drive bays... A better solution would be to spread
the 18 drives in a 5+5+4+4 config so that the 2 most congested 3Gbps
links are shared by only 5 drives instead of 6, which would boost the
throughput by 6/5 = 1.2x. This would change my first overall 810MB/s
estimate to 810*1.2 = 972MB/s.
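
Put differently, here is a quick sketch of that estimate, assuming a
balanced stripe where the most congested 3Gbps link dictates the
per-drive rate:

    # Aggregate estimate for an 18-drive stripe: the most congested link
    # (270 MB/s usable) sets the per-drive rate for the whole stripe.
    LINK_USABLE_MBPS = 270       # 300 MB/s * ~90% practical efficiency
    NDRIVES = 18

    def aggregate(drives_per_link):
        per_drive = LINK_USABLE_MBPS / max(drives_per_link)
        return per_drive * NDRIVES

    print(aggregate([6, 6, 6, 0]))   # adjacent bays, 6+6+6: 810.0 MB/s
    print(aggregate([5, 5, 4, 4]))   # spread 5+5+4+4:       972.0 MB/s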

PS: it was not my intention to start a pissing contest. Peace!

-mrb

