One thing to keep in mind with regard to the max throughput values:

The PCI-X slots on the x4500 are all on the same bus.  I saw the same 
throughput in similar tests I ran on the x4500 with the SAS port 
provider.  I believe the 760 MB/s read and 850 MB/s read/write numbers 
represent the realistic bandwidth limit of that shared bus.
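For what it's worth, here's the rough arithmetic behind that claim. This is only a sketch, and it assumes the shared bus is 64-bit PCI-X clocked at 133 MHz (I haven't confirmed the exact clock on the x4500):

```python
# Back-of-the-envelope PCI-X bandwidth check.
# Assumption: 64-bit PCI-X at 133 MHz (adjust if the actual clock differs).
bus_width_bits = 64
clock_hz = 133 * 10**6

# Theoretical peak: bytes per cycle * cycles per second.
theoretical_mb_s = bus_width_bits / 8 * clock_hz / 10**6  # ~1064 MB/s

# Observed numbers from the tests quoted below.
observed = {"read": 760, "read/write": 850}
for workload, mb_s in observed.items():
    efficiency = mb_s / theoretical_mb_s
    print(f"{workload}: {mb_s} MB/s = {efficiency:.0%} of theoretical peak")
```

So 760-850 MB/s works out to roughly 70-80% of the theoretical peak, which is a plausible ceiling once protocol and arbitration overhead are accounted for.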

Using an x4600 as the target system and two initiators, I was able to 
practically saturate a 1.2 GB/s 4x SAS wide port.
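In case the 1.2 GB/s figure looks odd: it's the usable bandwidth of a 4-lane wide port, assuming 3 Gb/s SAS-1 links with standard 8b/10b encoding (SAS-2 at 6 Gb/s would double it):

```python
# Usable bandwidth of a 4x SAS wide port.
# Assumption: SAS-1 links at 3 Gb/s per lane with 8b/10b encoding.
lanes = 4
line_rate_bps = 3.0 * 10**9   # raw line rate per lane
encoding_efficiency = 8 / 10  # 8b/10b: 10 bits on the wire per 8 data bits

per_lane_mb_s = line_rate_bps * encoding_efficiency / 8 / 10**6
wide_port_mb_s = per_lane_mb_s * lanes

print(f"per lane: {per_lane_mb_s:.0f} MB/s, 4x wide port: {wide_port_mb_s:.0f} MB/s")
# per lane: 300 MB/s, 4x wide port: 1200 MB/s
```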

David

Sumit Gupta wrote:
> Hi Bob
> 
> For a storage subsystem, COMSTAR itself is only one component. Other 
> components (HBA, PCI, CPU, backend storage, etc.) each influence 
> performance.  But here is what we have found so far. Our reference 
> platform is the x4500 (Thumper), and we use ZFS raidz zvols as our 
> backend. The protocol for these tests is FC, using QLogic 4 Gb HBAs.
> I should also note that most of this was done as part of our 
> performance regression testing, whose purpose is to make sure that 
> the bug fixes we make don't regress performance. There is no formal 
> performance data yet.
> 
> Max IOPS (hitting the backend cache):
>     90K using one HBA (2 ports), 130K using 2 HBAs (4 ports)
> 
> Random IOPS (not hitting the backend cache at all). In this case the 
> backend is just all 47 disks with no filesystem on them. Note that 
> this also requires some changes that went into build 96 of SXCE.
>     4.3K (limited by backend seek time)
> 
> Max throughput (large block I/O)
>     760 MB/s (one HBA, 2 ports, reads)
>     850 MB/s (one HBA, 2 ports, read/write)
> 
> Sumit
> 
> Bob Friesenhahn wrote:
>> COMSTAR seems quite promising and useful.  Is there any benchmark data 
>> yet comparing COMSTAR + ZFS to a normal hardware-based RAID array? 
>> For example, test performance with Fibre Channel direct to a RAID 
>> array, and then (using the same array hardware for storage) with 
>> COMSTAR + ZFS?
>>
>> It would be useful to know whether COMSTAR + ZFS boosts performance 
>> (due to ZFS's RAID smarts, a much more powerful CPU, and massive 
>> caching in RAM), or whether going through the Solaris kernel and ZFS 
>> adds latency that hurts performance.
>>
>> Bob
>> ======================================
>> Bob Friesenhahn
>> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
>> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
>>
>> _______________________________________________
>> storage-discuss mailing list
>> [email protected]
>> http://mail.opensolaris.org/mailman/listinfo/storage-discuss
> 

-- 
David Hollister
Sun Microsystems
x41028/+1 303 395 4273
