On Thursday, 15.09.2011, 21:23 +0100, --[ UxBoD ]-- wrote:

        Hello all,
        


Hi,



        we are about to configure a new storage system that utilizes the
        Nexenta OS with sparsely allocated ZVOLs. We wish to present 4TB of
        storage to a Linux system that has four NICs available to it. We are
        unsure whether to present one large ZVOL or four smaller ones to
        maximize the use of the NICs available to us. We have set rr_min_io
        to 100, which we have found offers a good level of performance. This
        raises an interesting question, though: the multipath.conf man page
        says that the rr_min_io parameter is the number of IOs across the
        whole path group before a switch is made to the next path. What
        constitutes a single IO operation? A user opens a file for read
        access: one IO to open the file, several IOs to read the contents,
        and another to close it? Does each of those SCSI operations happen
        on the same path, i.e. on the same block device? If a second user
        comes along and requests data from the same block device, does that
        happen on the same path or the next one in the path group? We
        imagine that they will all happen on the same path until rr_min_io
        is reached and it switches over to the next path.
        


I can only speak for our own tests, which showed that link saturation can only 
be achieved with streaming I/O in huge blocks (e.g. dd [...] bs=[~1G] 
oflag=direct). Switching paths really fast (rr_min_io below ~16) helps to get 
better balanced throughput, but NOT up to [nics] x [linkspeed]; 
[nics] x [linkspeed] x 2/3 isn't really exact either, but fits better. Indeed, 
some bad setups showed less than 1 x [linkspeed] ...
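
For reference, roughly the kind of commands and settings I mean; the device 
name is just an example and the multipath.conf fragment only sketches where 
rr_min_io lives, it's not a complete config:

    # streaming read over the multipath device (device name is an example)
    dd if=/dev/mapper/mpatha of=/dev/null bs=1G count=16 iflag=direct

    # streaming write -- destroys data on the LUN, scratch volumes only
    dd if=/dev/zero of=/dev/mapper/mpatha bs=1G count=16 oflag=direct

    # /etc/multipath.conf fragment (illustrative values)
    defaults {
        path_grouping_policy    multibus
        path_selector           "round-robin 0"
        rr_min_io               16
    }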

We've tested different setups with up to 8 NICs over two zoned (virtually four) 
switches.
Short list of summaries:
- Channel bonding (802.3ad) does not really help to get more throughput, even 
in a multi-server setup. It's only failover stuff.
- More than 2 NICs on the initiator side are not helpful; multipath context 
switching and IRQ consumption are expensive.
- 9k Ethernet frames give (depending on workload) roughly 10% more throughput 
(reliability might be a better word here).
- Having offloading (TSO, LRO, scatter-gather) seems to be the right way, but 
didn't show any benefit (Intel 82576 4-port).
- Latency is way more important than raw throughput capabilities (getting the 
MSS and related settings right really helps).
- Four paths over only two physical links (alias IPs) show better performance 
than two paths on two links; don't know why... (a minimal initiator-side 
sketch follows below the list)
- A LUN behind two portals performs better than behind one (same hardware 
setup, tested with two paths on two links).
- STGT outperforms ietd, which outperforms Nexenta.
- Saving IRQs at the target is a plus; controllers in RAID mode outperform 
JBOD setups.
- Random I/O throughput comes at a cost. There's nothing better than swept 
volume, except more swept volume ;)
- Obvious: SAS drives are better than SATA (never tested a full SSD / EFD 
setup); SSD caches *might* be useful.
- Obvious: RAID 10 is better than RAID 5.
- A controller BBU and write caches are a big plus.
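
To make the "four paths over two links" and jumbo frame points a bit more 
concrete, this is roughly what the initiator side looks like; interface names 
and the portal address are invented for the example, and the target itself has 
to publish the extra (alias) addresses as additional portals:

    # 9k frames on the two initiator ports (switch and target must match)
    ip link set dev eth2 mtu 9000
    ip link set dev eth3 mtu 9000

    # the target announces four portal addresses (two per physical target
    # link); logging in to all of them gives four sessions, i.e. four
    # multipath paths
    iscsiadm -m discovery -t sendtargets -p 192.168.10.101:3260
    iscsiadm -m node --loginall=all

    # check that all four sessions and paths came up
    iscsiadm -m session
    multipath -ll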


I know, every single statement depends on the hardware and surrounding 
infrastructure and needs to be discussed in endless papers. But back to your 
setup: personally, I'd really like to continue with a Nexenta setup (hey, it's 
a fork of the ancient Solaris goddess ;), but after finally switching over to 
STGT on identical hardware, we were able to roughly double the workload.
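
In case you want to give STGT a try, a target definition is only a few lines; 
the IQN, backing device and initiator address below are made-up examples, not 
our config:

    # /etc/tgt/targets.conf -- illustrative IQN, backing device and ACL
    <target iqn.2011-09.com.example:storage.lun0>
        backing-store /dev/vg0/lun0
        # optional ACL; hypothetical initiator address
        initiator-address 192.168.10.2
    </target>

    # pick up the new definition
    tgt-admin --update ALL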

BTW, multipathd -k and then "list config" shows really useful presets for 
different target vendors.
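
I.e. from the interactive console, something like:

    # multipathd -k
    multipathd> list config
    multipathd> quit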

One additional warning: Nexenta doesn't play well with Areca controllers. Under 
heavy load the driver panics. To make it worse, some kernel parts (especially 
the IPMI watchdog functions) keep working.

Hopefully this helps (apologies if this has gone too far off-topic).

Cheers,

Stephan






        We are trying to squeeze out the maximum performance from our system
        and we are unable to max out our 4 x 1GbE interfaces. Any thoughts on
        how we can improve our performance?
        --
        Thanks, Phil
