On 01/20/12 15:27, Nikolay Denev wrote:

On Jan 20, 2012, at 2:31 PM, Alexander Motin wrote:

On 01/20/12 14:13, Nikolay Denev wrote:
On Jan 20, 2012, at 1:30 PM, Alexander Motin wrote:
On 01/20/12 13:08, Nikolay Denev wrote:
On 20.01.2012, at 12:51, Alexander Motin <m...@freebsd.org> wrote:

On 01/20/12 10:09, Nikolay Denev wrote:
Another thing I've observed is that active/active probably only makes sense if you are accessing a single LUN.
In my tests, where I have 24 LUNs that form 4 vdevs in a single zpool, the highest performance was achieved when I split the active paths among the controllers installed in the server importing the pool (basically "gmultipath rotate $LUN" in rc.local for half of the paths).
Using active/active in this situation resulted in fluctuating performance.
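
A minimal sketch of that rc.local approach (the LUN_* names and the odd/even split are placeholders; any scheme that moves half of the LUNs' active paths to the other controller would do):

  # /etc/rc.local: rotate the active path of every second multipath
  # device so the load is split across the two controllers.
  for i in 1 3 5 7 9 11 13 15 17 19 21 23; do
      gmultipath rotate LUN_${i}
  done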

How big was the fluctuation? Between the speed of one path and the speed of all paths?

Several active/active devices that know nothing about each other will, with some probability, send part of their requests over the same links, while ZFS itself already does some balancing between the vdevs.

I will test in a bit and post results.

P.S.: Is there a way to enable/disable active-active on the fly? I'm
currently re-labeling to achieve that.

No, there is not at the moment. But for experiments you can achieve the same result by manually marking all paths except one as failed. It is not dangerous: if that remaining link fails, all the others will resurrect automatically.

I had to destroy and relabel anyway, since I was not currently using active-active. Here's what I did (maybe a little too verbose):

And now a very naive benchmark:

:~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes transferred in 7.282780 secs (73717855 bytes/sec)
:~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes transferred in 38.422724 secs (13972745 bytes/sec)
:~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes transferred in 10.810989 secs (49659740 bytes/sec)

Now deactivate the alternative paths:
And the benchmark again:

:~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes transferred in 1.083226 secs (495622270 bytes/sec)
:~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes transferred in 1.409975 secs (380766249 bytes/sec)
:~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes transferred in 1.136110 secs (472551848 bytes/sec)

P.S.: The server is running 8.2-STABLE with a dual-port isp(4) card, and is directly connected to a 4Gbps Xyratex dual-controller (active/active) storage array.
All 24 SAS drives are set up as single-disk RAID0 LUNs.

This difference is too huge to be explained by inefficient path utilization.
Could this storage have some per-LUN port/controller affinity that penalizes concurrent access to the same LUN from different paths? Could it be active/active at the port level, but active/passive for each specific LUN? If there really are two controllers inside, they may need to synchronize their caches or bounce requests between them, and that may be expensive.

--
Alexander Motin

Yes, I think that's what's happening. There are two controllers, each with its own CPU and cache, and cache synchronization is enabled.
I will try to test multipath with both paths connected to the same controller (there are two ports on each controller), but that will require remote hands and take some time.

In the meantime I've disabled the writeback cache on the array (this also disables the cache synchronization), and here are the results:

ACTIVE-ACTIVE:

:~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes transferred in 2.497415 secs (214970639 bytes/sec)
:~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes transferred in 1.076070 secs (498918172 bytes/sec)
:~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes transferred in 1.908101 secs (281363979 bytes/sec)

ACTIVE-PASSIVE (half of the paths failed the same way as in the previous email):

:~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes transferred in 0.324483 secs (1654542913 bytes/sec)
:~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes transferred in 0.795685 secs (674727909 bytes/sec)
:~# dd if=/dev/zero of=/tank/TEST bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes transferred in 0.233859 secs (2295702835 bytes/sec)

This increased performance in both cases, probably because writeback caching does nothing for large sequential writes.
Anyway, ACTIVE-ACTIVE is still slower here, but not by that much.

Thank you for the numbers, but I have some doubts about them. 2295702835 bytes/sec is about 18 Gbps; if you have 4 Gbps links, that would need more than four of them, I think.
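
A rough back-of-the-envelope check (assuming roughly 400 MB/s of usable payload bandwidth per 4 Gbps FC link):

  # Throughput of the fastest run, in Gbit/s:
  echo "scale=1; 2295702835 * 8 / 1000000000" | bc    # -> 18.3
  # Number of 4G FC links' worth of bandwidth that represents:
  echo "scale=1; 2295702835 / 400000000" | bc         # -> 5.7

That is more bandwidth than the installed 4 Gbps links could deliver.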

--
Alexander Motin
