On Jan 17, 2013, at 9:35 PM, Thomas Nau <thomas....@uni-ulm.de> wrote:

> Thanks for all the answers (more inline)
> 
> On 01/18/2013 02:42 AM, Richard Elling wrote:
>> On Jan 17, 2013, at 7:04 AM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us 
>> <mailto:bfrie...@simple.dallas.tx.us>> wrote:
>> 
>>> On Wed, 16 Jan 2013, Thomas Nau wrote:
>>> 
>>>> Dear all
>>>> I've a question concerning possible performance tuning for both iSCSI 
>>>> access
>>>> and replicating a ZVOL through zfs send/receive. We export ZVOLs with the
>>>> default volblocksize of 8k to a bunch of Citrix Xen Servers through iSCSI.
>>>> The pool is made of SAS2 disks (11 x 3-way mirrored) plus mirrored STEC 
>>>> RAM ZIL
>>>> SSDs and 128G of main memory
>>>> 
>>>> The iSCSI access pattern (1 hour daytime average) looks like the following
>>>> (Thanks to Richard Elling for the dtrace script)
>>> 
>>> If almost all of the I/Os are 4K, maybe your ZVOLs should use a 
>>> volblocksize of 4K?  This seems like the most obvious improvement.
>> 
>> 4k might be a little small. 8k will have less metadata overhead. In some 
>> cases
>> we've seen good performance on these workloads up through 32k. Real pain
>> is felt at 128k :-)
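
For reference, volblocksize can only be set when a ZVOL is created, so trying
16k or 32k means creating a new volume and migrating the data onto it. A rough
sketch (pool, volume name, and size below are placeholders, not from this
thread):

  # create a test ZVOL with a larger block size
  zfs create -V 100G -o volblocksize=16k tank/testvol16k

  # confirm the property; it cannot be changed after creation
  zfs get volblocksize tank/testvol16k
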
> 
> My only pain so far is the time a send/receive takes without really loading
> the network at all. VM performance is nothing I worry about, as it's pretty
> good. So the key question for me is whether going from 8k to 16k or even 32k
> would help with that problem?

send/receive can bottleneck on the receiving side. Search the archives for
"mbuffer" as a method of buffering on the receive side. In a well-tuned
system, the send will be served from the ARC :-)
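
A common way to wire that up (a sketch only; the host name, pool/volume names,
snapshot, and mbuffer buffer sizes below are illustrative) is to run mbuffer as
a network buffer on both ends:

  # receiving host: listen on a port, buffer up to 1G, feed zfs receive
  mbuffer -I 9090 -s 128k -m 1G | zfs receive -F tank/backup/vol

  # sending host: stream the snapshot into mbuffer over the network
  zfs send tank/vol@snap | mbuffer -O recvhost:9090 -s 128k -m 1G

With enough buffer on the receive side, the bursty ingest of zfs receive is
less likely to stall the sender and leave the network idle.
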
 -- richard

--

richard.ell...@richardelling.com
+1-760-896-4422

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
