On Jan 18, 2013, at 4:40 AM, Jim Klimov <jimkli...@cos.ru> wrote:

> On 2013-01-18 06:35, Thomas Nau wrote:
>>>> If almost all of the I/Os are 4K, maybe your ZVOLs should use a 
>>>> volblocksize of 4K?  This seems like the most obvious improvement.
>>> 
>>> 4k might be a little small. 8k will have less metadata overhead. In
>>> some cases we've seen good performance on these workloads up through
>>> 32k. Real pain is felt at 128k :-)
>> 
>> My only pain so far is the time a send/receive takes without really
>> loading the network at all. VM performance is nothing I worry about at
>> all, as it's pretty good. So the key question for me is whether going
>> from 8k to 16k or even 32k would help with that problem?
> 
> I would guess that increasing the block size would, on one hand, improve
> your reads - due to more user data being stored contiguously as part of
> one ZFS block - so sending the backup streams should be more about
> reading and sending data and less about random seeking.

There is too much caching in the datapath to make a broad statement stick.
Empirical measurement with your workload is needed to pick the winner.
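
If it helps, the kind of comparison I have in mind is roughly the sketch
below. It is untested, the pool and snapshot names are placeholders, and it
assumes each test zvol has already been populated with a copy of the same
guest image and snapshotted:

#!/usr/bin/env python
# Rough sketch, not tested: time a full "zfs send" of the same data copied
# into zvols created with different volblocksize values. Pool and snapshot
# names are placeholders -- adjust for your environment.
import subprocess
import time

POOL = "tank"                    # placeholder pool name
SIZES = ["8k", "16k", "32k"]     # volblocksize candidates to compare

def time_send(snapshot):
    """Return wall-clock seconds for a full send, discarding the stream."""
    start = time.time()
    with open("/dev/null", "wb") as sink:
        subprocess.check_call(["zfs", "send", snapshot], stdout=sink)
    return time.time() - start

for size in SIZES:
    snap = "%s/bench-%s@bench" % (POOL, size)   # e.g. tank/bench-8k@bench
    print("%s: %.1f seconds" % (size, time_send(snap)))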

> On the other hand, you may well pay for this with more read-modify-writes
> (when larger ZFS blocks are partially updated by the smaller clusters of
> the VM's filesystem) while the overall system is running and used for its
> primary purpose. However, since the guest FS is likely to store files
> larger than the minimal cluster size, the whole larger backend block would
> likely be rewritten anyway...

For many ZFS implementations, RMW for zvols is the norm.

> 
> So, I think, this is something an experiment can show you: whether the
> gain for backup (and primary-job) reads outweighs the possible degradation
> of primary-job writes.
> 
> As for the experiment, I guess you can always make a ZVOL with a different
> volblocksize, dd data into it from the production dataset's snapshot, and
> attach the VM (or its clone) to the newly created copy of its disk image.

In my experience, it is very hard to recreate in the lab the environments
found in real life. dd, in particular, will skew the results a bit because
it writes in LBA order for zvols, not the creation order seen in the real
world.
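
For what it's worth, the copy step Jim describes might look roughly like the
sketch below. The names, sizes, and device paths are placeholders (the
/dev/zvol/rdsk layout shown is the illumos one, and it assumes the
snapshot's device node is exposed), and the LBA-order caveat above applies
to the result:

#!/usr/bin/env python
# Sketch of the experiment described above: copy a snapshot of the
# production zvol into a new zvol created with a different volblocksize.
# All names, sizes, and device paths are placeholders.
import subprocess

SRC_SNAP = "tank/vm-disk0@baseline"   # placeholder: snapshot of production zvol
DST_ZVOL = "tank/vm-disk0-32k"        # placeholder: test zvol to compare against
VOLSIZE  = "100G"                     # must match the source zvol's volsize

# volblocksize can only be set at creation time, so make a fresh zvol.
subprocess.check_call(["zfs", "create", "-V", VOLSIZE,
                       "-o", "volblocksize=32k", DST_ZVOL])

# Block-copy the snapshot into the new zvol. Note: this lays the data out
# in LBA order, which is not how a live VM would have written it.
subprocess.check_call(["dd",
                       "if=/dev/zvol/rdsk/" + SRC_SNAP,
                       "of=/dev/zvol/rdsk/" + DST_ZVOL,
                       "bs=1024k"])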

That said, trying to get high performance out of HDDs is an exercise like
fighting the tides :-)
 -- richard

--

richard.ell...@richardelling.com
+1-760-896-4422

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
