From: Richard Elling [mailto:richard.ell...@gmail.com]
Sent: Saturday, January 19, 2013 5:39 PM
> the space allocation more closely resembles a variant of mirroring,
> like some vendors call RAID-1E
Awesome, thank you. :-)
___
zfs-discuss mailing list
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
> If almost all of the I/Os are 4K, maybe your ZVOLs should use a
> volblocksize of 4K? This seems like the most obvious improvement.
Oh, I forgot to mention - The above logic
On Jan 19, 2013, at 7:16 AM, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
>> If almost all of the I/Os are
On 2013-01-19 23:39, Richard Elling wrote:
> This is not quite true for raidz. If there is a 4k write to a raidz
> comprised of 4k-sector disks, then there will be one data and one
> parity block. There will not be 4 data + 1 parity with 75% space
> wastage. Rather, the space allocation more closely
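The allocation Richard describes can be checked with quick arithmetic. A rough sketch follows; this is a simplified model for illustration, not the actual ZFS allocator, and the 5-disk pool width is an assumed example:

```python
import math

def raidz1_sectors(data_sectors, ndisks):
    """Rough model of raidz1 allocation for one block (a sketch, not
    real ZFS code): one parity sector per stripe of up to (ndisks - 1)
    data sectors, with the total padded to a multiple of
    (parity + 1) = 2 so a free never leaves a 1-sector hole."""
    parity = math.ceil(data_sectors / (ndisks - 1))
    total = data_sectors + parity
    return total + (total % 2)  # pad to an even sector count

# A 4k write (one 4k sector) on a hypothetical 5-disk raidz1:
# 1 data + 1 parity = 2 sectors, i.e. mirror-like 50% overhead,
# not 4 data + 1 parity with 75% wastage.
print(raidz1_sectors(1, 5))  # -> 2
```

The same model shows why small blocks on raidz behave like RAID-1E-style mirroring: for a one-sector write the parity is a full copy of the data.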
On 2013-01-18 06:35, Thomas Nau wrote:
> If almost all of the I/Os are 4K, maybe your ZVOLs should use a
> volblocksize of 4K? This seems like the most obvious improvement.
4k might be a little small. 8k will have less metadata overhead. In some cases
we've seen good performance on these
On Jan 17, 2013, at 9:35 PM, Thomas Nau thomas@uni-ulm.de wrote:
> Thanks for all the answers (more inline)
> On 01/18/2013 02:42 AM, Richard Elling wrote:
>> On Jan 17, 2013, at 7:04 AM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:
>>> On Wed, 16 Jan
On Jan 18, 2013, at 4:40 AM, Jim Klimov jimkli...@cos.ru wrote:
> On 2013-01-18 06:35, Thomas Nau wrote:
>> If almost all of the I/Os are 4K, maybe your ZVOLs should use a
>> volblocksize of 4K? This seems like the most obvious improvement.
4k might be a little small. 8k will have less metadata
On Wed, 16 Jan 2013, Thomas Nau wrote:
> Dear all
> I've a question concerning possible performance tuning for both iSCSI access
> and replicating a ZVOL through zfs send/receive. We export ZVOLs with the
> default volblocksize of 8k to a bunch of Citrix Xen Servers through iSCSI.
> The pool is made of
On 2013-01-17 16:04, Bob Friesenhahn wrote:
> If almost all of the I/Os are 4K, maybe your ZVOLs should use a
> volblocksize of 4K? This seems like the most obvious improvement.
Matching the volume block size to what the clients are actually using
(due to their filesystem configuration) should
On Jan 17, 2013, at 7:04 AM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:
> On Wed, 16 Jan 2013, Thomas Nau wrote:
>> Dear all
>> I've a question concerning possible performance tuning for both iSCSI access
>> and replicating a ZVOL through zfs send/receive. We export ZVOLs with the
>> default
On Jan 17, 2013, at 8:35 AM, Jim Klimov jimkli...@cos.ru wrote:
> On 2013-01-17 16:04, Bob Friesenhahn wrote:
>> If almost all of the I/Os are 4K, maybe your ZVOLs should use a
>> volblocksize of 4K? This seems like the most obvious improvement.
> Matching the volume block size to what the clients
Thanks for all the answers (more inline)
On 01/18/2013 02:42 AM, Richard Elling wrote:
> On Jan 17, 2013, at 7:04 AM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:
>> On Wed, 16 Jan 2013, Thomas Nau wrote:
>>> Dear all
>>> I've a question concerning possible
Dear all
I've a question concerning possible performance tuning for both iSCSI access
and replicating a ZVOL through zfs send/receive. We export ZVOLs with the
default volblocksize of 8k to a bunch of Citrix Xen Servers through iSCSI.
The pool is made of SAS2 disks (11 x 3-way mirrored) plus
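For reference, the setup described above can be sketched in zfs commands. Pool, volume, snapshot, and host names here are hypothetical; note that volblocksize can only be set when the zvol is created, so changing it from the 8k default means creating a new volume and migrating the data:

```shell
# Create a zvol whose volblocksize matches the guests' 4k I/O size
# (hypothetical pool/volume names; size is an example).
zfs create -V 500G -o volblocksize=4k tank/xenvol

# Replicate the zvol incrementally via snapshots and send/receive.
zfs snapshot tank/xenvol@snap2
zfs send -i tank/xenvol@snap1 tank/xenvol@snap2 | \
  ssh backuphost zfs receive backup/xenvol
```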