From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Stephan Budach
I am always experiencing chksum errors while scrubbing my zpool(s), but
I never experienced chksum errors while resilvering. Does anybody know
why that would be?
When you …
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov
And regarding the considerable activity - AFAIK there is little way
for ZFS to reliably read and test TXGs newer than X
My understanding is like this: When you make a snapshot, …
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Nico Williams
I've wanted a system where dedup applies only to blocks being written
that have a good chance of being dups of others.
I think one way to do this would be to keep a scalable …
So ... The way things presently are, ideally you would know in advance what
stuff you were planning to write that has duplicate copies. You could enable
dedup, then write all the stuff that's highly duplicated, then turn off dedup
and write all the non-duplicate stuff. Obviously, however, …
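A minimal sketch of that workflow, assuming only that dedup is a per-dataset property affecting blocks written while it is enabled (which it is). The dataset name tank/data and the /staging paths are hypothetical placeholders:

# Hedged sketch: toggle dedup around the duplicate-heavy writes.
# "tank/data" and the /staging paths are hypothetical placeholders.
import shutil
import subprocess

def set_dedup(dataset, enabled):
    """Flip the dedup property on a ZFS dataset."""
    value = "on" if enabled else "off"
    subprocess.run(["zfs", "set", f"dedup={value}", dataset], check=True)

set_dedup("tank/data", True)
shutil.copytree("/staging/duplicate-heavy", "/tank/data/dups")    # written deduped
set_dedup("tank/data", False)
shutil.copytree("/staging/unique", "/tank/data/unique")           # written normally

Note that dedup= only affects blocks written while it is on; flipping the property never retroactively deduplicates (or un-deduplicates) data already on disk.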
From: Richard Elling [mailto:richard.ell...@gmail.com]
Sent: Saturday, January 19, 2013 5:39 PM
the space allocation more closely resembles a variant of mirroring, like what some vendors call RAID-1E
Awesome, thank you. :-)
Bloom filters are very small, that's the difference. You might only need a
few bits per block for a Bloom filter. Compare to the size of a DDT entry.
A Bloom filter could be cached entirely in main memory.
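To put rough numbers on "a few bits per block": the standard Bloom filter sizing formula gives, for a target false-positive rate p, about -ln(p)/(ln 2)^2 bits per tracked entry. A back-of-the-envelope sketch (the rates chosen are illustrative, not from the thread):

# Bloom filter sizing: bits per tracked block for false-positive rate p.
# Standard formula: m/n = -ln(p) / (ln 2)^2.
import math

def bits_per_entry(p):
    return -math.log(p) / (math.log(2) ** 2)

for p in (0.1, 0.01, 0.001):
    b = bits_per_entry(p)
    print(f"p = {p}: {b:4.1f} bits (~{b / 8:.1f} bytes) per block")

# Roughly 4.8 / 9.6 / 14.4 bits per block -- versus a DDT entry,
# which costs a few hundred bytes per block.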
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Nico Williams
To decide if a block needs dedup, one would first check the Bloom filter; if the block is in it, use the dedup code path, else take the non-dedup code path and insert the block into the Bloom filter.
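A minimal Python sketch of that decision (plain user-space code, not ZFS internals; the filter parameters and the write callbacks are illustrative):

# Sketch of the write-path decision: a block whose checksum is already
# in the Bloom filter is probably a duplicate and takes the dedup path;
# otherwise it takes the plain path and is remembered in the filter.
import hashlib

class BloomFilter:
    def __init__(self, num_bits=8 * 1024 * 1024, num_hashes=7):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray((num_bits + 7) // 8)

    def _positions(self, data):
        # Double hashing: derive all k bit positions from one SHA-256.
        d = hashlib.sha256(data).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big")
        return ((h1 + i * h2) % self.num_bits for i in range(self.num_hashes))

    def add(self, data):
        for pos in self._positions(data):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, data):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(data))

def write_block(block, bloom, dedup_write, plain_write):
    if block in bloom:
        dedup_write(block)    # likely duplicate: worth the DDT lookup
    else:
        bloom.add(block)      # first sighting: note it, skip the DDT
        plain_write(block)

False positives are harmless here: a few unique blocks take the slower dedup path, but nothing is ever wrongly deduplicated, because the DDT lookup remains authoritative.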
On 19 January, 2013 - Jim Klimov sent me these 2,0K bytes:
Hello all,
While revising my home NAS which had dedup enabled before I gathered
that its RAM capacity was too puny for the task, I found that there is
some deduplication among the data bits I uploaded there (makes sense,
since it …
On 20.01.13 16:51, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Stephan Budach
I am always experiencing chksum errors while scrubbing my zpool(s), but
I never experienced …
On 2013-01-20 19:55, Tomas Forsman wrote:
On 19 January, 2013 - Jim Klimov sent me these 2,0K bytes:
Hello all,
While revising my home NAS which had dedup enabled before I gathered
that its RAM capacity was too puny for the task, I found that there is
some deduplication among the data bits …
On 2013-01-20 16:56, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov
And regarding the considerable activity - AFAIK there is little way
for ZFS to reliably read and …
Did you try replacing the patch cables and/or SFPs on the path
between servers and disks, or at least cleaning them? A speck
of dust (or, God forbid, a smudge of skin oil from a fingerprint)
caught between the two fiber ends might cause any kind
of signal weirdness from time to time...
On Jan 20, 2013, at 8:16 AM, Edward Harvey imaginat...@nedharvey.com wrote:
But, by talking about it, we're just smoking pipe dreams. Cuz we all know
zfs is developmentally challenged now. But one can dream...
I disagree that ZFS is developmentally challenged. There is more development now …
On 2013-01-20 17:16, Edward Harvey wrote:
But, by talking about it, we're just smoking pipe dreams. Cuz we all know zfs
is developmentally challenged now. But one can dream...
I beg to disagree. While most of my contribution has so far been about
learning stuff and sharing it with others, as well …
On Sun, Jan 20, 2013 at 6:19 PM, Richard Elling richard.ell...@gmail.com wrote:
On Jan 20, 2013, at 8:16 AM, Edward Harvey imaginat...@nedharvey.com
wrote:
But, by talking about it, we're just smoking pipe dreams. Cuz we all
know zfs is developmentally challenged now. But one can dream...
On Jan 20, 2013, at 4:51 PM, Tim Cook t...@cook.ms wrote:
On Sun, Jan 20, 2013 at 6:19 PM, Richard Elling richard.ell...@gmail.com wrote:
On Jan 20, 2013, at 8:16 AM, Edward Harvey imaginat...@nedharvey.com wrote:
But, by talking about it, we're just smoking pipe dreams. Cuz we all know …
On 21.01.13 00:21, Jim Klimov wrote:
Did you try replacing the patch cables and/or SFPs on the path
between servers and disks, or at least cleaning them? A speck
of dust (or, God forbid, a smudge of skin oil from a fingerprint)
caught between the two fiber ends might cause any kind
of …