Hi all,
I am running S11 on a Dell PE650. It has 5 zpools attached that are made
out of 240 drives, connected via fibre. On Thursday, all of a sudden,
two of the three zpools on one FC channel showed numerous errors, and one
of them showed this:
root@solaris11a:~# zpool status vsmPool01
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
If almost all of the I/Os are 4K, maybe your ZVOLs should use a
volblocksize of 4K? This seems like the most obvious improvement.
Oh, I forgot to mention - The above logic
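For reference, Bob's suggestion would look something like the sketch below. Note that volblocksize can only be set when a zvol is created, so an existing zvol has to be recreated and the data copied over; the pool and zvol names here are made up for illustration.

```shell
# Create a new zvol with a 4K volume block size to match the 4K I/O
# pattern (volblocksize is immutable after creation).
zfs create -V 100G -o volblocksize=4k tank/vol4k

# Verify the property on the new zvol:
zfs get volblocksize tank/vol4k
```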
Hi,
I always experience checksum errors while scrubbing my zpool(s), but I
have never seen checksum errors while resilvering. Does anybody know
why that would be? This happens on all of my servers, Sun Fire 4170M2,
Dell PE 650 and on any FC storage that I have.
Currently I had a major
Hello all,
While revising my home NAS which had dedup enabled before I gathered
that its RAM capacity was too puny for the task, I found that there is
some deduplication among the data I uploaded there (which makes sense,
since it holds backups of many of the computers I've worked on - some
of
On Sat, 19 Jan 2013, Stephan Budach wrote:
Now, this zpool is made of 3-way mirrors and currently 13 out of 15 vdevs are
resilvering (which they had gone through yesterday as well) and I never got
any error while resilvering. I have been all over the setup to find any
glitch or bad part, but
On 2013-01-19 18:17, Bob Friesenhahn wrote:
Resilver may in fact be just verifying that the pool disks are coherent
via metadata. This might happen if the fiber channel is flapping.
Correction: that (verification) would be scrubbing ;)
The way I get it, resilvering is related to scrubbing
Am 19.01.13 18:17, schrieb Bob Friesenhahn:
On Sat, 19 Jan 2013, Stephan Budach wrote:
Now, this zpool is made of 3-way mirrors and currently 13 out of 15
vdevs are resilvering (which they had gone through yesterday as well)
and I never got any error while resilvering. I have been all over
On Sat, 19 Jan 2013, Jim Klimov wrote:
On 2013-01-19 18:17, Bob Friesenhahn wrote:
Resilver may in fact be just verifying that the pool disks are coherent
via metadata. This might happen if the fiber channel is flapping.
Correction: that (verification) would be scrubbing ;)
I don't think
On Sat, 19 Jan 2013, Stephan Budach wrote:
Just ignore the timestamp, as it seems that the time is not set correctly,
but the dates match my two issues from today and Thursday, which accounts for
three days. I didn't catch that before, but it seems to clearly indicate a
problem with the FC
On 2013-01-19 20:08, Bob Friesenhahn wrote:
On Sat, 19 Jan 2013, Jim Klimov wrote:
On 2013-01-19 18:17, Bob Friesenhahn wrote:
Resilver may in fact be just verifying that the pool disks are coherent
via metadata. This might happen if the fiber channel is flapping.
Correction: that
Am 19.01.13 20:18, schrieb Bob Friesenhahn:
On Sat, 19 Jan 2013, Stephan Budach wrote:
Just ignore the timestamp, as it seems that the time is not set
correctly, but the dates match my two issues from today and Thursday,
which accounts for three days. I didn't catch that before, but it
On 2013-01-19 20:23, Jim Klimov wrote:
On 2013-01-19 20:08, Bob Friesenhahn wrote:
On Sat, 19 Jan 2013, Jim Klimov wrote:
On 2013-01-19 18:17, Bob Friesenhahn wrote:
Resilver may in fact be just verifying that the pool disks are coherent
via metadata. This might happen if the fiber channel
On Jan 19, 2013, at 7:16 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
If almost all of the I/Os are
On 2013-01-19 23:39, Richard Elling wrote:
This is not quite true for raidz. If there is a 4k write to a raidz
comprised of 4k sector disks, then
there will be one data and one parity block. There will not be 4 data +
1 parity with 75%
space wastage. Rather, the space allocation more closely
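Richard's point can be illustrated numerically. The sketch below is my own simplified model of raidz space accounting (not the ZFS source): data is split into sectors, each row of data sectors gets nparity parity sectors, and the total allocation is rounded up to a multiple of (nparity + 1) sectors.

```python
import math

def raidz_alloc_sectors(data_bytes, ndisks, nparity=1, sector=4096):
    # Simplified model of raidz allocation (illustration only):
    # data sectors, plus nparity parity sectors per row of up to
    # (ndisks - nparity) data sectors, with the total rounded up
    # to a multiple of (nparity + 1).
    data = math.ceil(data_bytes / sector)
    rows = math.ceil(data / (ndisks - nparity))
    total = data + rows * nparity
    mult = nparity + 1
    return mult * math.ceil(total / mult)

# A 4 KiB write to a 5-disk raidz1 of 4 KiB-sector drives:
# 1 data sector + 1 parity sector = 2 sectors (8 KiB on disk),
# i.e. 50% parity overhead -- not "4 data + 1 parity" with 75% waste.
print(raidz_alloc_sectors(4096, ndisks=5))       # -> 2
print(raidz_alloc_sectors(4 * 4096, ndisks=5))   # -> 6 (padding sector)
```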
I've wanted a system where dedup applies only to blocks being written
that have a good chance of being dups of others.
I think one way to do this would be to keep a scalable Bloom filter
(on disk) into which one inserts block hashes.
To decide if a block needs dedup one would first check the
Bloom filters are a great fit for this :-)
-- richard
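Nico's idea can be sketched as a toy: keep a Bloom filter over block hashes, write a block normally the first time its hash is seen, and only bother with a dedup-table lookup when the filter says the hash has (probably) been seen before. All names and sizes here are illustrative, not a proposal for an on-disk format.

```python
import hashlib

class BloomFilter:
    def __init__(self, nbits=1 << 20, nhashes=4):
        self.nbits, self.nhashes = nbits, nhashes
        self.bits = bytearray(nbits // 8)

    def _positions(self, key):
        # Derive nhashes bit positions from one SHA-256 of the key.
        d = hashlib.sha256(key).digest()
        for i in range(self.nhashes):
            yield int.from_bytes(d[4 * i:4 * i + 4], 'big') % self.nbits

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, key):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))

def should_dedup(bloom, block):
    # First sight: remember the hash, write the block normally.
    # Seen before (or a false positive): worth a dedup-table lookup.
    h = hashlib.sha256(block).digest()
    if h in bloom:
        return True
    bloom.add(h)
    return False
```

A false positive only costs one unnecessary lookup; blocks written exactly once never touch the dedup table at all, which is the point of the scheme.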
On Jan 19, 2013, at 5:59 PM, Nico Williams n...@cryptonector.com wrote:
I've wanted a system where dedup applies only to blocks being written
that have a good chance of being dups of others.
I think one way to do this would be to