Peter Taps wrote:
Hi Eric,
Thank you for your help. At least one part is clear now.
I am still confused about how the system is still functional after one disk
fails.
Consider my earlier example of a 3-disk zpool configured for raidz1. To keep it
simple, let's not consider block sizes.
Someone posted about CERN having a bad network card which injected faulty bits
into the data stream, and ZFS detected it because of its end-to-end checksums.
Does anyone have more information on this?
--
This message posted from opensolaris.org
___
Hi,
Someone posted about CERN having a bad network card which injected faulty
bits into the data stream.
I've regularly heard people mention data corruption in the network layer here -
and at other physics sites like FNAL and SLAC - but I don't have details of any
specific incident. We do see our
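The end-to-end idea is that the checksum is computed when the block is written and verified against the data as actually read back, so corruption introduced anywhere along the path (disk, cable, HBA, or a flaky network card) shows up as a mismatch. Here is a toy Python sketch of that, loosely modeled on ZFS's fletcher-style block checksum; this is an illustration only, not ZFS's actual fletcher4 implementation:

```python
# Toy end-to-end checksum sketch (NOT ZFS's real code): four running
# accumulators over 32-bit words, in the spirit of fletcher4.

def fletcher4(data: bytes) -> tuple:
    """Simplified fletcher-style checksum over little-endian 32-bit words."""
    a = b = c = d = 0
    data = data + b"\0" * (-len(data) % 4)   # pad to a word boundary
    for i in range(0, len(data), 4):
        word = int.from_bytes(data[i:i+4], "little")
        a = (a + word) & 0xFFFFFFFFFFFFFFFF
        b = (b + a) & 0xFFFFFFFFFFFFFFFF
        c = (c + b) & 0xFFFFFFFFFFFFFFFF
        d = (d + c) & 0xFFFFFFFFFFFFFFFF
    return (a, b, c, d)

block = b"some application data"
stored_checksum = fletcher4(block)       # computed at write time

# A bad NIC flips one bit somewhere in transit:
corrupted = bytearray(block)
corrupted[5] ^= 0x10

assert fletcher4(block) == stored_checksum              # clean read verifies
assert fletcher4(bytes(corrupted)) != stored_checksum   # corruption detected
```

Because the checksum is stored apart from the data (in the parent block pointer) and checked at read time, it catches corruption regardless of which layer introduced it.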
On Wed, Aug 11, 2010 at 12:57 AM, Peter Taps ptr...@yahoo.com wrote:
Hi Eric,
Thank you for your help. At least one part is clear now.
I still am confused about how the system is still functional after one disk
fails.
Consider my earlier example of 3 disks zpool configured for raidz-1. To
On Tue, 10 Aug 2010, seth keith wrote:
# zpool status
pool: brick
state: UNAVAIL
status: One or more devices could not be used because the label is missing
or invalid. There are insufficient replicas for the pool to continue
functioning.
action: Destroy and re-create the pool
Erik Trimble wrote:
On 8/10/2010 9:57 PM, Peter Taps wrote:
Hi Eric,
Thank you for your help. At least one part is clear
now.
I still am confused about how the system is still
functional after one disk fails.
Consider my earlier example of 3 disks zpool
configured for raidz-1. To
On Tue, August 10, 2010 23:13, Ian Collins wrote:
On 08/11/10 03:45 PM, David Dyer-Bennet wrote:
cannot receive incremental stream: most recent snapshot of
bup-wrack/fsfs/zp1/ddb does not
match incremental source
That last error occurs if the snapshot exists but has changed; it has
been
On Tue, August 10, 2010 16:41, Dave Pacheco wrote:
David Dyer-Bennet wrote:
If that turns out to be the problem, that'll be annoying to work around
(I'm making snapshots every two hours and deleting them after a couple of
weeks). Locks between admin scripts rarely end well, in my
Just wanted to quickly post a bit of closure to this thread...
Most of the "import taking too long" threads I've found on the list tend to
fade out without any definitive answer as to what went wrong. I needed
something a bit more concrete to make me happy.
After zfs send'ing everything to a
Hello,
In OpenSolaris b111, with autoreplace=on and a pool without spares,
ZFS is not kicking off the resilver after a faulty disk is replaced and
shows up with the same device name, even after waiting several
minutes. The workaround is to do a manual 'zpool replace', which returns
the following:
#
On Tue, Aug 10 at 21:57, Peter Taps wrote:
Hi Eric,
Thank you for your help. At least one part is clear now.
I still am confused about how the system is still functional after one disk
fails.
The data for any given sector striped across all drives can be thought
of as:
A+B+C = P
where
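The formula above is why the pool stays functional after one disk fails: any one missing column can be recomputed from the surviving columns plus the parity. Here is a minimal Python sketch of single-parity reconstruction; it uses XOR as the parity operation (the first RAID-Z parity column is XOR-based), and the names A, B, C, P simply mirror the formula above:

```python
# Sketch of raidz1-style single-parity reconstruction. P is the XOR of
# all data columns; losing any ONE column leaves enough information to
# rebuild it from the others.
from functools import reduce

def parity(*columns: bytes) -> bytes:
    """XOR the given columns together byte-by-byte."""
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*columns))

# One stripe's worth of data on three disks, plus the parity column:
A, B, C = b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"
P = parity(A, B, C)

# The disk holding B dies. XOR the survivors with the parity to rebuild it:
B_rebuilt = parity(A, C, P)
assert B_rebuilt == B
```

The same trick works for any single column, including the parity column itself; raidz2 and raidz3 add independent parity functions so that two or three simultaneous losses can be solved for.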
Thank you all for your help. It appears my understanding of parity was rather
limited. I kept thinking of parity in memory, where an extra bit is used to
ensure that the total number of 1s across all 9 bits is always even.
In the case of zfs, that type of checking is instead handled by the checksum.
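The memory-style parity bit can be sketched as follows; it shows why a single parity bit detects one flipped bit but misses two, which is part of why ZFS relies on a full per-block checksum rather than per-byte parity bits (toy Python, not ZFS code):

```python
# Memory-style even parity: one extra bit makes the count of 1-bits in
# the 9-bit word even. It catches any single-bit flip, but a double-bit
# flip leaves the parity unchanged and slips through undetected.

def even_parity_bit(byte: int) -> int:
    """Return the bit that makes the total number of 1s even."""
    return bin(byte).count("1") & 1

word = 0b10110010                      # four 1-bits
p = even_parity_bit(word)              # 0: count is already even
assert (bin(word).count("1") + p) % 2 == 0

# A single-bit flip is detected: the stored parity no longer matches.
flipped_once = word ^ 0b00000100
assert even_parity_bit(flipped_once) != p

# But a double-bit flip preserves even parity and goes unnoticed.
flipped_twice = word ^ 0b00000101
assert even_parity_bit(flipped_twice) == p
```

A checksum over the whole block (like the fletcher4 sketch discussed elsewhere in the thread) does not have this blind spot for small even-numbered error patterns.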
I found a strange issue.
Let's say I have a zfs filesystem export/test1, which is shared over NFSv3.
Then I:
zfs create export/test1/test2
chown myuser /export/test1/test2
ls -l /export/test1/test2 (it will show that myuser is the owner).
But if I do ls -l on the NFS client where /export/test1 is mounted,
On Wed, 11 Aug 2010, Seth Keith wrote:
When I do a 'zdb -l /dev/rdsk/<any device>' I get the same output for all my
drives in the pool, but I don't think it looks right:
# zdb -l /dev/rdsk/c4d0
What about /dev/rdsk/c4d0s0?
Hi Giovanni,
The spare behavior and the autoreplace property behavior are separate
but they should work pretty well in recent builds.
You should not need to perform a zpool replace operation if the
autoreplace property is set. If autoreplace is set and a replacement
disk is inserted into the
Peter wrote:
One question though. Marty mentioned that raidz
parity is limited to 3. But in my experiment, it
seems I can get parity to any level.
You create a raidz zpool as:
# zpool create mypool raidzx disk1 disk2
Here, x in raidzx is a numeric value indicating the
desired
this is for newbies like myself: I was using 'zdb -l' wrong. Just using the
drive name from 'zpool status' or format, like c6d1, didn't work. I needed
to add s0 to the end:
zdb -l /dev/dsk/c6d1s0
gives me a good-looking label (I think). The pool_guid values are the same
for all
I know that performance has been discussed often here, but I
have just gone through some testing in preparation for deploying a
large configuration (120 drives is a large configuration for me) and I
wanted to share my results, both to share the results as well as to
see if anyone sees
On Wed, 11 Aug 2010, seth keith wrote:
NAME STATE READ WRITE CKSUM
brick DEGRADED 0 0 0
raidz1 DEGRADED 0 0 0
c13d0 ONLINE 0 0 0
c4d0
I am looking for references from folks using ZFS with either NFS
or iSCSI as the backing store for VMware (4.x) virtual machines.
We asked the local VMware folks and they had not
even heard of ZFS. Part of what we are looking for is a recommendation
for NFS or iSCSI, and all
On Wed, Aug 11, 2010 at 10:36 AM, David Dyer-Bennet d...@dd-b.net wrote:
On Tue, August 10, 2010 16:41, Dave Pacheco wrote:
David Dyer-Bennet wrote:
If that turns out to be the problem, that'll be annoying to work around
(I'm making snapshots every two hours and deleting them after a couple
I'm stumbling over BugID 6961707 on build 134.
OpenSolaris Development snv_134 X86
Via the b134 Live CD, when I try to zpool import -f -F -n rpool
I get this helpful panic.
panic[cpu4]/thread=ff006cd06c60: zfs: allocating allocated
segment(offset=95698377728 size=16384)
ff006cd06580
p...@kraus-haus.org said:
Based on these results, and our capacity needs, I am planning to go with 5
disk raidz2 vdevs.
I did similar tests with a Thumper in 2008, with X4150/J4400 in 2009,
and more recently comparing X4170/J4400 and X4170/MD1200:
I am running ZFS file system version 5 on Nexenta.
Peter
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Thank you, Eric. Your explanation is easy to understand.
Regards,
Peter
Hi Paul,
I am using EXSi 4.0 with a NFS-on-ZFS datastore running on OSOL b134. It
previously ran on Solaris 10u7 with VMware Server 2.x. Disks are SATAs in a
JBOD over FC.
I'll try to summarize my experience here, albeit our system does not provide
services to end users and thus is not very
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Paul Kraus
Sent: Wednesday, August 11, 2010 3:53 PM
To: ZFS Discussions
Subject: [zfs-discuss] ZFS and VMware
I am looking for references of folks using
On Wed, Aug 11, 2010 at 4:06 PM, Cindy Swearingen
cindy.swearin...@oracle.com wrote:
Hi Giovanni,
The spare behavior and the autoreplace property behavior are separate
but they should work pretty well in recent builds.
You should not need to perform a zpool replace operation if the
On Aug 11, 2010, at 04:05, Orvar Korvar wrote:
Someone posted about CERN having a bad network card which injected
faulty bits into the data stream. And ZFS detected it, because of
end-to-end checksum. Does anyone have more information on this?
CERN generally uses Linux AFAICT:
Folks,
When I create a zpool, I get to specify the vdev type - mirror, raidz1, raidz2,
etc. How do I get back this information for an existing pool? The status
command does not reveal this information:
# zpool status mypool
When this command is run, I can see the disks in use. However, I
On 12/08/10 09:21 AM, Peter Taps wrote:
Folks,
When I create a zpool, I get to specify the vdev type - mirror, raidz1, raidz2,
etc. How do I get back this information for an existing pool? The status
command does not reveal this information:
# zpool status mypool
When this command is run, I
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Paul Kraus
I am looking for references of folks using ZFS with either NFS
or iSCSI as the backing store for VMware (4.x) backing store for
I'll try to clearly separate what I know,
On Wed, Aug 11, 2010 at 7:27 PM, Edward Ned Harvey sh...@nedharvey.comwrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Paul Kraus
I am looking for references of folks using ZFS with either NFS
or iSCSI as the backing
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Tim Cook
Sent: Wednesday, August 11, 2010 8:46 PM
To: Edward Ned Harvey
Cc: ZFS Discussions
Subject: Re: [zfs-discuss] ZFS and VMware
On Wed, Aug 11, 2010
This is not entirely correct either. You're not forced to use VMFS.
It is entirely true. You absolutely cannot use ESX with a guest on a block
device without formatting the LUN with VMFS. You are *FORCED* to use VMFS.
You can format the LUN with VMFS, then put VM files inside the VMFS;
Actually, this brings up a related issue. Does anyone have experience
with running VirtualBox on iSCSI volumes vs NFS shares, both of which
would be backed by a ZFS server?
-Erik
On Wed, 2010-08-11 at 21:41 -0500, Tim Cook wrote:
This is not entirely
-Original Message-
From: Tim Cook [mailto:t...@cook.ms]
Sent: Wednesday, August 11, 2010 10:42 PM
To: Saxon, Will
Cc: Edward Ned Harvey; ZFS Discussions
Subject: Re: [zfs-discuss] ZFS and VMware
I still think there are reasons why iSCSI would be
better than NFS and vice
My understanding is that if you wanted to use MS Cluster Server, you'd need
to use a LUN as an RDM for the quorum drive. VMDK files are locked when
open, so they can't typically be shared. VMware's Fault Tolerance gets
around this somehow, and I have a suspicion that their Lab Manager
I am having a similar issue at the moment: 3 GB RAM under ESXi, but dedup for
this zvol (1.2 TB) was turned off and only 300 GB was used. The pool does
contain other datasets with dedup turned on, but they are small enough that I'm
not hitting the memory limits (been there, tried that, never again
On 08/12/10 04:16, Steve Gonczi wrote:
Greetings,
I am seeing an unexplained performance drop using the above CPUs,
on a fairly up-to-date build (late 145).
Basically, the system seems to be 98% idle, spending most of its time in
this stack:
unix`i86_mwait+0xd