We have a weird issue with our ZFS pool and COMSTAR. The pool shows online with
no errors and everything looks good, but when we try to access zvols shared out
with COMSTAR, Windows reports that the devices have bad blocks. Everything had
been working great until last night, and no changes have been made.
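For anyone else hitting this, a quick triage sketch (standard commands; the
exact output varies by build): 'zpool status -v' will list any objects with
known checksum errors, zvols included, and 'stmfadm list-lu -v' shows the
state and backing store of each COMSTAR logical unit. If both layers look
clean, the bad blocks Windows sees are more likely coming from the
initiator/target path than from the pool itself.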
On 8/10/10 03:28 PM, Anand Bhakthavatsala wrote:
...
--
From: James C. McPherson j...@opensolaris.org
To: Ramesh Babu rama.b...@gmail.com
On 7/10/10 03:46 PM, Ramesh Babu wrote:
I am trying to create a zpool using
On Oct 8, 2010, at 10:25 AM, James C. McPherson wrote:
On 8/10/10 03:28 PM, Anand Bhakthavatsala wrote:
...
--
From: James C. McPherson j...@opensolaris.org
To: Ramesh Babu rama.b...@gmail.com
On 7/10/10 03:46
Hi Cindy,
Thank you for the step-by-step explanation; it is quite clear now.
Regards,
sridhar.
So, I decided to give tar a whirl after zfs send encountered the next
corrupted file and aborted with an I/O error, even though a scrub ran
successfully without any errors.
I then issued
/usr/gnu/bin/tar -cf /dev/null /obelixData/…/.zfs/snapshot/actual snapshot/DTP
which finished without any issues.
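For reference, the same read-everything trick with hypothetical pool and
snapshot names (the archive goes to /dev/null, so nothing is written; the
point is that tar forces every file to be read, turning latent checksum
errors into immediate read errors):

/usr/gnu/bin/tar -cf /dev/null /tank/data/.zfs/snapshot/snap1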
Is there a ZFS equivalent (or alternative) of inotify?
You have something which wants to be notified whenever a specific file or
directory changes. For example, a live sync application of some kind...
Is there a ZFS equivalent (or alternative) of inotify?
You have something which wants to be notified whenever a specific file or
directory changes. For example, a live sync application of some kind...
Have you looked at port_associate and ilk?
Casper
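For the archive, a minimal sketch of the port_associate approach using the
file-events source (PORT_SOURCE_FILE); hypothetical and lightly tested at
best. Note that an association fires only once, so a real watcher has to
stat() and re-associate after every event:

#include <sys/types.h>
#include <sys/stat.h>
#include <port.h>
#include <stdio.h>
#include <string.h>

int
main(int argc, char **argv)
{
	struct file_obj fobj;
	struct stat sb;
	port_event_t pe;
	int port;

	if (argc != 2) {
		(void) fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return (1);
	}
	if ((port = port_create()) < 0 || stat(argv[1], &sb) < 0) {
		perror("port_create/stat");
		return (1);
	}
	/* The kernel compares these times against the file's current ones. */
	(void) memset(&fobj, 0, sizeof (fobj));
	fobj.fo_atime = sb.st_atim;
	fobj.fo_mtime = sb.st_mtim;
	fobj.fo_ctime = sb.st_ctim;
	fobj.fo_name = argv[1];

	if (port_associate(port, PORT_SOURCE_FILE, (uintptr_t)&fobj,
	    FILE_MODIFIED, NULL) < 0) {
		perror("port_associate");
		return (1);
	}
	/* Blocks until the file changes; re-associate here to keep watching. */
	if (port_get(port, &pe, NULL) == 0)
		(void) printf("events 0x%x on %s\n", pe.portev_events, argv[1]);
	return (0);
}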
From: Peter Jeremy [mailto:peter.jer...@alcatel-lucent.com]
Sent: Thursday, October 07, 2010 10:02 PM
On 2010-Oct-08 09:07:34 +0800, Edward Ned Harvey sh...@nedharvey.com
wrote:
If you're going raidz3, with 7 disks, then you might as well just make
mirrors instead, and eliminate the slow resilver.
From: cas...@holland.sun.com [mailto:cas...@holland.sun.com] On Behalf
Of casper@sun.com
Is there a ZFS equivalent (or alternative) of inotify?
Have you looked at port_associate and ilk?
port_associate looks promising. But Google is less than useful on 'ilk'.
Got any pointers, or
On Oct 8, 2010, at 2:06 AM, Wolfraider wrote:
We have a weird issue with our ZFS pool and COMSTAR. The pool shows online
with no errors and everything looks good, but when we try to access zvols
shared out with COMSTAR, Windows reports that the devices have bad blocks.
Everything had been working great until last night.
On 10/08/10 12:18, Edward Ned Harvey wrote:
Is there a ZFS equivalent (or alternative) of inotify?
You have something which wants to be notified whenever a specific file
or directory changes. For example, a live sync application of some kind...
On Thu, 7 Oct 2010, Edward Ned Harvey wrote:
If you're going raidz3, with 7 disks, then you might as well just make
mirrors instead, and eliminate the slow resilver.
While the math supports using raidz3, practicality (other than storage
space) supports using mirrors. Mirrors are just much simpler to live with.
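Rough, purely illustrative numbers on that math (assuming a 3% annual
failure rate per disk, independent failures, and a ~10-hour mirror
resilver): the chance that a given mirror pair loses its second disk during
a resilver is about 0.03 x (0.03 x 10/8760), roughly 1e-6 per pair per
year, so about 3e-6 for three pairs; raidz3 needs four overlapping failures
inside its (much longer) resilver window, which comes out smaller still.
Both figures are tiny next to the human-factor risks discussed below.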
Hi! We're trying to pinpoint our performance issues and we could use all the
help the community can provide. We're running the latest version of Nexenta on
a pretty powerful machine (4x Xeon 7550, 256GB RAM, 12x 100GB Samsung SSDs for
the cache, 50GB Samsung SSD for the ZIL, 10GbE on a
On Oct 8, 2010, at 4:33 AM, Edward Ned Harvey wrote:
From: Peter Jeremy [mailto:peter.jer...@alcatel-lucent.com]
Sent: Thursday, October 07, 2010 10:02 PM
On 2010-Oct-08 09:07:34 +0800, Edward Ned Harvey sh...@nedharvey.com
wrote:
If you're going raidz3, with 7 disks, then you might as well just make
mirrors instead, and eliminate the slow resilver.
On Fri, 8 Oct 2010, Michael DeMan wrote:
Now, the above does not include proper statistics: the chances of that
2nd and 3rd disk failing (even their correlations) may well be higher
than our flat-line %/hr figure based on a 1-year MTBF, especially if
all the disks were purchased in the same
On Oct 8, 2010, at 8:25 AM, Bob Friesenhahn wrote:
It also does not include the human factor, which is still the most
significant contributor to data loss and the most difficult factor to
diminish. If the humans have difficulty understanding the system or the
hardware, then they
Hi Hans-Christian,
Can you provide the commands you used to create this pool?
Are the pool devices actually files? If so, I don't see how you
have a pool device that starts without a leading slash. I tried
to create one and it failed. See the example below.
By default, zpool import looks in /dev/dsk.
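For pool devices or files that live somewhere else, you point zpool import
at the containing directory with -d, e.g. (hypothetical path and pool name):

zpool import -d /export/poolfiles mypool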
Now, the above does not include proper statistics: the chances of that 2nd
and 3rd disk failing (even their correlations) may well be higher than our
flat-line %/hr figure based on a 1-year MTBF, especially if all the disks
were purchased in the same lot at the same time, so their failures tend to
cluster.
On Fri, 8 Oct 2010, Roy Sigurd Karlsbakk wrote:
In addition to this comes another aspect: what if one drive fails
and you find bad data on another disk in the same vdev while resilvering?
This is quite common these days, and for mirrors that will mean
data loss unless you mirror 3-way or more.
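For what it's worth, going from a 2-way to a 3-way mirror is a one-liner;
with hypothetical pool and device names:

zpool attach tank c0t1d0 c0t3d0

Attaching a new disk to the vdev that already contains c0t1d0 turns the
pair into a 3-way mirror and kicks off a resilver onto the new disk.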
Where should we look? What more information should I provide?
Start with 'iostat -xdn 1'. That'll provide info about the actual device I/O.
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
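If iostat shows the disks mostly idle while the workload is slow, a useful
next step is 'zpool iostat -v 1', which breaks the I/O down per vdev,
including the separate log and cache devices, so a saturated ZIL or L2ARC
device stands out immediately.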
--
Hi all,
So, after 10 hrs and 21 mins, the incremental zfs send/recv finished without a
problem. ;)
Seems that using tar for checking all files is an appropriate action.
Cheers,
budy
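For reference, the incremental form looks like this, with hypothetical
pool, snapshot and host names; only the blocks changed between the two
snapshots cross the wire:

zfs send -i tank/data@snap1 tank/data@snap2 | ssh backuphost zfs recv -F backup/data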
Hi Cindy,
Can you provide the commands you used to create this pool?
I don't have them anymore, no. But they were pretty much like what you wrote
below.
Are the pool devices actually files? If so, I don't see how you
have a pool device that starts without a leading slash. I tried
to create
Hi Christian,
Yes, with non-standard disks you will need to provide the device path to zpool
import.
I don't think the force import of a degraded pool would cause the pool
to be faulted. In general, the I/O error is caused when ZFS can't access
the underlying devices. In this case, your non-standard device names
might have caused that message.
Hi Cindy,
I don't think the force import of a degraded pool would cause the pool
to be faulted. In general, the I/O error is caused when ZFS can't access the
underlying devices. In this case, your non-standard device names
might have caused that message.
as I wrote in my first mail, zpool
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
In addition to this comes another aspect: what if one drive fails and
you find bad data on another disk in the same vdev while resilvering? This
is quite common these days,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian D
the help the community can provide. We're running the latest version of
Nexenta on a pretty powerful machine (4x Xeon 7550, 256GB RAM, 12x
100GB Samsung SSDs for the cache, 50GB
To see whether it is iSCSI-related or ZFS-related, have you tried testing
performance over NFS to a ZFS filesystem instead of a zvol?
SR
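Staging that test is quick; with hypothetical dataset names:

zfs create tank/nfstest
zfs set sharenfs=on tank/nfstest

Then mount tank/nfstest from the client and rerun the same workload. If NFS
throughput is healthy while the iSCSI zvol is slow, the problem is in the
block path (COMSTAR/zvol) rather than in the pool.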
I'm having some very strange NFS issues that are driving me somewhat mad.
I'm running b134 and have been for months now, without issue. Recently I
enabled 2 services to get Bonjour notifications working in OS X:
/network/dns/multicast:default
/system/avahi-bridge-dsd:default
and I added a few