[zfs-discuss] ZFS pool issues with COMSTAR

2010-10-08 Thread Wolfraider
We have a weird issue with our ZFS pool and COMSTAR. The pool shows online with no errors and everything looks good, but when we try to access zvols shared out with COMSTAR, Windows reports that the devices have bad blocks. Everything had been working great until last night, and no changes have been

Re: [zfs-discuss] ZPool creation brings down the host

2010-10-08 Thread James C. McPherson
On 8/10/10 03:28 PM, Anand Bhakthavatsala wrote: ... -- *From:* James C. McPherson j...@opensolaris.org *To:* Ramesh Babu rama.b...@gmail.com On 7/10/10 03:46 PM, Ramesh Babu wrote: I am trying to create ZPool using

Re: [zfs-discuss] ZPool creation brings down the host

2010-10-08 Thread Victor Latushkin
On Oct 8, 2010, at 10:25 AM, James C. McPherson wrote: On 8/10/10 03:28 PM, Anand Bhakthavatsala wrote: ... -- *From:* James C. McPherson j...@opensolaris.org *To:* Ramesh Babu rama.b...@gmail.com On 7/10/10 03:46

Re: [zfs-discuss] moving newly created pool to alternate host

2010-10-08 Thread sridhar surampudi
Hi Cindys, Thank you for the step-by-step explanation; it is quite clear now. Regards, sridhar.

Re: [zfs-discuss] Finding corrupted files

2010-10-08 Thread Stephan Budach
So, I decided to give tar a whirl after zfs send encountered the next corrupted file, resulting in an I/O error, even though scrub ran successfully w/o any errors. I then issued a /usr/gnu/bin/tar -cf /dev/null /obelixData/…/.zfs/snapshot/actual snapshot/DTP which finished without any issue
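
A minimal sketch of the tar-based read scan described above; the dataset and snapshot names are placeholders, since the actual path in the message is elided:

  # Read every file under the snapshot and discard the output; any file that
  # cannot be read cleanly surfaces as a tar I/O error, unlike scrub which
  # only verifies blocks currently reachable on disk.
  /usr/gnu/bin/tar -cf /dev/null /obelixData/SOMEDATASET/.zfs/snapshot/SOMESNAPSHOT/DTP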

[zfs-discuss] ZFS equivalent of inotify

2010-10-08 Thread Edward Ned Harvey
Is there a ZFS equivalent (or alternative) of inotify? You have some thing, which wants to be notified whenever a specific file or directory changes. For example, a live sync application of some kind...

Re: [zfs-discuss] ZFS equivalent of inotify

2010-10-08 Thread Casper . Dik
Is there a ZFS equivalent (or alternative) of inotify? You have some thing, which wants to be notified whenever a specific file or directory changes. For example, a live sync application of some kind... Have you looked at port_associate and ilk? Casper

Re: [zfs-discuss] [RFC] Backup solution

2010-10-08 Thread Edward Ned Harvey
From: Peter Jeremy [mailto:peter.jer...@alcatel-lucent.com] Sent: Thursday, October 07, 2010 10:02 PM On 2010-Oct-08 09:07:34 +0800, Edward Ned Harvey sh...@nedharvey.com wrote: If you're going raidz3, with 7 disks, then you might as well just make mirrors instead, and eliminate the slow

Re: [zfs-discuss] ZFS equivalent of inotify

2010-10-08 Thread Edward Ned Harvey
From: cas...@holland.sun.com [mailto:cas...@holland.sun.com] On Behalf Of casper@sun.com Is there a ZFS equivalent (or alternative) of inotify? Have you looked at port_associate and ilk? port_associate looks promising. But google is less than useful on ilk. Got any pointers, or

Re: [zfs-discuss] ZFS pool issues with COMSTAR

2010-10-08 Thread Jim Dunham
On Oct 8, 2010, at 2:06 AM, Wolfraider wrote: We have a weird issue with our ZFS pool and COMSTAR. The pool shows online with no errors, everything looks good but when we try to access zvols shared out with COMSTAR, Windows reports that the devices have bad blocks. Everything has been
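
Not part of the original reply, but a hedged starting point for this kind of symptom is to compare what ZFS and COMSTAR each report about the zvol-backed logical units (the pool name is a placeholder):

  zpool status -v tank     # confirm ZFS itself reports no data errors on the pool
  sbdadm list-lu           # list the block devices (zvols) registered as LUs
  stmfadm list-lu -v       # show COMSTAR logical unit state and properties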

Re: [zfs-discuss] ZFS equivalent of inotify

2010-10-08 Thread Darren J Moffat
On 10/08/10 12:18, Edward Ned Harvey wrote: Is there a ZFS equivalent (or alternative) of inotify? You have some thing, which wants to be notified whenever a specific file or directory changes. For example, a live sync application of some kind...

Re: [zfs-discuss] [RFC] Backup solution

2010-10-08 Thread Bob Friesenhahn
On Thu, 7 Oct 2010, Edward Ned Harvey wrote: If you're going raidz3, with 7 disks, then you might as well just make mirrors instead, and eliminate the slow resilver. While the math supports using raidz3, practicality (other than storage space) supports using mirrors. Mirrors are just much

[zfs-discuss] Performance issues with iSCSI under Linux

2010-10-08 Thread Ian D
Hi! We're trying to pinpoint our performance issues and we could use all the help the community can provide. We're running the latest version of Nexenta on a pretty powerful machine (4x Xeon 7550, 256GB RAM, 12x 100GB Samsung SSDs for the cache, 50GB Samsung SSD for the ZIL, 10GbE on a

Re: [zfs-discuss] [RFC] Backup solution

2010-10-08 Thread Michael DeMan
On Oct 8, 2010, at 4:33 AM, Edward Ned Harvey wrote: From: Peter Jeremy [mailto:peter.jer...@alcatel-lucent.com] Sent: Thursday, October 07, 2010 10:02 PM On 2010-Oct-08 09:07:34 +0800, Edward Ned Harvey sh...@nedharvey.com wrote: If you're going raidz3, with 7 disks, then you might as

Re: [zfs-discuss] [RFC] Backup solution

2010-10-08 Thread Bob Friesenhahn
On Fri, 8 Oct 2010, Michael DeMan wrote: Now, the above does not include things like proper statistics that the chances of that 2nd and 3rd disk failing (even correlations) may be higher than our 'flat-line' %/hr. based on 1-year MTBF, or stuff like if all the disks were purchased in the same

Re: [zfs-discuss] [RFC] Backup solution

2010-10-08 Thread Scott Meilicke
On Oct 8, 2010, at 8:25 AM, Bob Friesenhahn wrote: It also does not include the human factor which is still the most significant contributor to data loss. This is the most difficult factor to diminish. If the humans have difficulty understanding the system or the hardware, then they

Re: [zfs-discuss] raidz faulted with only one unavailable disk

2010-10-08 Thread Cindy Swearingen
Hi Hans-Christian, Can you provide the commands you used to create this pool? Are the pool devices actually files? If so, I don't see how you have a pool device that starts without a leading slash. I tried to create one and it failed. See the example below. By default, zpool import looks in
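
Cindy's own example is truncated above; as a stand-in, a file-backed pool can be created roughly like this (names and sizes are purely illustrative, not from the thread):

  # Create backing files and build a raidz pool on top of them
  mkfile 200m /var/tmp/d1 /var/tmp/d2 /var/tmp/d3
  zpool create testpool raidz /var/tmp/d1 /var/tmp/d2 /var/tmp/d3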

Re: [zfs-discuss] [RFC] Backup solution

2010-10-08 Thread Roy Sigurd Karlsbakk
Now, the above does not include things like proper statistics that the chances of that 2nd and 3rd disk failing (even correlations) may be higher than our 'flat-line' %/hr. based on 1-year MTBF, or stuff like if all the disks were purchased in the same lots and at the same time, so their

Re: [zfs-discuss] [RFC] Backup solution

2010-10-08 Thread Bob Friesenhahn
On Fri, 8 Oct 2010, Roy Sigurd Karlsbakk wrote: In addition to this comes another aspect. What if one drive fails and you find bad data on another in the same VDEV while resilvering. This is quite common these days, and for mirrors, that will mean data loss unless you mirror 3-way or more,

Re: [zfs-discuss] Performance issues with iSCSI under Linux

2010-10-08 Thread Roy Sigurd Karlsbakk
Where should we look? What more information should I provide? Start with 'iostat -xdn 1'. That'll provide info about the actual device I/O. Best regards, Roy

Re: [zfs-discuss] Finding corrupted files

2010-10-08 Thread Stephan Budach
So, after 10 hrs and 21 mins the incremental zfs send/recv finished without a problem. ;) Seems that using tar for checking all files is an appropriate approach. Cheers, budy
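
Not from the message itself, but a sketch of the kind of incremental send/recv being described, with hypothetical dataset and snapshot names:

  # Send only the changes between two snapshots and apply them on the target
  zfs send -i tank/data@snap1 tank/data@snap2 | zfs recv -F backup/data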

Re: [zfs-discuss] raidz faulted with only one unavailable disk

2010-10-08 Thread Hans-Christian Otto
Hi Cindy, Can you provide the commands you used to create this pool? I don't have them anymore, no. But they were pretty much like what you wrote below. Are the pool devices actually files? If so, I don't see how you have a pool device that starts without a leading slash. I tried to create

Re: [zfs-discuss] raidz faulted with only one unavailable disk

2010-10-08 Thread Cindy Swearingen
Hi Christian, Yes, with non-standard disks you will need to provide the path to zpool import. I don't think the force import of a degraded pool would cause the pool to be faulted. In general, the I/O error is caused when ZFS can't access the underlying devices. In this case, your non-standard
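
A hedged illustration of what "provide the path to zpool import" means in practice; the directory and pool name are placeholders:

  # Tell zpool import where the non-standard devices live instead of /dev/dsk
  zpool import -d /path/to/devices poolname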

Re: [zfs-discuss] raidz faulted with only one unavailable disk

2010-10-08 Thread Hans-Christian Otto
Hi Cindy, I don't think the force import of a degraded pool would cause the pool to be faulted. In general, the I/O error is caused when ZFS can't access the underlying devices. In this case, your non-standard device names might have caused that message. As I wrote in my first mail, zpool

Re: [zfs-discuss] [RFC] Backup solution

2010-10-08 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk In addition to this comes another aspect. What if one drive fails and you find bad data on another in the same VDEV while resilvering. This is quite common these days,

Re: [zfs-discuss] Performance issues with iSCSI under Linux

2010-10-08 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Ian D the help the community can provide. We're running the latest version of Nexenta on a pretty powerful machine (4x Xeon 7550, 256GB RAM, 12x 100GB Samsung SSDs for the cache, 50GB

Re: [zfs-discuss] Performance issues with iSCSI under Linux

2010-10-08 Thread SR
To see if it is iSCSI-related or ZFS-related, have you tried testing performance over NFS to a ZFS filesystem instead of a zvol? SR
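
A hedged sketch of that comparison, with hypothetical filesystem and mount-point names: share an existing ZFS filesystem over NFS, mount it from the Linux client, and rerun the same workload that is slow over iSCSI.

  # On the Nexenta/OpenSolaris side
  zfs set sharenfs=on tank/testfs
  # On the Linux client
  mount -t nfs server:/tank/testfs /mnt/test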

[zfs-discuss] nfs issues

2010-10-08 Thread Thomas Burgess
I'm having some very strange NFS issues that are driving me somewhat mad. I'm running b134 and have been for months now, without issue. Recently I enabled 2 services to get Bonjour notifications working in OS X: /network/dns/multicast:default and /system/avahi-bridge-dsd:default, and I added a few
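
For reference, a hedged sketch of enabling and checking the two services named in the message; not part of the original post:

  svcadm enable network/dns/multicast:default
  svcadm enable system/avahi-bridge-dsd:default
  svcs -x    # report any services that are degraded or in maintenance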