Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-04 Thread Jeff Bonwick
Oh, you're right! Well, that will simplify things! All we have to do is convince a few bits of code to ignore ub_txg == 0. I'll try a couple of things and get back to you in a few hours... Jeff On Fri, May 02, 2008 at 03:31:52AM -0700, Benjamin Brumaire wrote: Hi, while diving deeply in

Re: [zfs-discuss] lost zpool when server restarted.

2008-05-04 Thread Victor Latushkin
Looking at the txg numbers, it's clear that labels on two devices that are unavailable now may be stale: Krzys wrote: When I do zdb on emcpower3a which seems to be ok from zpool perspective I get the following output: bash-3.00# zdb -lv /dev/dsk/emcpower3a

[zfs-discuss] Inconcistancies with scrub and zdb

2008-05-04 Thread Jonathan Loran
Hi List, First of all: S10u4 120011-14 So I have a weird situation. Earlier this week, I finally mirrored up two iSCSI based pools. I had been wanting to do this for some time, because the availability of the data in these pools is important. One pool mirrored just fine, but the other

Re: [zfs-discuss] lost zpool when server restarted.

2008-05-04 Thread Jeff Bonwick
It's OK that you're missing labels 2 and 3 -- there are four copies precisely so that you can afford to lose a few. Labels 2 and 3 are at the end of the disk. The fact that only they are missing makes me wonder if someone resized the LUNs. Growing them would be OK, but shrinking them would

Re: [zfs-discuss] lost zpool when server restarted.

2008-05-04 Thread Jeff Bonwick
Looking at the txg numbers, it's clear that labels on two devices that are unavailable now may be stale: Actually, they look OK. The txg values in the label indicate the last txg in which the pool configuration changed for devices in that top-level vdev (e.g. mirror or raid-z group), not the

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-04 Thread Jeff Bonwick
OK, here you go. I've successfully recovered a pool from a detached device using the attached binary. You can verify its integrity against the following MD5 hash: # md5sum labelfix ab4f33d99fdb48d9d20ee62b49f11e20 labelfix It takes just one argument -- the disk to repair: # ./labelfix

[zfs-discuss] 3510 JBOD with multipath

2008-05-04 Thread Gino
Well, 3510 is even supported as JBOD by Sun. The only limitation is to use only one FC link. I have tried both 3510 and 3511 as JBODs - 3510 works fine, with 3511 I had some problems under higher load. -- Best regards, Robert Hi Robert, I saw in your post that you had problems

Re: [zfs-discuss] lost zpool when server restarted.

2008-05-04 Thread Victor Latushkin
Jeff Bonwick wrote: Looking at the txg numbers, it's clear that labels on two devices that are unavailable now may be stale: Actually, they look OK. The txg values in the label indicate the last txg in which the pool configuration changed for devices in that top-level vdev (e.g. mirror or

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-04 Thread Jeff Bonwick
Oh, and here's the source code, for the curious: #include <devid.h> #include <dirent.h> #include <errno.h> #include <libintl.h> #include <stdlib.h> #include <string.h> #include <sys/stat.h> #include <unistd.h> #include <fcntl.h> #include <stddef.h> #include <sys/vdev_impl.h> /* * Write a label block with a ZBT

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-04 Thread Cyril Plisko
On Sun, May 4, 2008 at 11:42 AM, Jeff Bonwick [EMAIL PROTECTED] wrote: Oh, and here's the source code, for the curious: [snipped] label_write(fd, offsetof(vdev_label_t, vl_uberblock), 1ULL << UBERBLOCK_SHIFT, ub); label_write(fd, offsetof(vdev_label_t,

Re: [zfs-discuss] cp -r hanged copying a directory

2008-05-04 Thread Simon Breden
I have moved this saga to storage-discuss now, as this doesn't appear to be a ZFS issue, and it can be found here: http://www.opensolaris.org/jive/thread.jspa?threadID=59201 This message posted from opensolaris.org ___ zfs-discuss mailing list

Re: [zfs-discuss] ZFS jammed while busy

2008-05-04 Thread Scott
Hi... Here's my system: 2 Intel 3 GHz 5160 dual-core CPUs 0 SATA 750 GB disks running as a ZFS RAIDZ2 pool 8 GB memory SunOS 5.11 snv_79a on a separate UFS mirror ZFS pool version 10 No separate ZIL or ARC cache. I ran into a problem today where the ZFS pool jammed for

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-04 Thread Mario Goebbels
Oh, and here's the source code, for the curious: The forensics project will be all over this, I hope, and wrap it up in a nice command line tool. -mg

Re: [zfs-discuss] lost zpool when server restarted.

2008-05-04 Thread Krzys
Because this system was in production I had to recover fairly quickly, so I was unable to play with it much more; we had to destroy it, recreate a new pool, and then recover the data from tapes. It's a mystery why it rebooted in the middle of the night; we could not figure that out, nor why the pool

Re: [zfs-discuss] recovering data from a dettach mirrored vdev

2008-05-04 Thread Benjamin Brumaire
Well, thanks to your program, I could recover the data on the detached disk. Now I'm copying the data to other disks and resilvering it inside the pool. Warm words aren't enough to express how I feel. This community is great. Thank you very much. bbr