Well, I have a 10TB pool (5x2TB) in RAIDZ on VirtualBox, and I got it all working on Windows XP and Windows 7, with SMB shares back to my PC. Great — I managed the impossible! I copied all my data over from loads of old external disks and sorted it; all in all 15 days' work (my holiday :-)). I used raw disks with VirtualBox, so performance was quite OK.
Then OpenSolaris (2009.06) crashed as I tried to shut it down, and in the end I had to power off the VirtualBox VM. After rebooting, I get this:

# zpool status
  pool: array1
 state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from a backup source.
   see: http://www.sun.com/msg/ZFS-8000-72
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        array1        FAULTED      0     0     1  corrupted data
          raidz1      ONLINE       0     0     6
            c9t0d0s0  ONLINE       0     0     0
            c9t1d0s0  ONLINE       0     0     0
            c9t2d0s0  ONLINE       0     0     0
            c9t3d0s0  ONLINE       0     0     0
            c9t4d0s0  ONLINE       0     0     0

  pool: rpool
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        rpool     ONLINE       0     0     0
          c7d0s0  ONLINE       0     0     0

errors: No known data errors
r...@storage1:/rpool/rtsmb/lost#
===============================
After nine hours of reading many blogs and posting, I am about to give up. Here's some output that will hopefully allow someone to help me (Victor?).
===============================
# zdb -u array1
zdb: can't open array1: I/O error

# zdb -l /dev/dsk/c9t0d0s0
--------------------------------------------
LABEL 0
--------------------------------------------
    version=14
    name='array1'
    state=0
    txg=336051
    pool_guid=2240875695356292882
    hostid=881445
    hostname='storage1'
    top_guid=2550252815929083498
    guid=1431843495093629813
    vdev_tree
        type='raidz'
        id=0
        guid=2550252815929083498
        nparity=1
        metaslab_array=23
        metaslab_shift=36
        ashift=9
        asize=9901403013120
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=1431843495093629813
                path='/dev/dsk/c9t0d0s0'
                devid='id1,s...@sata_____vbox_harddisk____vb90d7ae97-6e68097a/a'
                phys_path='/p...@0,0/pci8086,2...@d/d...@0,0:a'
                whole_disk=0
                DTL=44
        children[1]
                type='disk'
                id=1
                guid=1558447330187786228
                path='/dev/dsk/c9t1d0s0'
                devid='id1,s...@sata_____vbox_harddisk____vb315f2939-fdadfa14/a'
                phys_path='/p...@0,0/pci8086,2...@d/d...@1,0:a'
                whole_disk=0
                DTL=43
        children[2]
                type='disk'
                id=2
                guid=10659506225279255914
                path='/dev/dsk/c9t2d0s0'
                devid='id1,s...@sata_____vbox_harddisk____vbd9514af5-8837e2f7/a'
                phys_path='/p...@0,0/pci8086,2...@d/d...@2,0:a'
                whole_disk=0
                DTL=42
                degraded=1
        children[3]
                type='disk'
                id=3
                guid=2558128054346170575
                path='/dev/dsk/c9t3d0s0'
                devid='id1,s...@sata_____vbox_harddisk____vbab7f62b2-3b162694/a'
                phys_path='/p...@0,0/pci8086,2...@d/d...@3,0:a'
                whole_disk=0
                DTL=41
        children[4]
                type='disk'
                id=4
                guid=13991896528691960894
                path='/dev/dsk/c9t4d0s0'
                devid='id1,s...@sata_____vbox_harddisk____vb67b9775c-3ba02834/a'
                phys_path='/p...@0,0/pci8086,2...@d/d...@4,0:a'
                whole_disk=0
                DTL=40

(LABEL 1, LABEL 2 and LABEL 3 are identical to LABEL 0, so I've cut them.)

#######################
I did the same for all five disks — all OK — but here's a grep of the txg fields from all of them:

# grep txg t*
t0: txg=336051
t0: txg=336051
t0: txg=336051
t0: txg=336051
t1: txg=319963
t1: txg=319963
t1: txg=319963
t1: txg=319963
t2: txg=336051
t2: txg=336051
t2: txg=336051
t2: txg=336051
t3: txg=319963
t3: txg=319963
t3: txg=319963
t3: txg=319963
t4: txg=319963
t4: txg=319963
t4: txg=319963
t4: txg=319963

#######################
I wrote a little script to walk backwards through the txg numbers until I found one that would give me an uberblock, and eventually found this one:

# zdb -u -t 335425 array1
Uberblock
        magic = 0000000000bab10c
        version = 14
        txg = 335425
        guid_sum = 16544206071174628188
        timestamp = 1247514285 UTC = Mon Jul 13 20:44:45 2009

# date
Sunday, 19 July 2009 01:58:18 BST
===============================================
In fact July 13 is fine — that's when I was last adding files and moving things about, so it's not a bad point to return to.

So how do I manage to "roll back" to txg = 335425 so I can hopefully get my 10TB back? Or is the answer to go back to doing hardware RAID under NTFS and Windows directly? I heard people at Sun may be working on a tool that can roll back to a given txg — is it available yet? (Yes, I understand the issue that you can't always roll back, since a freed block may have been re-used, but I'd happily lose one of my 6GB files to get the other 6TB back.) Thanks!! PLEASE, PLEASE — any help really appreciated.

Russel
--
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
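P.S. For anyone curious, the backwards txg probe mentioned above looks roughly like this. This is a minimal sketch, not my exact script; it assumes only that "zdb -u -t <txg> <pool>" exits 0 when it can read an uberblock at that txg, as in the output above. The PROBE override is just there so the loop can be exercised without a real pool.

```shell
# find_txg: walk txg numbers downwards, probing each one with
# "zdb -u -t <txg> <pool>" until one yields a readable uberblock.
# Usage: find_txg <pool> <start-txg> <lowest-txg-to-try>
# PROBE defaults to "zdb -u -t"; it can be overridden for a dry run.
find_txg() {
    pool=$1
    txg=$2
    floor=$3
    while [ "$txg" -ge "$floor" ]; do
        # A readable uberblock makes the probe exit 0; otherwise step back one txg.
        if ${PROBE:-zdb -u -t} "$txg" "$pool" >/dev/null 2>&1; then
            echo "$txg"
            return 0
        fi
        txg=$((txg - 1))
    done
    return 1
}

# Example: find_txg array1 336051 319963
# prints the newest txg at or below 336051 whose uberblock zdb can read,
# or returns 1 if none is found before the floor.
```

Brute force and slow over a 16,000-txg gap, but it only reads labels, so it's safe to run against a faulted pool.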