Bill, Matthew, and Jussi.

Late reply, but I was able to recover my array.  I'm posting this hoping it will 
help those who encounter the same problem in the future.

My setup consists of mirrored SSDs for the zones and a raidz of 6 hard drives 
for the data, served over SMB.  One of the raidz drives crashed.  The drive was 
replaced.  In the process of resilvering, the whole array crashed, likely due 
to a URE (unrecoverable read error) during the resilver.  This brought one of 
the mirrored SSDs for the zones down as well.  After physically detaching the 
drives in the raidz array, the zones could still be read.  Running in recovery 
mode prevented the raidz from getting imported and causing a boot loop.  I was 
able to import and back up the contents in read-only mode with # zpool import 
-o readonly=on tank.  After the backup, I destroyed the pool, added another 
drive, and created a raidz2 array.  It took me over a week backing up and 
restoring the pool.  The SSD mirror resilvered successfully with the raidz2 
working.
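For anyone searching later, the recovery boiled down to roughly the following 
(the pool name, disk names, and backup path are placeholders for my setup, not 
the exact commands I typed):

  # zpool import -o readonly=on tank
  (import the damaged raidz read-only so nothing more gets written to it)
  # rsync -a /tank/ /backup/tank/
  (copy the data off to other storage; whatever copy tool is at hand works)
  # zpool export tank
  # zpool create -f tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0
  (recreate the pool as a 7-disk raidz2 over the old disks plus the new one)
  # rsync -a /backup/tank/ /tank/
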
The lesson is to get away from raidz (RAID 5).  Everybody probably already 
knows that.  I'm running a home server, but it is still not worth the pain.  
Raidz2 (RAID 6) will help minimize or prevent this in the future.
Bob


    On Saturday, March 24, 2018, 4:40:59 PM PDT, Matthew Parsons 
<mpars...@mindless.com> wrote:  
 
 It sounds like you may be new to ZFS, so apologies if some of what follows 
seems elementary. (If it does, then you omitted some key details.)
However, your first priority before anything else (and not SmartOS specific) is 
to get a copy (or two!) of your original /zones pool. Take that other mirrored 
drive, attach it to another system (external adapter, SATA cable with the case 
open, whatever), and get a good copy.  If you have access to enough space, get 
a raw image of the entire drive: even if that system doesn't support ZFS, use 
dd/Clonezilla/Macrium Reflect in raw image mode.  Then, assuming you're on 
something that can "speak" ZFS, look up "zpool import" for mounting/accessing 
the pool, and "zfs send".
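
A rough sketch of those steps (device names, paths, and the snapshot name are 
only examples; I'm assuming the drive shows up as /dev/sdb on a Linux rescue 
box):

  # dd if=/dev/sdb of=/backup/zones-disk.img bs=1M conv=noerror,sync
  (raw image of the whole drive; keep it safe before experimenting further)
  # zpool import -o readonly=on -R /mnt zones
  (on a ZFS-capable system: import the pool read-only under an alternate root)
  # zfs send -R zones@some-existing-snap > /backup/zones-stream.zfs
  (zfs send needs a snapshot that already exists; a read-only pool can't take
  new ones, so otherwise just copy the mounted files off)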


For re-integrating an existing drive and the SmartOS-specific parts of the 
setup, you'll need to wait for someone else to chime in. (Looking through the 
setup source scripts on GitHub may help.)

It might be best to re-create a representative setup in a VM, go through the 
steps you did to replicate the process and the issue, and then safely try 
various iterations, with easy rollback using snapshots.
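
If it helps, a throwaway pool built from file-backed vdevs is a cheap way to do 
that kind of rehearsal (names and sizes here are arbitrary):

  # mkfile 256m /var/tmp/d0 /var/tmp/d1
  # zpool create testzones mirror /var/tmp/d0 /var/tmp/d1
  # zpool status testzones

File vdevs behave like small disks for pool operations, so attach/detach/replace 
and failed-import scenarios can all be tried safely; just "zpool destroy 
testzones" when done.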



On Fri, Mar 23, 2018 at 4:03 PM, LastQuark via smartos-discuss 
<smartos-discuss@lists.smartos.org> wrote:



 On Friday, March 23, 2018, 1:27:25 PM PDT, Jussi Sallinen <ju...@jus.si> 
wrote: 



On 22/03/2018 8.07, LastQuark wrote:

Here's the output of zpool history:
--- start ---
History for 'zones':
2018-03-21.19:26:30 zpool create -f zones c2t0d0 log c3t1d0
2018-03-21.19:26:35 zfs set atime=off zones
2018-03-21.19:26:36 zfs create -V 1378mb -o checksum=noparity zones/dump
2018-03-21.19:26:38 zfs create zones/config
2018-03-21.19:26:38 zfs set mountpoint=legacy zones/config
2018-03-21.19:26:39 zfs create -o mountpoint=legacy zones/usbkey
2018-03-21.19:26:39 zfs create -o quota=10g -o mountpoint=/zones/global/cores -o compression=gzip zones/cores
2018-03-21.19:26:39 zfs create -o mountpoint=legacy zones/opt
2018-03-21.19:26:40 zfs create zones/var
2018-03-21.19:26:40 zfs set mountpoint=legacy zones/var
2018-03-21.19:26:41 zfs create -V 32741mb zones/swap
2018-03-21.19:28:44 zpool import -f zones
2018-03-21.19:28:44 zpool set feature@extensible_dataset=enabled zones
2018-03-21.19:28:45 zfs set checksum=noparity zones/dump
2018-03-21.19:28:45 zpool set feature@multi_vdev_crash_dump=enabled zones
2018-03-21.19:28:46 zfs destroy -r zones/cores
2018-03-21.19:28:46 zfs create -o compression=gzip -o mountpoint=none zones/cores
2018-03-21.19:28:52 zfs create -o quota=10g -o mountpoint=/zones/global/cores zones/cores/global
2018-03-21.19:29:12 zfs create -o compression=lzjb -o mountpoint=/zones/archive zones/archive
2018-03-22.05:20:28 zpool import -f zones
2018-03-22.05:20:28 zpool set feature@extensible_dataset=enabled zones
2018-03-22.05:20:29 zfs set checksum=noparity zones/dump
2018-03-22.05:20:29 zpool set feature@multi_vdev_crash_dump=enabled zones
--- end ---

Also, if I may add, I installed a later version of the SmartOS USB image and I 
am still getting errors.  I hope I didn't lose the original zones.  I still 
have the other mirrored drive to try.

Hi,

 According to the zpool history you never had a mirrored 'zones' pool: the 
'zones' pool was created on 2018-03-21 with a single data disk (c2t0d0) and a 
log device (c3t1d0).
 Unfortunately a ZFS log device (the SLOG, which holds the ZIL, the ZFS Intent 
Log) is not a mirrored copy of the contents of the 'zones' data disk (c2t0d0).
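 For comparison, the first command below is what the history shows; a mirrored 
'zones' pool would have been created with something like the second (same 
device names):

  # zpool create -f zones c2t0d0 log c3t1d0     (one data disk plus a separate log device)
  # zpool create -f zones mirror c2t0d0 c3t1d0  (two-way mirror, each disk holds a full copy)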

 Do you mean you have a disk which is not currently connected to the host, i.e. 
your previous installation which suffered a hardware failure?

 PS: Here are a few links regarding the ZFS log device and the ZIL:
https://www.ixsystems.com/blog/o-slog-not-slog-best-configure-zfs-intent-log/
http://www.freenas.org/blog/zfs-zil-and-slog-demystified/

 -Jussi

Hi Jussi,
After the crash, I took one of the mirrored drives out and installed a fresh 
single disk (c2t0d0).  I went through the SmartOS install process, this time 
installing the zones to c2t0d0 while the c3t1d0 drive was still attached.  I 
believe this process created the log device (c3t1d0) and overwrote the contents 
of the original zones.  From the looks of it, this copy of the zones on c3t1d0 
is already screwed.  I have one more shot at it on the other mirrored drive.
What is the correct process for retrieving the zones on the original mirrored 
drive and copying them over to the new zones on a new drive?  I can't seem to 
find good documentation on it, particularly anything related to SmartOS.
Thanks for looking into this, by the way, and for the links.
Rgds.