I’ll also add a few comments here:

If you’re not used to ZFS, the whole experience can be a bit jarring. I made a 
lot of mistakes before getting things right.

You definitely don’t want to go through the SmartOS install process again, as 
that generally results in the disks being reinitialized. I don’t know this 
with 100% certainty, but it’s safest to assume it does.

If you still have a good disk, making a clone somewhere else is a great idea if 
you have the space. 

There’s a boot option that starts SmartOS without importing the pool. Booting 
in this mode is a key technique when dealing with ZFS problems. Equally 
important is importing the pool without mounting its filesystems (see the -N 
option to zpool import). Once filesystems get mounted, you’ll run into a lot 
of problems whose only resolution is a reboot (back to an un-imported pool).
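
For example (a rough sketch; 'zones' is the SmartOS default pool name, and I 
believe the loader option is noimport=true, but don’t hold me to the exact 
spelling):

    # boot with the noimport option selected, then:
    zpool import            # no arguments: lists visible pools, imports nothing
    zpool import -N zones   # import the pool without mounting any filesystems
    zfs list -r zones       # now you can look around safely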

All of this said, if you have a good half of a mirror you should be able to pop 
that drive in (and only that drive), start up in recovery mode, import the pool 
without mounting filesystems, and attach a new second half (look at the zpool 
replace command).
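
Very roughly, with hypothetical device names (run zpool status to see what the 
missing half is actually called on your system):

    zpool import -N zones               # import the degraded pool, nothing mounted
    zpool status zones                  # the missing mirror half shows as UNAVAIL
    zpool replace zones c2t1d0 c2t2d0   # swap the missing device for the new disk
    zpool status zones                  # resilvering starts; wait for it to finish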

Hopefully this advice makes sense… if not, write back and I’m sure someone here 
will be able to help!

Bill

> On Mar 24, 2018, at 7:40 PM, Matthew Parsons <mpars...@mindless.com> wrote:
> 
> It sounds like you may be new to ZFS, so apologies if some of what follows 
> seems elementary. (And if you're not, then you've omitted some key details.)
> 
> However, your first priority before anything else (and not SmartOS-specific) 
> is to get a copy (or two!) of your original /zones pool. Take that other 
> mirrored drive, attach it to another system (external adaptor, SATA cable w/ 
> the case open, whatever), and get a good copy.  If you have access to enough 
> space, get a raw image of the entire drive: even if it's a system that 
> doesn't support ZFS, you can use dd/Clonezilla/Macrium Reflect in raw image 
> mode.  Then, assuming you're on something that can "speak" ZFS, look up 
> "zpool import" for mounting/accessing the pool, and "zfs send".
> 
> 
> For trying to re-integrate an existing drive into a SmartOS-specific setup, 
> you'll need to wait for someone else to chime in. (Looking through the setup 
> source scripts on GitHub may help.)
> 
> It might be best to re-create a representative setup in a VM, go through the 
> steps you took to replicate the process and the issue; then you can safely try 
> various iterations, with easy rollback using snapshots, as sketched below.
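> 
> With VirtualBox, for example (the VM name here is hypothetical; any 
> hypervisor with snapshots works the same way):
> 
>     VBoxManage snapshot "smartos-test" take "before-recovery"
>     # ...try a recovery step inside the VM, see what breaks...
>     # power the VM off, then roll back and try again:
>     VBoxManage snapshot "smartos-test" restore "before-recovery"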
> 
> 
> 
> On Fri, Mar 23, 2018 at 4:03 PM, LastQuark via smartos-discuss 
> <smartos-discuss@lists.smartos.org> wrote:
> 
> 
> On Friday, March 23, 2018, 1:27:25 PM PDT, Jussi Sallinen <ju...@jus.si>
> 
> 
> 
> On 22/03/2018 8.07, LastQuark wrote:
>> Here's the output of zpool history:
>> 
>> --- start ---
>> History for 'zones':
>> 2018-03-21.19:26:30 zpool create -f zones c2t0d0 log c3t1d0
>> 2018-03-21.19:26:35 zfs set atime=off zones
>> 2018-03-21.19:26:36 zfs create -V 1378mb -o checksum=noparity zones/dump
>> 2018-03-21.19:26:38 zfs create zones/config
>> 2018-03-21.19:26:38 zfs set mountpoint=legacy zones/config
>> 2018-03-21.19:26:39 zfs create -o mountpoint=legacy zones/usbkey
>> 2018-03-21.19:26:39 zfs create -o quota=10g -o 
>> mountpoint=/zones/global/cores -o compression=gzip zones/cores
>> 2018-03-21.19:26:39 zfs create -o mountpoint=legacy zones/opt
>> 2018-03-21.19:26:40 zfs create zones/var
>> 2018-03-21.19:26:40 zfs set mountpoint=legacy zones/var
>> 2018-03-21.19:26:41 zfs create -V 32741mb zones/swap
>> 2018-03-21.19:28:44 zpool import -f zones
>> 2018-03-21.19:28:44 zpool set feature@extensible_dataset=enabled zones
>> 2018-03-21.19:28:45 zfs set checksum=noparity zones/dump
>> 2018-03-21.19:28:45 zpool set feature@multi_vdev_crash_dump=enabled zones
>> 2018-03-21.19:28:46 zfs destroy -r zones/cores
>> 2018-03-21.19:28:46 zfs create -o compression=gzip -o mountpoint=none 
>> zones/cores
>> 2018-03-21.19:28:52 zfs create -o quota=10g -o 
>> mountpoint=/zones/global/cores zones/cores/global
>> 2018-03-21.19:29:12 zfs create -o compression=lzjb -o 
>> mountpoint=/zones/archive zones/archive
>> 2018-03-22.05:20:28 zpool import -f zones
>> 2018-03-22.05:20:28 zpool set feature@extensible_dataset=enabled zones
>> 2018-03-22.05:20:29 zfs set checksum=noparity zones/dump
>> 2018-03-22.05:20:29 zpool set feature@multi_vdev_crash_dump=enabled zones
>> --- end ---
>> 
>> Also, if I may add, I installed a later version of the SmartOS USB image and 
>> I am still getting errors.  I hope I didn't lose the original zones.  I still 
>> have the other mirrored drive to try.
> 
> Hi,
> 
> According to the zpool history you never had a mirrored 'zones' pool: the 
> 'zones' pool was created on 2018-03-21 with a single disk (c2t0d0) and a log 
> device (c3t1d0).
> Unfortunately, a ZFS log device (for the ZIL, the ZFS Intent Log) is not a 
> mirrored copy of the contents of the 'zones' data disk (c2t0d0).
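> 
> For comparison, creating a genuinely mirrored pool with a separate log device 
> would have looked something like this (the second data disk name is 
> hypothetical):
> 
>     zpool create zones mirror c2t0d0 c2t1d0 log c3t1d0
> 
> whereas your history shows c2t0d0 alone as the only data vdev.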
> 
> Do you mean you have a disk which is not currently connected to the host, i.e. 
> one from your previous installation, which suffered a hardware failure?
> 
> P.S. Here are a few links regarding the ZFS log device and the ZIL:
> https://www.ixsystems.com/blog/o-slog-not-slog-best-configure-zfs-intent-log/
> http://www.freenas.org/blog/zfs-zil-and-slog-demystified/
> 
> 
> -Jussi
> 
> 
> Hi Jussi,
> 
> After the crash, I took one of the mirrored drives out and installed a fresh 
> single disk (c2t0d0).  I went through the SmartOS install process, this time 
> installing the zones to c2t0d0 while the c3t1d0 drive was still attached.  I 
> believe this process created the log device (c3t1d0) and overwrote the 
> contents of the original zones.  From the looks of it, the copy of the zones 
> on c3t1d0 is already screwed.  I have one more shot at it with the other 
> mirrored drive.
> 
> What is the correct process for retrieving the zones on the original mirrored 
> drive and copying them over to the new zones on a new drive?  I can't seem to 
> find good documentation on it, particularly as it relates to SmartOS.
> 
> Thanks for looking into this by the way and for the links.
> 
> Rgds.