[OpenIndiana-discuss] New SSD won't boot in legacy or UEFI mode

2020-06-21 Thread Judah Richardson
Still trying to migrate from my 32 GB SSD to a new 128 GB SSD.

I used zfs send to put everything from the old SSD's rpool into the new
SSD's rpool2.

Then I booted into a live environment and used beadm to activate a BE and
then bootadm install-bootloader to install a bootloader on rpool2.

However, the PC now won't boot from the new SSD in either legacy or UEFI mode.

Any ideas? Did I miss something?
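One thing worth double-checking, sketched here with placeholder names (new pool
rpool2, BE openindiana): the pool's bootfs property has to point at the boot
environment on the new pool, and the loader has to be written for that pool,
e.g.

# zpool get bootfs rpool2                           # should name the BE, e.g. rpool2/ROOT/openindiana
# zpool set bootfs=rpool2/ROOT/openindiana rpool2   # if it is unset
# bootadm install-bootloader -P rpool2              # (re)write the boot loader for rpool2

If bootfs is empty, the loader has nothing to boot even though all the datasets
copied over. This is only a sketch; see zpool(1M) and bootadm(1M) for the exact
options.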


Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-21 Thread Judah Richardson
On Sat, Jun 20, 2020 at 2:29 AM Guenther Alka  wrote:

> Another option is to backup the current BE

How would I determine which one is current?

to the datapool via zfs send.
> This can be done continuously via incremental send for ongoing backups.
> If the system disk fails (or you want to replace), add a new disk,
> install a default OS, import the datapool and restore the BE

What exactly are the commands for this?

via zfs
> send.

What would a command for this look like?

Could folks kindly be more specific and detailed in their replies, please?
A lot of these are just really generic and I'm not really sure how to
proceed based on them.
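A sketch of what those steps could look like, assuming the data pool is called
datapool and using the BE names that show up later in this thread (all names
are placeholders):

# beadm list                                          # the BE flagged N and/or R is the current/active one
# zfs snapshot rpool/ROOT/openindiana-2020:03:26@bk1
# zfs send -p rpool/ROOT/openindiana-2020:03:26@bk1 | zfs recv datapool/be_backup

and later, for the ongoing incremental backups Gea mentions:

# zfs snapshot rpool/ROOT/openindiana-2020:03:26@bk2
# zfs send -p -i @bk1 rpool/ROOT/openindiana-2020:03:26@bk2 | zfs recv -F datapool/be_backup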

Then activate this BE and reboot to have the exact former OS
> installation restored.
>
>   Gea
> @napp-it.org
>
> Am 19.06.2020 um 21:19 schrieb Gary Mills:
> > On Fri, Jun 19, 2020 at 01:23:35PM -0500, Judah Richardson wrote:
> >> I currently run OpenIndiana Hipster on a 32 GB SSD. I'd like to move that
> >> installation to a 128 GB SSD. What's the easiest way to do this?
> > The easiest way is to use zpool commands.  First, add the large SSD as
> > half a mirror to the smaller one.  Then, detach the smaller one.
> > These options are all described in the zpool man page.
> >
> > You will likely need to use the installboot command on the large SSD
> > to make it bootable before you do the detach.  This operation is
> > described in the installboot man page.
> >
> >> I was thinking of using Clonezilla, but I'm not sure if that's the way
> to
> >> go here.
> > I'd recommend using native illumos commands instead.
> >
>
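A sketch of the attach/detach route Gary describes above, with placeholder
device names (old disk c1t0d0, new disk c2t0d0); note that later in this
thread it comes out that the two SSDs differ in sector size/ashift, which can
make the attach fail:

# zpool attach rpool c1t0d0s0 c2t0d0s0        # mirror the new disk onto the existing vdev
# zpool status rpool                          # wait until the resilver completes
# installboot -m /boot/pmbr /boot/gptzfsboot /dev/rdsk/c2t0d0s0   # make the new disk bootable
# zpool detach rpool c1t0d0s0                 # drop the old disk from the mirror

The installboot arguments are the usual x86 loader stage files; see
installboot(1M) for the exact syntax on your release.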


Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-21 Thread Judah Richardson
On Sun, Jun 21, 2020 at 3:37 AM Jonathan Adams 
wrote:

> Honestly, I'd probably stick the new drive in the computer and install
> illumos on it first, so that the layout and zpool are set correctly, then
> look to trash the contents and "zfs send" to it after booting back to the
> old drive ... Or as I said, just using something like "rsync" to copy over
> the bits of the system you care about from the old system ...
>
What would the rsync command for this be? I find rsync's syntax incredibly
confusing.
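A sketch of what an rsync copy could look like, assuming the new install's
filesystems are mounted under /mnt and only home data is wanted (the paths are
examples, not a complete list):

# rsync -avxH /export/home/ /mnt/export/home/

-a preserves ownership, permissions and timestamps, -v is verbose, -x stops
rsync from crossing into other filesystems, and -H keeps hard links. The
trailing slashes mean "copy the contents of this directory into that one".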

>
> The old drive and the new drive are so different that my original "add a
> replica" wouldn't work as is ...
>
> Jon
>
> On Sun, 21 Jun 2020, 06:56 Judah Richardson, 
> wrote:
>
> > OK, I'm still confused. Everything I've read here so far and online
> either
> > seems to be lacking critical details or is outdated. Let's take it one
> step
> > at a time.
> >
> > Let's say my source SSD is cdisk1 (32 GB, ashift=9, MBR) and I'm trying
> to
> > move the installation to target SSD cdisk2 (128 GB, ashift=12, GPT).
> >
> > Do I create a zpool on cdisk2 1st or format it?
> >
> > On Sat, Jun 20, 2020 at 7:40 PM Jonathan Adams 
> > wrote:
> >
> > > $ sudo -s
> > > # zfs send -r rpool@snap01 | zfs recv -F rpool2@snap01
> > >
> > > ?
> > >
> > > On Sun, 21 Jun 2020, 00:25 Judah Richardson, <
> judahrichard...@gmail.com>
> > > wrote:
> > >
> > > > I am absolutely thoroughly confused here. It seems a lot of details
> are
> > > > being left out in the docs? Here are my current rpool filesystems:
> > > >
> > > > # zfs list
> > > > NAME                                                USED  AVAIL  REFER  MOUNTPOINT
> > > > rpool                                              27.0G  1.53G  33.5K  /rpool
> > > > rpool/ROOT                                         17.9G  1.53G    24K  legacy
> > > > rpool/ROOT/openindiana                             15.2M  1.53G  6.09G  /
> > > > rpool/ROOT/openindiana-2019:12:01                   970M  1.53G  7.32G  /
> > > > rpool/ROOT/openindiana-2019:12:02                  48.0M  1.53G  7.32G  /
> > > > rpool/ROOT/openindiana-2019:12:10                   813K  1.53G  8.34G  /
> > > > rpool/ROOT/openindiana-2020:01:14                  15.7M  1.53G  7.88G  /
> > > > rpool/ROOT/openindiana-2020:02:12                   858K  1.53G  7.82G  /
> > > > rpool/ROOT/openindiana-2020:02:27                   650K  1.53G  7.92G  /
> > > > rpool/ROOT/openindiana-2020:03:10                   656K  1.53G  8.23G  /
> > > > rpool/ROOT/openindiana-2020:03:26                  16.8G  1.53G  8.85G  /
> > > > rpool/ROOT/pre_activate_18.12_1575387063            239K  1.53G  7.31G  /
> > > > rpool/ROOT/pre_activate_19.10.homeuse_1575387229    239K  1.53G  7.31G  /
> > > > rpool/ROOT/pre_download_19.10.homeuse_1575354576    223K  1.53G  7.29G  /
> > > > rpool/ROOT/pre_download_19.12.homeuse_1581739687      1K  1.53G  7.68G  /
> > > > rpool/ROOT/pre_napp-it-18.12                         273K  1.53G  6.63G  /
> > > > rpool/dump                                         3.95G  1.53G  3.95G  -
> > > > rpool/export                                        987M  1.53G    24K  /export
> > > > rpool/export/home                                   987M  1.53G    24K  /export/home
> > > > rpool/export/home/judah                             987M  1.53G   270M  /export/home/judah
> > > > rpool/swap                                         4.20G  4.51G  1.22G  -
> > > >
> > > > I then ran
> > > >
> > > > # zfs snapshot -r rpool@snap01
> > > >
> > > > and then
> > > >
> > > > sudo zfs send rpool@snap01 | sudo zfs recv -F rpool2@snap01
> > > >
> > > > where rpool2 is the ZFS pool on the new SSD.
> > > >
> > > > It doesn't seem anything was copied over.
> > > >
> > > > What am I doing wrong?
> > > >
> > > >
> > > >
> > > > On Sat, Jun 20, 2020 at 6:09 PM Till Wegmüller  >
> > > > wrote:
> > > >
> > > > > Have a look at zfs-send(1m)
> > > > >
> > > > > it's -r. You must have a snapshot to send; you cannot send datasets
> > > > > directly. The snapshots must be named exactly the same across the whole
> > > > > pool. You can achieve this with zfs snap -r very easily.
> > > > >
> > > > > Hope this helps
> > > > > Greetings
> > > > > Till
> > > > >
> > > > > On 21.06.20 01:03, Judah Richardson wrote:
> > > > > > I can't seem to find any command that recursively sends all the
> > > > datasets
> > > > > on
> > > > > > 1 zpool to another ...
> > > > > >
> > > > > > This is all very confusing and frustrating. Disk upgrades must
> have
> > > > been
> > > > > a
> > > > > > considered user operation, no? Why not make it intuitive and
> > simple?
> > > :(
> > > > > >
> > > > > > On Sat, Jun 20, 2020 at 5:58 PM Jonathan Adams <
> > > t12nsloo...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > >> I've never done that, but it must be worth a go, unless you want
> > to
> > > > just
> > > > > >> install a new system in the new disk and copy over the files you
> > > want
> > > > to
> > > > > >> 

Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-21 Thread Judah Richardson
On Sun, Jun 21, 2020 at 3:37 AM Jonathan Adams 
wrote:

> Honestly, I'd probably stick the new drive in the computer and install
> illumos on it first,

OI has the dubious distinction of being the most difficult desktop OS
installation I've ever experienced. So much so that it took me all evening and
most of the following day. I had to write notes for myself for next time, and
even that method only resulted in a legacy-boot MBR installation with
apparently 512-byte sectors (not ideal).

I looked at the 2019.10 release notes, which say UEFI support has been
improved: "bootadm install-bootloader (or installboot if run directly)
now populates/maintains the ESP."

How would one run either of those? No examples or instructions are
provided.
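For what it's worth, a sketch of how the two commands are usually invoked, with
placeholder pool and device names; the man pages (bootadm(1M), installboot(1M))
are the authoritative reference:

# bootadm install-bootloader -P rpool2
# installboot -m /boot/pmbr /boot/gptzfsboot /dev/rdsk/c2t0d0s0

The first form operates on a whole pool (and, per the release note above,
should take care of the ESP); the second writes the loader stages to a specific
raw device.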

so that the layout and zpool are set correctly, then
> look to trash the contents and "zfs send" to it after booting back to the
> old drive ... Or as I said, just using something like "rsync" to copy over
> the bits of the system you care about from the old system ...
>
> The old drive and the new drive are so different that my original "add a
> replica" wouldn't work as is ...
>
I see.

>
> Jon
>
> On Sun, 21 Jun 2020, 06:56 Judah Richardson, 
> wrote:
>
> > OK, I'm still confused. Everything I've read here so far and online
> either
> > seems to be lacking critical details or is outdated. Let's take it one
> step
> > at a time.
> >
> > Let's say my source SSD is cdisk1 (32 GB, ashift=9, MBR) and I'm trying
> to
> > move the installation to target SSD cdisk2 (128 GB, ashift=12, GPT).
> >
> > Do I create a zpool on cdisk2 1st or format it?
> >
> > On Sat, Jun 20, 2020 at 7:40 PM Jonathan Adams 
> > wrote:
> >
> > > $ sudo -s
> > > # zfs send -r rpool@snap01 | zfs recv -F rpool2@snap01
> > >
> > > ?
> > >
> > > On Sun, 21 Jun 2020, 00:25 Judah Richardson, <
> judahrichard...@gmail.com>
> > > wrote:
> > >
> > > > I am absolutely thoroughly confused here. It seems a lot of details
> are
> > > > being left out in the docs? Here are my current rpool filesystems:
> > > >
> > > > # zfs list
> > > > NAME                                                USED  AVAIL  REFER  MOUNTPOINT
> > > > rpool                                              27.0G  1.53G  33.5K  /rpool
> > > > rpool/ROOT                                         17.9G  1.53G    24K  legacy
> > > > rpool/ROOT/openindiana                             15.2M  1.53G  6.09G  /
> > > > rpool/ROOT/openindiana-2019:12:01                   970M  1.53G  7.32G  /
> > > > rpool/ROOT/openindiana-2019:12:02                  48.0M  1.53G  7.32G  /
> > > > rpool/ROOT/openindiana-2019:12:10                   813K  1.53G  8.34G  /
> > > > rpool/ROOT/openindiana-2020:01:14                  15.7M  1.53G  7.88G  /
> > > > rpool/ROOT/openindiana-2020:02:12                   858K  1.53G  7.82G  /
> > > > rpool/ROOT/openindiana-2020:02:27                   650K  1.53G  7.92G  /
> > > > rpool/ROOT/openindiana-2020:03:10                   656K  1.53G  8.23G  /
> > > > rpool/ROOT/openindiana-2020:03:26                  16.8G  1.53G  8.85G  /
> > > > rpool/ROOT/pre_activate_18.12_1575387063            239K  1.53G  7.31G  /
> > > > rpool/ROOT/pre_activate_19.10.homeuse_1575387229    239K  1.53G  7.31G  /
> > > > rpool/ROOT/pre_download_19.10.homeuse_1575354576    223K  1.53G  7.29G  /
> > > > rpool/ROOT/pre_download_19.12.homeuse_1581739687      1K  1.53G  7.68G  /
> > > > rpool/ROOT/pre_napp-it-18.12                         273K  1.53G  6.63G  /
> > > > rpool/dump                                         3.95G  1.53G  3.95G  -
> > > > rpool/export                                        987M  1.53G    24K  /export
> > > > rpool/export/home                                   987M  1.53G    24K  /export/home
> > > > rpool/export/home/judah                             987M  1.53G   270M  /export/home/judah
> > > > rpool/swap                                         4.20G  4.51G  1.22G  -
> > > >
> > > > I then ran
> > > >
> > > > # zfs snapshot -r rpool@snap01
> > > >
> > > > and then
> > > >
> > > > sudo zfs send rpool@snap01 | sudo zfs recv -F rpool2@snap01
> > > >
> > > > where rpool2 is the ZFS pool on the new SSD.
> > > >
> > > > It doesn't seem anything was copied over.
> > > >
> > > > What am I doing wrong?
> > > >
> > > >
> > > >
> > > > On Sat, Jun 20, 2020 at 6:09 PM Till Wegmüller  >
> > > > wrote:
> > > >
> > > > > Have a look at zfs-send(1m)
> > > > >
> > > > > it's -r. You must have a snapshot to send; you cannot send datasets
> > > > > directly. The snapshots must be named exactly the same across the whole
> > > > > pool. You can achieve this with zfs snap -r very easily.
> > > > >
> > > > > Hope this helps
> > > > > Greetings
> > > > > Till
> > > > >
> > > > > On 21.06.20 01:03, Judah Richardson wrote:
> > > > > > I can't seem to find any command that recursively sends all the
> > > > datasets
> > > > > on
> > > > 

Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-21 Thread Guenther Alka
As you use my napp-it, you can use the menu "Jobs > Replication > Create" to
create a replication job with the current active BE as source and your
datapool as destination. On a new regular OI setup from scratch (boot the OS
installer, install Hipster to the new SSD) you can then create a new
replication job with the backed-up BE as source and rpool/ROOT as destination.
These are all preconfigured options in the napp-it replication menu. For other
replications, just enable the recursive option to include child filesystems.


The active BE is your complete current Hipster setup, minus what is needed to
boot. You can use this as a full backup or disaster-recovery method: because
you first install a base Hipster again, the new disk is already bootable and
you only need to restore the BE onto it. You cannot install OmniOS and then
switch to Hipster with some sort of copy.
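Outside of napp-it, the restore half can be sketched with plain zfs commands
(dataset names are examples): after the fresh install, import the data pool,
send the backed-up BE back under rpool/ROOT, and activate it:

# zpool import datapool
# zfs send -p datapool/be_backup@bk1 | zfs recv rpool/ROOT/restoredBE
# beadm activate restoredBE
# reboot

The -p on the send carries the dataset properties (mountpoint, canmount) along
so the received dataset looks like a normal BE; this is a sketch of the idea,
not the exact commands napp-it runs.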


If you want to switch between illumos distributions you must do a regular
setup and can only import the datapools; do not reuse anything from rpool.
Napp-it settings such as jobs live in /var/web-gui/_log. Copy/restore them
manually, or use a backup job to save them to the datapool and User > Restore
to bring them back. If you want to preserve user permissions, e.g. via SMB,
recreate the same users and groups (same uid/gid) and SMB groups. Other
services must be reinstalled where available.
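A sketch of the manual copy, with an example destination on the data pool:

# cp -rp /var/web-gui/_log /datapool/backup/napp-it_log

and copy it back to /var/web-gui/_log on the new install.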


Gea
@napp-it.org

Am 21.06.2020 um 10:36 schrieb Jonathan Adams:

Honestly, I'd probably stick the new drive in the computer and install
illumos on it first, so that the layout and zpool are set correctly, then
look to trash the contents and "zfs send" to it after booting back to the
old drive ... Or as I said, just using something like "rsync" to copy over
the bits of the system you care about from the old system ...

The old drive and the new drive are so different that my original "add a
replica" wouldn't work as is ...

Jon

On Sun, 21 Jun 2020, 06:56 Judah Richardson, 
wrote:


OK, I'm still confused. Everything I've read here so far and online either
seems to be lacking critical details or is outdated. Let's take it one step
at a time.

Let's say my source SSD is cdisk1 (32 GB, ashift=9, MBR) and I'm trying to
move the installation to target SSD cdisk2 (128 GB, ashift=12, GPT).

Do I create a zpool on cdisk2 1st or format it?

On Sat, Jun 20, 2020 at 7:40 PM Jonathan Adams 
wrote:


$ sudo -s
# zfs send -r rpool@snap01 | zfs recv -F rpool2@snap01

?

On Sun, 21 Jun 2020, 00:25 Judah Richardson, 
wrote:


I am absolutely thoroughly confused here. It seems a lot of details are
being left out in the docs? Here are my current rpool filesystems:

# zfs list
NAME                                                USED  AVAIL  REFER  MOUNTPOINT
rpool                                              27.0G  1.53G  33.5K  /rpool
rpool/ROOT                                         17.9G  1.53G    24K  legacy
rpool/ROOT/openindiana                             15.2M  1.53G  6.09G  /
rpool/ROOT/openindiana-2019:12:01                   970M  1.53G  7.32G  /
rpool/ROOT/openindiana-2019:12:02                  48.0M  1.53G  7.32G  /
rpool/ROOT/openindiana-2019:12:10                   813K  1.53G  8.34G  /
rpool/ROOT/openindiana-2020:01:14                  15.7M  1.53G  7.88G  /
rpool/ROOT/openindiana-2020:02:12                   858K  1.53G  7.82G  /
rpool/ROOT/openindiana-2020:02:27                   650K  1.53G  7.92G  /
rpool/ROOT/openindiana-2020:03:10                   656K  1.53G  8.23G  /
rpool/ROOT/openindiana-2020:03:26                  16.8G  1.53G  8.85G  /
rpool/ROOT/pre_activate_18.12_1575387063            239K  1.53G  7.31G  /
rpool/ROOT/pre_activate_19.10.homeuse_1575387229    239K  1.53G  7.31G  /
rpool/ROOT/pre_download_19.10.homeuse_1575354576    223K  1.53G  7.29G  /
rpool/ROOT/pre_download_19.12.homeuse_1581739687      1K  1.53G  7.68G  /
rpool/ROOT/pre_napp-it-18.12                         273K  1.53G  6.63G  /
rpool/dump                                         3.95G  1.53G  3.95G  -
rpool/export                                        987M  1.53G    24K  /export
rpool/export/home                                   987M  1.53G    24K  /export/home
rpool/export/home/judah                             987M  1.53G   270M  /export/home/judah
rpool/swap                                         4.20G  4.51G  1.22G  -

I then ran

# zfs snapshot -r rpool@snap01

and then

sudo zfs send rpool@snap01 | sudo zfs recv -F rpool2@snap01

where rpool2 is the ZFS pool on the new SSD.

It doesn't seem anything was copied over.

What am I doing wrong?



On Sat, Jun 20, 2020 at 6:09 PM Till Wegmüller 
wrote:


Have a look at zfs-send(1m)

it's -r. You must have a snapshot to send; you cannot send datasets
directly. The snapshots must be named exactly the same across the whole
pool. You can achieve this with zfs snap -r very easily.

Hope this helps
Greetings
Till

On 21.06.20 01:03, Judah Richardson wrote:

I can't seem to find any command that recursively sends all the

datasets

on

1 zpool to another ...

This is all very confusing and 

Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-21 Thread Jonathan Adams
Honestly, I'd probably stick the new drive in the computer and install
illumos on it first, so that the layout and zpool are set correctly, then
look to trash the contents and "zfs send" to it after booting back to the
old drive ... Or as I said, just using something like "rsync" to copy over
the bits of the system you care about from the old system ...

The old drive and the new drive are so different that my original "add a
replica" wouldn't work as is ...

Jon

On Sun, 21 Jun 2020, 06:56 Judah Richardson, 
wrote:

> OK, I'm still confused. Everything I've read here so far and online either
> seems to be lacking critical details or is outdated. Let's take it one step
> at a time.
>
> Let's say my source SSD is cdisk1 (32 GB, ashift=9, MBR) and I'm trying to
> move the installation to target SSD cdisk2 (128 GB, ashift=12, GPT).
>
> Do I create a zpool on cdisk2 1st or format it?
>
> On Sat, Jun 20, 2020 at 7:40 PM Jonathan Adams 
> wrote:
>
> > $ sudo -s
> > # zfs send -r rpool@snap01 | zfs recv -F rpool2@snap01
> >
> > ?
> >
> > On Sun, 21 Jun 2020, 00:25 Judah Richardson, 
> > wrote:
> >
> > > I am absolutely thoroughly confused here. It seems a lot of details are
> > > being left out in the docs? Here are my current rpool filesystems:
> > >
> > > # zfs list
> > > NAME                                                USED  AVAIL  REFER  MOUNTPOINT
> > > rpool                                              27.0G  1.53G  33.5K  /rpool
> > > rpool/ROOT                                         17.9G  1.53G    24K  legacy
> > > rpool/ROOT/openindiana                             15.2M  1.53G  6.09G  /
> > > rpool/ROOT/openindiana-2019:12:01                   970M  1.53G  7.32G  /
> > > rpool/ROOT/openindiana-2019:12:02                  48.0M  1.53G  7.32G  /
> > > rpool/ROOT/openindiana-2019:12:10                   813K  1.53G  8.34G  /
> > > rpool/ROOT/openindiana-2020:01:14                  15.7M  1.53G  7.88G  /
> > > rpool/ROOT/openindiana-2020:02:12                   858K  1.53G  7.82G  /
> > > rpool/ROOT/openindiana-2020:02:27                   650K  1.53G  7.92G  /
> > > rpool/ROOT/openindiana-2020:03:10                   656K  1.53G  8.23G  /
> > > rpool/ROOT/openindiana-2020:03:26                  16.8G  1.53G  8.85G  /
> > > rpool/ROOT/pre_activate_18.12_1575387063            239K  1.53G  7.31G  /
> > > rpool/ROOT/pre_activate_19.10.homeuse_1575387229    239K  1.53G  7.31G  /
> > > rpool/ROOT/pre_download_19.10.homeuse_1575354576    223K  1.53G  7.29G  /
> > > rpool/ROOT/pre_download_19.12.homeuse_1581739687      1K  1.53G  7.68G  /
> > > rpool/ROOT/pre_napp-it-18.12                         273K  1.53G  6.63G  /
> > > rpool/dump                                         3.95G  1.53G  3.95G  -
> > > rpool/export                                        987M  1.53G    24K  /export
> > > rpool/export/home                                   987M  1.53G    24K  /export/home
> > > rpool/export/home/judah                             987M  1.53G   270M  /export/home/judah
> > > rpool/swap                                         4.20G  4.51G  1.22G  -
> > >
> > > I then ran
> > >
> > > # zfs snapshot -r rpool@snap01
> > >
> > > and then
> > >
> > > sudo zfs send rpool@snap01 | sudo zfs recv -F rpool2@snap01
> > >
> > > where rpool2 is the ZFS pool on the new SSD.
> > >
> > > It doesn't seem anything was copied over.
> > >
> > > What am I doing wrong?
> > >
> > >
> > >
> > > On Sat, Jun 20, 2020 at 6:09 PM Till Wegmüller 
> > > wrote:
> > >
> > > > Have a look at zfs-send(1m)
> > > >
> > > > it's -r. You must have a snapshot to send; you cannot send datasets
> > > > directly. The snapshots must be named exactly the same across the whole
> > > > pool. You can achieve this with zfs snap -r very easily.
> > > >
> > > > Hope this helps
> > > > Greetings
> > > > Till
> > > >
> > > > On 21.06.20 01:03, Judah Richardson wrote:
> > > > > I can't seem to find any command that recursively sends all the
> > > datasets
> > > > on
> > > > > 1 zpool to another ...
> > > > >
> > > > > This is all very confusing and frustrating. Disk upgrades must have
> > > been
> > > > a
> > > > > considered user operation, no? Why not make it intuitive and
> simple?
> > :(
> > > > >
> > > > > On Sat, Jun 20, 2020 at 5:58 PM Jonathan Adams <
> > t12nsloo...@gmail.com>
> > > > > wrote:
> > > > >
> > > > >> I've never done that, but it must be worth a go, unless you want
> to
> > > just
> > > > >> install a new system in the new disk and copy over the files you
> > want
> > > to
> > > > >> change afterwards ...
> > > > >>
> > > > >> On Sat, 20 Jun 2020, 23:46 Judah Richardson, <
> > > judahrichard...@gmail.com
> > > > >
> > > > >> wrote:
> > > > >>
> > > > >>> On Sat, Jun 20, 2020 at 2:29 AM Guenther Alka <
> a...@hfg-gmuend.de>
> > > > >> wrote:
> > > > >>>
> > > >  Another option is to backup the current BE to the datapool via
> zfs
> > > > >> send.
> > > > 
> > > > >>> Would it be possible to just zfs send everything from