Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-27 Thread Judah Richardson
Just to tie a bow on this issue: the source SSD died during the transplant
operation and I've had to setup everything from scratch anyway.

FWIW, UEFI installations still don't work because the BIOS can't find the a
filesystem on the SSD.

On Mon, Jun 22, 2020 at 10:48 AM Jonathan Adams 
wrote:

> Yep, that's the idea.
>
> Depending on how old your system was, you may need to zfs upgrade once
> you've finished, but it should all work like that.
>
> You'll need to: zfs snapshot -r rpool2@upgrade
>
> But that's the essential plan.
>
> I hope that the new install creates the correct ashift on the new pool.
>
> Jon
>
> On Mon, 22 Jun 2020, 16:40 Judah Richardson, 
> wrote:
>
> > On Mon, Jun 22, 2020 at 3:26 AM Jonathan Adams 
> > wrote:
> >
> > > Unfortunately I'm currently redundant
> >
> > Ugh, I'm sorry to hear that. I've been made redundant myself 3 times :(
> >
> > and don't have access to what was my
> > > illumos hardware ...
> > >
> > > I'd suggest you use the nappit suggestion, and do the backup from
> within
> > > the old system before swapping the disks over ...
> > >
> > > As for the ashift, it's been a few years since I did a fresh install,
> and
> > > I've forgotten how to force an ashift on a device during installation
> ...
> > >
> > Out of curiosity, is the method I described here
> > <
> >
> https://forums.servethehome.com/index.php?threads/how-do-i-migrate-openindiana-hipster-installation-from-512k-mbr-legacy-boot-32-gb-ssd-to-4k-gpt-uefi-boot-128-gb-ssd.29223/post-270825
> > >
> > in line with your previous suggestion of installing a default OI and then
> > zfs
> > send-ing the old SSD's rpool snapshot to the new SSD's rpool? Does that
> > look like it would work?
> >
> > >
> > > Jon
> > >
> > > On Mon, 22 Jun 2020, 08:10 Guenther Alka,  wrote:
> > >
> > > > hello Judah
> > > >
> > > > Am 22.06.2020 um 05:00 schrieb Judah Richardson:
> > > > > On Sat, Jun 20, 2020 at 2:29 AM Guenther Alka 
> > > > wrote:
> > > > >
> > > > >> Another option is to backup the current BE
> > > > > How would I determine which one is current?
> > > > >
> > > > > to the datapool via zfs send.
> > > >
> > > > In napp-it, check menu Snapshots > Bootenvironment, at console beadm
> > list
> > > > Look in the list for N=curreNt, R=Reboot. Normally you find one with
> NR
> > > > that is current and the one for next reboot.
> > > >
> > > > If you create a napp-it replication job, it automatically shows only
> > the
> > > > current BE as filesystem source together with all other regular
> > > > filesystems.
> > > >
> > > > >> This can be done continously via incremental send for ongoing
> > backups.
> > > > >> If the system disk fails (or you want to replace), add a new disk,
> > > > >> install a default OS, import the datapool and restore the BE
> > > > > What exactly are the commands for this?
> > > > >
> > > > > via zfs
> > > >
> > > > Use napp-it menu Jobs > Replicate > create then start the job. On
> first
> > > > run it replicates the whole filesystem, on next run only modified
> > > > datablock (incremental zfs send)
> > > >
> > > > On console, create a snapshot of the current BE filesystem and use it
> > > > for a zfs send
> > > > Look at https://docs.oracle.com/cd/E36784_01/html/E36835/gbinw.html
> or
> > > > other basic ZFS manuals
> > > >
> > > > >> send.
> > > > > What would a command for this look like?
> > > > >
> > > > > Could folks kindly be more specific and detailed in their replies,
> > > > please?
> > > > > A lot of these are just really generic and I'm not really sure how
> to
> > > > > proceed based on them.
> > > > >
> > > > > Then activate this BE and reboot to have the exact former OS
> > > >
> > > > After you restored the BE filesystem via another replication job or
> zfs
> > > > send on the new installation use napp-it menu Snaps > Bootenvironment
> > or
> > > > "beadm activate bename" to set a BE active for next reboot. In beadm
> > > > list you show the current marked with N and the activated one marked
> > > > with R (will become current after a Reboot).
> > > >
> > > > >> installation restored.
> > > > >>
> > > > >>Gea
> > > > >> @napp-it.org
> > > > >>
> > > > >>
> > > >
> > > > ___
> > > > openindiana-discuss mailing list
> > > > openindiana-discuss@openindiana.org
> > > > https://openindiana.org/mailman/listinfo/openindiana-discuss
> > > >
> > > ___
> > > openindiana-discuss mailing list
> > > openindiana-discuss@openindiana.org
> > > https://openindiana.org/mailman/listinfo/openindiana-discuss
> > >
> > ___
> > openindiana-discuss mailing list
> > openindiana-discuss@openindiana.org
> > https://openindiana.org/mailman/listinfo/openindiana-discuss
> >
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
>

Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-22 Thread Jonathan Adams
Yep, that's the idea.

Depending on how old your system was, you may need to zfs upgrade once
you've finished, but it should all work like that.

You'll need to: zfs snapshot -r rpool2@upgrade

But that's the essential plan.

I hope that the new install creates the correct ashift on the new pool.

Jon

On Mon, 22 Jun 2020, 16:40 Judah Richardson, 
wrote:

> On Mon, Jun 22, 2020 at 3:26 AM Jonathan Adams 
> wrote:
>
> > Unfortunately I'm currently redundant
>
> Ugh, I'm sorry to hear that. I've been made redundant myself 3 times :(
>
> and don't have access to what was my
> > illumos hardware ...
> >
> > I'd suggest you use the nappit suggestion, and do the backup from within
> > the old system before swapping the disks over ...
> >
> > As for the ashift, it's been a few years since I did a fresh install, and
> > I've forgotten how to force an ashift on a device during installation ...
> >
> Out of curiosity, is the method I described here
> <
> https://forums.servethehome.com/index.php?threads/how-do-i-migrate-openindiana-hipster-installation-from-512k-mbr-legacy-boot-32-gb-ssd-to-4k-gpt-uefi-boot-128-gb-ssd.29223/post-270825
> >
> in line with your previous suggestion of installing a default OI and then
> zfs
> send-ing the old SSD's rpool snapshot to the new SSD's rpool? Does that
> look like it would work?
>
> >
> > Jon
> >
> > On Mon, 22 Jun 2020, 08:10 Guenther Alka,  wrote:
> >
> > > hello Judah
> > >
> > > Am 22.06.2020 um 05:00 schrieb Judah Richardson:
> > > > On Sat, Jun 20, 2020 at 2:29 AM Guenther Alka 
> > > wrote:
> > > >
> > > >> Another option is to backup the current BE
> > > > How would I determine which one is current?
> > > >
> > > > to the datapool via zfs send.
> > >
> > > In napp-it, check menu Snapshots > Bootenvironment, at console beadm
> list
> > > Look in the list for N=curreNt, R=Reboot. Normally you find one with NR
> > > that is current and the one for next reboot.
> > >
> > > If you create a napp-it replication job, it automatically shows only
> the
> > > current BE as filesystem source together with all other regular
> > > filesystems.
> > >
> > > >> This can be done continously via incremental send for ongoing
> backups.
> > > >> If the system disk fails (or you want to replace), add a new disk,
> > > >> install a default OS, import the datapool and restore the BE
> > > > What exactly are the commands for this?
> > > >
> > > > via zfs
> > >
> > > Use napp-it menu Jobs > Replicate > create then start the job. On first
> > > run it replicates the whole filesystem, on next run only modified
> > > datablock (incremental zfs send)
> > >
> > > On console, create a snapshot of the current BE filesystem and use it
> > > for a zfs send
> > > Look at https://docs.oracle.com/cd/E36784_01/html/E36835/gbinw.html or
> > > other basic ZFS manuals
> > >
> > > >> send.
> > > > What would a command for this look like?
> > > >
> > > > Could folks kindly be more specific and detailed in their replies,
> > > please?
> > > > A lot of these are just really generic and I'm not really sure how to
> > > > proceed based on them.
> > > >
> > > > Then activate this BE and reboot to have the exact former OS
> > >
> > > After you restored the BE filesystem via another replication job or zfs
> > > send on the new installation use napp-it menu Snaps > Bootenvironment
> or
> > > "beadm activate bename" to set a BE active for next reboot. In beadm
> > > list you show the current marked with N and the activated one marked
> > > with R (will become current after a Reboot).
> > >
> > > >> installation restored.
> > > >>
> > > >>Gea
> > > >> @napp-it.org
> > > >>
> > > >>
> > >
> > > ___
> > > openindiana-discuss mailing list
> > > openindiana-discuss@openindiana.org
> > > https://openindiana.org/mailman/listinfo/openindiana-discuss
> > >
> > ___
> > openindiana-discuss mailing list
> > openindiana-discuss@openindiana.org
> > https://openindiana.org/mailman/listinfo/openindiana-discuss
> >
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-22 Thread Judah Richardson
On Mon, Jun 22, 2020 at 3:26 AM Jonathan Adams 
wrote:

> Unfortunately I'm currently redundant

Ugh, I'm sorry to hear that. I've been made redundant myself 3 times :(

and don't have access to what was my
> illumos hardware ...
>
> I'd suggest you use the nappit suggestion, and do the backup from within
> the old system before swapping the disks over ...
>
> As for the ashift, it's been a few years since I did a fresh install, and
> I've forgotten how to force an ashift on a device during installation ...
>
Out of curiosity, is the method I described here

in line with your previous suggestion of installing a default OI and then zfs
send-ing the old SSD's rpool snapshot to the new SSD's rpool? Does that
look like it would work?

>
> Jon
>
> On Mon, 22 Jun 2020, 08:10 Guenther Alka,  wrote:
>
> > hello Judah
> >
> > Am 22.06.2020 um 05:00 schrieb Judah Richardson:
> > > On Sat, Jun 20, 2020 at 2:29 AM Guenther Alka 
> > wrote:
> > >
> > >> Another option is to backup the current BE
> > > How would I determine which one is current?
> > >
> > > to the datapool via zfs send.
> >
> > In napp-it, check menu Snapshots > Bootenvironment, at console beadm list
> > Look in the list for N=curreNt, R=Reboot. Normally you find one with NR
> > that is current and the one for next reboot.
> >
> > If you create a napp-it replication job, it automatically shows only the
> > current BE as filesystem source together with all other regular
> > filesystems.
> >
> > >> This can be done continously via incremental send for ongoing backups.
> > >> If the system disk fails (or you want to replace), add a new disk,
> > >> install a default OS, import the datapool and restore the BE
> > > What exactly are the commands for this?
> > >
> > > via zfs
> >
> > Use napp-it menu Jobs > Replicate > create then start the job. On first
> > run it replicates the whole filesystem, on next run only modified
> > datablock (incremental zfs send)
> >
> > On console, create a snapshot of the current BE filesystem and use it
> > for a zfs send
> > Look at https://docs.oracle.com/cd/E36784_01/html/E36835/gbinw.html or
> > other basic ZFS manuals
> >
> > >> send.
> > > What would a command for this look like?
> > >
> > > Could folks kindly be more specific and detailed in their replies,
> > please?
> > > A lot of these are just really generic and I'm not really sure how to
> > > proceed based on them.
> > >
> > > Then activate this BE and reboot to have the exact former OS
> >
> > After you restored the BE filesystem via another replication job or zfs
> > send on the new installation use napp-it menu Snaps > Bootenvironment or
> > "beadm activate bename" to set a BE active for next reboot. In beadm
> > list you show the current marked with N and the activated one marked
> > with R (will become current after a Reboot).
> >
> > >> installation restored.
> > >>
> > >>Gea
> > >> @napp-it.org
> > >>
> > >>
> >
> > ___
> > openindiana-discuss mailing list
> > openindiana-discuss@openindiana.org
> > https://openindiana.org/mailman/listinfo/openindiana-discuss
> >
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-22 Thread Judah Richardson
On Mon, Jun 22, 2020 at 2:10 AM Guenther Alka  wrote:

> hello Judah
>
> Am 22.06.2020 um 05:00 schrieb Judah Richardson:
> > On Sat, Jun 20, 2020 at 2:29 AM Guenther Alka 
> wrote:
> >
> >> Another option is to backup the current BE
> > How would I determine which one is current?
> >
> > to the datapool via zfs send.
>
> In napp-it, check menu Snapshots > Bootenvironment, at console beadm list
> Look in the list for N=curreNt, R=Reboot. Normally you find one with NR
> that is current and the one for next reboot.
>
> If you create a napp-it replication job, it automatically shows only the
> current BE as filesystem source together with all other regular
> filesystems.
>
> >> This can be done continously via incremental send for ongoing backups.
> >> If the system disk fails (or you want to replace), add a new disk,
> >> install a default OS, import the datapool and restore the BE
> > What exactly are the commands for this?
> >
> > via zfs
>
> Use napp-it menu Jobs > Replicate > create then start the job. On first
> run it replicates the whole filesystem, on next run only modified
> datablock (incremental zfs send)
>
> On console, create a snapshot of the current BE filesystem and use it
> for a zfs send
> Look at https://docs.oracle.com/cd/E36784_01/html/E36835/gbinw.html or
> other basic ZFS manuals
>
> >> send.
> > What would a command for this look like?
> >
> > Could folks kindly be more specific and detailed in their replies,
> please?
> > A lot of these are just really generic and I'm not really sure how to
> > proceed based on them.
> >
> > Then activate this BE and reboot to have the exact former OS
>
> After you restored the BE filesystem via another replication job or zfs
> send on the new installation use napp-it menu Snaps > Bootenvironment or
> "beadm activate bename" to set a BE active for next reboot. In beadm
> list you show the current marked with N and the activated one marked
> with R (will become current after a Reboot).
>
Alright. Since the source SSD is currently disconnected, I'm going to try
the method I detailed at your STH forum here
.


>
> >> installation restored.
> >>
> >>Gea
> >> @napp-it.org
> >>
> >>
>
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-22 Thread Jonathan Adams
Unfortunately I'm currently redundant and don't have access to what was my
illumos hardware ...

I'd suggest you use the nappit suggestion, and do the backup from within
the old system before swapping the disks over ...

As for the ashift, it's been a few years since I did a fresh install, and
I've forgotten how to force an ashift on a device during installation ...

Jon

On Mon, 22 Jun 2020, 08:10 Guenther Alka,  wrote:

> hello Judah
>
> Am 22.06.2020 um 05:00 schrieb Judah Richardson:
> > On Sat, Jun 20, 2020 at 2:29 AM Guenther Alka 
> wrote:
> >
> >> Another option is to backup the current BE
> > How would I determine which one is current?
> >
> > to the datapool via zfs send.
>
> In napp-it, check menu Snapshots > Bootenvironment, at console beadm list
> Look in the list for N=curreNt, R=Reboot. Normally you find one with NR
> that is current and the one for next reboot.
>
> If you create a napp-it replication job, it automatically shows only the
> current BE as filesystem source together with all other regular
> filesystems.
>
> >> This can be done continously via incremental send for ongoing backups.
> >> If the system disk fails (or you want to replace), add a new disk,
> >> install a default OS, import the datapool and restore the BE
> > What exactly are the commands for this?
> >
> > via zfs
>
> Use napp-it menu Jobs > Replicate > create then start the job. On first
> run it replicates the whole filesystem, on next run only modified
> datablock (incremental zfs send)
>
> On console, create a snapshot of the current BE filesystem and use it
> for a zfs send
> Look at https://docs.oracle.com/cd/E36784_01/html/E36835/gbinw.html or
> other basic ZFS manuals
>
> >> send.
> > What would a command for this look like?
> >
> > Could folks kindly be more specific and detailed in their replies,
> please?
> > A lot of these are just really generic and I'm not really sure how to
> > proceed based on them.
> >
> > Then activate this BE and reboot to have the exact former OS
>
> After you restored the BE filesystem via another replication job or zfs
> send on the new installation use napp-it menu Snaps > Bootenvironment or
> "beadm activate bename" to set a BE active for next reboot. In beadm
> list you show the current marked with N and the activated one marked
> with R (will become current after a Reboot).
>
> >> installation restored.
> >>
> >>Gea
> >> @napp-it.org
> >>
> >>
>
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-22 Thread Guenther Alka

hello Judah

Am 22.06.2020 um 05:00 schrieb Judah Richardson:

On Sat, Jun 20, 2020 at 2:29 AM Guenther Alka  wrote:


Another option is to backup the current BE

How would I determine which one is current?

to the datapool via zfs send.


In napp-it, check menu Snapshots > Bootenvironment, at console beadm list
Look in the list for N=curreNt, R=Reboot. Normally you find one with NR 
that is current and the one for next reboot.


If you create a napp-it replication job, it automatically shows only the 
current BE as filesystem source together with all other regular filesystems.



This can be done continously via incremental send for ongoing backups.
If the system disk fails (or you want to replace), add a new disk,
install a default OS, import the datapool and restore the BE

What exactly are the commands for this?

via zfs


Use napp-it menu Jobs > Replicate > create then start the job. On first 
run it replicates the whole filesystem, on next run only modified 
datablock (incremental zfs send)


On console, create a snapshot of the current BE filesystem and use it 
for a zfs send
Look at https://docs.oracle.com/cd/E36784_01/html/E36835/gbinw.html or 
other basic ZFS manuals



send.

What would a command for this look like?

Could folks kindly be more specific and detailed in their replies, please?
A lot of these are just really generic and I'm not really sure how to
proceed based on them.

Then activate this BE and reboot to have the exact former OS


After you restored the BE filesystem via another replication job or zfs 
send on the new installation use napp-it menu Snaps > Bootenvironment or 
"beadm activate bename" to set a BE active for next reboot. In beadm 
list you show the current marked with N and the activated one marked 
with R (will become current after a Reboot).



installation restored.

   Gea
@napp-it.org




___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-21 Thread Judah Richardson
On Sat, Jun 20, 2020 at 2:29 AM Guenther Alka  wrote:

> Another option is to backup the current BE

How would I determine which one is current?

to the datapool via zfs send.
> This can be done continously via incremental send for ongoing backups.
> If the system disk fails (or you want to replace), add a new disk,
> install a default OS, import the datapool and restore the BE

What exactly are the commands for this?

via zfs
> send.

What would a command for this look like?

Could folks kindly be more specific and detailed in their replies, please?
A lot of these are just really generic and I'm not really sure how to
proceed based on them.

Then activate this BE and reboot to have the exact former OS
> installation restored.
>
>   Gea
> @napp-it.org
>
> Am 19.06.2020 um 21:19 schrieb Gary Mills:
> > On Fri, Jun 19, 2020 at 01:23:35PM -0500, Judah Richardson wrote:
> >> I currently run OpenIndiana Hipster on a 32 GB SSD. I'd like move that
> >> installation to a 128 GB SSD. What's the easiest way to do this?
> > The easiest way is to use zpool commands.  First, add the large SSD as
> > half a mirror to the smaller one.  Then, detach the smaller one.
> > These options are all described in the zpool man page.
> >
> > You will likely need to use the installboot command on the large SSD
> > to make it bootable before you do the detach.  This operation is
> > described in the installboot man page.
> >
> >> I was thinking of using Clonezilla, but I'm not sure if that's the way
> to
> >> go here.
> > I'd recommend using native illumos commands instead.
> >
>
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-21 Thread Judah Richardson
On Sun, Jun 21, 2020 at 3:37 AM Jonathan Adams 
wrote:

> Honestly, I'd probably stick the new drive in the computer and install
> illumos on it first, so that the layout and zpool are set correctly, then
> look to trash the contents and "zfs send" to it after booting back to the
> old drive ... Or as I said, just using something like "rsync" to copy on
> the bits of the system you care about from the old system ...
>
What would the rsync command for this be? I find rsync's syntax incredibly
confusing.

>
> The old drive and the new drive are so different, so my original "add a
> replica" wouldn't work as is ...
>
> Jon
>
> On Sun, 21 Jun 2020, 06:56 Judah Richardson, 
> wrote:
>
> > OK, I'm still confused. Everything I've read here so far and online
> either
> > seems to be lacking critical details or is outdated. Let's take it one
> step
> > at a time.
> >
> > Let's say my source SSD is cdisk1 (32 GB, ashift=9, MBR) and I'm trying
> to
> > move the installation to target SSD cdisk2 (128 GB, ashift=12, GPT).
> >
> > Do I create a zpool on cdisk2 1st or format it?
> >
> > On Sat, Jun 20, 2020 at 7:40 PM Jonathan Adams 
> > wrote:
> >
> > > $ sudo -s
> > > # zfs send -r rpool@snap01 | zfs recv -F rpool2@snap01
> > >
> > > ?
> > >
> > > On Sun, 21 Jun 2020, 00:25 Judah Richardson, <
> judahrichard...@gmail.com>
> > > wrote:
> > >
> > > > I am absolutely thoroughly confused here. It seems a lot of details
> are
> > > > being left out in the docs? Here are my current rpool filesystems:
> > > >
> > > > # zfs list
> > > > NAME   USED  AVAIL  REFER
> > > > MOUNTPOINT
> > > > rpool 27.0G  1.53G  33.5K
> > > > /rpool
> > > > rpool/ROOT17.9G  1.53G24K
> > > > legacy
> > > > rpool/ROOT/openindiana15.2M  1.53G  6.09G
> > /
> > > > rpool/ROOT/openindiana-2019:12:01  970M  1.53G  7.32G
> > /
> > > > rpool/ROOT/openindiana-2019:12:02 48.0M  1.53G  7.32G
> > /
> > > > rpool/ROOT/openindiana-2019:12:10  813K  1.53G  8.34G
> > /
> > > > rpool/ROOT/openindiana-2020:01:14 15.7M  1.53G  7.88G
> > /
> > > > rpool/ROOT/openindiana-2020:02:12  858K  1.53G  7.82G
> > /
> > > > rpool/ROOT/openindiana-2020:02:27  650K  1.53G  7.92G
> > /
> > > > rpool/ROOT/openindiana-2020:03:10  656K  1.53G  8.23G
> > /
> > > > rpool/ROOT/openindiana-2020:03:26 16.8G  1.53G  8.85G
> > /
> > > > rpool/ROOT/pre_activate_18.12_1575387063   239K  1.53G  7.31G
> > /
> > > > rpool/ROOT/pre_activate_19.10.homeuse_1575387229   239K  1.53G  7.31G
> > /
> > > > rpool/ROOT/pre_download_19.10.homeuse_1575354576   223K  1.53G  7.29G
> > /
> > > > rpool/ROOT/pre_download_19.12.homeuse_1581739687 1K  1.53G  7.68G
> > /
> > > > rpool/ROOT/pre_napp-it-18.12   273K  1.53G  6.63G
> > /
> > > > rpool/dump3.95G  1.53G  3.95G
> > -
> > > > rpool/export   987M  1.53G24K
> > > > /export
> > > > rpool/export/home  987M  1.53G24K
> > > > /export/home
> > > > rpool/export/home/judah987M  1.53G   270M
> > > > /export/home/judah
> > > > rpool/swap4.20G  4.51G  1.22G
> > -
> > > >
> > > > I then ran
> > > >
> > > > # zfs snapshot -r rpool@snap01
> > > >
> > > > and then
> > > >
> > > > sudo zfs send rpool@snap01 | sudo zfs recv -F rpool2@snap01
> > > >
> > > > where rpool2 is the ZFS pool on the new SSD.
> > > >
> > > > It doesn't seem anything was copied over.
> > > >
> > > > What am I doing wrong?
> > > >
> > > >
> > > >
> > > > On Sat, Jun 20, 2020 at 6:09 PM Till Wegmüller  >
> > > > wrote:
> > > >
> > > > > Have a look at zfs-send(1m)
> > > > >
> > > > > it's -r. You must have a snapshot to send you cannot sent datasets
> > > > > directly. The snapshots must be named exaclty the same on the whole
> > > > > pool. You can achieve this with zfs snap -r very easily.
> > > > >
> > > > > Hope this helps
> > > > > Greetings
> > > > > Till
> > > > >
> > > > > On 21.06.20 01:03, Judah Richardson wrote:
> > > > > > I can't seem to find any command that recursively sends all the
> > > > datasets
> > > > > on
> > > > > > 1 zpool to another ...
> > > > > >
> > > > > > This is all very confusing and frustrating. Disk upgrades must
> have
> > > > been
> > > > > a
> > > > > > considered user operation, no? Why not make it intuitive and
> > simple?
> > > :(
> > > > > >
> > > > > > On Sat, Jun 20, 2020 at 5:58 PM Jonathan Adams <
> > > t12nsloo...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > >> I've never done that, but it must be worth a go, unless you want
> > to
> > > > just
> > > > > >> install a new system in the new disk and copy over the files you
> > > want
> > > > to
> > > > > >> 

Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-21 Thread Judah Richardson
On Sun, Jun 21, 2020 at 3:37 AM Jonathan Adams 
wrote:

> Honestly, I'd probably stick the new drive in the computer and install
> illumos on it first,

OI has the dubious distinction of being the most difficult desktop OS
installation I've ever experienced. So much so it took me all evening and
most of the following day. I had to write notes for myself for next time
, and
even that method only resulted in a legacy boot MBR installation with
apparently 512b sectors (not ideal.)

I looked at the 2019.10 version release notes and it says UEFI support has
been improved: "bootadm install-bootloader (or installboot if run directly)
now populates/maintains the ESP."

How would one run either of those? No examples or instructions are
provided.

so that the layout and zpool are set correctly, then
> look to trash the contents and "zfs send" to it after booting back to the
> old drive ... Or as I said, just using something like "rsync" to copy on
> the bits of the system you care about from the old system ...
>
> The old drive and the new drive are so different, so my original "add a
> replica" wouldn't work as is ...
>
I see.

>
> Jon
>
> On Sun, 21 Jun 2020, 06:56 Judah Richardson, 
> wrote:
>
> > OK, I'm still confused. Everything I've read here so far and online
> either
> > seems to be lacking critical details or is outdated. Let's take it one
> step
> > at a time.
> >
> > Let's say my source SSD is cdisk1 (32 GB, ashift=9, MBR) and I'm trying
> to
> > move the installation to target SSD cdisk2 (128 GB, ashift=12, GPT).
> >
> > Do I create a zpool on cdisk2 1st or format it?
> >
> > On Sat, Jun 20, 2020 at 7:40 PM Jonathan Adams 
> > wrote:
> >
> > > $ sudo -s
> > > # zfs send -r rpool@snap01 | zfs recv -F rpool2@snap01
> > >
> > > ?
> > >
> > > On Sun, 21 Jun 2020, 00:25 Judah Richardson, <
> judahrichard...@gmail.com>
> > > wrote:
> > >
> > > > I am absolutely thoroughly confused here. It seems a lot of details
> are
> > > > being left out in the docs? Here are my current rpool filesystems:
> > > >
> > > > # zfs list
> > > > NAME   USED  AVAIL  REFER
> > > > MOUNTPOINT
> > > > rpool 27.0G  1.53G  33.5K
> > > > /rpool
> > > > rpool/ROOT17.9G  1.53G24K
> > > > legacy
> > > > rpool/ROOT/openindiana15.2M  1.53G  6.09G
> > /
> > > > rpool/ROOT/openindiana-2019:12:01  970M  1.53G  7.32G
> > /
> > > > rpool/ROOT/openindiana-2019:12:02 48.0M  1.53G  7.32G
> > /
> > > > rpool/ROOT/openindiana-2019:12:10  813K  1.53G  8.34G
> > /
> > > > rpool/ROOT/openindiana-2020:01:14 15.7M  1.53G  7.88G
> > /
> > > > rpool/ROOT/openindiana-2020:02:12  858K  1.53G  7.82G
> > /
> > > > rpool/ROOT/openindiana-2020:02:27  650K  1.53G  7.92G
> > /
> > > > rpool/ROOT/openindiana-2020:03:10  656K  1.53G  8.23G
> > /
> > > > rpool/ROOT/openindiana-2020:03:26 16.8G  1.53G  8.85G
> > /
> > > > rpool/ROOT/pre_activate_18.12_1575387063   239K  1.53G  7.31G
> > /
> > > > rpool/ROOT/pre_activate_19.10.homeuse_1575387229   239K  1.53G  7.31G
> > /
> > > > rpool/ROOT/pre_download_19.10.homeuse_1575354576   223K  1.53G  7.29G
> > /
> > > > rpool/ROOT/pre_download_19.12.homeuse_1581739687 1K  1.53G  7.68G
> > /
> > > > rpool/ROOT/pre_napp-it-18.12   273K  1.53G  6.63G
> > /
> > > > rpool/dump3.95G  1.53G  3.95G
> > -
> > > > rpool/export   987M  1.53G24K
> > > > /export
> > > > rpool/export/home  987M  1.53G24K
> > > > /export/home
> > > > rpool/export/home/judah987M  1.53G   270M
> > > > /export/home/judah
> > > > rpool/swap4.20G  4.51G  1.22G
> > -
> > > >
> > > > I then ran
> > > >
> > > > # zfs snapshot -r rpool@snap01
> > > >
> > > > and then
> > > >
> > > > sudo zfs send rpool@snap01 | sudo zfs recv -F rpool2@snap01
> > > >
> > > > where rpool2 is the ZFS pool on the new SSD.
> > > >
> > > > It doesn't seem anything was copied over.
> > > >
> > > > What am I doing wrong?
> > > >
> > > >
> > > >
> > > > On Sat, Jun 20, 2020 at 6:09 PM Till Wegmüller  >
> > > > wrote:
> > > >
> > > > > Have a look at zfs-send(1m)
> > > > >
> > > > > it's -r. You must have a snapshot to send you cannot sent datasets
> > > > > directly. The snapshots must be named exaclty the same on the whole
> > > > > pool. You can achieve this with zfs snap -r very easily.
> > > > >
> > > > > Hope this helps
> > > > > Greetings
> > > > > Till
> > > > >
> > > > > On 21.06.20 01:03, Judah Richardson wrote:
> > > > > > I can't seem to find any command that recursively sends all the
> > > > datasets
> > > > > on
> > > > 

Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-21 Thread Guenther Alka
As you use my napp-it you can use menu "Jobs > Replication > Create" to 
create a replication job with the current active BE as source and your 
datapool as destination. On a new regular OI setup from scratch (boot OS 
installer, install Hipster to the new SSD) you can create a new 
replication job with last as source and rpool/ROOT as destination. These 
are all preconfigured options in the napp-it replication menu. For other 
replications, just enable recursive to include daughter filesystems.


The active BE is your current complete Hipster setup without what is 
needed to boot. You can use this method as a full or disaster recovery 
method. As you first install a base Hipster again, the bootable disk is 
fully prepared to reinstall the BE only. You cannot install OmniOS and 
then switch to Hipster with some sort of copy.


If you want to switch between Illumos distributions you must do a 
regular setup and can import the datapools only. Do not use anything 
from rpool. Napp-it settings like jobs are in /var/web-gui/_log. 
Copy/restore them manually or use a backup job to save them to the 
datapool and a User > Restore to restore them. If you want to preserve 
user permissions ex via SMB, recreate the same users, groups (same 
uid/gid) and SMB groups. For other services, you must reinstall if 
available.


Gea
@napp-it.org

Am 21.06.2020 um 10:36 schrieb Jonathan Adams:

Honestly, I'd probably stick the new drive in the computer and install
illumos on it first, so that the layout and zpool are set correctly, then
look to trash the contents and "zfs send" to it after booting back to the
old drive ... Or as I said, just using something like "rsync" to copy on
the bits of the system you care about from the old system ...

The old drive and the new drive are so different, so my original "add a
replica" wouldn't work as is ...

Jon

On Sun, 21 Jun 2020, 06:56 Judah Richardson, 
wrote:


OK, I'm still confused. Everything I've read here so far and online either
seems to be lacking critical details or is outdated. Let's take it one step
at a time.

Let's say my source SSD is cdisk1 (32 GB, ashift=9, MBR) and I'm trying to
move the installation to target SSD cdisk2 (128 GB, ashift=12, GPT).

Do I create a zpool on cdisk2 1st or format it?

On Sat, Jun 20, 2020 at 7:40 PM Jonathan Adams 
wrote:


$ sudo -s
# zfs send -r rpool@snap01 | zfs recv -F rpool2@snap01

?

On Sun, 21 Jun 2020, 00:25 Judah Richardson, 
wrote:


I am absolutely thoroughly confused here. It seems a lot of details are
being left out in the docs? Here are my current rpool filesystems:

# zfs list
NAME   USED  AVAIL  REFER
MOUNTPOINT
rpool 27.0G  1.53G  33.5K
/rpool
rpool/ROOT17.9G  1.53G24K
legacy
rpool/ROOT/openindiana15.2M  1.53G  6.09G

/

rpool/ROOT/openindiana-2019:12:01  970M  1.53G  7.32G

/

rpool/ROOT/openindiana-2019:12:02 48.0M  1.53G  7.32G

/

rpool/ROOT/openindiana-2019:12:10  813K  1.53G  8.34G

/

rpool/ROOT/openindiana-2020:01:14 15.7M  1.53G  7.88G

/

rpool/ROOT/openindiana-2020:02:12  858K  1.53G  7.82G

/

rpool/ROOT/openindiana-2020:02:27  650K  1.53G  7.92G

/

rpool/ROOT/openindiana-2020:03:10  656K  1.53G  8.23G

/

rpool/ROOT/openindiana-2020:03:26 16.8G  1.53G  8.85G

/

rpool/ROOT/pre_activate_18.12_1575387063   239K  1.53G  7.31G

/

rpool/ROOT/pre_activate_19.10.homeuse_1575387229   239K  1.53G  7.31G

/

rpool/ROOT/pre_download_19.10.homeuse_1575354576   223K  1.53G  7.29G

/

rpool/ROOT/pre_download_19.12.homeuse_1581739687 1K  1.53G  7.68G

/

rpool/ROOT/pre_napp-it-18.12   273K  1.53G  6.63G

/

rpool/dump3.95G  1.53G  3.95G

-

rpool/export   987M  1.53G24K
/export
rpool/export/home  987M  1.53G24K
/export/home
rpool/export/home/judah987M  1.53G   270M
/export/home/judah
rpool/swap4.20G  4.51G  1.22G

-

I then ran

# zfs snapshot -r rpool@snap01

and then

sudo zfs send rpool@snap01 | sudo zfs recv -F rpool2@snap01

where rpool2 is the ZFS pool on the new SSD.

It doesn't seem anything was copied over.

What am I doing wrong?



On Sat, Jun 20, 2020 at 6:09 PM Till Wegmüller 
wrote:


Have a look at zfs-send(1m)

it's -r. You must have a snapshot to send you cannot sent datasets
directly. The snapshots must be named exaclty the same on the whole
pool. You can achieve this with zfs snap -r very easily.

Hope this helps
Greetings
Till

On 21.06.20 01:03, Judah Richardson wrote:

I can't seem to find any command that recursively sends all the

datasets

on

1 zpool to another ...

This is all very confusing and 

Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-21 Thread Jonathan Adams
Honestly, I'd probably stick the new drive in the computer and install
illumos on it first, so that the layout and zpool are set correctly, then
look to trash the contents and "zfs send" to it after booting back to the
old drive ... Or as I said, just using something like "rsync" to copy on
the bits of the system you care about from the old system ...

The old drive and the new drive are so different, so my original "add a
replica" wouldn't work as is ...

Jon

On Sun, 21 Jun 2020, 06:56 Judah Richardson, 
wrote:

> OK, I'm still confused. Everything I've read here so far and online either
> seems to be lacking critical details or is outdated. Let's take it one step
> at a time.
>
> Let's say my source SSD is cdisk1 (32 GB, ashift=9, MBR) and I'm trying to
> move the installation to target SSD cdisk2 (128 GB, ashift=12, GPT).
>
> Do I create a zpool on cdisk2 1st or format it?
>
> On Sat, Jun 20, 2020 at 7:40 PM Jonathan Adams 
> wrote:
>
> > $ sudo -s
> > # zfs send -r rpool@snap01 | zfs recv -F rpool2@snap01
> >
> > ?
> >
> > On Sun, 21 Jun 2020, 00:25 Judah Richardson, 
> > wrote:
> >
> > > I am absolutely thoroughly confused here. It seems a lot of details are
> > > being left out in the docs? Here are my current rpool filesystems:
> > >
> > > # zfs list
> > > NAME   USED  AVAIL  REFER
> > > MOUNTPOINT
> > > rpool 27.0G  1.53G  33.5K
> > > /rpool
> > > rpool/ROOT17.9G  1.53G24K
> > > legacy
> > > rpool/ROOT/openindiana15.2M  1.53G  6.09G
> /
> > > rpool/ROOT/openindiana-2019:12:01  970M  1.53G  7.32G
> /
> > > rpool/ROOT/openindiana-2019:12:02 48.0M  1.53G  7.32G
> /
> > > rpool/ROOT/openindiana-2019:12:10  813K  1.53G  8.34G
> /
> > > rpool/ROOT/openindiana-2020:01:14 15.7M  1.53G  7.88G
> /
> > > rpool/ROOT/openindiana-2020:02:12  858K  1.53G  7.82G
> /
> > > rpool/ROOT/openindiana-2020:02:27  650K  1.53G  7.92G
> /
> > > rpool/ROOT/openindiana-2020:03:10  656K  1.53G  8.23G
> /
> > > rpool/ROOT/openindiana-2020:03:26 16.8G  1.53G  8.85G
> /
> > > rpool/ROOT/pre_activate_18.12_1575387063   239K  1.53G  7.31G
> /
> > > rpool/ROOT/pre_activate_19.10.homeuse_1575387229   239K  1.53G  7.31G
> /
> > > rpool/ROOT/pre_download_19.10.homeuse_1575354576   223K  1.53G  7.29G
> /
> > > rpool/ROOT/pre_download_19.12.homeuse_1581739687 1K  1.53G  7.68G
> /
> > > rpool/ROOT/pre_napp-it-18.12   273K  1.53G  6.63G
> /
> > > rpool/dump3.95G  1.53G  3.95G
> -
> > > rpool/export   987M  1.53G24K
> > > /export
> > > rpool/export/home  987M  1.53G24K
> > > /export/home
> > > rpool/export/home/judah987M  1.53G   270M
> > > /export/home/judah
> > > rpool/swap4.20G  4.51G  1.22G
> -
> > >
> > > I then ran
> > >
> > > # zfs snapshot -r rpool@snap01
> > >
> > > and then
> > >
> > > sudo zfs send rpool@snap01 | sudo zfs recv -F rpool2@snap01
> > >
> > > where rpool2 is the ZFS pool on the new SSD.
> > >
> > > It doesn't seem anything was copied over.
> > >
> > > What am I doing wrong?
> > >
> > >
> > >
> > > On Sat, Jun 20, 2020 at 6:09 PM Till Wegmüller 
> > > wrote:
> > >
> > > > Have a look at zfs-send(1m)
> > > >
> > > > it's -r. You must have a snapshot to send you cannot sent datasets
> > > > directly. The snapshots must be named exaclty the same on the whole
> > > > pool. You can achieve this with zfs snap -r very easily.
> > > >
> > > > Hope this helps
> > > > Greetings
> > > > Till
> > > >
> > > > On 21.06.20 01:03, Judah Richardson wrote:
> > > > > I can't seem to find any command that recursively sends all the
> > > datasets
> > > > on
> > > > > 1 zpool to another ...
> > > > >
> > > > > This is all very confusing and frustrating. Disk upgrades must have
> > > been
> > > > a
> > > > > considered user operation, no? Why not make it intuitive and
> simple?
> > :(
> > > > >
> > > > > On Sat, Jun 20, 2020 at 5:58 PM Jonathan Adams <
> > t12nsloo...@gmail.com>
> > > > > wrote:
> > > > >
> > > > >> I've never done that, but it must be worth a go, unless you want
> to
> > > just
> > > > >> install a new system in the new disk and copy over the files you
> > want
> > > to
> > > > >> change afterwards ...
> > > > >>
> > > > >> On Sat, 20 Jun 2020, 23:46 Judah Richardson, <
> > > judahrichard...@gmail.com
> > > > >
> > > > >> wrote:
> > > > >>
> > > > >>> On Sat, Jun 20, 2020 at 2:29 AM Guenther Alka <
> a...@hfg-gmuend.de>
> > > > >> wrote:
> > > > >>>
> > > >  Another option is to backup the current BE to the datapool via
> zfs
> > > > >> send.
> > > > 
> > > > >>> Would it be possible to just zfs send everything from 

Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-20 Thread Judah Richardson
OK, I'm still confused. Everything I've read here so far and online either
seems to be lacking critical details or is outdated. Let's take it one step
at a time.

Let's say my source SSD is cdisk1 (32 GB, ashift=9, MBR) and I'm trying to
move the installation to target SSD cdisk2 (128 GB, ashift=12, GPT).

Do I create a zpool on cdisk2 1st or format it?

On Sat, Jun 20, 2020 at 7:40 PM Jonathan Adams 
wrote:

> $ sudo -s
> # zfs send -r rpool@snap01 | zfs recv -F rpool2@snap01
>
> ?
>
> On Sun, 21 Jun 2020, 00:25 Judah Richardson, 
> wrote:
>
> > I am absolutely thoroughly confused here. It seems a lot of details are
> > being left out in the docs? Here are my current rpool filesystems:
> >
> > # zfs list
> > NAME   USED  AVAIL  REFER
> > MOUNTPOINT
> > rpool 27.0G  1.53G  33.5K
> > /rpool
> > rpool/ROOT17.9G  1.53G24K
> > legacy
> > rpool/ROOT/openindiana15.2M  1.53G  6.09G  /
> > rpool/ROOT/openindiana-2019:12:01  970M  1.53G  7.32G  /
> > rpool/ROOT/openindiana-2019:12:02 48.0M  1.53G  7.32G  /
> > rpool/ROOT/openindiana-2019:12:10  813K  1.53G  8.34G  /
> > rpool/ROOT/openindiana-2020:01:14 15.7M  1.53G  7.88G  /
> > rpool/ROOT/openindiana-2020:02:12  858K  1.53G  7.82G  /
> > rpool/ROOT/openindiana-2020:02:27  650K  1.53G  7.92G  /
> > rpool/ROOT/openindiana-2020:03:10  656K  1.53G  8.23G  /
> > rpool/ROOT/openindiana-2020:03:26 16.8G  1.53G  8.85G  /
> > rpool/ROOT/pre_activate_18.12_1575387063   239K  1.53G  7.31G  /
> > rpool/ROOT/pre_activate_19.10.homeuse_1575387229   239K  1.53G  7.31G  /
> > rpool/ROOT/pre_download_19.10.homeuse_1575354576   223K  1.53G  7.29G  /
> > rpool/ROOT/pre_download_19.12.homeuse_1581739687 1K  1.53G  7.68G  /
> > rpool/ROOT/pre_napp-it-18.12   273K  1.53G  6.63G  /
> > rpool/dump3.95G  1.53G  3.95G  -
> > rpool/export   987M  1.53G24K
> > /export
> > rpool/export/home  987M  1.53G24K
> > /export/home
> > rpool/export/home/judah987M  1.53G   270M
> > /export/home/judah
> > rpool/swap4.20G  4.51G  1.22G  -
> >
> > I then ran
> >
> > # zfs snapshot -r rpool@snap01
> >
> > and then
> >
> > sudo zfs send rpool@snap01 | sudo zfs recv -F rpool2@snap01
> >
> > where rpool2 is the ZFS pool on the new SSD.
> >
> > It doesn't seem anything was copied over.
> >
> > What am I doing wrong?
> >
> >
> >
> > On Sat, Jun 20, 2020 at 6:09 PM Till Wegmüller 
> > wrote:
> >
> > > Have a look at zfs-send(1m)
> > >
> > > it's -r. You must have a snapshot to send you cannot sent datasets
> > > directly. The snapshots must be named exaclty the same on the whole
> > > pool. You can achieve this with zfs snap -r very easily.
> > >
> > > Hope this helps
> > > Greetings
> > > Till
> > >
> > > On 21.06.20 01:03, Judah Richardson wrote:
> > > > I can't seem to find any command that recursively sends all the
> > datasets
> > > on
> > > > 1 zpool to another ...
> > > >
> > > > This is all very confusing and frustrating. Disk upgrades must have
> > been
> > > a
> > > > considered user operation, no? Why not make it intuitive and simple?
> :(
> > > >
> > > > On Sat, Jun 20, 2020 at 5:58 PM Jonathan Adams <
> t12nsloo...@gmail.com>
> > > > wrote:
> > > >
> > > >> I've never done that, but it must be worth a go, unless you want to
> > just
> > > >> install a new system in the new disk and copy over the files you
> want
> > to
> > > >> change afterwards ...
> > > >>
> > > >> On Sat, 20 Jun 2020, 23:46 Judah Richardson, <
> > judahrichard...@gmail.com
> > > >
> > > >> wrote:
> > > >>
> > > >>> On Sat, Jun 20, 2020 at 2:29 AM Guenther Alka 
> > > >> wrote:
> > > >>>
> > >  Another option is to backup the current BE to the datapool via zfs
> > > >> send.
> > > 
> > > >>> Would it be possible to just zfs send everything from the current
> SSD
> > > to
> > > >>> the new one, then enable autoexpand on the new SSD and make it
> > > bootable?
> > > >>>
> > > >>> This can be done continously via incremental send for ongoing
> > backups.
> > >  If the system disk fails (or you want to replace), add a new disk,
> > >  install a default OS, import the datapool and restore the BE via
> zfs
> > >  send. Then activate this BE and reboot to have the exact former OS
> > >  installation restored.
> > > 
> > >    Gea
> > >  @napp-it.org
> > > 
> > >  Am 19.06.2020 um 21:19 schrieb Gary Mills:
> > > > On Fri, Jun 19, 2020 at 01:23:35PM -0500, Judah Richardson wrote:
> > > >> I currently run OpenIndiana Hipster on a 32 GB SSD. I'd like
> move
> > > >> that
> > > >> 

Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-20 Thread Jonathan Adams
$ sudo -s
# zfs send -r rpool@snap01 | zfs recv -F rpool2@snap01

?

On Sun, 21 Jun 2020, 00:25 Judah Richardson, 
wrote:

> I am absolutely thoroughly confused here. It seems a lot of details are
> being left out in the docs? Here are my current rpool filesystems:
>
> # zfs list
> NAME   USED  AVAIL  REFER
> MOUNTPOINT
> rpool 27.0G  1.53G  33.5K
> /rpool
> rpool/ROOT17.9G  1.53G24K
> legacy
> rpool/ROOT/openindiana15.2M  1.53G  6.09G  /
> rpool/ROOT/openindiana-2019:12:01  970M  1.53G  7.32G  /
> rpool/ROOT/openindiana-2019:12:02 48.0M  1.53G  7.32G  /
> rpool/ROOT/openindiana-2019:12:10  813K  1.53G  8.34G  /
> rpool/ROOT/openindiana-2020:01:14 15.7M  1.53G  7.88G  /
> rpool/ROOT/openindiana-2020:02:12  858K  1.53G  7.82G  /
> rpool/ROOT/openindiana-2020:02:27  650K  1.53G  7.92G  /
> rpool/ROOT/openindiana-2020:03:10  656K  1.53G  8.23G  /
> rpool/ROOT/openindiana-2020:03:26 16.8G  1.53G  8.85G  /
> rpool/ROOT/pre_activate_18.12_1575387063   239K  1.53G  7.31G  /
> rpool/ROOT/pre_activate_19.10.homeuse_1575387229   239K  1.53G  7.31G  /
> rpool/ROOT/pre_download_19.10.homeuse_1575354576   223K  1.53G  7.29G  /
> rpool/ROOT/pre_download_19.12.homeuse_1581739687 1K  1.53G  7.68G  /
> rpool/ROOT/pre_napp-it-18.12   273K  1.53G  6.63G  /
> rpool/dump3.95G  1.53G  3.95G  -
> rpool/export   987M  1.53G24K
> /export
> rpool/export/home  987M  1.53G24K
> /export/home
> rpool/export/home/judah987M  1.53G   270M
> /export/home/judah
> rpool/swap4.20G  4.51G  1.22G  -
>
> I then ran
>
> # zfs snapshot -r rpool@snap01
>
> and then
>
> sudo zfs send rpool@snap01 | sudo zfs recv -F rpool2@snap01
>
> where rpool2 is the ZFS pool on the new SSD.
>
> It doesn't seem anything was copied over.
>
> What am I doing wrong?
>
>
>
> On Sat, Jun 20, 2020 at 6:09 PM Till Wegmüller 
> wrote:
>
> > Have a look at zfs-send(1m)
> >
> > it's -r. You must have a snapshot to send you cannot sent datasets
> > directly. The snapshots must be named exaclty the same on the whole
> > pool. You can achieve this with zfs snap -r very easily.
> >
> > Hope this helps
> > Greetings
> > Till
> >
> > On 21.06.20 01:03, Judah Richardson wrote:
> > > I can't seem to find any command that recursively sends all the
> datasets
> > on
> > > 1 zpool to another ...
> > >
> > > This is all very confusing and frustrating. Disk upgrades must have
> been
> > a
> > > considered user operation, no? Why not make it intuitive and simple? :(
> > >
> > > On Sat, Jun 20, 2020 at 5:58 PM Jonathan Adams 
> > > wrote:
> > >
> > >> I've never done that, but it must be worth a go, unless you want to
> just
> > >> install a new system in the new disk and copy over the files you want
> to
> > >> change afterwards ...
> > >>
> > >> On Sat, 20 Jun 2020, 23:46 Judah Richardson, <
> judahrichard...@gmail.com
> > >
> > >> wrote:
> > >>
> > >>> On Sat, Jun 20, 2020 at 2:29 AM Guenther Alka 
> > >> wrote:
> > >>>
> >  Another option is to backup the current BE to the datapool via zfs
> > >> send.
> > 
> > >>> Would it be possible to just zfs send everything from the current SSD
> > to
> > >>> the new one, then enable autoexpand on the new SSD and make it
> > bootable?
> > >>>
> > >>> This can be done continously via incremental send for ongoing
> backups.
> >  If the system disk fails (or you want to replace), add a new disk,
> >  install a default OS, import the datapool and restore the BE via zfs
> >  send. Then activate this BE and reboot to have the exact former OS
> >  installation restored.
> > 
> >    Gea
> >  @napp-it.org
> > 
> >  Am 19.06.2020 um 21:19 schrieb Gary Mills:
> > > On Fri, Jun 19, 2020 at 01:23:35PM -0500, Judah Richardson wrote:
> > >> I currently run OpenIndiana Hipster on a 32 GB SSD. I'd like move
> > >> that
> > >> installation to a 128 GB SSD. What's the easiest way to do this?
> > > The easiest way is to use zpool commands.  First, add the large SSD
> > >> as
> > > half a mirror to the smaller one.  Then, detach the smaller one.
> > > These options are all described in the zpool man page.
> > >
> > > You will likely need to use the installboot command on the large
> SSD
> > > to make it bootable before you do the detach.  This operation is
> > > described in the installboot man page.
> > >
> > >> I was thinking of using Clonezilla, but I'm not sure if that's the
> > >> way
> >  to
> > >> go here.
> > > I'd recommend using native illumos commands 

Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-20 Thread Judah Richardson
I am absolutely thoroughly confused here. It seems a lot of details are
being left out in the docs? Here are my current rpool filesystems:

# zfs list
NAME   USED  AVAIL  REFER
MOUNTPOINT
rpool 27.0G  1.53G  33.5K
/rpool
rpool/ROOT17.9G  1.53G24K
legacy
rpool/ROOT/openindiana15.2M  1.53G  6.09G  /
rpool/ROOT/openindiana-2019:12:01  970M  1.53G  7.32G  /
rpool/ROOT/openindiana-2019:12:02 48.0M  1.53G  7.32G  /
rpool/ROOT/openindiana-2019:12:10  813K  1.53G  8.34G  /
rpool/ROOT/openindiana-2020:01:14 15.7M  1.53G  7.88G  /
rpool/ROOT/openindiana-2020:02:12  858K  1.53G  7.82G  /
rpool/ROOT/openindiana-2020:02:27  650K  1.53G  7.92G  /
rpool/ROOT/openindiana-2020:03:10  656K  1.53G  8.23G  /
rpool/ROOT/openindiana-2020:03:26 16.8G  1.53G  8.85G  /
rpool/ROOT/pre_activate_18.12_1575387063   239K  1.53G  7.31G  /
rpool/ROOT/pre_activate_19.10.homeuse_1575387229   239K  1.53G  7.31G  /
rpool/ROOT/pre_download_19.10.homeuse_1575354576   223K  1.53G  7.29G  /
rpool/ROOT/pre_download_19.12.homeuse_1581739687 1K  1.53G  7.68G  /
rpool/ROOT/pre_napp-it-18.12   273K  1.53G  6.63G  /
rpool/dump3.95G  1.53G  3.95G  -
rpool/export   987M  1.53G24K
/export
rpool/export/home  987M  1.53G24K
/export/home
rpool/export/home/judah987M  1.53G   270M
/export/home/judah
rpool/swap4.20G  4.51G  1.22G  -

I then ran

# zfs snapshot -r rpool@snap01

and then

sudo zfs send rpool@snap01 | sudo zfs recv -F rpool2@snap01

where rpool2 is the ZFS pool on the new SSD.

It doesn't seem anything was copied over.

What am I doing wrong?



On Sat, Jun 20, 2020 at 6:09 PM Till Wegmüller  wrote:

> Have a look at zfs-send(1m)
>
> it's -r. You must have a snapshot to send you cannot sent datasets
> directly. The snapshots must be named exaclty the same on the whole
> pool. You can achieve this with zfs snap -r very easily.
>
> Hope this helps
> Greetings
> Till
>
> On 21.06.20 01:03, Judah Richardson wrote:
> > I can't seem to find any command that recursively sends all the datasets
> on
> > 1 zpool to another ...
> >
> > This is all very confusing and frustrating. Disk upgrades must have been
> a
> > considered user operation, no? Why not make it intuitive and simple? :(
> >
> > On Sat, Jun 20, 2020 at 5:58 PM Jonathan Adams 
> > wrote:
> >
> >> I've never done that, but it must be worth a go, unless you want to just
> >> install a new system in the new disk and copy over the files you want to
> >> change afterwards ...
> >>
> >> On Sat, 20 Jun 2020, 23:46 Judah Richardson,  >
> >> wrote:
> >>
> >>> On Sat, Jun 20, 2020 at 2:29 AM Guenther Alka 
> >> wrote:
> >>>
>  Another option is to backup the current BE to the datapool via zfs
> >> send.
> 
> >>> Would it be possible to just zfs send everything from the current SSD
> to
> >>> the new one, then enable autoexpand on the new SSD and make it
> bootable?
> >>>
> >>> This can be done continously via incremental send for ongoing backups.
>  If the system disk fails (or you want to replace), add a new disk,
>  install a default OS, import the datapool and restore the BE via zfs
>  send. Then activate this BE and reboot to have the exact former OS
>  installation restored.
> 
>    Gea
>  @napp-it.org
> 
>  Am 19.06.2020 um 21:19 schrieb Gary Mills:
> > On Fri, Jun 19, 2020 at 01:23:35PM -0500, Judah Richardson wrote:
> >> I currently run OpenIndiana Hipster on a 32 GB SSD. I'd like move
> >> that
> >> installation to a 128 GB SSD. What's the easiest way to do this?
> > The easiest way is to use zpool commands.  First, add the large SSD
> >> as
> > half a mirror to the smaller one.  Then, detach the smaller one.
> > These options are all described in the zpool man page.
> >
> > You will likely need to use the installboot command on the large SSD
> > to make it bootable before you do the detach.  This operation is
> > described in the installboot man page.
> >
> >> I was thinking of using Clonezilla, but I'm not sure if that's the
> >> way
>  to
> >> go here.
> > I'd recommend using native illumos commands instead.
> >
> 
>  ___
>  openindiana-discuss mailing list
>  openindiana-discuss@openindiana.org
>  https://openindiana.org/mailman/listinfo/openindiana-discuss
> 
> >>> ___
> >>> openindiana-discuss mailing list
> >>> openindiana-discuss@openindiana.org
> >>> 

Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-20 Thread Till Wegmüller
Have a look at zfs-send(1m)

it's -r. You must have a snapshot to send you cannot sent datasets
directly. The snapshots must be named exaclty the same on the whole
pool. You can achieve this with zfs snap -r very easily.

Hope this helps
Greetings
Till

On 21.06.20 01:03, Judah Richardson wrote:
> I can't seem to find any command that recursively sends all the datasets on
> 1 zpool to another ...
> 
> This is all very confusing and frustrating. Disk upgrades must have been a
> considered user operation, no? Why not make it intuitive and simple? :(
> 
> On Sat, Jun 20, 2020 at 5:58 PM Jonathan Adams 
> wrote:
> 
>> I've never done that, but it must be worth a go, unless you want to just
>> install a new system in the new disk and copy over the files you want to
>> change afterwards ...
>>
>> On Sat, 20 Jun 2020, 23:46 Judah Richardson, 
>> wrote:
>>
>>> On Sat, Jun 20, 2020 at 2:29 AM Guenther Alka 
>> wrote:
>>>
 Another option is to backup the current BE to the datapool via zfs
>> send.

>>> Would it be possible to just zfs send everything from the current SSD to
>>> the new one, then enable autoexpand on the new SSD and make it bootable?
>>>
>>> This can be done continously via incremental send for ongoing backups.
 If the system disk fails (or you want to replace), add a new disk,
 install a default OS, import the datapool and restore the BE via zfs
 send. Then activate this BE and reboot to have the exact former OS
 installation restored.

   Gea
 @napp-it.org

 Am 19.06.2020 um 21:19 schrieb Gary Mills:
> On Fri, Jun 19, 2020 at 01:23:35PM -0500, Judah Richardson wrote:
>> I currently run OpenIndiana Hipster on a 32 GB SSD. I'd like move
>> that
>> installation to a 128 GB SSD. What's the easiest way to do this?
> The easiest way is to use zpool commands.  First, add the large SSD
>> as
> half a mirror to the smaller one.  Then, detach the smaller one.
> These options are all described in the zpool man page.
>
> You will likely need to use the installboot command on the large SSD
> to make it bootable before you do the detach.  This operation is
> described in the installboot man page.
>
>> I was thinking of using Clonezilla, but I'm not sure if that's the
>> way
 to
>> go here.
> I'd recommend using native illumos commands instead.
>

 ___
 openindiana-discuss mailing list
 openindiana-discuss@openindiana.org
 https://openindiana.org/mailman/listinfo/openindiana-discuss

>>> ___
>>> openindiana-discuss mailing list
>>> openindiana-discuss@openindiana.org
>>> https://openindiana.org/mailman/listinfo/openindiana-discuss
>>>
>> ___
>> openindiana-discuss mailing list
>> openindiana-discuss@openindiana.org
>> https://openindiana.org/mailman/listinfo/openindiana-discuss
>>
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
> 

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-20 Thread Jonathan Adams
Do you want the full history? If not, create a recursive snapshot, then
just send that snapshot recursively ...

On Sun, 21 Jun 2020, 00:03 Judah Richardson, 
wrote:

> I can't seem to find any command that recursively sends all the datasets on
> 1 zpool to another ...
>
> This is all very confusing and frustrating. Disk upgrades must have been a
> considered user operation, no? Why not make it intuitive and simple? :(
>
> On Sat, Jun 20, 2020 at 5:58 PM Jonathan Adams 
> wrote:
>
> > I've never done that, but it must be worth a go, unless you want to just
> > install a new system in the new disk and copy over the files you want to
> > change afterwards ...
> >
> > On Sat, 20 Jun 2020, 23:46 Judah Richardson, 
> > wrote:
> >
> > > On Sat, Jun 20, 2020 at 2:29 AM Guenther Alka 
> > wrote:
> > >
> > > > Another option is to backup the current BE to the datapool via zfs
> > send.
> > > >
> > > Would it be possible to just zfs send everything from the current SSD
> to
> > > the new one, then enable autoexpand on the new SSD and make it
> bootable?
> > >
> > > This can be done continously via incremental send for ongoing backups.
> > > > If the system disk fails (or you want to replace), add a new disk,
> > > > install a default OS, import the datapool and restore the BE via zfs
> > > > send. Then activate this BE and reboot to have the exact former OS
> > > > installation restored.
> > > >
> > > >   Gea
> > > > @napp-it.org
> > > >
> > > > Am 19.06.2020 um 21:19 schrieb Gary Mills:
> > > > > On Fri, Jun 19, 2020 at 01:23:35PM -0500, Judah Richardson wrote:
> > > > >> I currently run OpenIndiana Hipster on a 32 GB SSD. I'd like move
> > that
> > > > >> installation to a 128 GB SSD. What's the easiest way to do this?
> > > > > The easiest way is to use zpool commands.  First, add the large SSD
> > as
> > > > > half a mirror to the smaller one.  Then, detach the smaller one.
> > > > > These options are all described in the zpool man page.
> > > > >
> > > > > You will likely need to use the installboot command on the large
> SSD
> > > > > to make it bootable before you do the detach.  This operation is
> > > > > described in the installboot man page.
> > > > >
> > > > >> I was thinking of using Clonezilla, but I'm not sure if that's the
> > way
> > > > to
> > > > >> go here.
> > > > > I'd recommend using native illumos commands instead.
> > > > >
> > > >
> > > > ___
> > > > openindiana-discuss mailing list
> > > > openindiana-discuss@openindiana.org
> > > > https://openindiana.org/mailman/listinfo/openindiana-discuss
> > > >
> > > ___
> > > openindiana-discuss mailing list
> > > openindiana-discuss@openindiana.org
> > > https://openindiana.org/mailman/listinfo/openindiana-discuss
> > >
> > ___
> > openindiana-discuss mailing list
> > openindiana-discuss@openindiana.org
> > https://openindiana.org/mailman/listinfo/openindiana-discuss
> >
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-20 Thread Judah Richardson
I can't seem to find any command that recursively sends all the datasets on
1 zpool to another ...

This is all very confusing and frustrating. Disk upgrades must have been a
considered user operation, no? Why not make it intuitive and simple? :(

On Sat, Jun 20, 2020 at 5:58 PM Jonathan Adams 
wrote:

> I've never done that, but it must be worth a go, unless you want to just
> install a new system in the new disk and copy over the files you want to
> change afterwards ...
>
> On Sat, 20 Jun 2020, 23:46 Judah Richardson, 
> wrote:
>
> > On Sat, Jun 20, 2020 at 2:29 AM Guenther Alka 
> wrote:
> >
> > > Another option is to backup the current BE to the datapool via zfs
> send.
> > >
> > Would it be possible to just zfs send everything from the current SSD to
> > the new one, then enable autoexpand on the new SSD and make it bootable?
> >
> > This can be done continously via incremental send for ongoing backups.
> > > If the system disk fails (or you want to replace), add a new disk,
> > > install a default OS, import the datapool and restore the BE via zfs
> > > send. Then activate this BE and reboot to have the exact former OS
> > > installation restored.
> > >
> > >   Gea
> > > @napp-it.org
> > >
> > > Am 19.06.2020 um 21:19 schrieb Gary Mills:
> > > > On Fri, Jun 19, 2020 at 01:23:35PM -0500, Judah Richardson wrote:
> > > >> I currently run OpenIndiana Hipster on a 32 GB SSD. I'd like move
> that
> > > >> installation to a 128 GB SSD. What's the easiest way to do this?
> > > > The easiest way is to use zpool commands.  First, add the large SSD
> as
> > > > half a mirror to the smaller one.  Then, detach the smaller one.
> > > > These options are all described in the zpool man page.
> > > >
> > > > You will likely need to use the installboot command on the large SSD
> > > > to make it bootable before you do the detach.  This operation is
> > > > described in the installboot man page.
> > > >
> > > >> I was thinking of using Clonezilla, but I'm not sure if that's the
> way
> > > to
> > > >> go here.
> > > > I'd recommend using native illumos commands instead.
> > > >
> > >
> > > ___
> > > openindiana-discuss mailing list
> > > openindiana-discuss@openindiana.org
> > > https://openindiana.org/mailman/listinfo/openindiana-discuss
> > >
> > ___
> > openindiana-discuss mailing list
> > openindiana-discuss@openindiana.org
> > https://openindiana.org/mailman/listinfo/openindiana-discuss
> >
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-20 Thread Jonathan Adams
I've never done that, but it must be worth a go, unless you want to just
install a new system in the new disk and copy over the files you want to
change afterwards ...

On Sat, 20 Jun 2020, 23:46 Judah Richardson, 
wrote:

> On Sat, Jun 20, 2020 at 2:29 AM Guenther Alka  wrote:
>
> > Another option is to backup the current BE to the datapool via zfs send.
> >
> Would it be possible to just zfs send everything from the current SSD to
> the new one, then enable autoexpand on the new SSD and make it bootable?
>
> This can be done continously via incremental send for ongoing backups.
> > If the system disk fails (or you want to replace), add a new disk,
> > install a default OS, import the datapool and restore the BE via zfs
> > send. Then activate this BE and reboot to have the exact former OS
> > installation restored.
> >
> >   Gea
> > @napp-it.org
> >
> > Am 19.06.2020 um 21:19 schrieb Gary Mills:
> > > On Fri, Jun 19, 2020 at 01:23:35PM -0500, Judah Richardson wrote:
> > >> I currently run OpenIndiana Hipster on a 32 GB SSD. I'd like move that
> > >> installation to a 128 GB SSD. What's the easiest way to do this?
> > > The easiest way is to use zpool commands.  First, add the large SSD as
> > > half a mirror to the smaller one.  Then, detach the smaller one.
> > > These options are all described in the zpool man page.
> > >
> > > You will likely need to use the installboot command on the large SSD
> > > to make it bootable before you do the detach.  This operation is
> > > described in the installboot man page.
> > >
> > >> I was thinking of using Clonezilla, but I'm not sure if that's the way
> > to
> > >> go here.
> > > I'd recommend using native illumos commands instead.
> > >
> >
> > ___
> > openindiana-discuss mailing list
> > openindiana-discuss@openindiana.org
> > https://openindiana.org/mailman/listinfo/openindiana-discuss
> >
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-20 Thread Judah Richardson
On Sat, Jun 20, 2020 at 2:29 AM Guenther Alka  wrote:

> Another option is to backup the current BE to the datapool via zfs send.
>
Would it be possible to just zfs send everything from the current SSD to
the new one, then enable autoexpand on the new SSD and make it bootable?

This can be done continously via incremental send for ongoing backups.
> If the system disk fails (or you want to replace), add a new disk,
> install a default OS, import the datapool and restore the BE via zfs
> send. Then activate this BE and reboot to have the exact former OS
> installation restored.
>
>   Gea
> @napp-it.org
>
> Am 19.06.2020 um 21:19 schrieb Gary Mills:
> > On Fri, Jun 19, 2020 at 01:23:35PM -0500, Judah Richardson wrote:
> >> I currently run OpenIndiana Hipster on a 32 GB SSD. I'd like move that
> >> installation to a 128 GB SSD. What's the easiest way to do this?
> > The easiest way is to use zpool commands.  First, add the large SSD as
> > half a mirror to the smaller one.  Then, detach the smaller one.
> > These options are all described in the zpool man page.
> >
> > You will likely need to use the installboot command on the large SSD
> > to make it bootable before you do the detach.  This operation is
> > described in the installboot man page.
> >
> >> I was thinking of using Clonezilla, but I'm not sure if that's the way
> to
> >> go here.
> > I'd recommend using native illumos commands instead.
> >
>
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-20 Thread Judah Richardson
On Fri, Jun 19, 2020 at 2:19 PM Gary Mills  wrote:

> On Fri, Jun 19, 2020 at 01:23:35PM -0500, Judah Richardson wrote:
> >
> > I currently run OpenIndiana Hipster on a 32 GB SSD. I'd like move that
> > installation to a 128 GB SSD. What's the easiest way to do this?
>
> The easiest way is to use zpool commands.  First, add the large SSD as
> half a mirror to the smaller one.

Tried that. Apparently the 2 SSDs' optimal sector size don't match:

$ # zpool attach -f rpool c8d0s0 c4t0d0s0
cannot attach c4t0d0s0 to c8d0s0: new device has a different optimal sector
size; use the option '-o ashift=N' to override the optimal size

Any ideas? I thought ashift/sector size was irrelevant on SSDs anyway?


>   Then, detach the smaller one.
> These options are all described in the zpool man page.
>
> You will likely need to use the installboot command on the large SSD
> to make it bootable before you do the detach.  This operation is
> described in the installboot man page.
>
> > I was thinking of using Clonezilla, but I'm not sure if that's the way to
> > go here.
>
> I'd recommend using native illumos commands instead.
>
>
> --
> -Gary Mills--refurb--Winnipeg, Manitoba,
> Canada-
>
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-20 Thread Judah Richardson
On Fri, Jun 19, 2020 at 2:18 PM Jonathan Adams 
wrote:

>
> https://wiki.openindiana.org/plugins/servlet/mobile?contentId=4886716=25231700#content/view/4886716/25231700

I run into some weirdness when I try the SMI instruction:

# prtvtoc /dev/rdsk/c8d0s2 | sudo fmthard -s - /dev/rdsk/c4t0d0s2
fmthard: Partition 0 overlaps partition 2. Overlap is allowed
only on partition on the full disk partition).
fmthard: Partition 8 overlaps partition 2. Overlap is allowed
only on partition on the full disk partition).
expected one reserved partition, but found 0


>
> However, I believe that the installboot command has changed since that
> page, as that one is for grubinstall ...
>
> Jon
>
> On Fri, 19 Jun 2020, 20:10 Judah Richardson, 
> wrote:
>
> > On Fri, Jun 19, 2020 at 2:04 PM Jonathan Adams 
> > wrote:
> >
> > > Weirdly, I'd add an external disk/boot device in illumos, add it as a
> > > failover/clone,
> >
> > This sounds promising. How do I do that?
> >
> >
> > > then after it boots off that, take out the old SSD, put the
> > > new one in and reverse the procedure to make the new SSD the boot
> device
> > > ...
> > >
> > > Or just install directly on the new device, and then copy over the
> > > differences using an external caddy ...
> > >
> > > On Fri, 19 Jun 2020, 19:24 Judah Richardson, <
> judahrichard...@gmail.com>
> > > wrote:
> > >
> > > > Hi All,
> > > >
> > > > I currently run OpenIndiana Hipster on a 32 GB SSD. I'd like move
> that
> > > > installation to a 128 GB SSD. What's the easiest way to do this?
> > > >
> > > > I was thinking of using Clonezilla, but I'm not sure if that's the
> way
> > to
> > > > go here.
> > > >
> > > > Judah
> > > > ___
> > > > openindiana-discuss mailing list
> > > > openindiana-discuss@openindiana.org
> > > > https://openindiana.org/mailman/listinfo/openindiana-discuss
> > > >
> > > ___
> > > openindiana-discuss mailing list
> > > openindiana-discuss@openindiana.org
> > > https://openindiana.org/mailman/listinfo/openindiana-discuss
> > >
> > ___
> > openindiana-discuss mailing list
> > openindiana-discuss@openindiana.org
> > https://openindiana.org/mailman/listinfo/openindiana-discuss
> >
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-20 Thread Aurélien Larcher
It would be nice to have this documented in oi-docs actually :)

On Saturday, June 20, 2020, Guenther Alka  wrote:
> Another option is to backup the current BE to the datapool via zfs send.
This can be done continously via incremental send for ongoing backups. If
the system disk fails (or you want to replace), add a new disk, install a
default OS, import the datapool and restore the BE via zfs send. Then
activate this BE and reboot to have the exact former OS installation
restored.
>
>  Gea
> @napp-it.org
>
> Am 19.06.2020 um 21:19 schrieb Gary Mills:
>>
>> On Fri, Jun 19, 2020 at 01:23:35PM -0500, Judah Richardson wrote:
>>>
>>> I currently run OpenIndiana Hipster on a 32 GB SSD. I'd like move that
>>> installation to a 128 GB SSD. What's the easiest way to do this?
>>
>> The easiest way is to use zpool commands.  First, add the large SSD as
>> half a mirror to the smaller one.  Then, detach the smaller one.
>> These options are all described in the zpool man page.
>>
>> You will likely need to use the installboot command on the large SSD
>> to make it bootable before you do the detach.  This operation is
>> described in the installboot man page.
>>
>>> I was thinking of using Clonezilla, but I'm not sure if that's the way
to
>>> go here.
>>
>> I'd recommend using native illumos commands instead.
>>
>
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
>

-- 
---
Praise the Caffeine embeddings
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-20 Thread Guenther Alka
Another option is to backup the current BE to the datapool via zfs send. 
This can be done continously via incremental send for ongoing backups. 
If the system disk fails (or you want to replace), add a new disk, 
install a default OS, import the datapool and restore the BE via zfs 
send. Then activate this BE and reboot to have the exact former OS 
installation restored.


 Gea
@napp-it.org

Am 19.06.2020 um 21:19 schrieb Gary Mills:

On Fri, Jun 19, 2020 at 01:23:35PM -0500, Judah Richardson wrote:

I currently run OpenIndiana Hipster on a 32 GB SSD. I'd like move that
installation to a 128 GB SSD. What's the easiest way to do this?

The easiest way is to use zpool commands.  First, add the large SSD as
half a mirror to the smaller one.  Then, detach the smaller one.
These options are all described in the zpool man page.

You will likely need to use the installboot command on the large SSD
to make it bootable before you do the detach.  This operation is
described in the installboot man page.


I was thinking of using Clonezilla, but I'm not sure if that's the way to
go here.

I'd recommend using native illumos commands instead.



___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-19 Thread Gary Mills
On Fri, Jun 19, 2020 at 01:23:35PM -0500, Judah Richardson wrote:
> 
> I currently run OpenIndiana Hipster on a 32 GB SSD. I'd like move that
> installation to a 128 GB SSD. What's the easiest way to do this?

The easiest way is to use zpool commands.  First, add the large SSD as
half a mirror to the smaller one.  Then, detach the smaller one.
These options are all described in the zpool man page.

You will likely need to use the installboot command on the large SSD
to make it bootable before you do the detach.  This operation is
described in the installboot man page.

> I was thinking of using Clonezilla, but I'm not sure if that's the way to
> go here.

I'd recommend using native illumos commands instead.


-- 
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-19 Thread Jonathan Adams
https://wiki.openindiana.org/plugins/servlet/mobile?contentId=4886716=25231700#content/view/4886716/25231700

However, I believe that the installboot command has changed since that
page, as that one is for grubinstall ...

Jon

On Fri, 19 Jun 2020, 20:10 Judah Richardson, 
wrote:

> On Fri, Jun 19, 2020 at 2:04 PM Jonathan Adams 
> wrote:
>
> > Weirdly, I'd add an external disk/boot device in illumos, add it as a
> > failover/clone,
>
> This sounds promising. How do I do that?
>
>
> > then after it boots off that, take out the old SSD, put the
> > new one in and reverse the procedure to make the new SSD the boot device
> > ...
> >
> > Or just install directly on the new device, and then copy over the
> > differences using an external caddy ...
> >
> > On Fri, 19 Jun 2020, 19:24 Judah Richardson, 
> > wrote:
> >
> > > Hi All,
> > >
> > > I currently run OpenIndiana Hipster on a 32 GB SSD. I'd like move that
> > > installation to a 128 GB SSD. What's the easiest way to do this?
> > >
> > > I was thinking of using Clonezilla, but I'm not sure if that's the way
> to
> > > go here.
> > >
> > > Judah
> > > ___
> > > openindiana-discuss mailing list
> > > openindiana-discuss@openindiana.org
> > > https://openindiana.org/mailman/listinfo/openindiana-discuss
> > >
> > ___
> > openindiana-discuss mailing list
> > openindiana-discuss@openindiana.org
> > https://openindiana.org/mailman/listinfo/openindiana-discuss
> >
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-19 Thread Judah Richardson
On Fri, Jun 19, 2020 at 2:04 PM Jonathan Adams 
wrote:

> Weirdly, I'd add an external disk/boot device in illumos, add it as a
> failover/clone,

This sounds promising. How do I do that?


> then after it boots off that, take out the old SSD, put the
> new one in and reverse the procedure to make the new SSD the boot device
> ...
>
> Or just install directly on the new device, and then copy over the
> differences using an external caddy ...
>
> On Fri, 19 Jun 2020, 19:24 Judah Richardson, 
> wrote:
>
> > Hi All,
> >
> > I currently run OpenIndiana Hipster on a 32 GB SSD. I'd like move that
> > installation to a 128 GB SSD. What's the easiest way to do this?
> >
> > I was thinking of using Clonezilla, but I'm not sure if that's the way to
> > go here.
> >
> > Judah
> > ___
> > openindiana-discuss mailing list
> > openindiana-discuss@openindiana.org
> > https://openindiana.org/mailman/listinfo/openindiana-discuss
> >
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-19 Thread Jonathan Adams
Weirdly, I'd add an external disk/boot device in illumos, add it as a
failover/clone, then after it boots off that, take out the old SSD, put the
new one in and reverse the procedure to make the new SSD the boot device ...

Or just install directly on the new device, and then copy over the
differences using an external caddy ...

On Fri, 19 Jun 2020, 19:24 Judah Richardson, 
wrote:

> Hi All,
>
> I currently run OpenIndiana Hipster on a 32 GB SSD. I'd like move that
> installation to a 128 GB SSD. What's the easiest way to do this?
>
> I was thinking of using Clonezilla, but I'm not sure if that's the way to
> go here.
>
> Judah
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] What's the easiest, straight-shot way to upgrade to a new boot SSD?

2020-06-19 Thread Judah Richardson
Hi All,

I currently run OpenIndiana Hipster on a 32 GB SSD. I'd like move that
installation to a 128 GB SSD. What's the easiest way to do this?

I was thinking of using Clonezilla, but I'm not sure if that's the way to
go here.

Judah
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss