Re: [OpenIndiana-discuss] [discuss] Migrating ZFS RAIDZ1 volume to larger disks

2017-12-31 Thread John D Groenveld
In message <1575151844.6915865.1514758316...@mail.yahoo.com>, Reginald Beardsley via openindiana-discuss writes:
> on eBay for $100 each, so it's still relatively pricey ;-)  But if you verify
> that it will work with Sol 10,  I'll buy one just to have on hand.

S10 is EOL next month, but it looks like S10's mpt_sas(7D) should work
as you migrate to Illumos:
https://docs.oracle.com/cd/E26505_01/html/816-5177/mpt-sas-7d.html#scrolltoc

John
groenv...@acm.org

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] [discuss] Migrating ZFS RAIDZ1 volume to larger disks

2017-12-31 Thread Reginald Beardsley via openindiana-discuss
Thanks.  Do you know from personal experience that it will work with Solaris 
10?  Rather weird that the other vendors charge 3x more.  I bought 3 Z400s 
on eBay for $100 each, so it's still relatively pricey ;-)  But if you verify 
that it will work with Sol 10, I'll buy one just to have on hand.

After 4+ hours it's 40% through resilvering the first of the RAIDZ1 disks, with 
an estimated 6 hours to go.  This makes me think that zfs send/recv to a new 
pool would be more practical.
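
For reference, a minimal sketch of the send/recv route, assuming a recursive 
snapshot and hypothetical names throughout (oldpool, newpool, and the target 
disks are placeholders, not taken from this thread):

# create the new RAIDZ1 pool on the larger disks
zpool create newpool raidz1 c2t0d0 c2t1d0 c2t2d0
# snapshot everything, then replicate the whole dataset tree in one stream
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs recv -Fd newpool

One full pass over the data this way can beat replacing disks in place, since 
growing a three-disk RAIDZ1 means three sequential resilvers.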


On Sun, 12/31/17, John D Groenveld  wrote:

 Subject: Re: [OpenIndiana-discuss] [discuss] Migrating ZFS RAIDZ1 volume to larger disks
 To: "Discussion list for OpenIndiana" 
 Date: Sunday, December 31, 2017, 12:27 PM
 
 In message <615700807.6846958.1514743867...@mail.yahoo.com>,
 Reginald Beardsley via openindiana-discuss writes:
 >I had considered setting up an array at the drive capacity of the Z400.  The
 >BIOS behavior would likely make that catastrophic by requiring that the system
 >be rebooted with both the old and new drives in place, but nowhere to plug in
 >the new drive.
 
 You'll be much less likely to throw your workstation out the window
 with this < $100 investment:
 https://www.newegg.com/Product/Product.aspx?Item=N82E16816118182
 
 John
 groenv...@acm.org
 
 

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] [discuss] Migrating ZFS RAIDZ1 volume to larger disks

2017-12-31 Thread John D Groenveld
In message <615700807.6846958.1514743867...@mail.yahoo.com>, Reginald Beardsley via openindiana-discuss writes:
>I had considered setting up an array at the drive capacity of the Z400.  The
>BIOS behavior would likely make that catastrophic by requiring that the system
>be rebooted with both the old and new drives in place, but nowhere to plug in
>the new drive.

You'll be much less likely to throw your workstation out the window
with this < $100 investment:
https://www.newegg.com/Product/Product.aspx?Item=N82E16816118182

John
groenv...@acm.org

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] [discuss] Migrating ZFS RAIDZ1 volume to larger disks

2017-12-31 Thread Reginald Beardsley via openindiana-discuss

Thanks to an email from Russell and a bit more head scratching, I think I've 
got it sorted.  The problem I was encountering was the result of the BIOS on 
the Z400, which does "magic" to help the ignorant remain ignorant.

The key was to pull my mirrored scratch-space disks and install the replacement 
drives in those bays, then go through the usual boot, press-F1 cycle.  Once 
I'd done that, the BIOS saw the new drives and the normal procedure worked. 
It's merrily resilvering the first new disk:

zpool offline <pool> <old-device>
zpool replace -f <pool> <old-device> <new-device>  (-f to satisfy the presence 
of a traditional full-disk s2 slice)
zpool online <pool> <new-device>
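
For reference, the same cycle with concrete but hypothetical names (tank, 
c0t1d0, and c2t0d0 are placeholders, not the actual pool or devices from this 
thread):

zpool offline tank c0t1d0              # take the old disk out of service
zpool replace -f tank c0t1d0 c2t0d0    # resilver onto the new disk starts here
zpool status tank                      # shows resilver progress and time remaining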


I had considered setting up an array at the drive capacity of the Z400.  The 
BIOS behavior would likely  make that catastrophic by requiring that the system 
be rebooted with both the old and new drives in place, but nowhere to plug in 
the new drive.

The moral of the story seems to be: never use all of your disk ports; leave at 
least one empty.

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] [discuss] Migrating ZFS RAIDZ1 volume to larger disks

2017-12-29 Thread Reginald Beardsley via openindiana-discuss
Can anyone point me to a description of the contents of the first 8 KB of a 
*bootable* SMI labeled x86 disk?  I looked at dklabel.h, but it's not 
especially informative.  A web search just told me things I already know.

I've got three 1 TB disks which form a 3-way mirror on s0 and a 3-disk RAIDZ1 
on s1:

c0d0
c0d1
c1d1

The first 8 KB of c0d0 and c0d1 are identically zero.  I've run 
installgrub(8m) on all the disks.  However, it still fails to boot if I pull 
c0d0 (bad PBR) or c1d1 (can't find rootpool).  All of the disks are 100% 
Solaris according to fdisk, so there is no DOS MBR on any disk and nothing at 
all in sector 0 of c0d0 & c0d1. 
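
For reference, the usual installgrub invocation on x86 Solaris, run once per 
mirror member (the slice name here is assumed from the layout described below):

installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d0s0

Adding -m also writes stage1 to the master boot sector, which may matter here 
given the all-zero sector 0 on c0d0 and c0d1.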

The boot semantics imposed by MS-DOS require a bootstrap in the first sector of 
the disk.  I've not disassembled it, but a bootstrap is clearly present on 
c1d1 and absent from c0d0 and c0d1.  There is the additional complication of 
the HP BIOS, which makes a fuss any time a device is added or removed.  I have 
no clue what it is up to and do wish I could make it stop.

When building such a configuration on OI, I create a DOS partition the size of 
the desired root pool and install into it.  I then create a 100% Solaris 
SMI-labeled disk with the desired size s0 and the remainder of the disk in s1.  
I mirror s0, detach the first disk, relabel it, reattach it, then label and 
attach the third s0.  Then I create a RAIDZ1 pool using the s1 slices and 
migrate /export to it, as sketched below.  Tedious, but straightforward.  I 
developed the scheme to work around not being able to boot from a RAIDZ pool.
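
A minimal sketch of the end state, with the attach/create steps compressed and 
device names assumed from the list above (the data pool name is illustrative):

# root pool: three-way mirror across the s0 slices
zpool attach rpool c0d0s0 c0d1s0
zpool attach rpool c0d0s0 c1d1s0
# data pool: RAIDZ1 across the s1 slices, then migrate /export onto it
zpool create datapool raidz1 c0d0s1 c0d1s1 c1d1s1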

In this instance, the filesystem was migrated from my Ultra 20 4-5 years ago 
and then migrated to the mirror & RAIDZ model about 3 years ago, so it's a 
slightly different situation from my normal method of setting this up.  
However, I've never had or simulated losing a disk from this setup, so it 
appears there are some details I've overlooked.  It will keep running if a 
disk fails, but it won't reboot.

My inclination is to copy the first two 4 KB sectors of c1d1 to c0d0 and c0d1, 
but I *really* don't want to make this any more work than it is.  I was quite 
surprised that a mirrored root pool would not boot with a disk missing.
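
A minimal sketch of that copy, using the p0 whole-disk device nodes and the 
device names above (verify if= and of= carefully first; dd to the wrong target 
is unrecoverable):

# copy the first two 4 KB sectors (8 KB) from the bootable disk to the others
dd if=/dev/rdsk/c1d1p0 of=/dev/rdsk/c0d0p0 bs=4096 count=2
dd if=/dev/rdsk/c1d1p0 of=/dev/rdsk/c0d1p0 bs=4096 count=2

That said, installgrub -m on each disk should reach the same end without a raw 
copy.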

With regard to Till's comment: is there a clean example of starting up X11 
using twm, either via gdm or a console login?  I don't care which, as I only 
log out if I'm going to be out of town for several days.  But the last time I 
putzed with this it was a "maze of twisty passages, all alike"; I found the 
same scripts being called multiple times.  I should like to note that I used 
to run dwm on one screen and twm on the other as a compromise with Motif and 
CDE.  My reluctance to tackle this is not lack of ability, but a great 
unwillingness to waste so much time.  I just spent 20+ hours getting VBox to 
install guests properly on Hipster, because it just didn't feel like talking 
to the DVD most of the time, while being treated like an idiot by the VBox 
forum admin, who doesn't run Solaris and probably never has.
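
For what it's worth, the console-login route can be as small as this, assuming 
startx/xinit is installed (a sketch, not a tested OI recipe):

# ~/.xinitrc -- start twm and nothing else
xsetroot -solid grey
exec twm

Then run startx after logging in on the console, and gdm stays out of the 
picture entirely.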

I should hope that the fact that I've been effectively booting Solaris 10 u8 
from a RAIDZ disk set is sufficient proof I have some measure of competence.  
It's a damn sight more elegant than adding a separate boot disk.

Color me frustrated and annoyed.

Reg

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss