Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-22 Thread Stephan Althaus

On 02/23/21 12:13 AM, Tim Mooney via openindiana-discuss wrote:
In regard to: Re: [OpenIndiana-discuss] safely cleanup pkg cache?, 
Andreas...:



On 21.02.21 at 22:42, Stephan Althaus wrote:

Hello!

The "-s" option does the minimal obvious remove of the corresponding
snapshot:


My experience seems to match what Andreas and Toomas are saying: -s isn't
doing what it's supposed to be doing (?).

After using

sudo beadm destroy -F -s -v <BE name>

to destroy a dozen or so boot environments, I'm down to just this
for boot environments:

$ beadm list
BE                                Active Mountpoint Space  Policy Created
openindiana                       -      -          12.05M static 2019-05-17 10:37
openindiana-2021:02:07            -      -          27.27M static 2021-02-07 01:01
openindiana-2021:02:07-backup-1   -      -          117K   static 2021-02-07 13:06
openindiana-2021:02:07-backup-2   -      -          117K   static 2021-02-07 13:08
openindiana-2021:02:07-1          NR     /          51.90G static 2021-02-07 17:23
openindiana-2021:02:07-1-backup-1 -      -          186K   static 2021-02-07 17:48
openindiana-2021:02:07-1-backup-2 -      -          665K   static 2021-02-07 17:58
openindiana-2021:02:07-1-backup-3 -      -          666K   static 2021-02-07 18:02



However, zfs list still shows (I think) snapshots for some of the
intermediate boot environments that I destroyed:

$ zfs list -t snapshot
NAME                                                     USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/openindiana-2021:02:07-1@install              559M      -  5.94G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-05-17-18:34:55  472M      -  6.28G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-05-17-18:46:32  555K      -  6.28G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-05-17-18:48:56 2.18M      -  6.45G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-06-13-22:13:18 1015M      -  9.74G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-06-21-16:25:04 1.21G      -  9.85G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-08-23-16:17:28  833M      -  9.74G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-08-28-21:51:55 1.40G      -  10.8G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-09-12-23:35:08  643M      -  11.7G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-10-02-22:55:57  660M      -  12.0G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-11-09-00:04:17  736M      -  12.4G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-12-05-01:02:10 1.02G      -  12.7G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-12-20-19:55:51  788M      -  12.9G  -
rpool/ROOT/openindiana-2021:02:07-1@2020-02-13-23:17:35  918M      -  13.3G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-01-21-02:27:31 1.74G      -  13.9G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-06-22:47:15 1.71G      -  18.8G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-06:59:02 1.22G      -  19.1G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-19:06:07  280M      -  19.3G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-19:08:29  280M      -  19.3G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-23:21:52  640K      -  19.1G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-23:23:46  868K      -  19.2G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-23:48:07  294M      -  19.3G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-23:58:44  280M      -  19.3G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-08-00:02:17  280M      -  19.3G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-21-06:24:56 3.49M      -  19.4G  -


Now I have to figure out how to map the zfs snapshots to the boot
environments that I kept, so that I can "weed out" the zfs snapshots
that I don't need.

I appreciate all the discussion and info my question has spawned! I
didn't anticipate the issue being as complicated as it appears to be.

Tim


Hello!

"beadm -s " destroys snapshots.

"rpool/ROOT/openindiana-2021:02:07-1" is the filesystem of the current BE.

i don't know why these snapshots are in there,
but these are left there from the "pkg upgrade" somehow.

I don't think that "beadm -s" is to blame here.

Maybe an additional Parameter would be nice to get rid of old snaphots 
within the BE-filesystem(s).
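
For the archives, a sketch of that manual cleanup (the beadm/zfs flags are
standard; the snapshot name is just one row copied from Tim's listing):

$ beadm list -a                                      # -a also shows the datasets/snapshots behind each BE
$ zfs get -H -o value origin rpool/ROOT/openindiana  # origin snapshot a kept BE was cloned from
$ zfs destroy rpool/ROOT/openindiana-2021:02:07-1@2019-06-13-22:13:18

zfs destroy will refuse with "snapshot has dependent clones" if a surviving BE
still originates from that snapshot, which makes iterating over the list
reasonably safe.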


Greetings,

Stephan


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Installing grub on a zfs mirror rpool

2021-02-22 Thread Tony Brian Albers
On 23/02/2021 07:06, Tony Brian Albers wrote:
> On 22/02/2021 17:52, Reginald Beardsley via openindiana-discuss wrote:
>> This system has had an issue that if I did a scrub it would kernel panic, 
>> but after I booted it would finish the scrub with no issues. The symptom 
>> suggests a bad DIMM, but short of simply replacing them all or exhaustive 
>> substitution over many days, I have no idea how to fix that.
> 
> Reg, check out memtest86: https://www.memtest.org/
> 
> It might help you figure out which DIMM is bad.
> 
> /tony
> 

Sorry, didn't notice that you wrote later that you've already tried that.

/tony

-- 
Tony Albers - Systems Architect - IT Development Royal Danish Library, 
Victor Albecks Vej 1, 8000 Aarhus C, Denmark
Tel: +45 2566 2383 - CVR/SE: 2898 8842 - EAN: 5798000792142

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Installing grub on a zfs mirror rpool

2021-02-22 Thread Tony Brian Albers
On 22/02/2021 17:52, Reginald Beardsley via openindiana-discuss wrote:
> This system has had an issue that if I did a scrub it would kernel panic, but 
> after I booted it would finish the scrub with no issues. The symptom suggests 
> a bad DIMM, but short of simply replacing them all or exhaustive substitution 
> over many days no idea of how to fix that.

Reg, check out memtest86: https://www.memtest.org/

It might help you figure out which DIMM is bad.

/tony

-- 
Tony Albers - Systems Architect - IT Development Royal Danish Library, 
Victor Albecks Vej 1, 8000 Aarhus C, Denmark
Tel: +45 2566 2383 - CVR/SE: 2898 8842 - EAN: 5798000792142
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] DNS problem

2021-02-22 Thread Toomas Soome via openindiana-discuss
Did you pkill nscd after fixing nsswitch.conf?
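
(For the archives, a minimal sketch of that check-and-restart sequence; the
SMF FMRI below is the stock illumos name for the nscd service:)

$ grep '^hosts' /etc/nsswitch.conf
hosts: files dns
$ pfexec svcadm restart svc:/system/name-service-cache:default   # restarts nscd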

Rgds,
Toomas

Sent from my iPhone

> On 23. Feb 2021, at 00:35, Reginald Beardsley via openindiana-discuss 
>  wrote:
> 
> 
> My nsswitch.conf file got stepped on by something which substituted 
> nsswitch.files during a reboot when I took the machine down to remove the 5 
> TB disk. This was after I had fixed the problem once already. Fortunately, I 
> immediately recognized what had happened. I still don't know why, though.
> 
> Reg
> 
>> On Monday, February 22, 2021, 04:15:35 PM CST, Toomas Soome via 
>> openindiana-discuss  wrote:  
>> 
>> 
>> 
>> On 22. Feb 2021, at 21:33, L. F. Elia via openindiana-discuss 
>>  wrote:
>> 
>> I usually use 1.1.1.1 and 8.8.8.8 for dns. There are IPv6 options of those 
>> for you who need them
>> lfe...@yahoo.com, Portsmouth VA, 23701
>> Solaris/LINUX/Windows administration CISSP/Security consulting 
>> 
>> On Saturday, February 20, 2021, 10:29:00 AM EST, Reginald Beardsley via 
>> openindiana-discuss  wrote:  
>> 
>> I'd been using a Linksys WRT54GL  and DD-WRT for 12 years without any 
>> problems.  A few days ago I started having issues with not being able to 
>> make connections properly.  It might work fine for an hour and then web 
>> sites would time out on access attempts.
>> 
>> I have replaced it with a Linksys N600. That is working fine from Debian 
>> 9.3, but not with Hipster 2017.10.
>> 
>> If I do "nslookup login.yahoo.com" I get the usual response from the N600.  
>> But if I attempt "traceroute login.yahoo.com" I get an "unknown host 
>> login.yahoo.com" .
>> 
>> "traceroute " works as expected.
>> 
>> I'm *very* rusty at this as I set everything up many years ago. I suspect 
>> the issue is a conflict with nwamd.  For Hipster I configured  a static LAN 
>> ip address in the traditional fashion.
>> 
>> I have the following settings:
>> 
>> /etc/hostname.<interface>
>> <IP address>
>> 
>> /etc/defaultrouter
>> gateway <IP address>
>> 
>> /etc/resolv.conf
>> nameserver <IP address>
>> 
>> As an experiment I removed all those but nothing changed.
>> 
>> What things might I have misconfigured to create these symptoms?
>> 
>> 
> 
> You do not mention /etc/nsswitch.conf. I think that, too, is a traditional 
> way to break things ;)
> 
> rgds,
> toomas
> 
> 
>> 
>> I am *not* a fan of "auto magical" anything.  I've been administering my own 
>> SunOS systems for 30 years starting with a 3/60 and 4.1.1a.  So it's natural 
>> for me to continue using the pattern set by that.
>> 

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Why would X11 not start?

2021-02-22 Thread Reginald Beardsley via openindiana-discuss
 
After poking through /var/adm/messages it became clear that SMF had disabled 
cde-login. I tried 

svcadm enable -r /application/graphical-login/cde-login

but that didn't solve the issue. Once again I just get an nVidia splash screen. 

svcs -xv /application/graphical-login/cde-login

did not tell me anything useful, just "disabled by administrator".
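
A sketch of where one could dig further (the log path just follows SMF's usual
slash-to-dash naming convention; untested here on CDE specifically):

$ svcs -l cde-login        # full state, dependencies, and the log file path
$ svcadm clear cde-login   # only relevant if svcs shows 'maintenance'
$ tail /var/svc/log/application-graphical-login-cde-login:default.log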

Unfortunately, I've never really gotten a good understanding of the SMF system. 
There were messages in /var/dt/Xerrors "Referenced uninitialized screen in 
layout!", but nothing else.

I'd really like to get this beast running properly so I can work on updating my 
Hipster instance.

Reg



 On Monday, February 22, 2021, 06:04:27 PM CST, Reginald Beardsley via 
openindiana-discuss  wrote:  
 
 I finally finished scrubbing the 3 pools on the system.  There were no errors 
reported.  In single user mode "zfs mount -a" seemed to mount everything as it 
should.  I checked the timestamps on the xorg.conf file in /etc/X11 which was 
last modified in 2016.  

But when I exit single user mode I get the nVidia splash  screen.  Same thing 
if I just let grub do a normal boot.  I don't get an xdm login prompt as I did 
before it crashed.  

With zfs reporting no errors I'm at a loss as to why the system is not 
displaying the xdm login screen.  I've not yet gone to look at the X error log, 
but that is next.

The system is not hung, because if I press the soft power switch it will revert 
to the console and shut down properly.

Anyone have any ideas?  It reliably comes up single user.

Very puzzled,
Reg

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] Why would X11 not start?

2021-02-22 Thread Reginald Beardsley via openindiana-discuss
I finally finished scrubbing the 3 pools on the system.  There were no errors 
reported.  In single user mode "zfs mount -a" seemed to mount everything as it 
should.  I checked the timestamps on the xorg.conf file in /etc/X11 which was 
last modified in 2016.  

But when I exit single user mode I get the nVidia splash  screen.  Same thing 
if I just let grub do a normal boot.  I don't get an xdm login prompt as I did 
before it crashed.  

With zfs reporting no errors I'm at a loss as to why the system is not 
displaying the xdm login screen.  I've not yet gone to look at the X error log, 
but that is next.

The system is not hung, because if I press the soft power switch it will revert 
to the console and shut down properly.

Anyone have any ideas?  It reliably comes up single user.

Very puzzled,
Reg

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-22 Thread Tim Mooney via openindiana-discuss

In regard to: Re: [OpenIndiana-discuss] safely cleanup pkg cache?, Andreas...:


On 21.02.21 at 22:42, Stephan Althaus wrote:

Hello!

The "-s" option does the minimal obvious remove of the corresponding
snapshot:


My experience seems to match what Andreas and Toomas are saying: -s isn't
doing what it's supposed to be doing (?).

After using

sudo beadm destroy -F -s -v <BE name>

to destroy a dozen or so boot environments, I'm down to just this
for boot environments:

$ beadm list
BE                                Active Mountpoint Space  Policy Created
openindiana                       -      -          12.05M static 2019-05-17 10:37
openindiana-2021:02:07            -      -          27.27M static 2021-02-07 01:01
openindiana-2021:02:07-backup-1   -      -          117K   static 2021-02-07 13:06
openindiana-2021:02:07-backup-2   -      -          117K   static 2021-02-07 13:08
openindiana-2021:02:07-1          NR     /          51.90G static 2021-02-07 17:23
openindiana-2021:02:07-1-backup-1 -      -          186K   static 2021-02-07 17:48
openindiana-2021:02:07-1-backup-2 -      -          665K   static 2021-02-07 17:58
openindiana-2021:02:07-1-backup-3 -      -          666K   static 2021-02-07 18:02


However, zfs list still shows (I think) snapshots for some of the
intermediate boot environments that I destroyed:

$ zfs list -t snapshot
NAME  USED  AVAIL  REFER  
MOUNTPOINT
rpool/ROOT/openindiana-2021:02:07-1@install   559M  -  5.94G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-05-17-18:34:55   472M  -  6.28G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-05-17-18:46:32   555K  -  6.28G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-05-17-18:48:56  2.18M  -  6.45G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-06-13-22:13:18  1015M  -  9.74G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-06-21-16:25:04  1.21G  -  9.85G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-08-23-16:17:28   833M  -  9.74G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-08-28-21:51:55  1.40G  -  10.8G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-09-12-23:35:08   643M  -  11.7G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-10-02-22:55:57   660M  -  12.0G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-11-09-00:04:17   736M  -  12.4G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-12-05-01:02:10  1.02G  -  12.7G  -
rpool/ROOT/openindiana-2021:02:07-1@2019-12-20-19:55:51   788M  -  12.9G  -
rpool/ROOT/openindiana-2021:02:07-1@2020-02-13-23:17:35   918M  -  13.3G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-01-21-02:27:31  1.74G  -  13.9G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-06-22:47:15  1.71G  -  18.8G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-06:59:02  1.22G  -  19.1G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-19:06:07   280M  -  19.3G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-19:08:29   280M  -  19.3G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-23:21:52   640K  -  19.1G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-23:23:46   868K  -  19.2G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-23:48:07   294M  -  19.3G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-07-23:58:44   280M  -  19.3G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-08-00:02:17   280M  -  19.3G  -
rpool/ROOT/openindiana-2021:02:07-1@2021-02-21-06:24:56  3.49M  -  19.4G  -

Now I have to figure out how to map the zfs snapshots to the boot
environments that I kept, so that I can "weed out" the zfs snapshots
that I don't need.

I appreciate all the discussion and info my question has spawned!  I
didn't anticipate the issue being as complicated as it appears to be.

Tim
--
Tim Mooney tim.moo...@ndsu.edu
Enterprise Computing & Infrastructure /
Division of Information Technology/701-231-1076 (Voice)
North Dakota State University, Fargo, ND 58105-5164

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] DNS problem

2021-02-22 Thread Reginald Beardsley via openindiana-discuss
 
My nsswitch.conf file got stepped on by something which substituted 
nsswitch.files during a reboot when I took the machine down to remove the 5 TB 
disk. This was after I had fixed the problem once already. Fortunately, I 
immediately recognized what had happened. I still don't know why, though.

Reg

 On Monday, February 22, 2021, 04:15:35 PM CST, Toomas Soome via 
openindiana-discuss  wrote:  
 
 

> On 22. Feb 2021, at 21:33, L. F. Elia via openindiana-discuss 
>  wrote:
> 
> I usually use 1.1.1.1 and 8.8.8.8 for dns. There are IPv6 options of those 
> for you who need them
> lfe...@yahoo.com, Portsmouth VA, 23701
> Solaris/LINUX/Windows administration CISSP/Security consulting 
> 
>    On Saturday, February 20, 2021, 10:29:00 AM EST, Reginald Beardsley via 
>openindiana-discuss  wrote:  
> 
> I'd been using a Linksys WRT54GL  and DD-WRT for 12 years without any 
> problems.  A few days ago I started having issues with not being able to 
> make connections properly.  It might work fine for an hour and then web 
> sites would time out on access attempts.
> 
> I have replaced it with a Linksys N600. That is working fine from Debian 9.3, 
> but not with Hipster 2017.10.
> 
> If I do "nslookup login.yahoo.com" I get the usual response from the N600.  
> But if I attempt "traceroute login.yahoo.com" I get an "unknown host 
> login.yahoo.com" .
> 
> "traceroute " works as expected.
> 
> I'm *very* rusty at this as I set everything up many years ago. I suspect the 
> issue is a conflict with nwamd.  For Hipster I configured  a static LAN ip 
> address in the traditional fashion.
> 
> I have the following settings:
> 
> /etc/hostname.<interface>
> <IP address>
> 
> /etc/defaultrouter
> gateway <IP address>
> 
> /etc/resolv.conf
> nameserver <IP address>
> 
> As an experiment I removed all those but nothing changed.
> 
> What things might I have misconfigured to create these symptoms?
> 
> 

You do not mention /etc/nsswitch.conf. I think that, too, is a traditional way 
to break things ;)

rgds,
toomas


> 
> I am *not* a fan of "auto magical" anything.  I've been administering my own 
> SunOS systems for 30 years starting with a 3/60 and 4.1.1a.  So it's natural 
> for me to continue using the pattern set by that.
> 
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] "format -e" segmentation fault attempting to label a 5 TB disk in Hipster 2017.10

2021-02-22 Thread Reginald Beardsley via openindiana-discuss
 format(1m) is an application, not kernel source. Therefore, finding where it 
is crashing is trivial.

#dbx /usr/sbin/format
>run -e
(select large drive)
>where

The issue is that it crashes from a SEGV instead of doing something sensible. 
The version on Solaris 10 u8 is *very* old. But if a label has been put on the 
drive already it will happily handle drives over 2 TB in the u8 single user 
install media shell. You get a choice of writing an MBR or EFI label. The 
former won't allow access to the full drive, but the EFI label will.
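
(For reference, the 2 TB wall the MBR label hits follows directly from its
on-disk format, assuming 512-byte sectors:

    2^32 sectors * 512 bytes/sector = 2 TiB

An MBR partition entry stores 32-bit sector offsets and counts, while an
EFI/GPT label uses 64-bit LBAs, which is why only the latter can cover the
whole 5 TB drive.)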

I reported the OI version of format(1m) crashing a *very* long time ago. I 
found a note I made about this problem in oi151_a7 8 or 9 years ago. I had 
similar fun with a 3 TB USB drive.

I set up my Ultra 20 to build OI and did a build, but when I asked on the dev 
lists for guidance in making the new build boot I got no response.  As easy a 
fix as the format SEGV is, I would have been quite happy to do it, as well as 
address some of the other large-drive-related issues.

There are a number of complications with regard to sector size, total number of 
sectors, etc. I've been using a 3 TB SATA on Solaris 10 u8 for 10 years. The 
first one had 512 byte sectors. When it failed the replacement had 2k sectors. 
That was its own little adventure, but I got it working and later added a 2nd 3 
TB drive to form a mirror.

If someone doing support work on OI does not have a >2 TB bare drive with which 
to test whether the bug in format(1m) is actually fixed in a more recent 
release, they are not capable of testing a fix. There is nothing magical about 
5 TB. It's what I had on hand.

With my most important system down, doing an update on my internet access host 
is a complete non-starter.  Especially when my router is not working.  I bought 
a new one and will address the issues I was having with my WRT54GL and DD-WRT 
at some other time.

As it happens, with the confirmation from Toomas  about installing grub I 
decided to skip making a backup before repairing grub.

I've been planning to update my Hipster instance, but not without having an 
alternative means of internet access available.

Reg





On Sunday, February 21, 2021, 07:37:15 PM CST, Richard L. Hamilton 
 wrote:


I'm not an OS developer (although I have read a fair bit of Solaris etc source 
over the years, and have written a kernel module or two for my own amusement). 
That said, if you have a core file,

pstack core_file

(whatever the core file's name is) will give a backtrace, which might (although 
with a SEGV, it might not, too) provide SOME clue what's happening.

I don't know where that specific size limit comes in. Typically, the limit for 
a bootable partition in Solaris is 2TB (1TB on older systems, even lower on 
ancient ones). EFI labels/partitions will likely support larger disks than VTOC 
(SPARC) or fdisk (x86/64) label. I don't know whether OpenIndiana has the same 
limits.

People who know this stuff well enough that, for any given problem you might 
have, they can say "that's fixed in version such-and-such" or "here's a 
workaround" are rare enough, even for systems that have big commercial support. 
OpenIndiana/Illumos may have SMALL commercial support, but it's also mostly a 
SMALL number of volunteers. It's a bit much to expect that any of them would 
have a 5TB disk on hand to try and re-create your problem (let alone have a 
saved install of the old version you're running), and finding it by inspecting 
code wouldn't be quick either. That says nothing about the future of anything, 
save that support options are obviously limited, particularly if you're not 
paying someone for priority service.

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] DNS problem

2021-02-22 Thread Toomas Soome via openindiana-discuss



> On 22. Feb 2021, at 21:33, L. F. Elia via openindiana-discuss 
>  wrote:
> 
> I usually use 1.1.1.1 and 8.8.8.8 for dns. There are IPv6 options of those 
> for you who need them
> lfe...@yahoo.com, Portsmouth VA, 23701
> Solaris/LINUX/Windows administration CISSP/Security consulting 
> 
>On Saturday, February 20, 2021, 10:29:00 AM EST, Reginald Beardsley via 
> openindiana-discuss  wrote:  
> 
> I'd been using a Linksys WRT54GL  and DD-WRT for 12 years without any 
> problems.  A few days ago I started having issues of  not being able to 
> properly making connections.  It might work fine for an hour and then web 
> sites would time out on access attempts.
> 
> I have replaced it with a Linksys N600. That is working fine from Debian 9.3, 
> but not with Hipster 2017.10.
> 
> If I do "nslookup login.yahoo.com" I get the usual response from the N600.  
> But if I attempt "traceroute login.yahoo.com" I get an "unknown host 
> login.yahoo.com" .
> 
> "traceroute " works as expected.
> 
> I'm *very* rusty at this as I set everything up many years ago. I suspect the 
> issue is a conflict with nwamd.  For Hipster I configured  a static LAN ip 
> address in the traditional fashion.
> 
> I have the following settings:
> 
> /etc/hostname. 
> 
> 
> /etc/defaultrouter
> gateway 
> 
> /etc/resolv.conf 
> nameserver 
> 
> As an experiment I removed all those but nothing changed.
> 
> What things might I have misconfigured to create these symptoms?
> 
> 

You do not mention /etc/nsswitch.conf. I think that, too, is a traditional way 
to break things ;)
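
Worth noting for this thread: nslookup queries the servers in /etc/resolv.conf
directly, while traceroute resolves names through the nsswitch framework like
ordinary programs do, which is exactly how the two can disagree. A quick way
to test the path traceroute actually uses:

$ getent hosts login.yahoo.com    # resolves via /etc/nsswitch.conf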

rgds,
toomas


> 
> I am *not* a fan of "auto magical" anything.  I've been administering my own 
> SunOS systems for 30 years starting with a 3/60 and 4.1.1a.  So it's natural 
> for me to continue using the pattern set by that.
> 
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] export 2 pools from linux running zfs : import with OI

2021-02-22 Thread Stephan Althaus

On 02/22/21 06:02 PM, reader wrote:

Stephan Althaus  writes:


[...]


Have a look at "zpool get all" and "zfs get all".
If you create a new pool to be shared, use "zpool create -d " to
disable all of them.

Is creation the only time the `-d' option is usable?  I mean, for
example, can the functionality be disabled just before export,
possibly making the pool more importable?


At any time after that you could enable features that are shared
amongst your OS Versions.

I am using a pool with Debian and with OI on the same host.

OK, good to hear but it does sound like it takes lots of close
attention.
thx



___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Once a feature is used, it can't be disabled anymore.
So in the real world you can only disable a feature on a newly created empty
pool ("zpool set feature...=", see the man page), or if you just enabled the
feature seconds before.
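
As a sketch of the whole workflow (pool and disk names here are only examples):

$ zpool create -d tank mirror c1t0d0 c1t1d0     # all feature flags start disabled
$ zpool set feature@lz4_compress=enabled tank   # enable only what every OS involved supports
$ zpool get all tank | grep feature@            # verify the feature states before export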


"lots of close attention"
... the list of possible features is not *that* long,
but this is a very personal feeling ;-) ...

Greetings,
Stephan


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] DNS problem

2021-02-22 Thread L. F. Elia via openindiana-discuss
I usually use 1.1.1.1 and 8.8.8.8 for dns. There are IPv6 options of those for 
you who need them
lfe...@yahoo.com, Portsmouth VA, 23701
Solaris/LINUX/Windows administration CISSP/Security consulting 

On Saturday, February 20, 2021, 10:29:00 AM EST, Reginald Beardsley via 
openindiana-discuss  wrote:  
 
 I'd been using a Linksys WRT54GL  and DD-WRT for 12 years without any 
problems.  A few days ago I started having issues of  not being able to 
properly making connections.  It might work fine for an hour and then web sites 
would time out on access attempts.

I have replaced it with a Linksys N600. That is working fine from Debian 9.3, 
but not with Hipster 2017.10.

If I do "nslookup login.yahoo.com" I get the usual response from the N600.  But 
if I attempt "traceroute login.yahoo.com" I get an "unknown host 
login.yahoo.com" .

"traceroute " works as expected.

I'm *very* rusty at this as I set everything up many years ago. I suspect the 
issue is a conflict with nwamd.  For Hipster I configured  a static LAN ip 
address in the traditional fashion.

I have the following settings:

/etc/hostname. 


/etc/defaultrouter
gateway 

/etc/resolv.conf 
nameserver 

As an experiment I removed all those but nothing changed.

What things might I have misconfigured to create these symptoms?

Thanks,
Reg

I am *not* a fan of "auto magical" anything.  I've been administering my own 
SunOS systems for 30 years starting with a 3/60 and 4.1.1a.  So it's natural 
for me to continue using the pattern set by that.

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-22 Thread Andreas Wacknitz

On 21.02.21 at 22:42, Stephan Althaus wrote:

Hello!

The "-s" option does the minimal obvious remove of the corresponding
snapshot:

$ beadm list
BE                                Active Mountpoint Space   Policy Created
openindiana-2020:11:03            -      -          42.08M  static 2020-11-03 09:30
openindiana-2020:11:26            -      -          40.50M  static 2020-11-26 13:52
openindiana-2020:11:26-backup-1   -      -          263K    static 2020-12-11 22:27
openindiana-2020:12:29            -      -          34.60M  static 2020-12-29 22:07
openindiana-2021:01:13            -      -          34.68M  static 2021-01-13 21:57
openindiana-2021:02:18            -      -          409.54M static 2021-02-18 22:31
openindiana-2021:02:18-backup-1   -      -          42.21M  static 2021-02-19 13:35
openindiana-2021:02:20            -      -          42.67M  static 2021-02-20 20:52
openindiana-2021:02:20-1          NR     /          166.94G static 2021-02-20 21:22
openindiana-2021:02:20-1-backup-1 -      -          261K    static 2021-02-20 21:30
$ zfs list -t all -r rpool|grep "2020:11:03"
rpool/ROOT/openindiana-2020:11:03 42.1M  5.40G  36.4G  /
$ sudo beadm destroy -s openindiana-2020:11:03
Are you sure you want to destroy openindiana-2020:11:03?
This action cannot be undone (y/[n]): y
Destroyed successfully
$ zfs list -t all -r rpool|grep "2020:11:03"
$

Which facts am I missing here?


Sorry, I was afk when I wrote my answer. It was just from memory. I had
tested with the -s option before and IIRC had similar problems.
I will thoroughly re-test when time permits.

Regards,
Andreas


Greetings,
Stephan

On 02/21/21 10:03 PM, Andreas Wacknitz wrote:

That doesn't work correctly either.

Sent from my iPhone


On 21.02.2021 at 21:43, Stephan Althaus wrote:

On 02/21/21 09:17 AM, Andreas Wacknitz wrote:

Am 21.02.21 um 09:10 schrieb Toomas Soome via openindiana-discuss:


On 21. Feb 2021, at 08:45, Tim Mooney via openindiana-discuss
 wrote:


All-

My space-constrained OI hipster build VM is running low on space.

It looks like either pkg caching or pkg history is using quite a
lot of
space:

$ pfexec du -ks /var/pkg/* | sort -n
0   /var/pkg/gui_cache
0   /var/pkg/lock
0   /var/pkg/modified
0   /var/pkg/ssl
6   /var/pkg/pkg5.image
955 /var/pkg/lost+found
5557    /var/pkg/history
23086   /var/pkg/license
203166  /var/pkg/cache
241106  /var/pkg/state
9271692 /var/pkg/publisher

What is the correct, safe way to clean up anything from pkg that
I don't
need?

The closest information I've found is an article from Oracle on
"Minimize
Stored Image Metadata":

https://docs.oracle.com/cd/E53394_01/html/E54739/minvarpkg.html

This suggests changing the 'flush-content-cache-on-success' property
to true (OI defaults to False).

Is that it, or are there other (generally safe) cleanup steps
that I could
take too?  Is 'pkg purge-history' a good idea?
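
(A sketch of both steps, for reference; the property name comes from the
Oracle document linked above, and I haven't measured the savings on OI:)

$ pfexec pkg set-property flush-content-cache-on-success True
$ pfexec pkg purge-history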


do not forget to check beadm list -a / zfs list -t snapshot

rgds,
toomas


I have a question regarding beadm destroy here:
I do regularly destroy old BEs with "pfexec beadm destroy <BE name>",
keeping only a handful of BEs.
Checking with "zfs list -t snapshot" shows that this won't destroy most
(all?) related snapshots, e.g. it typically frees only a few MB.
Thus, my rpool is filling up over time and I have to manually destroy
zfs snapshots that belong to deleted BEs.
Is that intentional behavior of beadm destroy, and is there some way
I can improve my procedure?

Regards,
Andreas

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss

Hello!


I use

beadm destroy -s <BE name>

to auto-destroy the corresponding snapshots. See "man beadm"


Greetings,

Stephan






___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] Reg's system recovery saga

2021-02-22 Thread Reginald Beardsley via openindiana-discuss
Thanks to all for their advice and commentary.

I was  able to boot from the root pool to single user mode.  However, X11 did 
not come up and I got stuck at the nVidia splash screen.  I was able to take it 
down cleanly via the power button and booted back to single user mode via grub.

I am now scrubbing all 3 pools in single user mode.   I feel pretty confident 
that the cause of my troubles is a bad DIMM which suffers from bit fade.  The 
problem is the fade takes quite a while, so memtest86 hasn't been able to find 
it.  I've downloaded the source code to see how difficult it would be to add a 
fade test with a user settable delay between the write and the subsequent reads.

Once the scrubs finish I'll take the system down and run memtest86 to see if 
perhaps it can find the bad DIMM now.

I've now experienced several instances of systems running ZFS going down hard 
and recovering without a loss of data.  I think it's quite amazing.  The major 
issue I've had is it's been so long between system failures that I barely 
remember how to recover.

I'm hoping that after I finish the scrubs X11 will work without additional 
effort on my part.  Before I took the system off the internet I used to run twm 
on one screen and CDE on the other to keep applications dependent upon that 
happy.  I've always despised CDE and Motif in particular.  So changing to twm 
on both screens was a real pleasure.

Long ago, in a place far away, my day job required building X11R4 and Motif  on 
multiple platforms and distributing it internally for a major oil company.  It 
made getting laid off fairly pleasant.  At that time I knew my way around the 
maze of X11 startup files.  I'd hate to have to go in there again, but 
sometimes one must do things one would prefer not to do.

Have Fun!
Reg

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] export 2 pools from linux running zfs : import with OI

2021-02-22 Thread reader
Stephan Althaus  writes:


[...]

> Have a look at "zpool get all" and "zfs get all".
> If you create a new pool to be shared, use "zpool create -d " to
> disable all of them.

Is creation the only time the `-d' option is usable?  I mean, for
example, can the functionality be disabled just before export,
possibly making the pool more importable?

> At any time after that you could enable features that are shared
> amongst your OS Versions.
>
> I am using a pool with Debian and with OI on the same host.

OK, good to hear but it does sound like it takes lots of close
attention.
thx



___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Installing grub on a zfs mirror rpool

2021-02-22 Thread Reginald Beardsley via openindiana-discuss
 Well, close, but not quite done :-(

I booted the single user shell from the install DVD, used "zpool status" to 
verify the disk names and then did "installgrub -m stage1 stage2 
/dev/rdsk/" for the 3 devices in the rpool mirror.

Did an "init 5" which flashed some message I couldn't read before it was gone. 
I then removed the DVD and it came up to the grub menu. I took the standard 
multi-user boot option, but it hung with an nVidia splash screen. I was able to 
shut it down cleanly by pressing the power button.

I rebooted via grub to single user mode. "zpool status" showed the root pool as 
corrupted which was not the case when I booted from the install DVD previously. 
I started a scrub and it crashed. I've rebooted single user and the scrub is 
running on the root pool.

This system has had an issue that if I did a scrub it would kernel panic, but 
after I booted it would finish the scrub with no issues. The symptom suggests a 
bad DIMM, but short of simply replacing them all or exhaustive substitution 
over many days, I have no idea how to fix that.

After the initial failure I scrubbed all the pools using the install DVD shell 
and all reported no errors. So I'm still hopeful that I'll recover OK. I've 
been extremely impressed by ZFS over the years.

Rather clearly I need to devote more time to my sys admin chores. Dual 
processor Z6x0 and Z8x0 systems have become quite cheap so I think I may get 
one of those and set up a RAIDZ configuration using 5 disks.

Thanks again,
Reg


 On Monday, February 22, 2021, 09:44:49 AM CST, Toomas Soome 
 wrote:  
 
 


On 22. Feb 2021, at 17:38, Reginald Beardsley  wrote:
 

Toomas,

Thanks for the confirmation.

On the Z400 the device names get changed around in an odd fashion that I've 
never quite been able to sort out. I often find that the front panel USB 
connection will have a different number than the last time I plugged a flash 
drive into it a few minutes earlier. 

I manually mount the flash drives as I had issues with volfs. That was so long 
ago I don't remember the exact issue. I just shrugged and disabled it on 
Solaris 10. It or something similar is running on my Hipster instance, but it 
always fails.

When I did an" installgrub stage1 stage 2 /dev/rdsk/???" and it didn't boot I 
became rather nervous. My notes in the system admin log book never seem to have 
enough detail and it doesn't have an index. It's just a record of what I have 
done over a span of 10 years on 5-6 machines.



You also need to pay attention to the MBR; you most likely do need -m.

The Z400 BIOS boot menu lets me select USB, optical disk or hard disk, but I've 
never found a way to specify which hard disk. To make it still more 
interesting, the same 3 disks in the same 3 trayless cage slots have appeared 
as c0d0, c0d1, c1d0, c1d1, c2d0, c6d0 all in a single day without my having 
moved them. Simply from one boot from the install DVD to the next. As a 
consequence "zpool import" would list members of the pool "unavailable" and 
report the pool as corrupted when it was not.
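
One trick that can help when the names wander like that, as a sketch (run from
the install media, since an active root pool can't be exported): let zpool
rescan the devices instead of trusting the cached paths:

$ zpool import -d /dev/rdsk          # scan all devices, list pools by their current names
$ zpool import -d /dev/rdsk -f rpool # -f because the live media's hostid differs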

As I understand you, to make it boot reliably from the mirror pool in the s0 
slices I should repeat the installgrub for each of the disks in the mirrored 
pool. Is that correct?



Yes. If your primary boot disk dies, you need to assign the next mirror member 
as the new boot disk from the BIOS setup (or swap the disks), and then you want 
it to have boot blocks installed. Of course, if you always have alternate media 
available, you can use that to fix your bootability.

rgds,
toomas

Thanks,
Reg

 On Monday, February 22, 2021, 03:22:56 AM CST, Toomas Soome 
 wrote:  
 
 

> On 21. Feb 2021, at 01:09, Reginald Beardsley via openindiana-discuss 
>  wrote:
> 
> 
> 
> My HP Z400 based Solaris 10 u8 system had some sort of disk system fault.  It 
> has a 100 GB 3 way mirror in s0 for rpool and the rest of each  2 TB disk in 
> s1 forming a RAIDZ1.
> 
> After reseating all the cables I was able to boot to a single user shell 
> using the installation DVD and scrub both pools with no errors reported, 
> though a small part of the rpool mirror was resilvered.
> 
> However, I can't get the system to boot from the hard drives. I've tried 
> using installgrub but had no success.
> 
> What are the correct stage1 and stage2 files to use for a zfs mirrored root 
> pool?  Should I install grub on all 3 disks?

the correct stage files should be /boot/grub/stage1 and /boot/grub/stage2. 
Normally yes, you want all boot pool member disks to be able to be used for 
boot.

rgds,
toomas


> 
> The presence of zfs_stage1_5 makes it very ambiguous.  I can find no 
> explanation of what that is for.
> 
> I've continued to use u8 because I prefer using twm and I've never figured 
> out how to make OI do that. This system does not have internet access and is 
> set up for software development and scientific work.  All of the systems are 
> connected to an 8 port KVM switch.
> 
> I use OI for my internet access and Windows 

Re: [OpenIndiana-discuss] raidz2 as compared to 1:1 mirrors

2021-02-22 Thread reader
Judah Richardson  writes:


[...]

> Usable storage, S = (N-p)C, where:

> N = total number of disks
> p = number of parity disks
> C = (lowest) capacity per disk

Thanks for the formula
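
For the archives, plugging in the numbers from this thread (8 x 2 TB raidz2):

S = (N - p) * C = (8 - 2) * 2 TB = 12 TB          (decimal TB, as drive vendors count)
12 * 10^12 bytes / 2^40 bytes per TiB ~= 10.9 TiB (the ~11 TB the tools report)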


[...]

> TL,DR: Yeah the results you're getting should be correct.

There were 12 lines... Sorry if it was so poorly formulated and written that
12 lines is TL


[...]
Excellent help JR .. thx

Jason Matthews  writes:

> 11 TB is about right. Your 2TB disks will be "smaller" by nature of
> reporting differences. Much of that is the difference between how drive 
> manufacturers report a megabyte as 1,000,000 bytes and computers
> report a megabyte as 1,048,576 bytes.
>
> In a nutshell, in your raidz2 vdev you have eight drives. Two drives
> are for parity so you have six drives for data. 6*2 = 12TB less the 
> accounting differences gives you 11TB and change.
>
> This gives you space at the price of performance. You will have the
> write performance of roughly one disk and the read performance of six 
> disks, excluding cache of course. That might be peachy for your
> application or it might not.  The best part is, you get to judge.
>
> I almost never use raidz* if performance is a consideration.

[...]

Excellent help .. thx for you time... good to hear from a couple of
old hands.


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] "format -e" segmentation fault attempting to label a 5 TB disk in Hipster 2017.10

2021-02-22 Thread Reginald Beardsley via openindiana-discuss
 I have 4 Z400s. I just configured one to dual boot Windows 7 Pro and Debian 
9.3. I had no trouble putting a label on the 5 TB disk using Debian and then 
accessing it via "format -e" using the u8 install DVD shell and writing an EFI 
label. Attempting to print the partition table was problematic because the 
number of entries exceeds the number of lines on the screen. Being able to see 
what I was doing was generally difficult as the shell terminal properties are a 
bit of a puzzle. TERM=sun seemed to work the best, but vi was still pretty much 
unusable.

I'm rather militant about segmentation faults. As simple as they are to locate 
and trap, failure to do so is inexcusable. Rather like using gets(3c) in a 
program.


I wanted to backup the system before I tried to fix grub, but after Toomas' 
confirmation that the "stage1" and "stage2" files are the correct files I feel 
more comfortable about installing grub on all 3 disks.

The BIOS is thoroughly weird. On my 3 $100 Z400s the POST took an inordinate 
amount of time even after I updated the BIOS. Then after I configured one to 
dual boot Windows and Debian it is now much quicker. Nary a clue as to what was 
changed or what changed it. I'd gone over the BIOS setup in great detail 
several times without finding any relief.

Reg

 On Monday, February 22, 2021, 08:21:15 AM CST, Gary Mills 
 wrote:  
 
 On Mon, Feb 22, 2021 at 01:14:26AM +, Jim Klimov wrote:

> The cmdk (and pci-ide) in device paths suggest IDE (emulated?) disk
> access; I am not sure the protocol supported more than some limit
> that was infinite-like in 90's or so.

If there is really such a limit for IDE emulation, then format should
describe the limit in an error message, instead of terminating with a
segmentation fault.

> Can you place it to SATA (possibly changing BIOS settings, and at a
> risk of loading with live media to export-import rpool with new
> device paths)?

The usual setting is AHCI on the disk controller.  My HP z400 had no
AHCI setting, but did have an AHCI+RAID setting.  This is known as a
fake raid controller.  This web page helped me install OI on it:

    
https://superuser.com/questions/635829/how-do-i-install-solaris-on-a-fake-raid-a-k-a-ahciraid-sata-sas-controller/635830#635830


-- 
-Gary Mills-        -refurb-        -Winnipeg, Manitoba, Canada-

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] "format -e" segmentation fault attempting to label a 5 TB disk in Hipster 2017.10

2021-02-22 Thread Gary Mills
On Mon, Feb 22, 2021 at 01:14:26AM +, Jim Klimov wrote:

> The cmdk (and pci-ide) in device paths suggest IDE (emulated?) disk
> access; I am not sure the protocol supported more than some limit
> that was infinite-like in 90's or so.

If there is really such a limit for IDE emulation, then format should
describe the limit in an error message, instead of terminating with a
segmentation fault.

> Can you place it to SATA (possibly changing BIOS settings, and at a
> risk of loading with live media to export-import rpool with new
> device paths)?

The usual setting is AHCI on the disk controller.  My HP z400 had no
AHCI setting, but did have an AHCI+RAID setting.  This is known as a
fake raid controller.  This web page helped me install OI on it:


https://superuser.com/questions/635829/how-do-i-install-solaris-on-a-fake-raid-a-k-a-ahciraid-sata-sas-controller/635830#635830


-- 
-Gary Mills--refurb--Winnipeg, Manitoba, Canada-

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] "format -e" segmentation fault attempting to label a 5 TB disk in Hipster 2017.10

2021-02-22 Thread bscuk2

Dear All,

I am not an engineer, but this seems to be a relevant bug report filed at 
illumos:


https://www.illumos.org/issues/11952

Regards,

Robert


On 21/02/2021 20:53, Toomas Soome via openindiana-discuss wrote:



On 21. Feb 2021, at 22:50, Reginald Beardsley via openindiana-discuss 
 wrote:

WTF?

I  had loads of fun with this 4-5 years ago, but was able to put an EFI label 
on a 3 TB disk  and everything was fine when I moved the disk to my Solaris 10 
instance where I have two in a mirror.

I'm trying to label a 5 TB disk so I can make a backup before attempting to fix 
grub on my Solaris 10 u8 system.

So I get this:

rhb@Hipster:/etc# zpool status
  pool: epool
state: ONLINE
  scan: scrub repaired 0 in 2h30m with 0 errors on Wed Feb 17 17:39:53 2021
config:

NAME        STATE     READ WRITE CKSUM
epool   ONLINE   0 0 0
  raidz1-0  ONLINE   0 0 0
c4d0s1  ONLINE   0 0 0
c6d1s1  ONLINE   0 0 0
c7d0s1  ONLINE   0 0 0

errors: No known data errors

  pool: rpool
state: ONLINE
  scan: scrub repaired 0 in 1h39m with 0 errors on Wed Feb 17 16:49:12 2021
config:

NAME        STATE     READ WRITE CKSUM
rpool   ONLINE   0 0 0
  mirror-0  ONLINE   0 0 0
c4d0s0  ONLINE   0 0 0
c6d1s0  ONLINE   0 0 0
c7d0s0  ONLINE   0 0 0

errors: No known data errors
rhb@Hipster:/etc# format -e
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c4d0 
  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
   1. c4d1 
  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@1,0
   2. c6d1 
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
   3. c7d0 
  /pci@0,0/pci-ide@1f,5/ide@1/cmdk@0,0
Specify disk (enter its number): 1

Error: can't open disk '/dev/rdsk/c4d1p0'.
Segmentation Fault

I was using OI because I knew that Solaris 10 u8 would dump core, but I 
*thought* this had been fixed in OI/Illumos.  All I want to do is put a UEFI 
label on it so I can use it with u8.

It boggles my mind that I would get a seg fault after all these years.  That 
was always my favorite bug fix because it was so easy. At least it is if you 
can recompile the code.  The overhead cost of being able to build OI is simply 
more than I am willing to pay.  I did get it to compile once *many* years ago, 
but I never got a response when I asked on the developer list how to test the 
new kernel.

But then again, I now find myself in a world where  it is proclaimed "racist" 
to insist that 2+2 = 4.

So time to take OI down and move the disk to Debian to label it.

Sigh..
Reg


Hipster 2017 is 4 years old, please use a current version. If it is still 
dumping core, please file an issue / let us know.

rgds,
toomas







___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Installing grub on a zfs mirror rpool

2021-02-22 Thread Toomas Soome via openindiana-discuss



> On 21. Feb 2021, at 01:09, Reginald Beardsley via openindiana-discuss 
>  wrote:
> 
> 
> 
> My HP Z400 based Solaris 10 u8 system had some sort of disk system fault.  It 
> has a 100 GB 3 way mirror in s0 for rpool and the rest of each  2 TB disk in 
> s1 forming a RAIDZ1.
> 
> After reseating all the cables I was able to boot to a single user shell 
> using the installation DVD and scrub both pools with no errors reported, 
> though a small part of the rpool mirror was resilvered.
> 
> However, I can't get the system to boot from the hard drives. I've tried 
> using installgrub but had no success.
> 
> What are the correct stage1 and stage2 files to use for a zfs mirrored root 
> pool?  Should I install grub on all 3 disks?

the correct stage files should be /boot/grub/stage1 and /boot/grub/stage2. 
Normally yes, you want all boot pool member disks to be able to be used for 
boot.
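
Spelled out for a 3-way mirror, as a sketch (the -m flag also rewrites the
MBR; the device names are only examples, taken from the rpool status shown
elsewhere in this digest):

# installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4d0s0
# installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c6d1s0
# installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7d0s0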

rgds,
toomas


> 
> The presence of zfs_stage1_5 makes it very ambiguous.  I can find no 
> explanation of what that is for.
> 
> I've continued to use u8 because I prefer using twm and I've never figured 
> out how to make OI do that. This system does not have internet access and is 
> set up for software development and scientific work.  All of the systems are 
> connected to an 8 port KVM switch.
> 
> I use OI for my internet access and Windows 7 Pro and Debian 9.3 on a 3rd 
> system to run electronics engineering codes which are not available for any 
> other platforms.  I tried using VirtualBox for Windows on OI, but it was 
> intolerably slow.
> 
> Reg
> 
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss