Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-22 Thread Stephan Althaus
On 02/23/21 12:13 AM, Tim Mooney via openindiana-discuss wrote: In regard to: Re: [OpenIndiana-discuss] safely cleanup pkg cache?, Andreas...: Am 21.02.21 um 22:42 schrieb Stephan Althaus: Hello! The "-s" option does the minimal obvious remove of the corresponding snapshot: My experience

Re: [OpenIndiana-discuss] Installing grub on a zfs mirror rpool

2021-02-22 Thread Tony Brian Albers
On 23/02/2021 07:06, Tony Brian Albers wrote: > On 22/02/2021 17:52, Reginald Beardsley via openindiana-discuss wrote: >> This system has had an issue that if I did a scrub it would kernel panic, >> but after I booted it would finish the scrub with no issues. The symptom >> suggests a bad DIMM,

Re: [OpenIndiana-discuss] Installing grub on a zfs mirror rpool

2021-02-22 Thread Tony Brian Albers
On 22/02/2021 17:52, Reginald Beardsley via openindiana-discuss wrote: > This system has had an issue that if I did a scrub it would kernel panic, but > after I booted it would finish the scrub with no issues. The symptom suggests > a bad DIMM, but short of simply replacing them all or

Re: [OpenIndiana-discuss] DNS problem

2021-02-22 Thread Toomas Soome via openindiana-discuss
Did you pkill nscd after fixing nsswitch.conf? Rgds, Toomas Sent from my iPhone > On 23. Feb 2021, at 00:35, Reginald Beardsley via openindiana-discuss > wrote: > >  > My nsswitch.conf file got stepped on by something which substituted > nsswitch.files during a reboot when I took the
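The suggested fix can be sketched as follows. This is a hedged reconstruction of the steps implied in the thread, not the poster's exact commands; the SMF service name is the standard illumos one, and the hostname used for the lookup check is illustrative:

```shell
# After restoring /etc/nsswitch.conf, flush the name-service cache so
# stale lookup results are discarded. Either restart nscd via SMF:
svcadm restart svc:/system/name-service-cache:default
# ...or, as suggested in the thread, kill it and let SMF restart it:
pkill nscd
# Quick check that DNS resolution works again (example hostname):
getent hosts openindiana.org
```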

Re: [OpenIndiana-discuss] Why would X11 not start?

2021-02-22 Thread Reginald Beardsley via openindiana-discuss
After poking through /var/adm/messages it became clear that SMF had disabled cde-login. I tried svcadm enable -r /application/graphical-login/cde-login but that didn't solve the issue. Once again I just get an nVidia splash screen. svcs -xv /application/graphical-login/cde-login did not
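The diagnostic loop described above can be sketched as shell steps. This is a hedged reconstruction using the service name quoted in the post; the `svcadm clear` step is an assumption (it is only needed if the service is in the maintenance state):

```shell
# Ask SMF why the graphical login service is down:
svcs -xv svc:/application/graphical-login/cde-login
# If it is in maintenance, clear that state before re-enabling:
svcadm clear svc:/application/graphical-login/cde-login
# Enable the service and (-r) everything it depends on:
svcadm enable -r svc:/application/graphical-login/cde-login
# Watch the system log for X11 / driver errors while it starts:
tail -f /var/adm/messages
```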

[OpenIndiana-discuss] Why would X11 not start?

2021-02-22 Thread Reginald Beardsley via openindiana-discuss
I finally finished scrubbing the 3 pools on the system. There were no errors reported. In single user mode "zfs mount -a" seemed to mount everything as it should. I checked the timestamps on the xorg.conf file in /etc/X11 which was last modified in 2016. But when I exit single user mode I

Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-22 Thread Tim Mooney via openindiana-discuss
In regard to: Re: [OpenIndiana-discuss] safely cleanup pkg cache?, Andreas...: Am 21.02.21 um 22:42 schrieb Stephan Althaus: Hello! The "-s" option does the minimal obvious remove of the corresponding snapshot: My experience seems to match what Andreas and Toomas are saying: -s isn't doing

Re: [OpenIndiana-discuss] DNS problem

2021-02-22 Thread Reginald Beardsley via openindiana-discuss
My nsswitch.conf file got stepped on by something which substituted nsswitch.files during a reboot when I took the machine down to remove the 5 TB disk. This was after I had fixed the problem once already. Fortunately, I immediately recognized what had happened. I still don't know why, though.

Re: [OpenIndiana-discuss] "format -e" segmentation fault attempting to label a 5 TB disk in Hipster 2017.10

2021-02-22 Thread Reginald Beardsley via openindiana-discuss
format(1m) is an application, not kernel source. Therefore, finding where it is crashing is trivial: # dbx /usr/sbin/format then, at the dbx prompt, "run -e" (select large drive) and "where". The issue is that it crashes with a SEGV instead of doing something sensible. The version on Solaris 10 u8 is *very* old. But if a

Re: [OpenIndiana-discuss] DNS problem

2021-02-22 Thread Toomas Soome via openindiana-discuss
> On 22. Feb 2021, at 21:33, L. F. Elia via openindiana-discuss > wrote: > > I usually use 1.1.1.1 and 8.8.8.8 for dns. There are IPv6 options of those > for you who need them > lfe...@yahoo.com, Portsmouth VA, 23701 > Solaris/LINUX/Windows administration CISSP/Security consulting > >

Re: [OpenIndiana-discuss] export 2 pools from linux running zfs : import with OI

2021-02-22 Thread Stephan Althaus
On 02/22/21 06:02 PM, reader wrote: Stephan Althaus writes: [...] Have a look at "zpool get all" and "zfs get all". If you create a new pool to be shared, use "zpool create -d " to disable all of them. Is creation the only time the `-d' operator is usable. I mean, for example, can the

Re: [OpenIndiana-discuss] DNS problem

2021-02-22 Thread L. F. Elia via openindiana-discuss
I usually use 1.1.1.1 and 8.8.8.8 for dns. There are IPv6 options of those for you who need them lfe...@yahoo.com, Portsmouth VA, 23701 Solaris/LINUX/Windows administration CISSP/Security consulting On Saturday, February 20, 2021, 10:29:00 AM EST, Reginald Beardsley via

Re: [OpenIndiana-discuss] safely cleanup pkg cache?

2021-02-22 Thread Andreas Wacknitz
Am 21.02.21 um 22:42 schrieb Stephan Althaus: Hello! The "-s" option does the minimal obvious remove of the corresponding snapshot: $ beadm list BE    Active Mountpoint Space   Policy Created openindiana-2020:11:03    -  - 42.08M  static 2020-11-03 09:30
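The "-s" behavior under discussion can be sketched as below. This is a hedged example: the BE name comes from the quoted `beadm list` output, and note that the thread itself reports mixed results about what `-s` actually reclaims:

```shell
# List boot environments with their space usage:
beadm list
# Destroy an inactive BE; -s also destroys the snapshots that
# belong to it (the behavior being debated in this thread):
beadm destroy -s openindiana-2020:11:03
```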

[OpenIndiana-discuss] Reg's system recovery saga

2021-02-22 Thread Reginald Beardsley via openindiana-discuss
Thanks to all for their advice and commentary. I was able to boot from the root pool to single user mode. However, X11 did not come up and I got stuck at the nVidia splash screen. I was able to take it down cleanly via the power button and booted back to single user mode via grub. I am now

Re: [OpenIndiana-discuss] export 2 pools from linux running zfs : import with OI

2021-02-22 Thread reader
Stephan Althaus writes: [...] > Have a look at "zpool get all" and "zfs get all". > If you create a new pool to be shared, use "zpool create -d " to > disable all of them. Is creation the only time the `-d' option is usable? I mean, for example, can the functionality be disabled just
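To the question above: feature flags can be inspected and individually *enabled* after creation, but once a feature is enabled (and especially once it becomes active) it cannot be switched back to disabled. A hedged sketch, with a hypothetical pool name and device:

```shell
# Create a pool with all feature flags disabled (for portability):
zpool create -d tank c1t0d0
# Inspect the state of each feature (disabled / enabled / active):
zpool get all tank | grep 'feature@'
# Individual features can be enabled later, one at a time:
zpool set feature@lz4_compress=enabled tank
# There is no corresponding way to set a feature back to disabled.
```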

Re: [OpenIndiana-discuss] Installing grub on a zfs mirror rpool

2021-02-22 Thread Reginald Beardsley via openindiana-discuss
Well, close, but not quite done :-( I booted the single user shell from the install DVD, used "zpool status" to verify the disk names and then did "installgrub -m stage1 stage2 /dev/rdsk/" for the 3 devices in the rpool mirror. Did an "init 5" which flashed some message I couldn't read
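The recovery steps described above can be sketched as follows. This is a hedged reconstruction: the stage-file paths are the standard illumos locations (the post abbreviates them), and the device name is hypothetical since the post elides it; the post used "init 5" (power off), shown here as a reboot:

```shell
# From the install-DVD single-user shell, confirm the mirror members:
zpool status rpool
# Install GRUB stage1/stage2 and the master boot record (-m) on each
# disk of the mirror (device name below is a placeholder):
installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
# ...repeat for the other two devices in the 3-way mirror, then reboot:
init 6
```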

Re: [OpenIndiana-discuss] raidz2 as compared to 1:1 mirrors

2021-02-22 Thread reader
Judah Richardson writes: [...] > Usable storage, S = (N-p)C, where: > N = total number of disks > p = number of parity disks > C = (lowest) capacity per disk Thanks for the formula [...] > TL,DR: Yeah the results you're getting should be correct. There were 12 lines... Sorry if so poorly
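The quoted capacity formula S = (N - p)C can be checked with a one-line calculation. The disk counts and capacity below are hypothetical, chosen only to illustrate the arithmetic for a raidz2 vdev:

```shell
# Usable storage S = (N - p) * C
# N = total disks, p = parity disks (2 for raidz2), C = capacity per disk
N=8; p=2; C=4                 # e.g. 8 disks of 4 TB in raidz2
S=$(( (N - p) * C ))
echo "usable: ${S} TB"        # → usable: 24 TB
```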

Re: [OpenIndiana-discuss] "format -e" segmentation fault attempting to label a 5 TB disk in Hipster 2017.10

2021-02-22 Thread Reginald Beardsley via openindiana-discuss
I have 4 Z400s. I just configured one to dual boot Windows 7 Pro and Debian 9.3. I had no trouble putting a label on the 5 TB disk using Debian and then accessing it via "format -e" using the u8 install DVD shell and writing an EFI label. Attempting to print the partition table was problematic

Re: [OpenIndiana-discuss] "format -e" segmentation fault attempting to label a 5 TB disk in Hipster 2017.10

2021-02-22 Thread Gary Mills
On Mon, Feb 22, 2021 at 01:14:26AM +, Jim Klimov wrote: > The cmdk (and pci-ide) in device paths suggest IDE (emulated?) disk > access; I am not sure the protocol supported more than some limit > that was infinite-like in 90's or so. If there is really such a limit for IDE emulation, then

Re: [OpenIndiana-discuss] "format -e" segmentation fault attempting to label a 5 TB disk in Hipster 2017.10

2021-02-22 Thread bscuk2
Dear All, I am not an engineer, but this seems to be a relevant bug report filed at illumos? https://www.illumos.org/issues/11952 Regards, Robert On 21/02/2021 20:53, Toomas Soome via openindiana-discuss wrote: On 21. Feb 2021, at 22:50, Reginald Beardsley via openindiana-discuss

Re: [OpenIndiana-discuss] Installing grub on a zfs mirror rpool

2021-02-22 Thread Toomas Soome via openindiana-discuss
> On 21. Feb 2021, at 01:09, Reginald Beardsley via openindiana-discuss > wrote: > > > > My HP Z400 based Solaris 10 u8 system had some sort of disk system fault. It > has a 100 GB 3 way mirror in s0 for rpool and the rest of each 2 TB disk in > s1 forming a RAIDZ1. > > After