> On 5. Sep 2025, at 11:59, Atiq Rahman <ati...@gmail.com> wrote:
> 
> Hi folks,
> So I tried this tonight (rsync'ing OI from a whole-disk installation). Everything 
> (apart from the parts that usually don't work on my system: graphics, network, etc.) 
> seemed to go smoothly. The BSD Loader booted the kernel without errors.
> 
> Except that after the rsync and a reboot, my home pool won't automount. Since I put 
> /export (with /home inside it) in a separate pool, I can see why. I had to
> import it manually. However, the system is supposed to initialize my home 
> dir, and that didn't happen because the pool wasn't auto-imported.
> 
> Any thoughts on getting the automount behavior back?
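> 
> (I suspect the cause is that I imported the pool with an altroot, which
> implies cachefile=none, so it never landed in /etc/zfs/zpool.cache. If so,
> a single plain import from the installed system should make it stick:
> 
> $ sudo zpool import matrix
> 
> after which it should auto-import on subsequent boots.)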
> 
> root pool:
> spool              38.0G  85.0G  34.5K  /mnt/spool
> spool/ROOT         5.99G  85.0G    24K  legacy
> spool/ROOT/OI      5.99G  85.0G  5.73G  /mnt
> spool/ROOT/OI/var   269M  85.0G   269M  /mnt/var
> spool/dump         15.5G  85.0G  15.5G  -
> spool/swap         16.5G   102G    12K  -
> 
> home / data pool:
> 
> $ sudo zfs list
> NAME             USED  AVAIL  REFER  MOUNTPOINT
> matrix          2.15M   123G    24K  none
> matrix/export    291K   123G   291K  /mnt/export
> 
> As a result, I guess the installer couldn't put the initial config files in
> my /home dir. It only has the files that were copied during rsync, and they 
> are mostly empty:
> 
> -rw-r--r-- 1 messagebus uucp    159 Sep  4 14:06 .bashrc
> -rw-r--r-- 1 messagebus uucp     44 Sep  4 13:50 .dmrc
> -rw-r--r-- 1 messagebus uucp    377 Sep  4 14:06 .profile
> -rw------- 1 messagebus uucp    804 Sep  5  2025 .viminfo
> 
> Fortunately, unlike last time, the username/password I set during
> setup works, and commands run fine.
> 
> I would love some hints, tips, and tricks on the initial config. What do you 
> usually have in .profile or .bashrc after a first installation?
> 

Copy them from /etc/skel.

Also see useradd(8); uids and gids 0-99 are reserved.
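
For example (the file names under /etc/skel and the user/group are
placeholders for your account; the rsync'd copies above also have wrong
ownership, so fix that too):

# cp /etc/skel/.profile /etc/skel/.bashrc /export/home/<user>/
# chown -R <user>:<group> /export/home/<user>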

rgds,
toomas


> Any favorites? Feel free to chime in with your brilliance and help me
> customize OI.
> 
> Thanks,
> Atiq
> On Wed, Aug 20, 2025 at 9:26 AM Atiq Rahman <ati...@gmail.com> wrote:
>> Hi,
>> I have an idea as a temporary workaround.
>> 
>> Say I install OI on an external SSD using the whole-disk (GPT) install
>> option. Then I zfs send/receive the pool to a specific GPT partition on the
>> internal disk. Then I update the paths, install the bootloader, run
>> update-archive (and update the root path for the BSD Loader) so everything
>> uses the internal disk and I don't have to carry the external one anymore.
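>> 
>> Roughly, I picture something like this (pool/device names are placeholders,
>> and I would double-check every step):
>> 
>> # zpool create ipool /dev/dsk/<internal-gpt-partition>
>> # zfs snapshot -r rpool@migrate
>> # zfs send -R rpool@migrate | zfs receive -Fu ipool
>> # zpool set bootfs=ipool/ROOT/openindiana ipool
>> # bootadm install-bootloader -P ipool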
>> 
>> Will this work? Did anyone ever try?
>> 
>> Best!
>> Atiq
>> 
>> On Tue, Aug 19, 2025 at 2:40 PM Atiq Rahman <ati...@gmail.com> wrote:
>>> Hi Toomas,
>>> booted into live env:
>>> 
>>> > zpool import -R /mnt rpool
>>> did that
>>> 
>>> /mnt/var is pretty much empty.
>>> ----
>>> # ls -a /mnt/var
>>> .
>>> ..
>>> user
>>> ----
>>> 
>>> 
>>> > zfs create -V 8G -o volblocksize=4k -o primarycache=metadata rpool/swap
>>> did that.
>>> 
>>> > make sure your /mnt/etc/vfstab has line
>>> here's vfstab
>>> ----
>>> # cat /mnt/etc/vfstab
>>> #device         device          mount           FS      fsck    mount   mount
>>> #to mount       to fsck         point           type    pass    at boot options
>>> #
>>> /devices - /devices devfs - no -
>>> /proc - /proc proc - no -
>>> ctfs - /system/contract ctfs - no -
>>> objfs - /system/object objfs - no -
>>> sharefs - /etc/dfs/sharetab sharefs - no -
>>> fd - /dev/fd fd - no -
>>> swap - /tmp tmpfs - yes -
>>> ----
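>>> 
>>> So the swap entry you mentioned is missing; I take it I should append:
>>> 
>>> /dev/zvol/dsk/rpool/swap        -       -       swap    -       no      -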
>>> 
>>> > zfs create -V size -o primarycache=metadata rpool/dump
>>> did that
>>> 
>>> However, the following throws an error:
>>> > dumpadm -r /mnt -d /dev/zvol/dsk/rpool/dump
>>> dumpadm: failed to open /dev/dump: No such file or directory
>>> 
>>> I also tried dumpadm after booting into the installed version on the internal 
>>> SSD; it gives the same error.
>>> # dumpadm -d /dev/zvol/dsk/rpool/dump
>>> dumpadm: failed to open /dev/dump: No such file or directory
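>>> 
>>> Maybe the /dev/dump node simply hasn't been created; I wonder whether
>>> rebuilding the device nodes first would help (just a guess):
>>> 
>>> # devfsadm
>>> # dumpadm -d /dev/zvol/dsk/rpool/dump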
>>> 
>>> Additionally, the fs structure:
>>> 
>>> ----
>>> # ls -a /mnt/
>>> .
>>> ..
>>> .cdrom
>>> bin
>>> boot
>>> dev
>>> devices
>>> etc
>>> export
>>> home
>>> jack
>>> kernel
>>> lib
>>> media
>>> mnt
>>> net
>>> opt
>>> platform
>>> proc
>>> reconfigure
>>> root
>>> rpool
>>> save
>>> sbin
>>> system
>>> tmp
>>> usr
>>> var
>>> ----
>>> 
>>> 
>>> It feels like the installer did not create the file system layout properly. 
>>> Should I reinstall?
>>> 
>>> Thanks,
>>> Atiq
>>> 
>>> On Mon, Aug 18, 2025 at 2:02 AM Toomas Soome via oi-dev 
>>> <oi-dev@openindiana.org> wrote:
>>>> 
>>>> 
>>>>> On 18. Aug 2025, at 11:36, Atiq Rahman <ati...@gmail.com> wrote:
>>>>> 
>>>>> Hi,
>>>>> Thanks so much for explaining the bits.
>>>>> 
>>>>> > However, the system goes into maintenance mode after booting. the root 
>>>>> > password I specified during installation wasn't set. It sets the root 
>>>>> > user with the default maintenance password.
>>>>> 
>>>>> How can I make it run the remaining parts of the installer and complete the 
>>>>> setup so I can at least get OI in text mode?
>>>>> 
>>>>> Best!
>>>>> Atiq
>>>> 
>>>> 
>>>> First of all, we would need to understand why it dropped to maintenance 
>>>> mode. If you can’t get in as root (in maintenance mode), then I’d 
>>>> suggest booting the live environment from the install medium, then:
>>>> 
>>>> zpool import -R /mnt rpool — now you have your OS instance available in 
>>>> /mnt; from it you should see the log files in /mnt/var/svc/log, and most 
>>>> likely some of the system-* files have recorded the reason.
>>>> 
>>>> The install to an existing pool does not create swap and dump devices for you:
>>>> 
>>>> zfs create -V size -o volblocksize=4k -o primarycache=metadata rpool/swap
>>>> 
>>>> and make sure your /mnt/etc/vfstab has the line:
>>>> 
>>>> /dev/zvol/dsk/rpool/swap        -       -       swap    -       no      -
>>>> 
>>>> zfs create -V size -o primarycache=metadata rpool/dump
>>>> dumpadm -r /mnt -d /dev/zvol/dsk/rpool/dump
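>>>> 
>>>> For example, with concrete sizes (swap to taste, dump roughly matching RAM):
>>>> 
>>>> zfs create -V 16G -o volblocksize=4k -o primarycache=metadata rpool/swap
>>>> zfs create -V 16G -o primarycache=metadata rpool/dump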
>>>> 
>>>> Other than the above, it depends on what you find in the logs.
>>>> 
>>>> rgds,
>>>> toomas
>>>> 
>>>>> 
>>>>> On Sun, Aug 17, 2025 at 2:50 AM Toomas Soome via oi-dev 
>>>>> <oi-dev@openindiana.org> wrote:
>>>>>>> On 16. Aug 2025, at 08:31, Atiq Rahman <ati...@gmail.com> wrote:
>>>>>>> > bootadm update-archive -R /mntpointfor_rootfs will create it.
>>>>>>> This indeed created it. Now, I am able to boot into it without the USB 
>>>>>>> stick.
>>>>>>> 
>>>>>>> > the finalization steps are likely missing.
>>>>>>> Yep, more of the final steps were missed by the installer.
>>>>>>> 
>>>>>>> Regarding the install log (at the bottom of the email, after this section):
>>>>>>> 1. set_partition_active: is it running stuff for legacy MBR even 
>>>>>>> though it is a UEFI system?
>>>>>> It is likely the installer logic needs some updating. 
>>>>>>> 2. is /usr/sbin/installboot a script? If yes, I plan to run it manually 
>>>>>>> next time.
>>>>>> 
>>>>>> No. You can still run it manually, or use bootadm install-bootloader 
>>>>>> (which does run installboot). Note that installboot assumes the 
>>>>>> existing boot programs are from illumos; if not, you will see some errors 
>>>>>> (in this case, the multiboot data structure is not found and therefore 
>>>>>> the program is not “ours”).
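>>>>>> 
>>>>>> For example, either of these (rpool and the device are taken from your
>>>>>> install log; adjust -b to wherever the BE's boot directory is mounted):
>>>>>> 
>>>>>> # bootadm install-bootloader -P rpool
>>>>>> # /usr/sbin/installboot -F -m -f -b /mnt/boot /dev/rdsk/c2t00A075014881A463d0s6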
>>>>>>> 3. EFI/Boot/bootia32.efi: why is it installing a 32-bit boot file? Mine 
>>>>>>> is an amd64 system.
>>>>>> 
>>>>>> Because bootia32.efi (loader32.efi) implements support for systems that 
>>>>>> start with 32-bit UEFI firmware but whose CPU can be switched to 64-bit 
>>>>>> mode. Granted, most systems do not need it, but one might also want to 
>>>>>> transfer this boot disk, so it is nice to have it prepared. And since the 
>>>>>> ESP has spare space, it does not really hurt to have it installed. In the 
>>>>>> same way, we install both the UEFI boot programs into the ESP and the 
>>>>>> BIOS boot programs into the pmbr and the start of the OS partition - this 
>>>>>> way it really does not matter whether you use CSM or not; you can still boot.
>>>>>> 
>>>>>>> 4. Finally, why did _curses.error: endwin() returned ERR ?
>>>>>> 
>>>>>> from man endwin:
>>>>>> 
>>>>>>        •   endwin returns an error if
>>>>>> 
>>>>>>            •   the terminal was not initialized, or
>>>>>> 
>>>>>>            •   endwin is called more than once without updating the 
>>>>>> screen, or
>>>>>> 
>>>>>>            •   reset_shell_mode(3X) returns an error.
>>>>>> 
>>>>>> So some debugging is needed there.
>>>>>> 
>>>>>> rgds,
>>>>>> toomas
>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> Install log is attached here
>>>>>>> ------------------------
>>>>>>> 2025-08-15 16:00:35,533 - INFO    : text-install:120 **** START ****
>>>>>>> 2025-08-15 16:00:36,166 - ERROR   : ti_install_utils.py:428 Error 
>>>>>>> occured during zpool call: no pools available to import
>>>>>>> 
>>>>>>> 2025-08-15 16:00:38,146 - ERROR   : ti_install_utils.py:428 Error 
>>>>>>> occured during zpool call: no pools available to import
>>>>>>> 
>>>>>>> 2025-08-15 09:11:38,777 - ERROR   : install_utils.py:411 
>>>>>>> be_do_installboot_walk: child 0 of 1 device c2t00A075014881A463d0s6
>>>>>>> 2025-08-15 09:11:38,927 - ERROR   : install_utils.py:411   Command: 
>>>>>>> "/usr/sbin/installboot -F -m -f -b /a/boot 
>>>>>>> /dev/rdsk/c2t00A075014881A463d0s6"
>>>>>>> 2025-08-15 09:11:38,927 - ERROR   : install_utils.py:411   Output:
>>>>>>> 2025-08-15 09:11:38,928 - ERROR   : install_utils.py:411 
>>>>>>> 2025-08-15 09:11:38,928 - ERROR   : install_utils.py:411 
>>>>>>> 2025-08-15 09:11:38,928 - ERROR   : install_utils.py:411 
>>>>>>> 2025-08-15 09:11:38,928 - ERROR   : install_utils.py:411 bootblock 
>>>>>>> written for /dev/rdsk/c2t00A075014881A463d0s6, 280 sectors starting at 
>>>>>>> 1024 (abs 1500001280)
>>>>>>> 2025-08-15 09:11:38,928 - ERROR   : install_utils.py:411 
>>>>>>> 2025-08-15 09:11:38,928 - ERROR   : install_utils.py:411 /a/boot/pmbr 
>>>>>>> is newer than one in /dev/rdsk/c2t00A075014881A463d0s6
>>>>>>> 2025-08-15 09:11:38,928 - ERROR   : install_utils.py:411 stage1 written 
>>>>>>> to slice 6 sector 0 (abs 1500000256)
>>>>>>> 2025-08-15 09:11:38,928 - ERROR   : install_utils.py:411 
>>>>>>> 2025-08-15 09:11:38,928 - ERROR   : install_utils.py:411 bootblock 
>>>>>>> written to /tmp/ibootBxW1oa/EFI/Boot/bootx64.efi
>>>>>>> 2025-08-15 09:11:38,928 - ERROR   : install_utils.py:411 
>>>>>>> 2025-08-15 09:11:38,928 - ERROR   : install_utils.py:411 bootblock 
>>>>>>> written to /tmp/ibootBxW1oa/EFI/Boot/bootia32.efi
>>>>>>> 2025-08-15 09:11:38,928 - ERROR   : install_utils.py:411 
>>>>>>> 2025-08-15 09:11:38,928 - ERROR   : install_utils.py:411 /a/boot/pmbr 
>>>>>>> is newer than one in /dev/rdsk/c2t00A075014881A463d0p0
>>>>>>> 2025-08-15 09:11:38,928 - ERROR   : install_utils.py:411 stage1 written 
>>>>>>> to slice 0 sector 0 (abs 0)
>>>>>>> 2025-08-15 09:11:38,928 - ERROR   : install_utils.py:411 
>>>>>>> 2025-08-15 09:11:38,928 - ERROR   : install_utils.py:411   Errors:
>>>>>>> 2025-08-15 09:11:38,928 - ERROR   : install_utils.py:411 Error reading 
>>>>>>> bootblock from /tmp/ibootBxW1oa/EFI/Boot/bootia32.efi
>>>>>>> 2025-08-15 09:11:38,928 - ERROR   : install_utils.py:411 Unable to find 
>>>>>>> multiboot header
>>>>>>> 2025-08-15 09:11:38,928 - ERROR   : install_utils.py:411 Error reading 
>>>>>>> bootblock from /tmp/ibootBxW1oa/EFI/Boot/bootx64.efi
>>>>>>> 2025-08-15 09:11:38,967 - ERROR   : ti_install.py:680 One or more ICTs 
>>>>>>> failed. See previous log messages
>>>>>>> 2025-08-15 09:12:54,907 - ERROR   : text-install:254 Install Profile:
>>>>>>> Pool: rpool
>>>>>>> BE name: solaris
>>>>>>> Overwrite boot configuration: True
>>>>>>> NIC None:
>>>>>>> Type: none
>>>>>>> System Info:
>>>>>>> Hostname: solaris
>>>>>>> TZ: America - US - America/Los_Angeles
>>>>>>> Time Offset: 0:00:17.242116
>>>>>>> Keyboard: None
>>>>>>> Locale: en_US.UTF-8
>>>>>>> User Info(root):
>>>>>>> Real name: None
>>>>>>> Login name: root
>>>>>>> Is Role: False
>>>>>>> User Info():
>>>>>>> Real name: 
>>>>>>> Login name: 
>>>>>>> Is Role: False
>>>>>>> None
>>>>>>> 2025-08-15 09:12:54,910 - ERROR   : text-install:255 Traceback (most 
>>>>>>> recent call last):
>>>>>>>   File "/usr/bin/text-install", line 247, in <module>
>>>>>>>     cleanup_curses()
>>>>>>>   File "/usr/bin/text-install", line 93, in cleanup_curses
>>>>>>>     curses.endwin()
>>>>>>> _curses.error: endwin() returned ERR
>>>>>>> 
>>>>>>> 2025-08-15 09:12:54,911 - INFO    : text-install:99 **** END ****
>>>>>>> 
>>>>>>> Thanks,
>>>>>>> Atiq
>>>>>>> 
>>>>>>> On Thu, Aug 14, 2025 at 1:54 AM Toomas Soome via oi-dev 
>>>>>>> <oi-dev@openindiana.org> wrote:
>>>>>>>> 
>>>>>>>> 
>>>>>>>>> On 14. Aug 2025, at 11:26, Atiq Rahman <ati...@gmail.com> wrote:
>>>>>>>>> 
>>>>>>>>> So my disk name came out to be a long one: "c2t00A075014881A463d0", 
>>>>>>>>> and I am creating a fresh new pool in s6 (the 7th slice).
>>>>>>>>> I am trying to get away with my mid-week sorcery!
>>>>>>>>> 
>>>>>>>>> Don't judge me. I barely concocted a majestic `zpool create` command 
>>>>>>>>> from a few searches online (it's been a while; I don't remember any of 
>>>>>>>>> the arguments):
>>>>>>>>> 
>>>>>>>>> $ sudo zpool create -o ashift=12 -O compression=lz4 -O atime=off -O 
>>>>>>>>> normalization=formD -O mountpoint=none -O canmount=off rpool 
>>>>>>>>> /dev/dsk/c2t00A075014881A463d0s6
>>>>>>>>> 
>>>>>>>>> I had a tiny bit of hesitation about putting in the mount restriction 
>>>>>>>>> switches, wondering whether they would prevent the BSD Loader from 
>>>>>>>>> looking inside the pool.
>>>>>>>>> 
>>>>>>>>> Then, before running the text installer (PS: no GUI yet for me; iGPU: 
>>>>>>>>> radeon not supported, dGPU: Nvidia RTX 4060, driver does modeset 
>>>>>>>>> unloading), I take the zfs pool off:
>>>>>>>>> 
>>>>>>>>> $ sudo zpool export rpool
>>>>>>>>> 
>>>>>>>>> though I pondered whether that was a good idea before running the 
>>>>>>>>> installer. And then I traveled to light-blue OpenIndiana land.
>>>>>>>>> 
>>>>>>>>> $ sudo /usr/bin/text-install
>>>>>>>>> 
>>>>>>>>> I got away with a lot of firsts today, probably the most in a decade 
>>>>>>>>> (zpool create worked, and with help I was able to treasure-hunt the 
>>>>>>>>> partition/slice that I had created from another FOSS OS), so while the 
>>>>>>>>> installer is running I am almost getting overconfident.
>>>>>>>>> 
>>>>>>>>> An error at the end in /tmp/install_log put a dent in that, just a 
>>>>>>>>> tiny bit. Screenshot attached: one of the py scripts invoked by 
>>>>>>>>> /sbin/install-finish, osol_install/ict.py, has a TypeError, which might 
>>>>>>>>> have been caused by the NIC not being supported. I wondered for a second 
>>>>>>>>> whether the installer missed anything important by terminating after 
>>>>>>>>> that TypeError.
>>>>>>>>> 
>>>>>>>>> Life goes on: I created an entry in the EFI boot manager pointing to 
>>>>>>>>> /Boot/bootx64.efi.
>>>>>>>>> As I try to boot, the BSD Loader complains that the rootfs module is not 
>>>>>>>>> found.
>>>>>>>>> 
>>>>>>>> 
>>>>>>>> As your install did end up with errors, the finalization steps are 
>>>>>>>> likely missing.
>>>>>>>> 
>>>>>>>> bootadm update-archive -R /mntpointfor_rootfs will create it.
>>>>>>>> 
>>>>>>>> 
>>>>>>>>> While running some beadm commands from the OI live image, I also 
>>>>>>>>> noticed it said a few times that the boot menu config is missing. I 
>>>>>>>>> guess I need to mount /boot/efi?
>>>>>>>>> 
>>>>>>>> 
>>>>>>>> No, beadm manages boot environments, and one of its tasks is to 
>>>>>>>> maintain the BE menu list for the bootloader’s boot-environment menu. 
>>>>>>>> beadm create/destroy/activate will update this menu (it is located in 
>>>>>>>> <pool dataset>/boot/menu.lst).
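>>>>>>>> 
>>>>>>>> e.g. (the BE name here is just an example):
>>>>>>>> 
>>>>>>>> # beadm list
>>>>>>>> # beadm activate openindiana
>>>>>>>> # cat /rpool/boot/menu.lst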
>>>>>>>> 
>>>>>>>> 
>>>>>>>>> As I am still learning how to make this work, any comment / 
>>>>>>>>> suggestion is greatly appreciated.
>>>>>>>>> 
>>>>>>>>> Next, I will follow this after the above is resolved,
>>>>>>>>> 
>>>>>>>>> per 
>>>>>>>>> https://docs.openindiana.org/handbook/getting-started/#install-openindiana-to-existing-zfs-pool,
>>>>>>>>> > As there's possibly already another OS instance installed to the 
>>>>>>>>> > selected ZFS pool, no additional users or filesystems (like 
>>>>>>>>> > `rpool/export`) are created during installation. Swap and dump ZFS 
>>>>>>>>> > volumes also are not created and should be added manually after 
>>>>>>>>> > installation. Only root user will be available in created boot 
>>>>>>>>> > environment.
>>>>>>>>> 
>>>>>>>> 
>>>>>>>> Using an existing pool implies you need to take care of many things 
>>>>>>>> yourself; IMO it leaves just a bit too much undone, but because it ends 
>>>>>>>> up with work to be done by the operator, it is definitely not the best 
>>>>>>>> option for someone attempting their first install. It really is an 
>>>>>>>> “advanced mode”, for when you know what to do, and how, to get the 
>>>>>>>> setup completed.
>>>>>>>> 
>>>>>>>> I have not used the OI installer for some time, but I do believe you 
>>>>>>>> should be able to point the installer at an existing slice to create a 
>>>>>>>> pool there (not reuse an existing pool). Or, as I have suggested before, 
>>>>>>>> just use a VM to get a first setup to learn from.
>>>>>>>> 
>>>>>>>> rgds,
>>>>>>>> toomas
>>>>>>>> 
>>>>>>>>> which means our installation process (into an existing zfs pool) missed 
>>>>>>>>> a few steps of the usual installation, even though we started with a 
>>>>>>>>> fresh new zfs pool and no other OS instance.
>>>>>>>>> 
>>>>>>>>> Appreciate the help of everyone who chimed in (Joshua, Toomas, John, 
>>>>>>>>> and others) in helping me discover the magic to install into a specific 
>>>>>>>>> partition instead of using the whole disk.
>>>>>>>>> 
>>>>>>>>> Cheers!
>>>>>>>>> Atiq
>>>>>>>>> 
>>>>>>>>> On Wed, Aug 13, 2025 at 5:53 PM Atiq Rahman <ati...@gmail.com> wrote:
>>>>>>>>>> Hi,
>>>>>>>>>> > You can create your single illumos partition on OI as well, it 
>>>>>>>>>> > does not have to be a whole disk setup.
>>>>>>>>>> 
>>>>>>>>>> Yep, I would prefer to create the pool using an OI live image. So I 
>>>>>>>>>> created a new partition for this purpose with mkpart, using the latest 
>>>>>>>>>> parted on a Linux system:
>>>>>>>>>> 
>>>>>>>>>> parted > mkpart illumos 767GB 100%
>>>>>>>>>> 
>>>>>>>>>> and then set the partition type to solaris launching,
>>>>>>>>>> 
>>>>>>>>>> $ sudo gdisk /dev/nvme0n1
>>>>>>>>>> 
>>>>>>>>>> (set the GUID to "6A85CF4D-1DD2-11B2-99A6-080020736631" )
>>>>>>>>>> 
>>>>>>>>>> I booted using the OI live GUI image (flashed/created from the 07-23 
>>>>>>>>>> live USB image in the test dir under hipster).
>>>>>>>>>> Time to kill the switch!
>>>>>>>>>> I see that parted can list all the partitions and format can list 
>>>>>>>>>> some. However, not all of them were listed under /dev/dsk or 
>>>>>>>>>> /dev/rdsk; I could only see p0 to p4 (I guess illumos skipped the 
>>>>>>>>>> Linux partitions).
>>>>>>>>>> 
>>>>>>>>>> However, when I ran `zpool create zfs_main 
>>>>>>>>>> /dev/rdsk/c2t0xxxxxxxxxxd0p4`, it reported "no such file or directory"! 
>>>>>>>>>> I tried various versions of `zpool create`; nothing would succeed.
>>>>>>>>>> 
>>>>>>>>>> Wondering if this is some nvme storage support / enumeration issue.
>>>>>>>>>> 
>>>>>>>>>> Atiq
>>>>>>>>>> 
>>>>>>>>>> On Wed, Aug 13, 2025 at 12:18 AM Toomas Soome via illumos-discuss 
>>>>>>>>>> <disc...@lists.illumos.org> wrote:
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>>> On 13. Aug 2025, at 09:29, Stephan Althaus via illumos-discuss 
>>>>>>>>>>>> <disc...@lists.illumos.org> wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>> On 8/13/25 04:17, Atiq Rahman wrote:
>>>>>>>>>>>>> Copying this to illumos discussion as well if anyone else knows.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> ---------- Forwarded message ---------
>>>>>>>>>>>>> From: Atiq Rahman <ati...@gmail.com>
>>>>>>>>>>>>> Date: Tue, Aug 12, 2025 at 6:23 PM
>>>>>>>>>>>>> Subject: Re: [OpenIndiana-discuss] Installation on a partition 
>>>>>>>>>>>>> (UEFI System)
>>>>>>>>>>>>> To: Discussion list for OpenIndiana 
>>>>>>>>>>>>> <openindiana-disc...@openindiana.org>
>>>>>>>>>>>>> Cc: John D Groenveld <groenv...@acm.org>
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>> I tried the live USB text installer again. Right after launch, the 
>>>>>>>>>>>>> text installer shows the following as the second option at the 
>>>>>>>>>>>>> bottom of the screen:
>>>>>>>>>>>>>   F5_InstallToExistingPool
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Screenshot is below. If you zoom in, you will see the option at 
>>>>>>>>>>>>> the top of the blue background in the bottom part of the image.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> <text_install_screen.jpg>
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Can I partition and create a pool using FreeBSD as you mentioned, 
>>>>>>>>>>>>> and then use this option to install on that pool? 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Note that my system is EFI Only (no legacy / no CSM).
>>>>>>>>>>>>> Sincerely,
>>>>>>>>>>>>> Atiq
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On Tue, Aug 12, 2025 at 2:27 PM Atiq Rahman <ati...@gmail.com> wrote:
>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>> Following up again to request confirmation regarding the 
>>>>>>>>>>>>>> "partition the disk" option in the installer.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> At present, it’s impractical for me to carry an external storage 
>>>>>>>>>>>>>> device solely for OpenIndiana. With other operating systems, I 
>>>>>>>>>>>>>> can safely install and test them on my internal SSD without 
>>>>>>>>>>>>>> risking my existing setup. Unfortunately, with OI, this specific 
>>>>>>>>>>>>>> limitation and uncertainty are creating unnecessary friction and 
>>>>>>>>>>>>>> confusion.
>>>>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> You can create your single illumos partition on OI as well; it does 
>>>>>>>>>>> not have to be a whole-disk setup.
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>>>>> Could you please confirm the current state of EFI support — 
>>>>>>>>>>>>>> specifically, whether it is possible to safely perform a 
>>>>>>>>>>>>>> custom-partition installation on an EFI-only system, using a 
>>>>>>>>>>>>>> FreeBSD live USB image with the -d option.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Thank you,
>>>>>>>>>>>>>> Atiq
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> Yes, it is possible. However, if you intend to share the pool with 
>>>>>>>>>>> illumos, you need to know the specifics. The illumos GPT partition 
>>>>>>>>>>> type in FreeBSD is named apple-zfs (it is actually the uuid of the 
>>>>>>>>>>> illumos usr slice from the VTOC). The FreeBSD boot loader searches 
>>>>>>>>>>> for a freebsd-zfs partition to boot the FreeBSD system.
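>>>>>>>>>>> 
>>>>>>>>>>> On FreeBSD, creating such a partition would look roughly like this 
>>>>>>>>>>> (device and size are examples):
>>>>>>>>>>> 
>>>>>>>>>>> # gpart add -t apple-zfs -s 100G nda0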
>>>>>>>>>>> 
>>>>>>>>>>> In general, I’d suggest using virtualization to get a first glimpse 
>>>>>>>>>>> of the OS and learn its details and facts, and only then attempting 
>>>>>>>>>>> to build complicated multi-OS/multi-boot setups.
>>>>>>>>>>> 
>>>>>>>>>>> rgds,
>>>>>>>>>>> toomas
>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> On Mon, Aug 4, 2025 at 3:54 PM Atiq Rahman <ati...@gmail.com> wrote:
>>>>>>>>>>>>>>> Hi John,
>>>>>>>>>>>>>>> That's very helpful..
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> So, even though I choose "Partition the disk (MBR)" in the text 
>>>>>>>>>>>>>>> installer, it should work fine on my UEFI system after I set up 
>>>>>>>>>>>>>>> the pool using a FreeBSD live image, right? I get confused since 
>>>>>>>>>>>>>>> it says "MBR" next to it, and my system is entirely GPT, with no 
>>>>>>>>>>>>>>> support for legacy/MBR booting at all (no CSM).
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> Have a smooth Monday,
>>>>>>>>>>>>>>> Atiq
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> On Tue, Jul 15, 2025 at 5:56 AM John D Groenveld via 
>>>>>>>>>>>>>>> openindiana-discuss <openindiana-disc...@openindiana.org> wrote:
>>>>>>>>>>>>>>>> In message 
>>>>>>>>>>>>>>>> <CABC65rOmBY6YmMSB=HQdXq=tdn7d1xr+k8m9-__rqpbzhjz...@mail.gmail.com>, 
>>>>>>>>>>>>>>>> Atiq Rahman writes:
>>>>>>>>>>>>>>>> >Most likely, I am gonna go with this. It's been a while since 
>>>>>>>>>>>>>>>> >I last had a
>>>>>>>>>>>>>>>> >Solaris machine. I have to figure out the zpool stuff.
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> If you create the pool for OI under FreeBSD, you'll likely 
>>>>>>>>>>>>>>>> need to
>>>>>>>>>>>>>>>> disable features with the -d option as FreeBSD's ZFS is more 
>>>>>>>>>>>>>>>> feature
>>>>>>>>>>>>>>>> rich.
>>>>>>>>>>>>>>>> <URL:https://man.freebsd.org/cgi/man.cgi?query=zpool-create&apropos=0&sektion=0&manpath=FreeBSD+14.3-RELEASE+and+Ports&arch=default&format=html>
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> Once you get OI installed on your OI pool, you can enable 
>>>>>>>>>>>>>>>> features:
>>>>>>>>>>>>>>>> <URL:https://illumos.org/man/7/zpool-features>
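>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> For example, enable features one at a time, or all supported 
>>>>>>>>>>>>>>>> ones at once:
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> # zpool set feature@lz4_compress=enabled rpool
>>>>>>>>>>>>>>>> # zpool upgrade rpool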
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> John
>>>>>>>>>>>>>>>> groenv...@acm.org
>>>>>>>>>>>> Hello!
>>>>>>>>>>>> 
>>>>>>>>>>>> You can install to an existing pool created with FreeBSD; be sure 
>>>>>>>>>>>> to use the "-d" option, as in "zpool create -d <newpool> <partition>", 
>>>>>>>>>>>> to disable additional ZFS features. See details here:
>>>>>>>>>>>> 
>>>>>>>>>>>> https://man.freebsd.org/cgi/man.cgi?query=zpool-create&sektion=8&apropos=0&manpath=FreeBSD+14.3-RELEASE+and+Ports
>>>>>>>>>>>> 
>>>>>>>>>>>> And an EFI partition should be there, too.
>>>>>>>>>>>> 
>>>>>>>>>>>> HTH,
>>>>>>>>>>>> 
>>>>>>>>>>>> Stephan
>>>>>>>>>>>> 
> 