Re: [OmniOS-discuss] Installation problem with OmniOSce R151022s

2017-11-14 Thread Lawrence Giam
Hi Andy,

Thanks for the help, I got it to work as instructed.

Cheers!!!

On Thu, Nov 9, 2017 at 6:41 PM, Andy Fiddaman  wrote:

> On Thu, 9 Nov 2017, Lawrence Giam wrote:
>
> ; Hi All,
> ;
> ; I am trying to install OmniOSce R151022s and am facing a problem with
> ; the installer.
> ;
> ... snip ...
> ; 4. Run the command: zpool create -f rpool c3t0d0p0.
> ... snip ...
> ; Questions:
>
> ; 1. The BSD Loader takes more than 5 minutes to load, is this normal? The
> ; OmniOS installer does not take that long to boot.
>
> Not in my experience except from a USB drive. It is loading a lot of data
> since the miniroot now includes the ZFS stream.
>
> ; 2. How do we install OmniOSce to a partition on a disk instead of using
> ; the whole disk?
>
> You are confusing the partition table in the disk's MBR with the VTOC label
> used by Solaris. You should not be creating a zpool using the ...p0 device.
>
> Try this after having created the partition in fdisk:
>
> # Force re-generation of the VTOC.
> # This will result in slice 2 being set up properly to span the entire
> # partition.
> prtvtoc -h /dev/rdsk/c3t0d0s2 | awk '$1==8{print}' \
> | fmthard -s - /dev/rdsk/c3t0d0s2
>
> # Now add slice 0 covering the whole partition
> # The same as slice 2 except tagged as root and mountable.
> # (you can do this interactively via format or run this)
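> # (fmthard -d takes a delta spec of the form part:tag:flag:start:size;
> #  tag 2 = root, flag 00 = mountable read/write)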
>
> fmthard -d `prtvtoc -h /dev/rdsk/c3t0d0s2 \
> | awk '$1==2{printf("0:2:00:%d:%d\n",$4,$5)}'` /dev/rdsk/c3t0d0s2
>
> # Check
> echo -e 'p\np' | format c3t0d0
>
> You should see that slice 0 is flagged as root and spans the entire
> partition.
>
> # Create zpool
> zpool create -f rpool c3t0d0s0
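> 
> # Optionally verify: the new pool should show c3t0d0s0 as its only vdev
> zpool status rpool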
>
> Proceed to option 2 in the menu as before.
>
> I've just tested this with OmniOSce r151024.
>
> HTH,
>
> Andy
>
> --
> Citrus IT Limited | +44 (0)333 0124 007 | enquir...@citrus-it.co.uk
> Rock House Farm | Green Moor | Wortley | Sheffield | S35 7DQ
> Registered in England and Wales | Company number 4899123
>
>
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] 2TB vs 4TB NVMe drives?

2017-11-14 Thread Stephan Budach
Hi Bob,

- Original Message -
> From: "Bob Friesenhahn" 
> To: "Stephan Budach" 
> CC: "omnios-discuss" 
> Sent: Tuesday, 14 November 2017 15:44:33
> Subject: Re: [OmniOS-discuss] 2TB vs 4TB NVMe drives?
> 
> On Tue, 14 Nov 2017, Stephan Budach wrote:
> 
> > we are planning on purchasing a Supermicro NVMe server with 48 U.2
> > slots. I intended to initially load it with 24 x 2TB DC P4500,
> > leaving 24 slots empty.
> >
> >
> > Now… I've also been offered an Intel chassis with only 24 slots, and
> > that offer also included the 4TB P4500s. Without thinking very long,
> > I instinctively leaned towards the 2TB drives, mainly because, should
> > a drive really fail, the 4TB NVMe drives would of course mean a
> > longer resilver.
> >
> >
> > Which ones would you choose?
> 
> Assuming that OmniOS works with these devices at all, from a power
> consumption, heat, complexity, and reliability standpoint, the larger
> devices appear to be a win (1/2 the power and 2X the MTBF for the same
> storage capacity).  Resilver time is important, but NVMe drives do not
> have the rotational latency and seek time issues of rotating media, so
> resilver time should not be such an issue; there should only be about
> a factor of 2 difference in resilver time.
> 
> A consideration is what zfs pool configuration you would be putting on
> these drives.  For throughput, more devices and more vdevs is better.
> It sounds like you would initially (and perhaps forever) have the same
> number of devices.
> 
> Are you planning to use zfs mirrors, or raidz2/raidz3?  What about
> dedicated zfs intent log devices?  If synchronous writes are important
> to you, dedicated zfs intent log devices should still help with pool
> performance and long-term health by deferring writes to the vdevs so
> writes can be larger and more sequential.
> 

AFAIK, all the hardware is on the Illumos HCL, so this config should run fine
under OmniOS.
This setup is intended to replace my current ZFS-HA storage pools and it will
be configured with zfs mirrors only, where the mirror vdevs will be built on
iSCSI LUNs from "raw" devices, insofar as you can get raw devices served by
COMSTAR.

So, we will have these boxes serving each NVMe as a LUN to the RSF-1 nodes,
which will then host 2 zpools of 6 mirror vdevs each. From those zpools, the
RSF-1 nodes will serve NFS to our Oracle VM cluster servers.
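
For illustration, a minimal COMSTAR sketch of exporting one raw NVMe device
as a LUN might look like this (the device path and GUID below are
placeholders, not our actual config):

  # enable the iSCSI target service and its dependencies
  svcadm enable -r svc:/network/iscsi/target:default
  # create a logical unit backed by the raw device (p0 = whole disk on x86)
  stmfadm create-lu /dev/rdsk/c0t1d0p0
  # expose the LU to all initiators (create-lu prints the GUID to use here)
  stmfadm add-view <LU GUID from create-lu>
  # create an iSCSI target for the RSF-1 nodes to log in to
  itadm create-target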




Re: [OmniOS-discuss] 2TB vs 4TB NVMe drives?

2017-11-14 Thread Bob Friesenhahn

On Tue, 14 Nov 2017, Stephan Budach wrote:

we are planning on purchasing a Supermicro NVMe server with 48 U.2 
slots. I intended to initially load it with 24 x 2TB DC P4500, 
leaving 24 slots empty.



Now… I've also been offered an Intel chassis with only 24 slots, and 
that offer also included the 4TB P4500s. Without thinking very long, 
I instinctively leaned towards the 2TB drives, mainly because, should 
a drive really fail, the 4TB NVMe drives would of course mean a 
longer resilver.



Which ones would you choose?


Assuming that OmniOS works with these devices at all, from a power 
consumption, heat, complexity, and reliability standpoint, the larger 
devices appear to be a win (1/2 the power and 2X the MTBF for the same 
storage capacity).  Resilver time is important, but NVMe drives do not 
have the rotational latency and seek time issues of rotating media, so 
resilver time should not be such an issue; there should only be about 
a factor of 2 difference in resilver time.
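
(Back-of-envelope, assuming a purely hypothetical ~2 GB/s sustained rebuild 
rate: a full 2TB device resilvers in roughly 1000 s, about 17 minutes, and a 
4TB device in roughly twice that, so the absolute difference stays small.)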


A consideration is what zfs pool configuration you would be putting on 
these drives.  For throughput, more devices and more vdevs is better. 
It sounds like you would initially (and perhaps forever) have the same 
number of devices.


Are you planning to use zfs mirrors, or raidz2/raidz3?  What about 
dedicated zfs intent log devices?  If synchronous writes are important 
to you, dedicated zfs intent log devices should still help with pool 
performance and long-term health by deferring writes to the vdevs so 
writes can be larger and more sequential.
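
As a concrete sketch of that kind of layout (device names here are purely 
hypothetical), striped two-way mirrors plus a mirrored intent log could be 
set up along these lines:

  # three 2-way mirror vdevs striped together (extend the pattern as needed)
  zpool create tank \
    mirror c1t0d0 c1t1d0 \
    mirror c1t2d0 c1t3d0 \
    mirror c1t4d0 c1t5d0
  # add a dedicated mirrored ZFS intent log (SLOG)
  zpool add tank log mirror c2t0d0 c2t1d0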


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


[OmniOS-discuss] 2TB vs 4TB NVMe drives?

2017-11-14 Thread Stephan Budach

Hi, 


we are planning on purchasing a Supermicro NVMe server with 48 U.2 slots. I 
intended to initially load it with 24 x 2TB DC P4500, leaving 24 slots empty. 


Now… I've also been offered an Intel chassis with only 24 slots, and thus the 
offer also included the 4TB P4500s. Without thinking very long, I 
instinctively leaned towards the 2TB drives, mainly because, should a drive 
really fail, the 4TB NVMe drives would of course mean a longer resilver. 


Which ones would you choose? 





Thanks, 
Stephan
