Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-30 Thread Nomen Nescio
 Actually, you do want /usr and much of /var on the root pool; they
 are integral parts of svc:/system/filesystem/local, needed to bring up
 your system to a usable state (regardless of whether the other
 pools are working or not).

Ok. I have my feelings on that topic but they may not be as relevant for
ZFS. It may be because I tried to avoid single points of failure on other
systems with techniques that don't map to ZFS or Solaris. I believe I can
bring up several OSes without /usr or /var; although they complain, they
will work. But I'll take your point here.

 Depending on the OS versions, you can do manual data migrations
 to separate datasets of the root pool, in order to keep some data
 common between OEs or to enforce different quotas or compression
 rules. For example, on SXCE and Solaris 10 (but not on oi_148a)
 we successfully splice out many filesystems in such a layout
 (the example below also illustrates multiple OEs):

Thanks, I have done similar things but I didn't know if they were
approved.

 And you cannot boot from any pool other than a mirror or a
 single drive. Rationale: a single BIOS device must be sufficient
 to boot the system and contain all the data needed to boot.

Definitely important fact here.

Thanks for all the info!


Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-27 Thread Jim Klimov


- Original message -
From: Dave U. Random anonym...@anonymitaet-im-inter.net
Date: Tuesday, June 21, 2011 18:32
Subject: Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?
To: zfs-discuss@opensolaris.org

 Hello Jim! I understood ZFS doesn't like slices but from your reply
 maybe I should reconsider. I have a few older servers with 4 bays x 73G.
 If I make a root mirror pool and swap on the other 2 as you suggest,
 then I would have about 63G x 4 left over.


For the sake of completeness, I should mention that you can also
create a fast and redundant 4-way mirrored root pool ;)

 If so then I am back to wondering what to do about 4 drives. Is raidz1
 worthwhile in this scenario? That is less redundancy than a mirror and
 much less than a 3 way mirror, isn't it? Is it even possible to do
 raidz2 on 4 slices? Or would 2, 2 way mirrors be better? I don't
 understand what RAID10 is, is it simply a stripe of two mirrors?
Yes, by that I meant striping across two mirrors.

 Or would it be best to do a 3 way mirror and a hot spare? I would like
 to be able to tolerate losing one drive without loss of integrity.

Any of the scenarios above allow you to lose one drive and not
lose data immediately. The rest is a compromise between performance,
space and further redundancy:
* 3- or 4-way mirror: least usable space (25% of total disk capacity),
most redundancy, highest read speeds for concurrent loads
* striping over mirrors (raid10): average usable space (50%), high
read speeds for concurrent loads, can tolerate loss of up to 2 drives
(slices) in a good scenario (if they are from different mirrors)
* raidz2: average usable space (50%), can tolerate loss of any 2 drives
* raidz1: max usable space (75%), can tolerate loss of any 1 drive
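For illustration, each of these layouts over four equal slices would be
created roughly like this (device and pool names are hypothetical,
following the s3 data-slice convention used later in this thread):
# zpool create tank mirror c1t0d0s3 c1t1d0s3 c1t2d0s3 c1t3d0s3       (4-way mirror)
# zpool create tank mirror c1t0d0s3 c1t1d0s3 mirror c1t2d0s3 c1t3d0s3  (raid10)
# zpool create tank raidz2 c1t0d0s3 c1t1d0s3 c1t2d0s3 c1t3d0s3
# zpool create tank raidz1 c1t0d0s3 c1t1d0s3 c1t2d0s3 c1t3d0s3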
 
After all the discussions about performance recently on this forum,
I would not try to guess which would perform better in general -
raidz1 or raidz2 (reads, writes, scrubs and resilvers seemingly all
have different preferences regarding disk layout), but with the
generic workload we have (i.e. serving up zones with some development
databases and J2SE app servers) this was not seen to matter much.
So for us it was usually raidz2 for tolerance or raidz1 for space.
 

 I will be doing new installs of Solaris 10. Is there an option in the
 installer for me to issue ZFS commands and set up pools or do I need
 to format the disks before installing and if so how do I do that?
 
Unfortunately, the last system I installed from scratch was Solaris 10u7
or so; the others were Live Upgrades of existing systems and OpenSolaris
machines, so I am not certain.

From what I gather, the text installer is much more powerful than the
graphical one, and its ZFS root setup may include creating a root pool
in a slice of a given size, and possibly mirroring it right away.
Maybe you can do likewise in JumpStart, but we did not do that after all.
 
Anyhow, after you install a ZFS root of sufficient size (our minimalist
Solaris 10 installs are often under 1-2Gb per boot environment; multiply
that to allow for different OEs created by Live Upgrade and for snapshot
history), you can create a slice for the data pool component (s3 in our
setups), and then clone the disk's slice layout to the other 3 drives
like this:
# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
(you might first need to create fdisk partitions spanning 100% of the
drives, with the fdisk command).

Then you attach one of the slices to the ZFS root pool to make
a mirror, if the installer did not do that:
# zpool attach rpool c1t0d0s0 c1t1d0s0

If you have several controllers (perhaps even on different PCI buses)
you might want to pick a drive on a different controller than the first
one, in order to have fewer SPOFs - but make sure that the second
controller is bootable from the BIOS.

And make that drive bootable:
SPARC:
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t1d0s0
x86/x86_64:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
 
For two other drives you just create a new pool in slices *s0:
# zpool create swappool mirror c1t2d0s0 c1t3d0s0
# zfs create -V2g swappool/dump
# zfs create -V6g swappool/swap

Sizes here are arbitrary; they depend on your RAM sizing.
You can later add swap from other pools, including the data pool.
A dump device's size can be tested by configuring dumpadm to
use the new device - it will either refuse a device that is too
small (then you recreate it bigger), or accept it.
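A minimal sketch of activating those volumes (paths assume the pool
and volume names created above):
# dumpadm -d /dev/zvol/dsk/swappool/dump
# swap -a /dev/zvol/dsk/swappool/swap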

The installer would probably create a dump and a swap device in your
root pool; you may elect to destroy them, since you now have another
swap device at least.

Make sure to update the /etc/vfstab file to reference the swap 
areas which your system should use further on.
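For example, the corresponding swap entry in /etc/vfstab might look
like this (the volume path assumes the swappool layout above):
/dev/zvol/dsk/swappool/swap  -   -   swap    -   no  -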

After this is all completed, you can create a data pool in the
s3 slices with your chosen geometry, e.g.:
# zpool create pool raidz2 c1t0d0s3 c1t1d0s3 c1t2d0s3 c1t3d0s3

In our setups

Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-27 Thread Jim Klimov
 In this setup everything will be installed on the root mirror, so I
 will have to move things around later? Like /var and /usr or whatever
 I don't want on the root mirror?
Actually, you do want /usr and much of /var on the root pool; they
are integral parts of svc:/system/filesystem/local, needed to bring up
your system to a usable state (regardless of whether the other
pools are working or not).
 
Depending on the OS versions, you can do manual data migrations
to separate datasets of the root pool, in order to keep some data
common between OEs or to enforce different quotas or compression
rules. For example, on SXCE and Solaris 10 (but not on oi_148a)
we successfully splice out many filesystems in such a layout
(the example below also illustrates multiple OEs):
 
# zfs list -o name,refer,quota,compressratio,canmount,mountpoint -t filesystem -r rpool
NAME                     REFER  QUOTA  RATIO  CANMOUNT  MOUNTPOINT
rpool                    7.92M   none  1.45x        on  /rpool
rpool/ROOT                 21K   none  1.38x    noauto  /rpool/ROOT
rpool/ROOT/snv_117        758M   none  1.00x    noauto  /
rpool/ROOT/snv_117/opt   27.1M   none  1.00x    noauto  /opt
rpool/ROOT/snv_117/usr    416M   none  1.00x    noauto  /usr
rpool/ROOT/snv_117/var    122M   none  1.00x    noauto  /var
rpool/ROOT/snv_129        930M   none  1.45x    noauto  /
rpool/ROOT/snv_129/opt    109M   none  2.70x    noauto  /opt
rpool/ROOT/snv_129/usr    509M   none  2.71x    noauto  /usr
rpool/ROOT/snv_129/var    288M   none  2.54x    noauto  /var
rpool/SHARED               18K   none  3.36x    noauto  legacy
rpool/SHARED/var           18K   none  3.36x    noauto  legacy
rpool/SHARED/var/adm     2.97M     5G  4.43x    noauto  legacy
rpool/SHARED/var/cores    118M     5G  3.44x    noauto  legacy
rpool/SHARED/var/crash   1.39G     5G  3.41x    noauto  legacy
rpool/SHARED/var/log      102M     5G  3.43x    noauto  legacy
rpool/SHARED/var/mail    66.4M   none  1.79x    noauto  legacy
rpool/SHARED/var/tmp       20K   none  1.00x    noauto  legacy
rpool/test               50.5K   none  1.00x    noauto  /rpool/test
 
Mounts of /var/* components are done via /etc/vfstab lines like:
rpool/SHARED/var/adm-   /var/admzfs -   yes -
rpool/SHARED/var/log-   /var/logzfs -   yes -
rpool/SHARED/var/mail   -   /var/mail   zfs -   yes -
rpool/SHARED/var/crash  -   /var/crash  zfs -   yes -
rpool/SHARED/var/cores  -   /var/cores  zfs -   yes -

The system paths /usr, /var and /opt, on the other hand, are mounted
by SMF services directly.
 
 
 And then I just make a RAID10 like Jim was saying with the other
 4x60 slices? How should I move mountpoints that aren't separate ZFS
 filesystems?
  The only conclusion you can draw from that is:  First take it as a
  given that you can't boot from a raidz volume.  Given, you must have
  one mirror.

 Thanks, I will keep it in mind.

  Then you raidz all the remaining space that's capable of being put
  into a raidz...  And what you have left is a pair of unused space,
  equal to the size of your boot volume.  You either waste that space,
  or you mirror it and put it into your tank.
...or use it as swap space :)
 
 I didn't understand what you suggested about appending a 13G
 mirror to tank. Would that be something like RAID10 without
 actually being RAID10 so I could still boot from it? How would
 the system use it?
No, this would be an uneven stripe over a raid10 (or raidzN)
bank of 60Gb slices plus a 13Gb mirror. ZFS can do that too,
although for performance reasons unbalanced pools are not
recommended, and creating one has to be forced on the command line.
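For illustration, appending such a mirror might look like this (device
names and slice numbers are hypothetical; zpool add requires -f when
the new vdev's replication level does not match the pool's, e.g. a
mirror appended to a raidz pool):
# zpool add -f tank mirror c1t2d0s4 c1t3d0s4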

And you cannot boot from any pool other than a mirror or a
single drive. Rationale: a single BIOS device must be sufficient
to boot the system and contain all the data needed to boot.
 
 So RAID10 sounds like the only reasonable choice since there are an
 even number of slices. I mean, is RAIDZ1 even possible with 4 slices?
Yes, it is possible with any number of slices starting from 3.

 
 
-- 
Jim Klimov (Климов Евгений)
CTO (технический директор), JSC COSHT (ЗАО ЦОС и ВТ)
+7-903-7705859 (cellular)  mailto:jimkli...@cos.ru
CC: ad...@cos.ru, jimkli...@gmail.com
() ascii ribbon

Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-27 Thread Jim Klimov
 Hello Bob! Thanks for the reply. I was thinking about going with a
 3 way mirror and a hot spare.
Keep in mind that you can have problems in Sol10u8 if you use
a mirror+spare config for the root pool. Should be fixed in u9.
 
 But I don't think I can upgrade to larger drives unless I do it all
 at once, is that correct?
You can replace the drives one by one, but the pool will only
expand when all of its drives have the new, bigger capacity.
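A sketch of that per-drive cycle (device names hypothetical):
# zpool replace tank c1t0d0s3      (after physically swapping the drive)
# zpool status tank                (wait for the resilver to complete)
Repeat for each drive; on newer releases you may also need to enable
expansion afterwards:
# zpool set autoexpand=on tank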
 
//Jim
 


Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-23 Thread Dave U . Random
Edward Ned Harvey opensolarisisdeadlongliveopensola...@nedharvey.com
wrote:

 Well ... 
 Slice all 4 drives into 13G and 60G.
 Use a mirror of 13G for the rpool.
 Use 4x 60G in some way (raidz, or stripe of mirrors) for tank
 Use a mirror of 13G appended to tank

Hi Edward! Thanks for your post. I think I understand what you are saying
but I don't know how to actually do most of that. If I am going to make a
new install of Solaris 10 does it give me the option to slice and dice my
disks and to issue zpool commands? Until now I have only used Solaris on
Intel boxes and used both complete drives as a mirror.

Can you please tell me what are the steps to do your suggestion?

I imagine I can slice the drives in the installer and then set up a 4 way
root mirror (stupid but as you say not much choice) on the 13G section. Or
maybe one root mirror on two slices and then have 13G aux storage left to
mirror for something like /var/spool? What would you recommend? I didn't
understand what you suggested about appending a 13G mirror to tank. Would
that be something like RAID10 without actually being RAID10 so I could
still boot from it? How would the system use it?

In this setup everything will be installed on the root mirror, so I will
have to move things around later? Like /var and /usr or whatever I don't
want on the root mirror? And then I just make a RAID10 like Jim was saying
with the other 4x60 slices? How should I move mountpoints that aren't
separate ZFS filesystems?

 The only conclusion you can draw from that is:  First take it as a given
 that you can't boot from a raidz volume.  Given, you must have one mirror.

Thanks, I will keep it in mind.

 Then you raidz all the remaining space that's capable of being put into a
 raidz...  And what you have left is a pair of unused space, equal to the
 size of your boot volume.  You either waste that space, or you mirror it
 and put it into your tank.

So RAID10 sounds like the only reasonable choice since there are an even
number of slices. I mean, is RAIDZ1 even possible with 4 slices?


Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-23 Thread Nomen Nescio
Hello Bob! Thanks for the reply. I was thinking about going with a 3 way
mirror and a hot spare. But I don't think I can upgrade to larger drives
unless I do it all at once, is that correct?


Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-23 Thread Paul Kraus
On Thu, Jun 23, 2011 at 12:48 AM, Nomen Nescio nob...@dizum.com wrote:

 Hello Bob! Thanks for the reply. I was thinking about going with a 3 way
 mirror and a hot spare. But I don't think I can upgrade to larger drives
 unless I do it all at once, is that correct?

Why keep one out as a Hot Spare? If you have another zpool and
the Hot Spare will be shared, that makes sense. But if the drive is
powered on and spinning, I don't see any downside to making it a 4-way
mirror instead of 3-way + HS.
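For illustration, the difference is a single command either way (device
names hypothetical):
# zpool attach rpool c1t0d0s0 c1t3d0s0    (grow the mirror to 4 ways)
versus
# zpool add rpool spare c1t3d0s0          (keep the drive as a hot spare)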

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
- Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
- Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
- Technical Advisor, RPI Players


Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-23 Thread Craig Cory

Paul Kraus wrote:
 On Thu, Jun 23, 2011 at 12:48 AM, Nomen Nescio nob...@dizum.com wrote:

 Hello Bob! Thanks for the reply. I was thinking about going with a 3 way
 mirror and a hot spare. But I don't think I can upgrade to larger drives
 unless I do it all at once, is that correct?

 Why keep one out as a Hot Spare ? If you have another zpool and
 the Hot Spare will be shared, that makes sense. If the drive is
 powered on and spinning, I don't see any downside to making it a 4-way
 mirror instead of 3-way + HS.

 --

Also, to add larger disks to a mirrored pool, you can replace the mirror
members one at a time with a larger disk and wait for the resilver to
complete. Then replace the other disk and resilver again.

Craig


-- 
Craig Cory
 Senior Instructor :: ExitCertified
 : Oracle/Sun Certified System Administrator
 : Oracle/Sun Certified Network Administrator
 : Oracle/Sun Certified Security Administrator
 : Symantec/Veritas Certified Instructor
 : RedHat Certified Systems Administrator


+-+
 ExitCertified :: Excellence in IT Certified Education

  Certified training with Oracle, Sun Microsystems, Apple, Symantec, IBM,
  Red Hat, MySQL, Hitachi Storage, SpringSource and VMWare.

 1.800.803.EXIT (3948)  |  www.ExitCertified.com
+-+


Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-23 Thread Cindy Swearingen

Hi Dave,

Consider the easiest configuration first and it will probably save
you time and money in the long run, like this:

73g x 73g mirror (one large s0 on each disk) - rpool
73g x 73g mirror (use whole disks) - data pool

Then, get yourself two replacement disks, a good backup strategy,
and we all sleep better.

Convert the complexity of some of the suggestions to time and money
for replacement if something bad happens, and the formula would look
like this:

time to configure x time to replace x replacement disks = $$, versus
the cost of two replacement disks for two simple mirrored pools

A complex configuration of slices and a combination of raidz and
mirrored pools across the same disks will be difficult to administer,
its performance will be unknown, and that is not to mention how much
time it might take to replace a disk.

My advice: use the simplicity of ZFS as it was intended, and you
will save time and money in the long run.

Cindy




Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-23 Thread Anonymous Remailer (austria)

 Hi Dave,

Hi Cindy.

 Consider the easiest configuration first and it will probably save
 you time and money in the long run, like this:
 
 73g x 73g mirror (one large s0 on each disk) - rpool
 73g x 73g mirror (use whole disks) - data pool
 
 Then, get yourself two replacement disks, a good backup strategy,
 and we all sleep better.

Oh, you're throwing in free replacement disks too?! This is great! :P

 A complex configuration of slices and a combination of raidZ and
 mirrored pools across the same disks will be difficult to administer,
 performance will be unknown, not to mention how much time it might take
 to replace a disk.

Yeah, that's a very good point. But if you guys will make ZFS filesystems
span vdevs then this could work even better! You're right about the
complexity, but OTOH the great thing about ZFS is not having to worry
about how to plan mount point allocations, and with this scenario (I also
have a few servers with 4x36) the planning issue raises its ugly head
again. That's why I kind of like Edward's suggestion: even though it is
complicated (for me), I still think it may be best given my goals. I like
breathing room and not having to worry about a filesystem filling; it's
great not having to know exactly ahead of time how much I have to
allocate for a filesystem and instead let the whole drive be used as
needed.

 Use the simplicity of ZFS as it was intended is my advice and you
 will save time and money in the long run.

Thanks. I guess the answer is really using the small drives for root pools
and then getting the biggest drives I can afford for the other bays.

Thanks to everybody.


Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-23 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Nomen Nescio
 
 Hello Bob! Thanks for the reply. I was thinking about going with a 3 way
 mirror and a hot spare. But I don't think I can upgrade to larger drives
 unless I do it all at once, is that correct?

No point in doing 3-way mirror and hotspare.  Just do 4-way mirror.



Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-23 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Dave U.Random
 
 If I am going to make a
 new install of Solaris 10 does it give me the option to slice and dice my
 disks and to issue zpool commands? 

No way that I know of to install Solaris 10 into partitions; Solaris 11
does it.

On Solaris 10, if you want to do this, you have to go through a bunch of
extra hassle.



Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-22 Thread Dave U . Random
Hello!

 I don't see the problem. Install the OS onto a mirrored partition, and
 configure all the remaining storage however you like - raid or mirror
 or whatever.

I didn't understand your point of view until I read the next paragraph.

 My personal preference, assuming 4 disks, since the OS is mostly reads and
 only a little bit of writes, is to create a 4-way mirrored 100G partition
 for the OS, and the remaining 900G of each disk (or whatever) becomes
 either a stripe of mirrors or raidz, as appropriate in your case, for the
 storagepool.

Oh, you are talking about 1T drives and my servers are all 4x73G! So it's a
fairly big deal since I have little storage to waste and still want to be
able to survive losing one drive. I should have given the numbers at the
beginning, sorry. Given this meager storage do you have any suggestions?
Thank you.


Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-22 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Dave U.Random

  My personal preference, assuming 4 disks, since the OS is mostly reads
  and only a little bit of writes, is to create a 4-way mirrored 100G
  partition for the OS, and the remaining 900G of each disk (or whatever)
  becomes either a stripe of mirrors or raidz, as appropriate in your
  case, for the storagepool.

 Oh, you are talking about 1T drives and my servers are all 4x73G! So
 it's a fairly big deal since I have little storage to waste and still
 want to be able to survive losing one drive.

Well ... 
Slice all 4 drives into 13G and 60G.
Use a mirror of 13G for the rpool.
Use 4x 60G in some way (raidz, or stripe of mirrors) for tank
Use a mirror of 13G appended to tank
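In zpool terms, the data-pool part of that plan might look roughly like
this (device names and slice numbers are hypothetical, and the 13G rpool
mirror itself would normally be set up by the installer):
# zpool create tank raidz c1t0d0s1 c1t1d0s1 c1t2d0s1 c1t3d0s1
# zpool add -f tank mirror c1t2d0s0 c1t3d0s0
(the -f is needed because the appended mirror does not match the raidz
replication level)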

That would use all your space as efficiently as possible, while providing
at least one level of redundancy, and the only sacrifice you're making is
the fact that you get different performance characteristics between a
raidz and a mirror, which are both in the same pool.  For example, you
might decide the ideal performance characteristics for your workload are
to use raidz... or to use mirrors... but your pool is a hybrid, so you
can't achieve the ideal performance characteristics no matter which type
of data workload you have.

That is a very small sacrifice, considering the constraints you're up
against for initial conditions: "I have 4x 73G disks", "I want to
survive a single disk failure", "I don't want to waste any space" and
"my boot pool must be included".

The only conclusion you can draw from that is:  First take it as a given
that you can't boot from a raidz volume.  Given, you must have one mirror.
Then you raidz all the remaining space that's capable of being put into a
raidz...  And what you have left is a pair of unused space, equal to the
size of your boot volume.  You either waste that space, or you mirror it and
put it into your tank.

It's really the only solution, without changing your hardware or design
constraints.



Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-21 Thread Dave U . Random
Hello Jim! I understood ZFS doesn't like slices but from your reply maybe I
should reconsider. I have a few older servers with 4 bays x 73G. If I make a
root mirror pool and swap on the other 2 as you suggest, then I would have
about 63G x 4 left over. If so then I am back to wondering what to do about
4 drives. Is raidz1 worthwhile in this scenario? That is less redundancy
than a mirror and much less than a 3 way mirror, isn't it? Is it even
possible to do raidz2 on 4 slices? Or would 2, 2 way mirrors be better? I
don't understand what RAID10 is, is it simply a stripe of two mirrors? Or
would it be best to do a 3 way mirror and a hot spare? I would like to be
able to tolerate losing one drive without loss of integrity.

I will be doing new installs of Solaris 10. Is there an option in the
installer for me to issue ZFS commands and set up pools or do I need to
format the disks before installing and if so how do I do that? Thank you.


Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-21 Thread Nomen Nescio
Hello Marty! 

 With four drives you could also make a RAIDZ3 set, allowing you to have
 the lowest usable space, poorest performance and worst resilver times
 possible.

That's not funny. I was actually considering this :p

But you have to admit, it would probably be somewhat reliable!


Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-21 Thread Tomas Ögren
On 21 June, 2011 - Nomen Nescio sent me these 0,4K bytes:

 Hello Marty! 
 
  With four drives you could also make a RAIDZ3 set, allowing you to have
  the lowest usable space, poorest performance and worst resilver times
  possible.
 
 That's not funny. I was actually considering this :p

4-way mirror would be way more useful.

 But you have to admit, it would probably be somewhat reliable!

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-20 Thread Richard Elling
On Jun 15, 2011, at 1:33 PM, Nomen Nescio wrote:

 Has there been any change to the server hardware with respect to number of
 drives since ZFS has come out? Many of the servers around still have an even
 number of drives (2, 4) etc. and it seems far from optimal from a ZFS
 standpoint. All you can do is make one or two mirrors, or a 3 way mirror and
 a spare, right? Wouldn't it make sense to ship with an odd number of drives
 so you could at least RAIDZ? Or stop making provision for anything except 1
 or two drives or no drives at all and require CD or netbooting and just
 expect everybody to be using NAS boxes? I am just a home server user, what
 do you guys who work on commercial accounts think? How are people using
 these servers?

I see 2 disks for boot and usually one or more 24-disk JBODs.  A few
12-disk JBODs are still being sold, but I rarely see a single 12-disk
JBOD. I'm also seeing a few SBBs that have 16 disks and boot from SATA
DOMs. Anyone else?

 -- richard



Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-16 Thread Jim Klimov

As recently discussed on this list, after all ZFS does not care
very much about the number of drives in a raidzN set, so optimization
is not about stripe alignment and such, but about the number of spindles,
resilver times, number of redundancy disks, etc.

In my setups with 4 identical drives in a server I typically made
a 10-20Gb rpool as a mirror of slices on a couple of the drives, a
same-sized pool for swap on the other couple of drives, and
this leaves me with 4 identical-sized slices for a separate
data pool. Depending on requirements we can do any layout:
performance (raid10) vs. reliability (raidz2) vs. space (raidz1).
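A sketch of that layout over 4x73G drives (slice sizes and device names
are illustrative; the rpool mirror is normally created by the installer):
# zpool create swappool mirror c1t2d0s0 c1t3d0s0
# zpool create data raidz2 c1t0d0s3 c1t1d0s3 c1t2d0s3 c1t3d0s3
(or two mirror pairs, or raidz1, over the same s3 slices, per the
trade-offs just mentioned)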

HTH,
//Jim


2011-06-16 0:33, Nomen Nescio wrote:

Has there been any change to the server hardware with respect to number of
drives since ZFS has come out? Many of the servers around still have an even
number of drives (2, 4) etc. and it seems far from optimal from a ZFS
standpoint. All you can do is make one or two mirrors, or a 3 way mirror and
a spare, right? Wouldn't it make sense to ship with an odd number of drives
so you could at least RAIDZ? Or stop making provision for anything except 1
or two drives or no drives at all and require CD or netbooting and just
expect everybody to be using NAS boxes? I am just a home server user, what
do you guys who work on commercial accounts think? How are people using
these servers?


Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-16 Thread Bob Friesenhahn

On Wed, 15 Jun 2011, Nomen Nescio wrote:


Has there been any change to the server hardware with respect to number of
drives since ZFS has come out? Many of the servers around still have an even
number of drives (2, 4) etc. and it seems far from optimal from a ZFS
standpoint. All you can do is make one or two mirrors, or a 3 way mirror and
a spare, right? Wouldn't it make sense to ship with an odd number of drives
so you could at least RAIDZ? Or stop making provision for anything except 1


Yes, it all seems pretty silly.  Using a small dedicated boot drive 
(maybe an SSD or Compact Flash) would make sense so that the main 
disks can all be used in one pool.  FreeBSD apparently supports 
booting from raidz so it would allow booting from a four-disk raidz 
pool.  Unfortunately, Solaris does not support that.


Given a fixed number of drive bays, there may be value to keeping one 
drive bay completely unused (hot/cold spare, or empty).  The reason 
for this is that it allows you to insert new drives in order to 
upgrade the drives in your pool, or handle the case of a broken drive 
bay.  Without the ability to insert a new drive, you need to 
compromise the safety of your pool in order to replace a drive or 
upgrade the drives to a larger size.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-16 Thread Marty Scholes
 Has there been any change to the server hardware with respect to
 number of drives since ZFS has come out? Many of the servers around
 still have an even number of drives (2, 4) etc. and it seems far from
 optimal from a ZFS standpoint. All you can do is make one or two
 mirrors, or a 3 way mirror and a spare, right?

With four drives you could also make a RAIDZ3 set, allowing you to
have the lowest usable space, poorest performance and worst resilver
times possible.

Sorry, couldn't resist.
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-16 Thread Eric Sproul
On Wed, Jun 15, 2011 at 4:33 PM, Nomen Nescio nob...@dizum.com wrote:
 Has there been any change to the server hardware with respect to number of
 drives since ZFS has come out? Many of the servers around still have an even
 number of drives (2, 4) etc. and it seems far from optimal from a ZFS
 standpoint.

With enterprise-level 2.5" drives hitting 1TB, I've decided to buy
only 2.5"-based chassis, which typically provide 6-8 bays in a 1U form
factor.  That's more than enough to build an rpool mirror and a
raidz1+spare, raidz2, or 3x-mirror pool for data.  Having 8 bays is
also a nice fit for the typical 8-port SAS HBA.

Eric


Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-16 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Nomen Nescio

 Has there been any change to the server hardware with respect to number
 of drives since ZFS has come out? Many of the servers around still have
 an even number of drives (2, 4) etc. and it seems far from optimal from
 a ZFS standpoint.

I don't see the problem. Install the OS onto a mirrored partition, and
configure all the remaining storage however you like - raid or mirror or
whatever.

My personal preference, assuming 4 disks, since the OS is mostly reads and
only a little bit of writes, is to create a 4-way mirrored 100G partition
for the OS, and the remaining 900G of each disk (or whatever) becomes either
a stripe of mirrors or raidz, as appropriate in your case, for the
storagepool.
