Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Grant
 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?

 Not ready to take the ZFS plunge? That would greatly reduce the complexity
 of RAID+LVM, since ZFS best practice is to set your hardware raid controller
 to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
 has mucho better tools). That is my next big project for when I switch to my
 next new server.

 I'm just hoping I can get comfortable with a process for getting ZFS
 compiled into the kernel that is workable/tenable for ongoing kernel updates
 (with minimal fear of breaking things due to a complex/fragile
 methodology)...

 That sounds interesting.  I don't think I'm up to it this time around,
 but ZFS manages a RAID array better than a good hardware card?

 Yes. If you use ZFS to wrestle a JBOD array into its version of
 RAID1+0, when it comes time for resilvering (i.e., rebuilding a failed
 drive), ZFS smartly only copies the used blocks and skips over unused
 blocks.

I'm seriously considering ZFS now.  I'm going to start a new thread on
that topic.

- Grant


 It sounds like ZFS isn't included in the mainline kernel.  Is it on its way 
 in?


 Unlikely. There has been a discussion on that in this list, and there
 is some concern that ZFS' license (CDDL) is not compatible with the
 Linux kernel license (GPL), so never the twain shall be integrated.

 That said, if your kernel supports modules, it's a piece of cake to
 compile the ZFS modules on your own. @ryao has a zfs-overlay you can
 use to emerge ZFS as a module.

 If you have configured your kernel to not support modules, it's a bit
 more work, but ZFS can still be integrated statically into the kernel.

 But the onus is on us ZFS users to do the necessary steps.



Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Grant
 If it's Type 2, then four drives with a spare is equally tolerant.
 Slightly better, even, if you take into account the reduced probability
 of 2/5 of the drives failing compared to 2/6.

 Thank you very much for this info.  I had no idea.  Is there another
 label for these RAID types besides Type 1 and Type 2?  I can't
 find reference to those designations via Google.

 Nothing standard. RAID 10 pretty intuitively comes from RAID 1+0, which
 can be read aloud to figure out what it means: RAID 1, plus RAID 0,
 i.e. you do RAID 1, then stripe (RAID 0) the result.

 The trick is that RAID 1 can refer to either mirroring (2-way) or
 multi-mirroring (3-way) [1]. In the end, the designation is the same:
 RAID 1. So if you stripe either of them, you wind up with RAID 10. In
 other words, RAID 10 doesn't tell you which one you're going to get.

 If I ever find a controller that will do multi-mirroring + RAID 0, I'll
 let you know what they call it =)

Is multi-mirroring (3-disk RAID1) support without RAID0 common in
hardware RAID cards?

- Grant



Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Grant
 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?

 Not ready to take the ZFS plunge? That would greatly reduce the complexity
 of RAID+LVM, since ZFS best practice is to set your hardware raid controller
 to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
 has mucho better tools). That is my next big project for when I switch to my
 next new server.

 I'm just hoping I can get comfortable with a process for getting ZFS
 compiled into the kernel that is workable/tenable for ongoing kernel updates
 (with minimal fear of breaking things due to a complex/fragile
 methodology)...

Can't you just emerge zfs-kmod?  Or maybe you're trying to do it
without module support?

- Grant



Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Pandu Poluan
On Tue, Sep 17, 2013 at 2:28 PM, Grant emailgr...@gmail.com wrote:
 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?

 Not ready to take the ZFS plunge? That would greatly reduce the complexity
 of RAID+LVM, since ZFS best practice is to set your hardware raid controller
 to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
 has mucho better tools). That is my next big project for when I switch to my
 next new server.

 I'm just hoping I can get comfortable with a process for getting ZFS
 compiled into the kernel that is workable/tenable for ongoing kernel updates
 (with minimal fear of breaking things due to a complex/fragile
 methodology)...

 Can't you just emerge zfs-kmod?  Or maybe you're trying to do it
 without module support?


@tanstaafl's kernels have no module support.

Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Neil Bothwick
On Tue, 17 Sep 2013 12:36:20 +0700, Pandu Poluan wrote:

 That said, if your kernel supports modules, it's a piece of cake to
 compile the ZFS modules on your own. @ryao has a zfs-overlay you can
 use to emerge ZFS as a module.

It's also in the main portage tree.
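
As a rough sketch of the module route (package names as they appear in the
tree; untested here, so adjust to your own setup):

    emerge --ask sys-fs/zfs-kmod sys-fs/zfs   # kernel modules + userland tools
    modprobe zfs                              # load the module
    zpool status                              # confirm the tools can talk to the module
    # after each kernel upgrade, rebuild with: emerge @module-rebuild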


-- 
Neil Bothwick

Get your grubby hands off my tagline! I stole it first!




Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Grant
 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?

 Not ready to take the ZFS plunge? That would greatly reduce the complexity
 of RAID+LVM, since ZFS best practice is to set your hardware raid controller
 to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
 has mucho better tools). That is my next big project for when I switch to my
 next new server.

 I'm just hoping I can get comfortable with a process for getting ZFS
 compiled into the kernel that is workable/tenable for ongoing kernel updates
 (with minimal fear of breaking things due to a complex/fragile
 methodology)...

 Can't you just emerge zfs-kmod?  Or maybe you're trying to do it
 without module support?

 @tanstaafl's kernels have no module support.

OK, but why exclude module support?

- Grant



Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Tanstaafl

On 2013-09-17 1:36 AM, Pandu Poluan pa...@poluan.info wrote:

On Sun, Sep 15, 2013 at 6:10 PM, Grant emailgr...@gmail.com wrote:

It sounds like ZFS isn't included in the mainline kernel.  Is it on its way in?



Unlikely. There has been a discussion on that in this list, and there
is some concern that ZFS' license (CDDL) is not compatible with the
Linux kernel license (GPL), so never the twain shall be integrated.


You must have missed the part where it was determined that integrating ZFS is 
easily doable via a simple ebuild (they said it didn't even need to be in 
an overlay) containing the code to do the integration at compile time.


So, yes, it *could* easily be done without any fear of licensing issues.

The question is, will someone with the knowledge and skills to do it 
right also have the desire to do the work?




Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Alan McKinnon
On 17/09/2013 11:49, Grant wrote:
 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?

 Not ready to take the ZFS plunge? That would greatly reduce the complexity
 of RAID+LVM, since ZFS best practice is to set your hardware raid 
 controller
 to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
 has mucho better tools). That is my next big project for when I switch to 
 my
 next new server.

 I'm just hoping I can get comfortable with a process for getting ZFS
 compiled into the kernel that is workable/tenable for ongoing kernel 
 updates
 (with minimal fear of breaking things due to a complex/fragile
 methodology)...

 Can't you just emerge zfs-kmod?  Or maybe you're trying to do it
 without module support?

 @tanstaafl's kernels have no module support.
 
 OK, but why exclude module support?



Noo, please for the love of god and all that's holy, let's not
go there again :-)

tanstaafl has his reasons for using fully monolithic kernels without
module support. This works for him and nothing will dissuade him from
this strategy - we tried, we really did. He won.


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Michael Orlitzky
On 09/17/2013 02:43 AM, Grant wrote:
 
 Is multi-mirroring (3-disk RAID1) support without RAID0 common in
 hardware RAID cards?
 

Nope. Not at my pay grade, anyway. The only ones I know of are the
Hewlett-Packard MSA/EVA, but they don't call it plain RAID1. They allow
you to create virtual disk groups, though, so you can mirror a mirror to
achieve the same effect.

The only other place I've seen it in real life is Linux's mdraid.
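
If anyone wants to try it, a sketch of the mdraid incantation (device names
are placeholders):

    mdadm --create /dev/md0 --level=1 --raid-devices=3 \
          /dev/sda1 /dev/sdb1 /dev/sdc1
    # or a three-way mirror plus a standby hot spare:
    mdadm --create /dev/md0 --level=1 --raid-devices=3 --spare-devices=1 \
          /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1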




Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Grant
 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?

 Not ready to take the ZFS plunge? That would greatly reduce the complexity
 of RAID+LVM, since ZFS best practice is to set your hardware raid 
 controller
 to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
 has mucho better tools). That is my next big project for when I switch to 
 my
 next new server.

 I'm just hoping I can get comfortable with a process for getting ZFS
 compiled into the kernel that is workable/tenable for ongoing kernel 
 updates
 (with minimal fear of breaking things due to a complex/fragile
 methodology)...

 Can't you just emerge zfs-kmod?  Or maybe you're trying to do it
 without module support?

 @tanstaafl's kernels have no module support.

 OK, but why exclude module support?

 Noo, please for the love of god and all that's holy, let's not
 go there again :-)

Oopsie!

 tanstaafl has his reasons for using fully monolithic kernels without
 module support. This works for him and nothing will dissuade him from
 this strategy - we tried, we really did. He won.

It must be for security.

- Grant



Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Grant
 Is multi-mirroring (3-disk RAID1) support without RAID0 common in
 hardware RAID cards?

 Nope. Not at my pay grade, anyway. The only ones I know of are the
 Hewlett-Packard MSA/EVA, but they don't call it plain RAID1. They allow
 you to create virtual disk groups, though, so you can mirror a mirror to
 achieve the same effect.

 The only other place I've seen it in real life is Linux's mdraid.

Thanks Michael.  This really pushes me in the ZFS direction.

- Grant



Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Alan McKinnon
On 17/09/2013 15:11, Grant wrote:
 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?

 Not ready to take the ZFS plunge? That would greatly reduce the 
 complexity
 of RAID+LVM, since ZFS best practice is to set your hardware raid 
 controller
 to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
 has mucho better tools). That is my next big project for when I switch 
 to my
 next new server.

 I'm just hoping I can get comfortable with a process for getting ZFS
 compiled into the kernel that is workable/tenable for ongoing kernel 
 updates
 (with minimal fear of breaking things due to a complex/fragile
 methodology)...

 Can't you just emerge zfs-kmod?  Or maybe you're trying to do it
 without module support?

 @tanstaafl's kernels have no module support.

 OK, but why exclude module support?

 Noo, please for the love of god and all that's holy, let's not
 go there again :-)
 
 Oopsie!
 
 tanstaafl has his reasons for using fully monolithic kernels without
 module support. This works for him and nothing will dissuade him from
 this strategy - we tried, we really did. He won.
 
 It must be for security.


Essentially, yes. He once explained his position to me nicely - he knows
exactly what hardware he has and what he needs, it never changes and he
never needs to tweak it on the fly. Once the driver is in the running
kernel, it stays there till a reboot. He might get some benefit from modules,
but he doesn't need them.

That was the point at which I realised I didn't have an answer in his world.


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Alan McKinnon
On 17/09/2013 15:13, Grant wrote:
 Is multi-mirroring (3-disk RAID1) support without RAID0 common in
 hardware RAID cards?

 Nope. Not at my pay grade, anyway. The only ones I know of are the
 Hewlett-Packard MSA/EVA, but they don't call it plain RAID1. They allow
 you to create virtual disk groups, though, so you can mirror a mirror to
 achieve the same effect.

 The only other place I've seen it in real life is Linux's mdraid.
 
 Thanks Michael.  This really pushes me in the ZFS direction.



If you need another gentle push, ZFS checksums everything it does as it
does it, so it catches data corruption that almost all other systems
can't detect. And it doesn't have write holes either.
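
If you want to see the checksumming earn its keep, a scrub re-reads every
allocated block and verifies it against its checksum (pool name 'tank' is
just an example):

    zpool scrub tank        # walk the pool and verify every block
    zpool status -v tank    # scrub progress, plus any files with unrecoverable errors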


A very good analogy I find is Google, and why Google decided to take the
software/hardware route they did (it simplifies down to scalability).
Hardware will break and at their scale it will do it three times a day.
Google detects and works around this in software.

ZFS's approach to how to store stuff on disk in an fs is similar to
Google's approach to storing search data across the world. With the same
benefit - take the uber-expensive hardware and chuck it. Use regular
stuff instead and use it smart.



-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] {OT} Need a new server

2013-09-16 Thread Grant
 Instead, how about a 6-drive RAID 10 array with no hot spare?  My
 guess is this would mean much greater fault-tolerance both overall and
 during the rebuild process (once a new drive is swapped in).  That
 would mean not only potentially increased uptime but decreased
 monitoring responsibility.

 RAID10 with six drives can be implemented one of two ways,

   Type 1: A B A B A B

   Type 2: A B C A B C

 If your controller can do Type 1, then going with six drives gives you
 better fault tolerance than four with a hot spare.

 I've only ever seen Type 2, so I would bet that's what your controller
 will do. It's easy to check: set up RAID10 with four drives, then with
 six. Did the resulting array get bigger? If so, it's Type 2.

 If it's Type 2, then four drives with a spare is equally tolerant.
 Slightly better, even, if you take into account the reduced probability
 of 2/5 of the drives failing compared to 2/6.

Thank you very much for this info.  I had no idea.  Is there another
label for these RAID types besides Type 1 and Type 2?  I can't
find reference to those designations via Google.

- Grant



Re: [gentoo-user] {OT} Need a new server

2013-09-16 Thread Tanstaafl

On 2013-09-15 7:15 AM, Grant emailgr...@gmail.com wrote:

You would prefer 4-drive RAID 10 plus a hot spare to 6-drive RAID 10?
Isn't 6-drive RAID 10 superior in every way except for cost (1 extra
drive)?


I would prefer X-drive RAID10 plus hot spare in *any* situation.

But, this always loses 50+% of the potential storage space available...

Again, I'd love to see some comparisons of rebuild times on RAID5/RAID6 
systems, using slow SATA drives vs fast 15K SAS drives vs fastest SSD 
drives.


The problem with RAID5/6 has always been, the larger the array, the 
longer the rebuild times - and the longer the rebuild times, the larger 
the chance of another drive failure during the rebuild.




Re: [gentoo-user] {OT} Need a new server

2013-09-16 Thread Michael Orlitzky
On 09/16/2013 02:49 AM, Grant wrote:

 If it's Type 2, then four drives with a spare is equally tolerant.
 Slightly better, even, if you take into account the reduced probability
 of 2/5 of the drives failing compared to 2/6.
 
 Thank you very much for this info.  I had no idea.  Is there another
 label for these RAID types besides Type 1 and Type 2?  I can't
 find reference to those designations via Google.

Nothing standard. RAID 10 pretty intuitively comes from RAID 1+0, which
can be read aloud to figure out what it means: RAID 1, plus RAID 0,
i.e. you do RAID 1, then stripe (RAID 0) the result.

The trick is that RAID 1 can refer to either mirroring (2-way) or
multi-mirroring (3-way) [1]. In the end, the designation is the same:
RAID 1. So if you stripe either of them, you wind up with RAID 10. In
other words, RAID 10 doesn't tell you which one you're going to get.

If I ever find a controller that will do multi-mirroring + RAID 0, I'll
let you know what they call it =)


[1] http://www.snia.org/tech_activities/standards/curr_standards/ddf




Re: [gentoo-user] {OT} Need a new server

2013-09-16 Thread Pandu Poluan
On Sun, Sep 15, 2013 at 6:10 PM, Grant emailgr...@gmail.com wrote:
 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?

 Not ready to take the ZFS plunge? That would greatly reduce the complexity
 of RAID+LVM, since ZFS best practice is to set your hardware raid controller
 to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
 has mucho better tools). That is my next big project for when I switch to my
 next new server.

 I'm just hoping I can get comfortable with a process for getting ZFS
 compiled into the kernel that is workable/tenable for ongoing kernel updates
 (with minimal fear of breaking things due to a complex/fragile
 methodology)...

 That sounds interesting.  I don't think I'm up to it this time around,
 but ZFS manages a RAID array better than a good hardware card?


Yes. If you use ZFS to wrestle a JBOD array into its version of
RAID1+0, when it comes time for resilvering (i.e., rebuilding a failed
drive), ZFS smartly only copies the used blocks and skips over unused
blocks.
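
Roughly what that looks like in practice (pool and device names made up):

    zpool replace tank sdc sde   # swap the failed disk sdc for the new disk sde
    zpool status tank            # watch the resilver; only allocated blocks get copied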

 It sounds like ZFS isn't included in the mainline kernel.  Is it on its way 
 in?


Unlikely. There has been a discussion on that in this list, and there
is some concern that ZFS' license (CDDL) is not compatible with the
Linux kernel license (GPL), so never the twain shall be integrated.

That said, if your kernel supports modules, it's a piece of cake to
compile the ZFS modules on your own. @ryao has a zfs-overlay you can
use to emerge ZFS as a module.

If you have configured your kernel to not support modules, it's a bit
more work, but ZFS can still be integrated statically into the kernel.

But the onus is on us ZFS users to do the necessary steps.


Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] {OT} Need a new server

2013-09-15 Thread Grant
 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?

 Not ready to take the ZFS plunge? That would greatly reduce the complexity
 of RAID+LVM, since ZFS best practice is to set your hardware raid controller
 to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
 has mucho better tools). That is my next big project for when I switch to my
 next new server.

 I'm just hoping I can get comfortable with a process for getting ZFS
 compiled into the kernel that is workable/tenable for ongoing kernel updates
 (with minimal fear of breaking things due to a complex/fragile
 methodology)...

That sounds interesting.  I don't think I'm up to it this time around,
but ZFS manages a RAID array better than a good hardware card?

It sounds like ZFS isn't included in the mainline kernel.  Is it on its way in?

- Grant



Re: [gentoo-user] {OT} Need a new server

2013-09-15 Thread Grant
 http://blog.open-e.com/why-a-hot-spare-hard-disk-is-a-bad-idea/

 Based on our long years of experience we have learned that during a
 RAID rebuild the probability of an additional drive failure is quite
 high – a rebuild is stressful on the existing drives.

 This is NOT true on a RAID 10... a rebuild is only stressful on the other
 drive in the mirrored pair, not the other drives.

 But, it is true for that one drive.

Why wouldn't it be true of RAID 10?  Each drive only has one mirror,
so if a drive fails, its only mirror will be stressed by the rebuild,
won't it?

 That said, it would be nice if the auto rebuild could be scripted such that
 a backup could be triggered and the auto-rebuild queued until the backup was
 complete.

 But, here is the problem there... a backup will stress the drive almost as
 much as the rebuild, because all the rebuild does is read/copy the contents
 of the one drive to the other one (ie, it re-mirrors).

 Instead, how about a 6-drive RAID 10 array with no hot spare?  My
 guess is this would mean much greater fault-tolerance both overall and
 during the rebuild process (once a new drive is swapped in).  That
 would mean not only potentially increased uptime but decreased
 monitoring responsibility.

 I would still prefer a hot spare to not... in the real world, it has saved
 me exactly 3 out of 3 times...

You would prefer 4-drive RAID 10 plus a hot spare to 6-drive RAID 10?
Isn't 6-drive RAID 10 superior in every way except for cost (1 extra
drive)?

- Grant



Re: [gentoo-user] {OT} Need a new server

2013-09-14 Thread Alan McKinnon
On 13/09/2013 23:39, Grant wrote:
 Exactly what RAID controller are you getting?
 
  My personal rule of thumb: on-board RAID controllers are not worth the
  silicon they are written on. Decent hardware raid controllers do exist,
  but they plug into big meaty slots and cost a fortune. By a fortune I
  mean a number that will make you gulp then head off to the nearest pub
  and make the barkeep's day. (Expensive, very expensive).
 
  Sans such decent hardware, best bet is always to do it using Linux
  software RAID, and the Gentoo guide is a fine start.
 I'm told it will likely be an Adaptec 7000 series controller.
 


I'm not familiar with that model, but the white paper at the vendor's
site indicates it's of the decent variety. You might as well use it then :-)


Adaptec's stuff is rather good on the whole, we use exclusively Dell and
Adaptec is by far the most common controller shipped. I can only recall
one hardware failure or problem since 2003 over 300+ machines. The odds
are in your favour today :-)




-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] {OT} Need a new server

2013-09-14 Thread Grant
 Would the hot spare be in case I lose 2 drives at once?  Isn't that
 extraordinarily unlikely?

 Not really. One fails and you don't notice for a while, or it takes a while 
 to
 recover from it. Then a second one fails. You're up queer street.

 I like to do RAID6 now because I've been burned by this. The hot spare
 did work and automatically start rebuilding, but another drive failed
 during the rebuild process. Not that RAID6 will help if three drives
 fail, but hey.

This article references the same scenario:

http://blog.open-e.com/why-a-hot-spare-hard-disk-is-a-bad-idea/

Based on our long years of experience we have learned that during a
RAID rebuild the probability of an additional drive failure is quite
high – a rebuild is stressful on the existing drives.

Instead, how about a 6-drive RAID 10 array with no hot spare?  My
guess is this would mean much greater fault-tolerance both overall and
during the rebuild process (once a new drive is swapped in).  That
would mean not only potentially increased uptime but decreased
monitoring responsibility.

- Grant



Re: [gentoo-user] {OT} Need a new server

2013-09-14 Thread Grant
 Are modern SSDs reliable enough to negate the need for mirroring or do
 they still crap out?

 I don't have any experience with SSDs, but a general principle: ignore
 what anyone says, mirror them anyway, and make lots of backups.

I'm onboard with that.

- Grant



Re: [gentoo-user] {OT} Need a new server

2013-09-14 Thread Grant
 Exactly what RAID controller are you getting?
 
  My personal rule of thumb: on-board RAID controllers are not worth the
  silicon they are written on. Decent hardware raid controllers do exist,
  but they plug into big meaty slots and cost a fortune. By a fortune I
  mean a number that will make you gulp then head off to the nearest pub
  and make the barkeep's day. (Expensive, very expensive).
 
  Sans such decent hardware, best bet is always to do it using Linux
  software RAID, and the Gentoo guide is a fine start.
 I'm told it will likely be an Adaptec 7000 series controller.

 I'm not familiar with that model, but the white paper at the vendor's
 site indicates it's of the decent variety. You might as well use it then :-)

 Adaptec's stuff is rather good on the whole, we use exclusively Dell and
 Adaptec is by far the most common controller shipped. I can only recall
 one hardware failure or problem since 2003 over 300+ machines. The odds
 are in your favour today :-)

Can a controller like that handle a 6-drive RAID 10 array?

Is a hot spare handled by the controller or is it configured in the OS?

- Grant



Re: [gentoo-user] {OT} Need a new server

2013-09-14 Thread Alan McKinnon
On 14/09/2013 10:54, Grant wrote:
 Exactly what RAID controller are you getting?

 My personal rule of thumb: on-board RAID controllers are not worth the
 silicon they are written on. Decent hardware raid controllers do exist,
 but they plug into big meaty slots and cost a fortune. By a fortune I
 mean a number that will make you gulp then head off to the nearest pub
 and make the barkeep's day. (Expensive, very expensive).

 Sans such decent hardware, best bet is always to do it using Linux
 software RAID, and the Gentoo guide is a fine start.
 I'm told it will likely be an Adaptec 7000 series controller.

 I'm not familiar with that model, but the white paper at the vendor's
 site indicates it's of the decent variety. You might as well use it then :-)

 Adaptec's stuff is rather good on the whole, we use exclusively Dell and
 Adaptec is by far the most common controller shipped. I can only recall
 one hardware failure or problem since 2003 over 300+ machines. The odds
 are in your favour today :-)
 
 Can a controller like that handle a 6-drive RAID 10 array?
 
 Is a hot spare handled by the controller or is it configured in the OS?


The problem with questions of that nature is that the answer is always
"It depends".

With hardware, the vendor can release almost any imaginable
configuration and it's up to them what they want to build into their
product and the variations are endless.

Typically, a Series designation is a bunch of products built to a
certain form factor with the same basic silicon on board. The difference
in the models is how many drives they support and the feature list.
Series 7000 tells us very little. You will need to get the exact model
number from your hardware vendor then consult Adaptec's tech docs to
find out the supported feature set


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] {OT} Need a new server

2013-09-14 Thread Grant
 I'm told it will likely be an Adaptec 7000 series controller.

 Can a controller like that handle a 6-drive RAID 10 array?

 Is a hot spare handled by the controller or is it configured in the OS?

 The problem with questions of that nature is that the answer is always
 "It depends".

 With hardware, the vendor can release almost any imaginable
 configuration and it's up to them what they want to build into their
 product and the variations are endless.

 Typically, a Series designation is a bunch of products built to a
 certain form factor with the same basic silicon on board. The difference
 in the models is how many drives they support and the feature list.
 Series 7000 tells us very little. You will need to get the exact model
 number from your hardware vendor then consult Adaptec's tech docs to
 find out the supported feature set

Yeah, that should have been a question for the host, sorry about that.

- Grant



Re: [gentoo-user] {OT} Need a new server

2013-09-14 Thread Tanstaafl

On 2013-09-13 4:00 PM, Grant emailgr...@gmail.com wrote:

Is the Gentoo Software RAID + LVM guide the best place for RAID
install info if I'm not using LVM and I'll have a hardware RAID
controller?


Not ready to take the ZFS plunge? That would greatly reduce the 
complexity of RAID+LVM, since ZFS best practice is to set your hardware 
raid controller to JBOD mode and let ZFS take care of the RAID - and no 
LVM required (ZFS has mucho better tools). That is my next big project 
for when I switch to my next new server.
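
As a sketch of what I have in mind (pool, device and dataset names are only
examples): the controller's JBOD disks become ZFS mirror pairs, and datasets
take the place of LVM volumes:

    zpool create tank mirror sda sdb mirror sdc sdd   # striped mirrors, ZFS's RAID10
    zfs create -o mountpoint=/home tank/home          # datasets instead of LVM LVs
    zfs set compression=lz4 tank                      # one of those 'mucho better tools'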


I'm just hoping I can get comfortable with a process for getting ZFS 
compiled into the kernel that is workable/tenable for ongoing kernel 
updates (with minimal fear of breaking things due to a complex/fragile 
methodology)...




Re: [gentoo-user] {OT} Need a new server

2013-09-14 Thread Tanstaafl

On 2013-09-14 4:50 AM, Grant emailgr...@gmail.com wrote:

http://blog.open-e.com/why-a-hot-spare-hard-disk-is-a-bad-idea/

Based on our long years of experience we have learned that during a
RAID rebuild the probability of an additional drive failure is quite
high – a rebuild is stressful on the existing drives.


This is NOT true on a RAID 10... a rebuild is only stressful on the 
other drive in the mirrored pair, not the other drives.


But, it is true for that one drive.

That said, it would be nice if the auto rebuild could be scripted such 
that a backup could be triggered and the auto-rebuild queued until the 
backup was complete.


But, here is the problem there... a backup will stress the drive almost 
as much as the rebuild, because all the rebuild does is read/copy the 
contents of the one drive to the other one (ie, it re-mirrors).



Instead, how about a 6-drive RAID 10 array with no hot spare?  My
guess is this would mean much greater fault-tolerance both overall and
during the rebuild process (once a new drive is swapped in).  That
would mean not only potentially increased uptime but decreased
monitoring responsibility.


I would still prefer a hot spare to not... in the real world, it has 
saved me exactly 3 out of 3 times...




Re: [gentoo-user] {OT} Need a new server

2013-09-14 Thread Tanstaafl

On 2013-09-13 5:47 PM, Grant emailgr...@gmail.com wrote:

Are modern SSDs reliable enough to negate the need for mirroring or do
they still crap out?


You definitely want to mirror, but I'd be very interested in some 
statistics comparing rebuild times on a RAID5 and RAID 6 with SSD's, vs 
15K SAS drives, vs 7200 SATA drives.


My gut feeling is, the rebuild times on SSDs just might eliminate the 
biggest problem with RAID5/6, which has always been, the more 
drives/larger the RAID, the longer the rebuild times when (not if) you 
lose a drive.


With regular hard drives, rebuild times can be DAYS using SATA drives. 
If this can be reduced to a few hours (or less?) if using SSDs, then I'd 
seriously consider using RAID 6, since you don't lose nearly as much 
usable storage as you do when using RAID10 (you always lose 50%).
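
Back-of-envelope only (the throughput figures are assumptions, and a parity
rebuild under real load will take much longer than a plain re-mirror):

    capacity_gb=2000   # one 2 TB member
    rate_mb_s=120      # assumed sustained rate of a 7200 rpm SATA drive
    echo "$(( capacity_gb * 1024 / rate_mb_s / 3600 )) hours"   # ~4-5 hours, best case
    # the same member on a ~400 MB/s SSD would re-mirror in under 90 minutes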


But of course, with ZFS, most of these questions become moot...

If you can, I'd go with JBOD and ZFS RAID...



Re: [gentoo-user] {OT} Need a new server

2013-09-14 Thread Michael Orlitzky
On 09/14/2013 04:50 AM, Grant wrote:
 
 Instead, how about a 6-drive RAID 10 array with no hot spare?  My
 guess is this would mean much greater fault-tolerance both overall and
 during the rebuild process (once a new drive is swapped in).  That
 would mean not only potentially increased uptime but decreased
 monitoring responsibility.
 

RAID10 with six drives can be implemented one of two ways,

  Type 1: A B A B A B

  Type 2: A B C A B C

If your controller can do Type 1, then going with six drives gives you
better fault tolerance than four with a hot spare.

I've only ever seen Type 2, so I would bet that's what your controller
will do. It's easy to check: set up RAID10 with four drives, then with
six. Did the resulting array get bigger? If so, it's Type 2.

If it's Type 2, then four drives with a spare is equally tolerant.
Slightly better, even, if you take into account the reduced probability
of 2/5 of the drives failing compared to 2/6.

No one believes me when I say this, so here are all possibilities for a
two-drive failure enumerated for four-drive Type 2 (with a spare) and
six-drive Type 2. Both have a 20% "uh oh" ratio.

Layout: A B A B S

1.  A-bad B-bad A B S -- OK
2.  A-bad B A-bad B S -- UH OH
3.  A-bad B A B-bad S -- OK
4.  A-bad B A B S-bad -- OK
5.  A B-bad A-bad B S -- OK
6.  A B-bad A B-bad S -- UH OH
7.  A B-bad A B S-bad -- OK
8.  A B A-bad B-bad S -- OK
9.  A B A-bad B S-bad -- OK
10. A B A B-bad S-bad -- OK

Layout: A B C A B C

1.  A-bad B-bad C A B C -- OK
2.  A-bad B C-bad A B C -- OK
3.  A-bad B C A-bad B C -- UH OH
4.  A-bad B C A B-bad C -- OK
5.  A-bad B C A B C-bad -- OK
6.  A B-bad C-bad A B C -- OK
7.  A B-bad C A-bad B C -- OK
8.  A B-bad C A B-bad C -- UH OH
9.  A B-bad C A B C-bad -- OK
10. A B C-bad A-bad B C -- OK
11. A B C-bad A B-bad C -- OK
12. A B C-bad A B C-bad -- UH OH
13. A B C A-bad B-bad C -- OK
14. A B C A-bad B C-bad -- OK
15. A B C A B-bad C-bad -- OK
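
If you'd rather not take the tables' word for it, here's a quick sketch
(plain sh, same layouts) that counts the two-drive failure pairs and how
many of them lose data:

    #!/bin/sh
    # A pair of failures is fatal when both failed slots hold the same mirror member.
    count() {
        label=$2
        set -- $1
        total=0; fatal=0; i=1
        for x in "$@"; do
            j=1
            for y in "$@"; do
                if [ "$j" -gt "$i" ]; then
                    total=$((total + 1))
                    [ "$x" = "$y" ] && fatal=$((fatal + 1))
                fi
                j=$((j + 1))
            done
            i=$((i + 1))
        done
        echo "$label: $fatal fatal out of $total pairs"
    }
    count "A B A B S" "4 drives + spare"   # 2/10 = 20%
    count "A B C A B C" "6-drive Type 2"   # 3/15 = 20%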




Re: [gentoo-user] {OT} Need a new server

2013-09-13 Thread Alan McKinnon
On 13/09/2013 22:00, Grant wrote:
 It's time to switch hosts.  I'm looking at the following:
 
 Dual Xeon E5-2690
 32GB RAM
 4x SSD RAID10
 
 This would be my first experience with multiple CPUs and RAID.  Advice
 on any of the following would be greatly appreciated.
 
 Are there any administrative variations for a dual-CPU system or do I
 just need to make sure I enable the right kernel option(s)?

Just use the right kernel options, nothing special needs to be done.

Individual packages may or may not benefit from lots of cpus, such
packages must be configured individually of course


 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?

Exactly what RAID controller are you getting?

My personal rule of thumb: on-board RAID controllers are not worth the
silicon they are written on. Decent hardware raid controllers do exist,
but they plug into big meaty slots and cost a fortune. By a fortune I
mean a number that will make you gulp then head off to the nearest pub
and make the barkeep's day. (Expensive, very expensive).

Sans such decent hardware, best bet is always to do it using Linux
software RAID, and the Gentoo guide is a fine start.

 http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml
 
 Since RAM is so nice for buffers/cache, how do I know when to stop
 adding it to my server?

When more RAM stops making a difference.

The proper answer to your question is mu, meaning it can't really be
satisfactorily answered with the info available. Only you can really
answer it, and only after you have examined your system in
detail. But, assuming you will use this hardware for mostly routine
normal tasks, 32G RAM is heaps and should be plenty for a long time to come.

Nothing you've ever posted leads me to believe you need crazy amounts of
RAM. It's not like your business model is to eg load every public blog
at wordpress.com with all comments and store it all in an in-memory
database :-)

 
 Can I count on this system to keep running if I lose an SSD?

Yes. If you do RAID even half-way right, you can always tolerate the
loss of one disk out of four. It's only if you do striping that you have
no redundancy at all


 
 Is a 100M uplink enough if this is my only system on the LAN?

You mean 100M ethernet right?

100M is actually a lot of traffic. However, if you have a file server
and you have on it big files over 1G, it can become a drag waiting that
extra minute to push 1G through the network.

Your NICs on that hardware are 99.9% guaranteed to be 1G. It is well
worth the money to replace your switch with a 1000Mb model and invest in
decent cables. It's not expensive (a fraction of what that hardware will
cost) and you will be glad you did it, even if all the other clients are
100M

Law of diminishing returns doesn't apply here. It's a whole lot of bang
for relatively little buck

 
 Is hyperthreading worthwhile?

Yes. Horror stories about hyperthreading being bad and badly implemented
date back to 2004 or thereabouts. All that stuff got fixed.

Some software out there does not like current hyperthreading models, but
these are a) rather specialized and b) the issue is known and the vendor
will tell you upfront.

Software that uses threads in the modern style tends to fly if
hyperthreading is available. But again, this is a very general answer
and YMMV

 
 Any opinions on Soft Layer?

Never heard of it.
What is it?



-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] {OT} Need a new server

2013-09-13 Thread Michael Orlitzky
On 09/13/2013 04:00 PM, Grant wrote:
 It's time to switch hosts.  I'm looking at the following:
 
 Dual Xeon E5-2690
 32GB RAM
 4x SSD RAID10
 
 This would be my first experience with multiple CPUs and RAID.  Advice
 on any of the following would be greatly appreciated.
 
 Are there any administrative variations for a dual-CPU system or do I
 just need to make sure I enable the right kernel option(s)?

Just enable it in the kernel.


 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?
 
 http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml

No need. Hardware RAID is handled on the RAID controller. Gentoo won't
even know about it.

LVM is (optionally) up to you.


 Since RAM is so nice for buffers/cache, how do I know when to stop
 adding it to my server?

Run `htop` every once in a while. If you're using it all and you're not
out of money, add more RAM. Otherwise, stop.


 Can I count on this system to keep running if I lose an SSD?

Yes. RAID10 both stripes and mirrors. So you can lose one, and it's
definitely mirrored on another drive. Now you have three drives. If you
lose another one, is it mirrored? Well, maybe, if you're lucky. There's
a 2/3 chance that the second drive you lose will be one of the remaining
mirror pair.

Recommendation: add a hot spare to the system.


 Is a 100M uplink enough if this is my only system on the LAN?

If you're using it all and you're not out of money, add more bandwidth.
Otherwise, stop.





Re: [gentoo-user] {OT} Need a new server

2013-09-13 Thread Grant
 It's time to switch hosts.  I'm looking at the following:

 Dual Xeon E5-2690
 32GB RAM
 4x SSD RAID10

 This would be my first experience with multiple CPUs and RAID.  Advice
 on any of the following would be greatly appreciated.

 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?

 Exactly what RAID controller are you getting?

 My personal rule of thumb: on-board RAID controllers are not worth the
 silicon they are written on. Decent hardware raid controllers do exist,
 but they plug into big meaty slots and cost a fortune. By a fortune I
 mean a number that will make you gulp then head off to the nearest pub
 and make the barkeep's day. (Expensive, very expensive).

 Sans such decent hardware, best bet is always to do it using Linux
 software RAID, and the Gentoo guide is a fine start.

I'm told it will likely be an Adaptec 7000 series controller.

 Since RAM is so nice for buffers/cache, how do I know when to stop
 adding it to my server?

 When more RAM stops making a difference.

 The proper answer to your question is mu, meaning it can't really be
 satisfactorily answered with the info available. Only you can really
 answer it, and only after you have examined your system in
 detail. But, assuming you will use this hardware for mostly routine
 normal tasks, 32G RAM is heaps and should be plenty for a long time to come.

 Nothing you've ever posted leads me to believe you need crazy amounts of
 RAM. It's not like your business model is to eg load every public blog
 at wordpress.com with all comments and store it all in an in-memory
 database :-)

In that case maybe I'll go with 16GB instead.  It's easy to add more
later I suppose.

 Any opinions on Soft Layer?

 Never heard of it.
 What is it?

It's a host in the US.  I should have said so.

http://www.softlayer.com

- Grant



Re: [gentoo-user] {OT} Need a new server

2013-09-13 Thread Grant
 It's time to switch hosts.  I'm looking at the following:

 Dual Xeon E5-2690
 32GB RAM
 4x SSD RAID10

 This would be my first experience with multiple CPUs and RAID.  Advice
 on any of the following would be greatly appreciated.

 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?

 http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml

 No need. Hardware RAID is handled on the RAID controller. Gentoo won't
 even know about it.

I had no idea.  How awesome.  So the entire array shows up as /dev/sda
when using a real hardware controller?  Just enable an extra kernel
config option or two and it works?

 Can I count on this system to keep running if I lose an SSD?

 Yes. RAID10 both stripes and mirrors. So you can lose one, and it's
 definitely mirrored on another drive. Now you have three drives. If you
 lose another one, is it mirrored? Well, maybe, if you're lucky. There's
 a 2/3 chance that the second drive you lose will be one of the remaining
 mirror pair.

 Recommendation: add a hot spare to the system.

Would the hot spare be in case I lose 2 drives at once?  Isn't that
extraordinarily unlikely?

Are modern SSDs reliable enough to negate the need for mirroring or do
they still crap out?

- Grant



Re: [gentoo-user] {OT} Need a new server

2013-09-13 Thread thegeezer
On 09/13/2013 09:00 PM, Grant wrote:
 It's time to switch hosts.  I'm looking at the following:

 Dual Xeon E5-2690
 32GB RAM
 4x SSD RAID10
nice
 Can I count on this system to keep running if I lose an SSD?
if a built in raid controller, yes. one thing you might want to check is
linux tools for management -- you wouldn't want to have to reboot just to go
into the raid tools and check if it requires a rebuild, and you want to
be able to schedule regular scrubs and maybe get a report.
you might also like to consider OOB management such as IPMI; dell and HP
do very lovely web based control panels that are independent of your
main o/s, allowing you to get alerts when bad things happen, and
crucially watch the reboot process from remote locations.
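
for example, with a BMC that speaks standard IPMI you can poke it from
another box with ipmitool (address and credentials are placeholders):

    ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret chassis status  # power state
    ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret sel list        # hardware event log
    ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret sdr             # sensor readings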

 Is a 100M uplink enough if this is my only system on the LAN?
gigabit NICs are pretty cheap, i'd be surprised if any new machine didn't
have gigabit. i would suggest that if you ever want to transfer more than
10GB of data across the network, you should request gigabit

 Is hyperthreading worthwhile?

 Any opinions on Soft Layer?

 - Grant
are you putting this server in colocation at softlayer? if so OOB is a
requirement, and gigabit is not




Re: [gentoo-user] {OT} Need a new server

2013-09-13 Thread Grant
 It's time to switch hosts.  I'm looking at the following:

 Dual Xeon E5-2690
 32GB RAM
 4x SSD RAID10
 nice
 Can I count on this system to keep running if I lose an SSD?
 if a built in raid controller, yes. one thing you might want to check is
 linux tools for management -- you wouldn't want to reboot just go to go
 into the raid tools and check if it requires a rebuild, and you want to
 be able to schedule regular scrubs and maybe get a report.
 you might also like to consider OOB management such as IPMI, dell and HP
 do very lovely web based control panels that are independent of your
 main o/s allowing you to get alerts when bad things happen, and
 crucially watch reboot process from remote locations.

Good idea, I will look into IPMI.

 Is a 100M uplink enough if this is my only system on the LAN?
 gigabit NICs are pretty cheap i'd be surprised if any new machine didn't
 have gigabit. i would suggest if you ever want to transfer data over
 10GB across the network you should request gigabit

I should be OK with 100M.  I shouldn't be copying anything across the LAN.

 Any opinions on Soft Layer?

 - Grant
 are you putting this server in colocation at softlayer? if so OOB is a
 requirement, and gigabit is not

I decided against colocation because I don't want to be responsible
for fixing hardware problems.  It would be a hosted machine.

- Grant



Re: [gentoo-user] {OT} Need a new server

2013-09-13 Thread Peter Humphrey
On Friday 13 Sep 2013 14:47:35 Grant wrote:

 Would the hot spare be in case I lose 2 drives at once?  Isn't that
 extraordinarily unlikely?

Not really. One fails and you don't notice for a while, or it takes a while to 
recover from it. Then a second one fails. You're up queer street.

-- 
Regards,
Peter




Re: [gentoo-user] {OT} Need a new server

2013-09-13 Thread Daniel Frey
On 09/13/2013 03:47 PM, Peter Humphrey wrote:
 On Friday 13 Sep 2013 14:47:35 Grant wrote:
 
 Would the hot spare be in case I lose 2 drives at once?  Isn't that
 extraordinarily unlikely?
 
 Not really. One fails and you don't notice for a while, or it takes a while 
 to 
 recover from it. Then a second one fails. You're up queer street.
 

I like to do RAID6 now because I've been burned by this. The hot spare
did work and automatically start rebuilding, but another drive failed
during the rebuild process. Not that RAID6 will help if three drives
fail, but hey.

Another thing I've read is that firmware bugs on SSDs can wipe out a
whole array. I suspect it is when the raid has all the same
manufacturer/model in it and a bug appears on multiple drives killing
the array. I can't remember the details, but I do believe the rebuild
procedure causes lots of writes and the drives bug out because of all
the writes. I'll admit this is not something that I've directly seen but
you may want to consider it, maybe even having 2 sets of 2 different
models in the array. My google-fu is failing me, I can't find that
article where I read this.

Dan



Re: [gentoo-user] {OT} Need a new server

2013-09-13 Thread Michael Orlitzky
On 09/13/2013 05:47 PM, Grant wrote:
 
 I had no idea.  How awesome.  So the entire array shows up as /dev/sda
 when using a real hardware controller?  Just enable an extra kernel
 config option or two and it works?
 

Yep.
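
Roughly (assuming the card is one the aacraid driver covers, which is worth
double-checking for your exact model):

    # kernel: Device Drivers -> SCSI device support -> SCSI low-level drivers
    #         -> Adaptec AACRAID support  (CONFIG_SCSI_AACRAID)
    lspci -nn | grep -i raid   # confirm the controller shows up on the PCI bus
    lsblk                      # the whole array appears as one disk, e.g. /dev/sda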


 Yes. RAID10 both stripes and mirrors. So you can lose one, and it's
 definitely mirrored on another drive. Now you have three drives. If you
 lose another one, is it mirrored? Well, maybe, if you're lucky. There's
 a 2/3 chance that the second drive you lose will be one of the remaining
 mirror pair.

 Recommendation: add a hot spare to the system.
 
 Would the hot spare be in case I lose 2 drives at once?

It's just to minimize the amount of time that you're running with a
busted drive. The RAID controller will switch to the hot spare
automatically without any human intervention, so you only have to keep
your fingers crossed for e.g. 3 hours while the array rebuilds. This is
as opposed to 3 hours + (however long it took the admin to notice that a
drive has failed).


   Isn't that extraordinarily unlikely?

If the failures were random, yes, but they aren't -- they just seem that
way. The drives that you use in a hardware RAID array should ideally be
exactly the same size and have the same firmware. It's therefore not
uncommon to wind up with a set of drives that all came off the same
manufacturing line at around the same time.

If there's a minor defect in a component, like say a solder joint that
melts at too low of a temperature, then they're all much more likely to
fail at around the same time as the first one.


 Are modern SSDs reliable enough to negate the need for mirroring or do
 they still crap out?

I don't have any experience with SSDs, but a general principle: ignore
what anyone says, mirror them anyway, and make lots of backups.