Re: [gentoo-user] Re: {OT} tried Nimsoft Monitoring?

2013-09-17 Thread Alan McKinnon
On 17/09/2013 07:42, Grant wrote:
 Has anyone tried Nimsoft Monitoring?  It's included at Soft Layer
 which must mean a free license.

 No. IBM has a general strategy to suck you in
 so caveat emptor.. You really want to install
 IBM binaries on any machine? (Think NSA).
 
 Nevermind!
 
 It looks like a substitute for Nagios.

 Nagios has been under numerous stresses for
 quite some time, for a variety of reasons, imho.
 Forking, Borking, and Porking out is what I see
 of Nagios; ymmv.
 
 I didn't realize that.
 
 jffnms is well written, modular and quite responsive
 to the individual's (organization's) needs, imho.
 All in source code form.

 Last time I checked, there was a new (recent) ebuild
 for jffnms. Patches are easy to apply and I think
 (Gentoo) folks are starting to use jffnms much more.

 Check it out, most are happy with it, and find it
 easy (particularly with SNMP 1, 2, 3) to install and extend.
 
 It looks great, thank you for the recommendation.  Have you used
 munin?  If so, do you think jffnms is a substitute or complement to
 that package?


Munin and jffnms bear no real relation to each other. Yes they are
similar in that both can draw graphs but that's about where the
similarity ends.

Munin's job is to periodically poll a device using whatever means is
available and gather data from the device. The data is always in the
form of a number - it measures something. The data can be anything you
can generate a number for - logged in users, traffic through an
interface, load, number of database queries. The list is endless. Point
being, the device/computer/host reports its own numbers to munin, and
munin draws graphs. Munin does not record state, it has no idea what the
state of something is.
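To make that concrete, here is a minimal sketch of a munin-style plugin (my own illustration, not from this thread; the field name and what it measures are invented, but the config/fetch protocol is the standard one):

```shell
#!/bin/sh
# Minimal munin-style plugin sketch (illustrative, not from this thread).
# munin runs the plugin with "config" to learn about the graph, and with
# no argument to fetch the current value of each field.
plugin() {
    case "$1" in
    config)
        echo "graph_title Logged-in users"   # graph heading
        echo "graph_vlabel users"            # y-axis label
        echo "users.label users"             # one field, named "users"
        ;;
    *)
        # The fetched value is always just a number -- munin graphs it.
        echo "users.value $(who | wc -l)"
        ;;
    esac
}
plugin "$@"
```

munin then polls this on its normal schedule and draws the graph; the plugin itself never knows or cares about state.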

Nagios is a problem child, it does not do what people assume it does (I
have constant fights about this at work). Nagios is a state monitoring
and reporting engine (simply because this is what it does well and
everything else it does it does poorly). Nagios will track if things are
up or down, if you acknowledged the condition and when, who to notify
when state changes (sms, mail, dashboard etc etc).
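The state-engine idea can be sketched as a Nagios-style check script (my own illustration; the thresholds and the thing being measured are invented). All Nagios really consumes is the exit code - 0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN - plus one line of status text:

```shell
#!/bin/sh
# Nagios-style check sketch (illustrative). Nagios derives the state
# from the exit code: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN.
check_value() {
    value=$1 warn=$2 crit=$3
    if [ "$value" -ge "$crit" ]; then
        echo "CRITICAL - value is $value"; return 2
    elif [ "$value" -ge "$warn" ]; then
        echo "WARNING - value is $value"; return 1
    fi
    echo "OK - value is $value"; return 0
}
check_value "${1:-0}" "${2:-5}" "${3:-10}"
```

Everything downstream - acknowledgements, notifications, dashboards - keys off those state transitions, not the number itself.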

What Nagios does poorly (despite this being its advertised purpose) is
getting state events into the system. It really really sucks at this and
is coded from an extremely narrow point of view. Which explains the
numerous forks around (they all implement vital real world features that
Ethan refuses to commit).

jffnms is something I don't use myself, but it looks like the same class
of app as Nagios. Don't be fooled into choosing between munin and
nagios/jffnms - they are not the same thing, not even close. Use both.


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Grant
 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?

 Not ready to take the ZFS plunge? That would greatly reduce the complexity
 of RAID+LVM, since ZFS best practice is to set your hardware raid controller
 to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
 has mucho better tools). That is my next big project for when I switch to my
 next new server.

 I'm just hoping I can get comfortable with a process for getting ZFS
 compiled into the kernel that is workable/tenable for ongoing kernel updates
 (with minimal fear of breaking things due to a complex/fragile
 methodology)...

 That sounds interesting.  I don't think I'm up to it this time around,
 but ZFS manages a RAID array better than a good hardware card?

 Yes. If you use ZFS to wrestle a JBOD array into its version of
 RAID1+0, when it comes time for resilvering (i.e., rebuilding a failed
 drive), ZFS smartly only copies the used blocks and skips over unused
 blocks.
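For reference, replacing a failed mirror member and watching that resilver looks roughly like this (pool and device names are hypothetical; needs root):

```shell
# Hypothetical pool "tank"; sdb has failed, sdf is the replacement.
zpool replace tank sdb sdf
zpool status tank   # shows "resilver in progress"; only used blocks are copied
```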

I'm seriously considering ZFS now.  I'm going to start a new thread on
that topic.

- Grant


 It sounds like ZFS isn't included in the mainline kernel.  Is it on its way 
 in?


 Unlikely. There has been a discussion on that in this list, and there
 is some concern that ZFS' license (CDDL) is not compatible with the
 Linux kernel license (GPL), so never the twain shall be integrated.

 That said, if your kernel supports modules, it's a piece of cake to
 compile the ZFS modules on your own. @ryao has a zfs-overlay you can
 use to emerge ZFS as a module.

 If you have configured your kernel to not support modules, it's a bit
 more work, but ZFS can still be integrated statically into the kernel.

 But the onus is on us ZFS users to do the necessary steps.
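For the module route, the steps are roughly these (package atoms are from memory of that era and may differ between the overlay and the main tree):

```shell
emerge --ask sys-fs/zfs-kmod sys-fs/zfs   # builds zfs.ko against /usr/src/linux
modprobe zfs                              # load it without rebooting
zpool import -a                           # bring up any existing pools
```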



Re: [gentoo-user] Re: {OT} tried Nimsoft Monitoring?

2013-09-17 Thread Grant
 Munin and jffnms bear no real relation to each other. Yes they are
 similar in that both can draw graphs but that's about where the
 similarity ends.

 Munin's job is to periodically poll a device using whatever means is
 available and gather data from the device. The data is always in the
 form of a number - it measures something. The data can be anything you
 can generate a number for - logged in users, traffic through an
 interface, load, number of database queries. The list is endless. Point
 being, the device/computer/host reports its own numbers to munin, and
 munin draws graphs. Munin does not record state, it has no idea what the
 state of something is.

 Nagios is a problem child, it does not do what people assume it does (I
 have constant fights about this at work). Nagios is a state monitoring
 and reporting engine (simply because this is what it does well and
 everything else it does it does poorly). Nagios will track if things are
 up or down, if you acknowledged the condition and when, who to notify
 when state changes (sms, mail, dashboard etc etc).

 What Nagios does poorly (despite this being its advertised purpose) is
 getting state events into the system. It really really sucks at this and
 is coded from an extremely narrow point of view. Which explains the
 numerous forks around (they all implement vital real world features that
 Ethan refuses to commit).

 jffnms is something I don't use myself, but it looks like the same class
 of app as Nagios. Don't be fooled into choosing between munin and
 nagios/jffnms - they are not the same thing, not even close. Use both.

Understood.  Thank you James and Alan.

- Grant



Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Grant
 If it's Type 2, then four drives with a spare is equally tolerant.
 Slightly better, even, if you take into account the reduced probability
 of 2/5 of the drives failing compared to 2/6.

 Thank you very much for this info.  I had no idea.  Is there another
 label for these RAID types besides Type 1 and Type 2?  I can't
 find reference to those designations via Google.

 Nothing standard. RAID 10 pretty intuitively comes from RAID 1+0, which
 can be read aloud to figure out what it means: RAID 1, plus RAID 0,
 i.e. you do RAID 1, then stripe (RAID 0) the result.

 The trick is that RAID 1 can refer to either mirroring (2-way) or
 multi-mirroring (3-way) [1]. In the end, the designation is the same:
 RAID 1. So if you stripe either of them, you wind up with RAID 10. In
 other words, RAID 10 doesn't tell you which one you're going to get.

 If I ever find a controller that will do multi-mirroring + RAID 0, I'll
 let you know what they call it =)

Is multi-mirroring (3-disk RAID1) support without RAID0 common in
hardware RAID cards?

- Grant



[gentoo-user] ZFS

2013-09-17 Thread Grant
I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
running.  I'd also like to stripe for performance, resulting in
RAID10.  It sounds like most hardware controllers do not support
6-disk RAID10 so ZFS looks very interesting.

Can I operate ZFS RAID without a hardware RAID controller?

From a RAID perspective only, is ZFS a better choice than conventional
software RAID?

ZFS seems to have many excellent features and I'd like to ease into
them slowly (like an old man into a nice warm bath).  Does ZFS allow
you to set up additional features later (e.g. snapshots, encryption,
deduplication, compression) or is some forethought required when first
making the filesystem?

It looks like there are comprehensive ZFS Gentoo docs
(http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
world about how much extra difficulty/complexity is added to
installation and ongoing administration when choosing ZFS over ext4?

Performance doesn't seem to be one of ZFS's strong points.  Is it
considered suitable for a high-performance server?

http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA

Besides performance, are there any drawbacks to ZFS compared to ext4?

- Grant



Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Grant
 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?

 Not ready to take the ZFS plunge? That would greatly reduce the complexity
 of RAID+LVM, since ZFS best practice is to set your hardware raid controller
 to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
 has mucho better tools). That is my next big project for when I switch to my
 next new server.

 I'm just hoping I can get comfortable with a process for getting ZFS
 compiled into the kernel that is workable/tenable for ongoing kernel updates
 (with minimal fear of breaking things due to a complex/fragile
 methodology)...

Can't you just emerge zfs-kmod?  Or maybe you're trying to do it
without module support?

- Grant



Re: [gentoo-user] ZFS

2013-09-17 Thread Marc Stürmer

Am 17.09.2013 09:20, schrieb Grant:


Performance doesn't seem to be one of ZFS's strong points.  Is it
considered suitable for a high-performance server?


A high performance server for what?

But you've already given yourself the answer: if high performance is 
what you are aiming for, it depends on your performance needs, and 
ZFS on Linux is probably not going to meet those - yet. It is still evolving.


Of course benchmarks are static, real world usage is another cup of coffee.


Besides performance, are there any drawbacks to ZFS compared to ext4?


Well, it only comes as a kernel module at the moment. Some people dislike 
that.




Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Pandu Poluan
On Tue, Sep 17, 2013 at 2:28 PM, Grant emailgr...@gmail.com wrote:
 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?

 Not ready to take the ZFS plunge? That would greatly reduce the complexity
 of RAID+LVM, since ZFS best practice is to set your hardware raid controller
 to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
 has mucho better tools). That is my next big project for when I switch to my
 next new server.

 I'm just hoping I can get comfortable with a process for getting ZFS
 compiled into the kernel that is workable/tenable for ongoing kernel updates
 (with minimal fear of breaking things due to a complex/fragile
 methodology)...

 Can't you just emerge zfs-kmod?  Or maybe you're trying to do it
 without module support?


@tanstaafl's kernels have no module support.

Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] ZFS

2013-09-17 Thread Pandu Poluan
On Tue, Sep 17, 2013 at 2:20 PM, Grant emailgr...@gmail.com wrote:
 I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
 running.  I'd also like to stripe for performance, resulting in
 RAID10.  It sounds like most hardware controllers do not support
 6-disk RAID10 so ZFS looks very interesting.

 Can I operate ZFS RAID without a hardware RAID controller?


Yes. In fact, that's ZFS' preferred mode of operation (i.e., it
handles all redundancy by itself).
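For example, the 6-disk setup Grant describes could be built from bare disks with one command (device names hypothetical; run as root):

```shell
# Three mirrored pairs, striped together by the pool: ZFS' take on RAID10.
zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf
zpool status tank
```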

 From a RAID perspective only, is ZFS a better choice than conventional
 software RAID?


Yes.

ZFS checksums all blocks during writes, and verifies those checksums
during reads.

It is possible to have 2 bits flipped at the same time among 2 hard
disks. In such a case, the RAID controller will never see the bitflips.
But ZFS will.
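You can even ask ZFS to go looking for such bit-flips: a scrub re-reads every allocated block and verifies its checksum (pool name illustrative):

```shell
zpool scrub tank
zpool status tank   # bit-flips show up in the CKSUM column and are
                    # repaired from the healthy mirror copy
```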

 ZFS seems to have many excellent features and I'd like to ease into
 them slowly (like an old man into a nice warm bath).  Does ZFS allow
 you to set up additional features later (e.g. snapshots, encryption,
 deduplication, compression) or is some forethought required when first
 making the filesystem?


Snapshot support is built in from the beginning. All you have to do is create
one when you want it.

Deduplication can be turned on and off at will -- but be warned: You
need a HUGE amount of RAM.

Compression can be turned on and off at will. Previously-compressed
data won't become uncompressed unless you modify them.
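All of these are per-dataset properties you can flip after the fact, e.g. (dataset names illustrative):

```shell
zfs snapshot tank/home@before-upgrade   # snapshots need no prior setup
zfs set compression=on tank/home        # affects newly written data only
zfs set dedup=on tank/home              # ditto -- and very RAM-hungry
```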

 It looks like there are comprehensive ZFS Gentoo docs
 (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
 world about how much extra difficulty/complexity is added to
 installation and ongoing administration when choosing ZFS over ext4?


Very very minimal. So minimal, in fact, that if you don't plan to use
ZFS as a root filesystem, it's laughably simple. You don't even have
to edit /etc/fstab.
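That is because datasets mount themselves from their own mountpoint property rather than from fstab (paths illustrative):

```shell
zfs create tank/srv
zfs set mountpoint=/srv tank/srv   # mounted now and on every pool import
```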

 Performance doesn't seem to be one of ZFS's strong points.  Is it
 considered suitable for a high-performance server?

 http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA


Several points:

1. The added steps of checksumming (and verifying the checksums)
*will* give a performance penalty.

2. When comparing performance of 1 (one) drive, of course ZFS will
lose. But when you build a ZFS pool out of 3 pairs of mirrored drives,
throughput will increase significantly as ZFS has the ability to do
'load-balancing' among mirror-pairs (or, in ZFS parlance, mirrored
vdevs)

Go directly to this post:
http://phoronix.com/forums/showthread.php?79922-Benchmarks-Of-The-New-ZFS-On-Linux-EXT4-Wins&p=326838#post326838

Notice how ZFS won against ext4 in 8 scenarios out of 9. (The only
scenario where ZFS lost is in the single-client RAID-1 scenario)

 Besides performance, are there any drawbacks to ZFS compared to ext4?


1. You need a huge amount of RAM to let ZFS do its magic. But RAM is
cheap nowadays. Data... possibly priceless.

2. Be careful when using ZFS on a server on which processes rapidly
spawn and terminate. ZFS doesn't like memory fragmentation.

For point #2, I can give you a real-life example:

My mail server, for some reason, chokes if too many TLS errors happen.
So, I placed Perdition in front to capture all POP3 connections and
'un-TLS' them. Perdition spawns a new process for *every* connection.
My mail server has 2000 users; I regularly see more than 100 Perdition
child processes. Many very ephemeral (i.e., existing for less than 5
seconds). The RAM is undoubtedly *extremely* fragmented. ZFS cries
murder when it cannot allocate a contiguous SLAB of memory to increase
its ARC Cache.

OTOH, on another very busy server (a mail archiving server using
MailArchiva, handling 2000+ emails per hour), ZFS runs flawlessly. No
incident _at_all_. Undoubtedly because MailArchiva uses one single huge
process (Java-based) to handle all transactions, so no RAM
fragmentation there.
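One mitigation I'd consider (my own suggestion, not something from this thread) is capping the ARC so it competes less for large contiguous allocations; on ZFS on Linux that's a module parameter:

```shell
# /etc/modprobe.d/zfs.conf -- cap the ARC at 4 GiB (pick your own figure);
# takes effect the next time the module is loaded.
options zfs zfs_arc_max=4294967296
```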


Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] ZFS

2013-09-17 Thread Alan McKinnon
On 17/09/2013 10:05, Pandu Poluan wrote:
 On Tue, Sep 17, 2013 at 2:20 PM, Grant emailgr...@gmail.com wrote:
 I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
 running.  I'd also like to stripe for performance, resulting in
 RAID10.  It sounds like most hardware controllers do not support
 6-disk RAID10 so ZFS looks very interesting.

 Can I operate ZFS RAID without a hardware RAID controller?

 
 Yes. In fact, that's ZFS' preferred mode of operation (i.e., it
 handles all redundancy by itself).

I would take it a step further and say that a hardware RAID controller
actively interferes with ZFS and gets in the way. It gets in the way so
much that one should not do it at all.

Running the controller in JBOD mode is not just a good idea, I'd say it's a
requirement.





-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] [SOLVED] All KDE-related programs don't play music anymore

2013-09-17 Thread Alexander Puchmayr
Hi,

I remember having accidentally started amarok via ssh on my notebook (wrong 
console window), with DISPLAY on my desktop. This seems to have scrapped the 
phonon config and/or pulseaudio config on both systems. 
After having removed these files (.pulse/ and 
.kde4/share/config/phonondevicesrc) on both systems, amarok played my music as 
expected.

Thanks to all who answered me.

Best Regards
Alex





On Monday 16 September 2013, 13:37:37 Alexander Puchmayr wrote:
 Hi there,
 
 I've got a somewhat strange problem, which occurs on both my laptop and my
 desktop-pc, both running gentoo. From one day to the other, all KDE-based
 music/media players don't start playing music anymore.
 I've tried Amarok, Kaffeine and Juk, and all of them seem to hang when
 starting any kind of music file (mp3/flac/wma). But, strangely, kaffeine
 plays videos with sound without a problem. Also, other programs (non-kde),
 for example audacity and xine, play mp3 files properly.
 In kde's system settings, the phonon configuration dialog plays its test
 sounds properly.
 
 Any ideas? Any idea why it affects two different systems pretty much at the
 same time? Any idea why it worked yesterday morning and started to have
 troubles yesterday evening, without having changed anything in the system?
 
 BTW: dmesg shows a lot of lines that might be relevant:
 [21219.855176] traps: alsa-sink[12956] general protection ip:7f08486569f2
 sp:7f083fff6b70 error:0 in libasound.so.2.0.0[7f084860+ea000]
 one per attempt to play anything in amarok.
 
 Best regards
   Alex
 
 
 



Re: [gentoo-user] trouble installing cups

2013-09-17 Thread Neil Bothwick
On Mon, 16 Sep 2013 21:23:50 -0400, gottl...@nyu.edu wrote:

  So I reinstalled cups but /etc/cups/cupsd.conf was not changed and
  still has its old date and contents.  The merge looks clean (output
  below)  
 
  /etc/ is CONFIG_PROTECTed.  
 
 This part I knew, but would have expected to hear that config files
 have new versions

You would if upgrading. But you are reinstalling the same version so
portage assumes you have already dealt with any config updates and don't
want to be bothered again. 

Remember when we had to go through loads of updates over again when
revdep-rebuild rebuilt a package with lots of config files? This avoids
that behaviour; --noconfmem brings it back.
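So, to get the config handling again on a same-version reinstall, something like:

```shell
# Tell portage to ignore its memory of already-handled config files:
emerge --oneshot --noconfmem net-print/cups
dispatch-conf   # then review the pending updates as usual
```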


-- 
Neil Bothwick

Biology is the only science in which multiplication means the same thing
as division.


signature.asc
Description: PGP signature


Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Neil Bothwick
On Tue, 17 Sep 2013 12:36:20 +0700, Pandu Poluan wrote:

 That said, if your kernel supports modules, it's a piece of cake to
 compile the ZFS modules on your own. @ryao has a zfs-overlay you can
 use to emerge ZFS as a module.

It's also in the main portage tree.


-- 
Neil Bothwick

Get your grubby hands off my tagline! I stole it first!


signature.asc
Description: PGP signature


Re: [gentoo-user] ZFS

2013-09-17 Thread Grant
 It looks like there are comprehensive ZFS Gentoo docs
 (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
 world about how much extra difficulty/complexity is added to
 installation and ongoing administration when choosing ZFS over ext4?

 Very very minimal. So minimal, in fact, that if you don't plan to use
 ZFS as a root filesystem, it's laughably simple. You don't even have
 to edit /etc/fstab

I do plan to use it as the root filesystem but it sounds like I
shouldn't worry about extra headaches.

 Performance doesn't seem to be one of ZFS's strong points.  Is it
 considered suitable for a high-performance server?

 http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA

 Go directly to this post:
 http://phoronix.com/forums/showthread.php?79922-Benchmarks-Of-The-New-ZFS-On-Linux-EXT4-Wins&p=326838#post326838

 Notice how ZFS won against ext4 in 8 scenarios out of 9. (The only
 scenario where ZFS lost is in the single-client RAID-1 scenario)

Very encouraging.  I'll let that assuage my performance concerns.

 Besides performance, are there any drawbacks to ZFS compared to ext4?

 1. You need a huge amount of RAM to let ZFS do its magic. But RAM is
 cheap nowadays. Data... possibly priceless.

Is this a requirement for deduplication, or for ZFS in general?

How can you determine how much RAM you'll need?

 2. Be careful when using ZFS on a server on which processes rapidly
 spawn and terminate. ZFS doesn't like memory fragmentation.

I don't think I have that sort of scenario on my server.  Is there a
way to check for memory fragmentation to be sure?

 For point #2, I can give you a real-life example:

 My mail server, for some reason, chokes if too many TLS errors happen.
 So, I placed Perdition in front to capture all POP3 connections and
 'un-TLS' them. Perdition spawns a new process for *every* connection.
 My mail server has 2000 users; I regularly see more than 100 Perdition
 child processes. Many very ephemeral (i.e., existing for less than 5
 seconds). The RAM is undoubtedly *extremely* fragmented. ZFS cries
 murder when it cannot allocate a contiguous SLAB of memory to increase
 its ARC Cache.

Did you have to switch to a different filesystem on that server?

- Grant



Re: [gentoo-user] ZFS

2013-09-17 Thread Grant
 I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
 running.  I'd also like to stripe for performance, resulting in
 RAID10.  It sounds like most hardware controllers do not support
 6-disk RAID10 so ZFS looks very interesting.

 Can I operate ZFS RAID without a hardware RAID controller?

 Yes. In fact, that's ZFS' preferred mode of operation (i.e., it
 handles all redundancy by itself).

 I would take it a step further and say that a hardware RAID controller
 actively interferes with ZFS and gets in the way. It gets in the way so
 much that one should not do it at all.

 Running the controller in JBOD mode is not just a good idea, I'd say it's a
 requirement.

If I go with ZFS I won't have a RAID controller installed at all.  One
less point of hardware failure too.

- Grant



Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Grant
 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?

 Not ready to take the ZFS plunge? That would greatly reduce the complexity
 of RAID+LVM, since ZFS best practice is to set your hardware raid controller
 to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
 has mucho better tools). That is my next big project for when I switch to my
 next new server.

 I'm just hoping I can get comfortable with a process for getting ZFS
 compiled into the kernel that is workable/tenable for ongoing kernel updates
 (with minimal fear of breaking things due to a complex/fragile
 methodology)...

 Can't you just emerge zfs-kmod?  Or maybe you're trying to do it
 without module support?

 @tanstaafl's kernels have no module support.

OK, but why exclude module support?

- Grant



Re: [gentoo-user] ZFS

2013-09-17 Thread Joerg Schilling
Grant emailgr...@gmail.com wrote:

 Performance doesn't seem to be one of ZFS's strong points.  Is it
 considered suitable for a high-performance server?

ZFS is one of the fastest FS I am aware of (if not the fastest).
You need a sufficient amount of RAM to make the ARC useful.

The only problem I am aware with ZFS is the fact that if you ask it to grant 
consistency for a specific file at a specific time, you force it to become slow.
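If I read that right, it is the synchronous-write path Jörg means; in ZFS that trade-off surfaces in the sync dataset property (my illustration, dataset name invented):

```shell
zfs set sync=standard tank/db   # honour fsync() when applications ask (default)
zfs set sync=always tank/db     # every write synchronous: safest, slowest
zfs set sync=disabled tank/db   # fastest, but fsync() becomes a lie -- risky
```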

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



[gentoo-user] Cyrus IMAPd .. sieve extensions

2013-09-17 Thread Stefan G. Weichinger

Does anyone know how I get the date-extension into my gentoo-based
cyrus-imapd-server?

Stefan



Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Tanstaafl

On 2013-09-17 1:36 AM, Pandu Poluan pa...@poluan.info wrote:

On Sun, Sep 15, 2013 at 6:10 PM, Grant emailgr...@gmail.com wrote:

It sounds like ZFS isn't included in the mainline kernel.  Is it on its way in?



Unlikely. There has been a discussion on that in this list, and there
is some concern that ZFS' license (CDDL) is not compatible with the
Linux kernel license (GPL), so never the twain shall be integrated.


You must have missed the part where it was determined that integrating ZFS is 
easily doable via a simple ebuild (they said it didn't even need to be 
in an overlay) containing the code to do the integration at compile time.


So, yes, it *could* easily be done without any fear of licensing issues.

The question is, will someone with the knowledge and skills to do 
it right also have the desire to do the work?




Re: [gentoo-user] ZFS

2013-09-17 Thread Tanstaafl

On 2013-09-17 4:05 AM, Pandu Poluan pa...@poluan.info wrote:

2. When comparing performance of 1 (one) drive, of course ZFS will
lose. But when you build a ZFS pool out of 3 pairs of mirrored drives,
throughput will increase significantly as ZFS has the ability to do
'load-balancing' among mirror-pairs (or, in ZFS parlance, mirrored
vdevs)


Hmmm...

If conventional wisdom is to run a hardware RAID card in JBOD mode, how 
can you also set it up with mirrored pairs at the same time?


So, for best performance & reliability, which is it? JBOD mode? Or 
mirrored vdevs?




Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Alan McKinnon
On 17/09/2013 11:49, Grant wrote:
 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?

 Not ready to take the ZFS plunge? That would greatly reduce the complexity
 of RAID+LVM, since ZFS best practice is to set your hardware raid 
 controller
 to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
 has mucho better tools). That is my next big project for when I switch to 
 my
 next new server.

 I'm just hoping I can get comfortable with a process for getting ZFS
 compiled into the kernel that is workable/tenable for ongoing kernel 
 updates
 (with minimal fear of breaking things due to a complex/fragile
 methodology)...

 Can't you just emerge zfs-kmod?  Or maybe you're trying to do it
 without module support?

 @tanstaafl's kernels have no module support.
 
 OK, but why exclude module support?



Noo, please for the love of god and all that's holy, let's not
go there again :-)

tanstaafl has his reasons for using fully monolithic kernels without
module support. This works for him and nothing will dissuade him from
this strategy - we tried, we really did. He won.


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] ZFS

2013-09-17 Thread Tanstaafl

On 2013-09-17 3:20 AM, Grant emailgr...@gmail.com wrote:

It sounds like most hardware controllers do not support
6-disk RAID10 so ZFS looks very interesting.


?? RAID 10 simply requires an even number of drives with a minimum of 4.

So, you certainly can have a 6 disk RAID10 - I've got a system with one 
right now in fact.



Can I operate ZFS RAID without a hardware RAID controller?


Yes.





Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Michael Orlitzky
On 09/17/2013 02:43 AM, Grant wrote:
 
 Is multi-mirroring (3-disk RAID1) support without RAID0 common in
 hardware RAID cards?
 

Nope. Not at my pay grade, anyway. The only ones I know of are the
Hewlett-Packard MSA/EVA, but they don't call it plain RAID1. They allow
you to create virtual disk groups, though, so you can mirror a mirror to
achieve the same effect.

The only other place I've seen it in real life is Linux's mdraid.
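With mdraid it is a one-liner (hypothetical partitions):

```shell
# 3-way mirror: same data on all three members, no striping involved.
mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
```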




[gentoo-user] Re: {OT} tried Nimsoft Monitoring?

2013-09-17 Thread James Horton PE
Grant emailgrant at gmail.com writes:


  jffnms is something I don't use myself, but it looks like the same class
  of app as Nagios. Don't be fooled into choosing between munin and
  nagios/jffnms - they are not the same thing, not even close. Use both.


Thanks Alan, I did not know about munin.

Grant, Craig, the main developer of JFFNMS is a very cool human being.
If you have good ideas, he will listen. If you send him some
code to fix/extend functionality, he is as flexible as can be
with integrating the ideas of others. Craig is an older, accomplished
programmer with a mantra of writing clean and understandable code.

That is what separates JFFNMS from the rest (Craig). He's like an Alan
who codes quite a lot and quite well.

 =) 

 Understood.  Thank you James and Alan.
 - Grant

net-analyzer/jffnms
Available versions:  ~0.9.3 {{mysql postgres snmp}}


James





Re: [gentoo-user] Python Imaging/Pillow

2013-09-17 Thread Silvio Siefke
Hello,

On Thu, 12 Sep 2013 06:57:15 +0200 Alan McKinnon
alan.mckin...@gmail.com wrote:

 What I usually do is download the sources by hand, run ./configure and
 see if the app directly supports pillow, and take it from there. A fix
 usually just needs a few minor tweaks to the ebuild.


Sabayon has changed the ebuild, so OpenShot can be used. 

http://bugs.sabayon.org/show_bug.cgi?id=4395

Thank you for the help & Nice Day
Silvio



Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Grant
 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?

 Not ready to take the ZFS plunge? That would greatly reduce the complexity
 of RAID+LVM, since ZFS best practice is to set your hardware raid 
 controller
 to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
 has mucho better tools). That is my next big project for when I switch to 
 my
 next new server.

 I'm just hoping I can get comfortable with a process for getting ZFS
 compiled into the kernel that is workable/tenable for ongoing kernel 
 updates
 (with minimal fear of breaking things due to a complex/fragile
 methodology)...

 Can't you just emerge zfs-kmod?  Or maybe you're trying to do it
 without module support?

 @tanstaafl's kernels have no module support.

 OK, but why exclude module support?

 Noo, please for the love of god and all that's holy, let's not
 go there again :-)

Oopsie!

 taanstafl has his reasons for using fully monolithic kernels without
 module support. This works for him and nothing will dissuade him from
 this strategy - we tried, we really did. He won.

It must be for security.

- Grant



Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Grant
 Is multi-mirroring (3-disk RAID1) support without RAID0 common in
 hardware RAID cards?

 Nope. Not at my pay grade, anyway. The only ones I know of are the
 Hewlett-Packard MSA/EVA, but they don't call it plain RAID1. They allow
 you to create virtual disk groups, though, so you can mirror a mirror to
 achieve the same effect.

 The only other place I've seen it in real life is Linux's mdraid.

Thanks Michael.  This really pushes me in the ZFS direction.

- Grant



Re: [gentoo-user] ZFS

2013-09-17 Thread Grant
 It sounds like most hardware controllers do not support
 6-disk RAID10 so ZFS looks very interesting.

 ?? RAID 10 simply requires an even number of drives with a minimum of 4.

OK, there seems to be some disagreement on this.  Michael?

- Grant



Re: [gentoo-user] ZFS

2013-09-17 Thread Grant
 Performance doesn't seem to be one of ZFS's strong points.  Is it
 considered suitable for a high-performance server?

 ZFS is one of the fastest FS I am aware of (if not the fastest).
 You need a sufficient amount of RAM to make the ARC useful.

How much RAM is that?

- Grant



Re: [gentoo-user] ZFS

2013-09-17 Thread Joerg Schilling
Grant emailgr...@gmail.com wrote:

  Performance doesn't seem to be one of ZFS's strong points.  Is it
  considered suitable for a high-performance server?
 
  ZFS is one of the fastest FS I am aware of (if not the fastest).
  You need a sufficient amount of RAM to make the ARC useful.

 How much RAM is that?

How much do you have?

File servers usually have at least 20 GB but 64+ is usual...

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



Re: [gentoo-user] ZFS

2013-09-17 Thread Michael Orlitzky
On 09/17/2013 09:21 AM, Grant wrote:
 It sounds like most hardware controllers do not support
 6-disk RAID10 so ZFS looks very interesting.

 ?? RAID 10 simply requires an even number of drives with a minimum of 4.
 
 OK, there seems to be some disagreement on this.  Michael?
 

Any controller that claims RAID10 on a server with 6 drive bays should
be able to put all six drives in an array. But you'll get a three-way
stripe (better performance) instead of a three-way mirror (better fault
tolerance).

So,

  A B C
  A B C

and not,

  A B
  A B
  A B

The former gives you more space but slightly less fault tolerance than
four drives with a hot spare.




Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Alan McKinnon
On 17/09/2013 15:11, Grant wrote:
 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?

 Not ready to take the ZFS plunge? That would greatly reduce the complexity
 of RAID+LVM, since ZFS best practice is to set your hardware RAID controller
 to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
 has mucho better tools). That is my next big project for when I switch to my
 next new server.

 I'm just hoping I can get comfortable with a process for getting ZFS
 compiled into the kernel that is workable/tenable for ongoing kernel updates
 (with minimal fear of breaking things due to a complex/fragile
 methodology)...

 Can't you just emerge zfs-kmod?  Or maybe you're trying to do it
 without module support?

 @tanstaafl's kernels have no module support.

 OK, but why exclude module support?

 Noo, please for the love of god and all that's holy, let's not
 go there again :-)
 
 Oopsie!
 
 tanstaafl has his reasons for using fully monolithic kernels without
 module support. This works for him and nothing will dissuade him from
 this strategy - we tried, we really did. He won.
 
 It must be for security.


Essentially, yes. He once explained his position to me nicely - he knows
exactly what hardware he has and what he needs, it never changes and he
never needs to tweak it on the fly. Once the driver is in the running
kernel, it stays there till a reboot. Modules have benefits, but he
doesn't need them.

That was the point at which I realised I didn't have an answer in his world.


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] ZFS

2013-09-17 Thread Michael Orlitzky
On 09/17/2013 11:40 AM, Tanstaafl wrote:
 On 2013-09-17 11:18 AM, Michael Orlitzky mich...@orlitzky.com wrote:
 Any controller that claims RAID10 on a server with 6 drive bays should
 be able to put all six drives in an array. But you'll get a three-way
 stripe (better performance) instead of a three-way mirror (better fault
 tolerance).

 So,

A B C
A B C

 and not,

A B
A B
A B

 The former gives you more space but slightly less fault tolerance than
 four drives with a hot spare.
 
 Sorry, don't understand what you're saying.
 
 Are you talking about the difference between RAID1+0 and RAID0+1?

Nope. Both of my examples above are stripes of mirrors, i.e. 1 + 0.


 If not, then please point to *authoritative* docs on what you mean.

http://www.snia.org/tech_activities/standards/curr_standards/ddf


 Googling on just RAID10 doesn't confuse the issues like you seem to be 
 doing (probably my ignorance though)...
 

It's not my fault, the standard confuses the issue =)

Controllers that can do multi-mirroring are next to nonexistent, so
produce few Google results. You can generally assume that RAID10 with 6
drives is going to give you,

  A B C
  A B C

so you don't get much more fault tolerance by throwing more drives at
it. The controller in Grant's server can do this, I'm sure.

For maximum fault tolerance, what you really want is,

  A B
  A B
  A B

but, like I said, it's hard to find in hardware. The standard I linked
to calls both of these RAID10, thus the confusion.

I forget why I even brought it up. I think it was in order to argue that
4 drives w/ spare is more tolerant than 6 drives in RAID10. To make that
argument, we need to be clear about what RAID10 means.
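For reference, Linux mdraid can express both of the layouts discussed above. A hedged sketch follows - the device names /dev/sd[b-g] are assumptions, and the mdadm commands are shown as comments since they need root and real disks; the runnable part just does the capacity arithmetic:

```shell
# Three-way mirror layout ("A B / A B / A B", 3 copies of every block):
#   mdadm --create /dev/md0 --level=10 --raid-devices=6 --layout=n3 /dev/sd[b-g]
# RAID10 as most controllers mean it ("A B C / A B C", 2 copies, 3-way stripe):
#   mdadm --create /dev/md0 --level=10 --raid-devices=6 --layout=n2 /dev/sd[b-g]
# Usable capacity = disks * size / copies:
disks=6
size_tb=1
echo "n2 usable: $((disks * size_tb / 2)) TB, survives 1 failure guaranteed"
echo "n3 usable: $((disks * size_tb / 3)) TB, survives 2 failures guaranteed"
```

The `--layout=nN` switch is what makes mdraid one of the few places you can get more than two copies per stripe.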




Re: [gentoo-user] trouble installing cups

2013-09-17 Thread gottlieb
On Tue, Sep 17 2013, Neil Bothwick wrote:

 On Mon, 16 Sep 2013 21:23:50 -0400, gottl...@nyu.edu wrote:

  So I reinstalled cups but /etc/cups/cupsd.conf was not changed and
  still has its old date and contents.  The merge looks clean (output
  below)  
 
  /etc/ is CONFIG_PROTECTed.  
 
 This part I knew, but would have expected to hear that config files
 have new versions

 You would if upgrading. But you are reinstalling the same version so
 portage assumes you have already dealt with any config updates and don't
 want to be bothered again. 

 Remember when we had to go through loads of updates over again when
 revdep-rebuild rebuilt a package with lots of config files. This avoids
 that behaviour, --noconfmem brings it back.

Understood.  Thanks for the explanation.
allan



Re: [gentoo-user] ZFS

2013-09-17 Thread Michael Orlitzky
On 09/17/2013 01:00 PM, Tanstaafl wrote:
 
 But not 6-drive RAID w/ hot spare... ;) Anyone who can't afford to add a
 single additional drive for the peace of mind has no business buying the
 RAID card to begin with...

Most of our servers only come with 6 drive bays -- that's why I have
this speech already rehearsed!





Re: [gentoo-user] ZFS

2013-09-17 Thread Tanstaafl

On 2013-09-17 12:34 PM, Michael Orlitzky mich...@orlitzky.com wrote:

For maximum fault tolerance, what you really want is,

   A B
   A B
   A B

but, like I said, it's hard to find in hardware. The standard I linked
to calls both of these RAID10, thus the confusion.


Ok, I see where my confusion came in... when you first referred to this, 
you said that the *latter* was the more common version, but I guess you 
meant the former (since you're now saying the latter is 'hard to find in
hardware')...



I forget why I even brought it up. I think it was in order to argue that
4 drives w/ spare is more tolerant than 6 drives in RAID10.


But not 6-drive RAID w/ hot spare... ;) Anyone who can't afford to add a
single additional drive for the peace of mind has no business buying the
RAID card to begin with...





Re: [gentoo-user] ZFS

2013-09-17 Thread Alan McKinnon
On 17/09/2013 15:22, Grant wrote:
 Performance doesn't seem to be one of ZFS's strong points.  Is it
 considered suitable for a high-performance server?

 ZFS is one of the fastest FS I am aware of (if not the fastest).
 You need a sufficient amount of RAM to make the ARC useful.
 
 How much RAM is that?
 
 - Grant
 


1G of RAM per 1TB of data is the recommendation.

For de-duped data, it is considerably more, something on the order of 6G
of RAM per 1TB of data.

The first guideline is actually not too onerous. It *seems* like a huge
amount of RAM, but

a) Most modern motherboards can handle that with ease
b) RAM is comparatively cheap
c) It's a once-off purchase
d) RAM is very reliable so once-off really does mean once-off
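Those rules of thumb are easy to sanity-check for a given pool size. A back-of-envelope sketch (the ratios - 1 GB RAM per TB of data, ~6 GB per TB with dedup - are the guidelines quoted above, not hard requirements):

```shell
# How much RAM the guideline suggests for a hypothetical 20 TB pool:
data_tb=20
ram_gb=$((data_tb * 1))         # 1 GB RAM per TB of data
ram_dedup_gb=$((data_tb * 6))   # ~6 GB per TB if dedup is enabled
echo "plain: ${ram_gb} GB, deduped: ${ram_dedup_gb} GB"
```

The jump from 20 GB to 120 GB is why dedup is usually the feature to skip.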




-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Alan McKinnon
On 17/09/2013 15:13, Grant wrote:
 Is multi-mirroring (3-disk RAID1) support without RAID0 common in
 hardware RAID cards?

 Nope. Not at my pay grade, anyway. The only ones I know of are the
 Hewlett-Packard MSA/EVA, but they don't call it plain RAID1. They allow
 you to create virtual disk groups, though, so you can mirror a mirror to
 achieve the same effect.

 The only other place I've seen it in real life is Linux's mdraid.
 
 Thanks Michael.  This really pushes me in the ZFS direction.



If you need another gentle push: ZFS checksums everything it writes as it
writes it, so it catches data corruption that almost all other systems
can't detect. And it doesn't have write holes either.


A very good analogy I find is Google, and why Google decided to take the
software/hardware route they did (it simplifies down to scalability).
Hardware will break and at their scale it will do it three times a day.
Google detects and works around this in software.

ZFS's approach to how to store stuff on disk in an fs is similar to
Google's approach to storing search data across the world. With the same
benefit - take the uber-expensive hardware and chuck it. Use regular
stuff instead and use it smart.



-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] ZFS

2013-09-17 Thread covici
Pandu Poluan pa...@poluan.info wrote:

 On Tue, Sep 17, 2013 at 2:20 PM, Grant emailgr...@gmail.com wrote:
  I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
  running.  I'd also like to stripe for performance, resulting in
  RAID10.  It sounds like most hardware controllers do not support
  6-disk RAID10 so ZFS looks very interesting.
 
  Can I operate ZFS RAID without a hardware RAID controller?
 
 
 Yes. In fact, that's ZFS' preferred mode of operation (i.e., it
 handles all redundancy by itself).
 
  From a RAID perspective only, is ZFS a better choice than conventional
  software RAID?
 
 
 Yes.
 
 ZFS checksums all blocks during writes, and verifies those checksums
 during reads.
 
 It is possible to have 2 bits flipped at the same time on 2 hard
 disks. In such a case, the RAID controller will never see the bitflips,
 but ZFS will.
 
  ZFS seems to have many excellent features and I'd like to ease into
  them slowly (like an old man into a nice warm bath).  Does ZFS allow
  you to set up additional features later (e.g. snapshots, encryption,
  deduplication, compression) or is some forethought required when first
  making the filesystem?
 
 
 Snapshots are built in from the beginning. All you have to do is create
 one when you want it.
 
 Deduplication can be turned on and off at will -- but be warned: you
 need a HUGE amount of RAM.
 
 Compression can be turned on and off at will. Previously-compressed
 data won't become uncompressed unless you modify them.
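The toggles described above really are one-liners. A minimal sketch, assuming a hypothetical pool/dataset "tank/home" (the commands are only assembled and printed here, since running them needs root and a real pool):

```shell
# Hypothetical dataset name; adjust to your pool.
snap_cmd="zfs snapshot tank/home@before-upgrade"   # snapshots are instant
compress_cmd="zfs set compression=lz4 tank/home"   # toggle at will
dedup_cmd="zfs set dedup=on tank/home"             # needs a LOT of RAM
for cmd in "$snap_cmd" "$compress_cmd" "$dedup_cmd"; do
    echo "$cmd"
done
```

Note that, as the post says, flipping compression only affects data written afterwards.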
 
  It looks like there are comprehensive ZFS Gentoo docs
  (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
  world about how much extra difficulty/complexity is added to
  installation and ongoing administration when choosing ZFS over ext4?
 
 
 Very, very minimal. So minimal, in fact, that if you don't plan to use
 ZFS as a root filesystem, it's laughably simple. You don't even have
 to edit /etc/fstab.
 
  Performance doesn't seem to be one of ZFS's strong points.  Is it
  considered suitable for a high-performance server?
 
  http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA
 
 
 Several points:
 
 1. The added steps of checksumming (and verifying the checksums)
 *will* give a performance penalty.
 
 2. When comparing performance of 1 (one) drive, of course ZFS will
 lose. But when you build a ZFS pool out of 3 pairs of mirrored drives,
 throughput will increase significantly as ZFS has the ability to do
 'load-balancing' among mirror-pairs (or, in ZFS parlance, mirrored
 vdevs)
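That pool of three mirrored vdevs is ZFS's analogue of a 6-disk RAID10. A hedged sketch - the pool name "tank" and disk names sdb..sdg are assumptions, and the helper only assembles the command for review, so nothing is touched:

```shell
# Real invocation would be (as root, with whole disks handed to ZFS):
#   zpool create tank mirror sdb sdc mirror sdd sde mirror sdf sdg
disks="sdb sdc sdd sde sdf sdg"
cmd="zpool create tank"
set -- $disks
while [ $# -ge 2 ]; do          # pair the disks off into mirrored vdevs
    cmd="$cmd mirror $1 $2"
    shift 2
done
echo "$cmd"
```

Reads are then load-balanced across the three vdevs, which is where the throughput gain comes from.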
 
 Go directly to this post:
 http://phoronix.com/forums/showthread.php?79922-Benchmarks-Of-The-New-ZFS-On-Linux-EXT4-Wins&p=326838#post326838
 
 Notice how ZFS won against ext4 in 8 scenarios out of 9. (The only
 scenario where ZFS lost is in the single-client RAID-1 scenario)
 
  Besides performance, are there any drawbacks to ZFS compared to ext4?
 
 
 1. You need a huge amount of RAM to let ZFS do its magic. But RAM is
 cheap nowadays. Data... possibly priceless.
 
 2. Be careful when using ZFS on a server on which processes rapidly
 spawn and terminate. ZFS doesn't like memory fragmentation.
 
 For point #2, I can give you a real-life example:
 
 My mail server, for some reason, chokes if too many TLS errors happen.
 So, I placed Perdition in front to capture all POP3 connections and
 'un-TLS' them. Perdition spawns a new process for *every* connection.
 My mail server has 2000 users, and I regularly see more than 100 Perdition
 child processes, many very ephemeral (i.e., existing for less than 5
 seconds). The RAM is undoubtedly *extremely* fragmented. ZFS cries
 murder when it cannot allocate a contiguous slab of memory to grow
 its ARC cache.
 
 OTOH, on another very busy server (a mail archiving server using
 MailArchiva, handling 2000+ emails per hour), ZFS runs flawlessly. No
 incident _at_all_. Undoubtedly because MailArchiva uses one single huge
 (Java-based) process to handle all transactions, so there is no RAM
 fragmentation there.
So do I need that overlay at all, or just emerge zfs and its module?
Also, I now have LVM volumes, including root but not boot - how do I
convert, and do I have to do anything to my initramfs?

-- 
Your life is like a penny.  You're going to lose it.  The question is:
How do
you spend it?

 John Covici
 cov...@ccs.covici.com



Re: [gentoo-user] ZFS

2013-09-17 Thread covici
Volker Armin Hemmann volkerar...@googlemail.com wrote:

 Am 17.09.2013 09:20, schrieb Grant:
  I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
  running.  I'd also like to stripe for performance, resulting in
  RAID10.  It sounds like most hardware controllers do not support
  6-disk RAID10 so ZFS looks very interesting.
 
  Can I operate ZFS RAID without a hardware RAID controller?
 
  From a RAID perspective only, is ZFS a better choice than conventional
  software RAID?
 
  ZFS seems to have many excellent features and I'd like to ease into
  them slowly (like an old man into a nice warm bath).  Does ZFS allow
  you to set up additional features later (e.g. snapshots, encryption,
  deduplication, compression) or is some forethought required when first
  making the filesystem?
 
  It looks like there are comprehensive ZFS Gentoo docs
  (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
  world about how much extra difficulty/complexity is added to
  installation and ongoing administration when choosing ZFS over ext4?
 
  Performance doesn't seem to be one of ZFS's strong points.  Is it
  considered suitable for a high-performance server?
 
  http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA
 
  Besides performance, are there any drawbacks to ZFS compared to ext4?
 
 do yourself three favours:
 
 use ECC RAM. Lots of it. 16GB of DDR3 1600 ECC RAM costs you less than
 170€, and it is worth it. ZFS showed me just how many silent corruptions
 can happen on a 'stable' system - errors never seen nor detected thanks
 to using 'standard' RAM.
 
 turn off readahead. ZFS' own readahead and the kernel's clash - badly.
 Turn off the kernel's readahead for a visible performance boost.
 
 use noop as the I/O scheduler.

How do you turn off readahead?

-- 
Your life is like a penny.  You're going to lose it.  The question is:
How do
you spend it?

 John Covici
 cov...@ccs.covici.com



Re: [gentoo-user] cross-compiling mosh

2013-09-17 Thread Stefan G. Weichinger
Am 13.09.2013 19:54, schrieb Stefan G. Weichinger:
 Am 12.09.2013 16:55, schrieb Yohan Pereira:
 On 12/09/13 at 09:21am, Stefan G. Weichinger wrote:
 ps: anyone using mosh already? Experiences? opinions?


Could someone tell me, just from a general point of view, whether I could
just *try* to copy my Gentoo binaries over and test them, or whether I
should avoid that so as not to crash that server?

;-)

Thanks!





Re: [gentoo-user] ZFS

2013-09-17 Thread Volker Armin Hemmann
Am 17.09.2013 09:20, schrieb Grant:
 I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
 running.  I'd also like to stripe for performance, resulting in
 RAID10.  It sounds like most hardware controllers do not support
 6-disk RAID10 so ZFS looks very interesting.

 Can I operate ZFS RAID without a hardware RAID controller?

 From a RAID perspective only, is ZFS a better choice than conventional
 software RAID?

 ZFS seems to have many excellent features and I'd like to ease into
 them slowly (like an old man into a nice warm bath).  Does ZFS allow
 you to set up additional features later (e.g. snapshots, encryption,
 deduplication, compression) or is some forethought required when first
 making the filesystem?

 It looks like there are comprehensive ZFS Gentoo docs
 (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
 world about how much extra difficulty/complexity is added to
 installation and ongoing administration when choosing ZFS over ext4?

 Performance doesn't seem to be one of ZFS's strong points.  Is it
 considered suitable for a high-performance server?

 http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA

 Besides performance, are there any drawbacks to ZFS compared to ext4?

do yourself three favours:

use ECC RAM. Lots of it. 16GB of DDR3 1600 ECC RAM costs you less than
170€, and it is worth it. ZFS showed me just how many silent corruptions
can happen on a 'stable' system - errors never seen nor detected thanks
to using 'standard' RAM.

turn off readahead. ZFS' own readahead and the kernel's clash - badly.
Turn off the kernel's readahead for a visible performance boost.

use noop as the I/O scheduler.
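Switching the elevator is a one-liner per device. A hedged sketch, assuming a hypothetical device sda and a pre-blk-mq kernel (the write needs root, so it is shown as a comment; the runnable part just parses the sysfs format, where the bracketed entry is the active scheduler):

```shell
# As root:
#   echo noop > /sys/block/sda/queue/scheduler
# Verify with:
#   cat /sys/block/sda/queue/scheduler
line="[noop] deadline cfq"   # sample contents of the sysfs file after the change
active=$(printf '%s\n' "$line" | sed 's/.*\[\([^]]*\)\].*/\1/')
echo "$active"
```

The `elevator=noop` kernel parameter makes the same choice globally at boot.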



Re: [gentoo-user] ZFS

2013-09-17 Thread Tanstaafl
On 2013-09-17 2:00 PM, Volker Armin Hemmann volkerar...@googlemail.com 
wrote:

 use ECC RAM. Lots of it. 16GB of DDR3 1600 ECC RAM costs you less than
 170€, and it is worth it. ZFS showed me just how many silent corruptions
 can happen on a 'stable' system - errors never seen nor detected thanks
 to using 'standard' RAM.
 
 turn off readahead. ZFS' own readahead and the kernel's clash - badly.
 Turn off the kernel's readahead for a visible performance boost.
 
 use noop as the I/O scheduler.


Is there a good place to read about these kinds of tuning parameters?



Re: [gentoo-user] ZFS

2013-09-17 Thread Tanstaafl

On 2013-09-17 11:18 AM, Michael Orlitzky mich...@orlitzky.com wrote:

Any controller that claims RAID10 on a server with 6 drive bays should
be able to put all six drives in an array. But you'll get a three-way
stripe (better performance) instead of a three-way mirror (better fault
tolerance).

So,

   A B C
   A B C

and not,

   A B
   A B
   A B

The former gives you more space but slightly less fault tolerance than
four drives with a hot spare.


Sorry, don't understand what you're saying.

Are you talking about the difference between RAID1+0 and RAID0+1?

If not, then please point to *authoritative* docs on what you mean.

Googling on just RAID10 doesn't confuse the issues like you seem to be 
doing (probably my ignorance though)...




Re: [gentoo-user] ZFS

2013-09-17 Thread Volker Armin Hemmann
Am 17.09.2013 20:11, schrieb cov...@ccs.covici.com:
 Volker Armin Hemmann volkerar...@googlemail.com wrote:

 Am 17.09.2013 09:20, schrieb Grant:
 I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
 running.  I'd also like to stripe for performance, resulting in
 RAID10.  It sounds like most hardware controllers do not support
 6-disk RAID10 so ZFS looks very interesting.

 Can I operate ZFS RAID without a hardware RAID controller?

 From a RAID perspective only, is ZFS a better choice than conventional
 software RAID?

 ZFS seems to have many excellent features and I'd like to ease into
 them slowly (like an old man into a nice warm bath).  Does ZFS allow
 you to set up additional features later (e.g. snapshots, encryption,
 deduplication, compression) or is some forethought required when first
 making the filesystem?

 It looks like there are comprehensive ZFS Gentoo docs
 (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
 world about how much extra difficulty/complexity is added to
 installation and ongoing administration when choosing ZFS over ext4?

 Performance doesn't seem to be one of ZFS's strong points.  Is it
 considered suitable for a high-performance server?

  http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA

 Besides performance, are there any drawbacks to ZFS compared to ext4?

 do yourself three favours:
 
 use ECC RAM. Lots of it. 16GB of DDR3 1600 ECC RAM costs you less than
 170€, and it is worth it. ZFS showed me just how many silent corruptions
 can happen on a 'stable' system - errors never seen nor detected thanks
 to using 'standard' RAM.
 
 turn off readahead. ZFS' own readahead and the kernel's clash - badly.
 Turn off the kernel's readahead for a visible performance boost.
 
 use noop as the I/O scheduler.
 How do you turn off readahead?

set it with blockdev to 8 (for example). That doesn't turn it off, it
just makes it non-obtrusive.
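Concretely, that blockdev tweak looks like the following. A hedged sketch assuming a hypothetical device /dev/sda (the commands need root and a real disk, so they appear as comments; the runnable part just shows why 8 is effectively "off"):

```shell
# As root:
#   blockdev --setra 8 /dev/sda    # set readahead to 8 sectors
#   blockdev --getra /dev/sda      # verify the new value
# 8 sectors of 512 bytes is only 4 KiB of kernel readahead:
ra_sectors=8
ra_kib=$((ra_sectors * 512 / 1024))
echo "${ra_sectors} sectors = ${ra_kib} KiB"
```

The setting does not survive a reboot, so it typically goes in a local init script or udev rule.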



Re: [gentoo-user] ZFS

2013-09-17 Thread Volker Armin Hemmann
Am 17.09.2013 20:11, schrieb Tanstaafl:
 On 2013-09-17 2:00 PM, Volker Armin Hemmann
 volkerar...@googlemail.com wrote:
 use ECC RAM. Lots of it. 16GB of DDR3 1600 ECC RAM costs you less than
 170€, and it is worth it. ZFS showed me just how many silent corruptions
 can happen on a 'stable' system - errors never seen nor detected thanks
 to using 'standard' RAM.
 
 turn off readahead. ZFS' own readahead and the kernel's clash - badly.
 Turn off the kernel's readahead for a visible performance boost.
 
 use noop as the I/O scheduler.

 Is there a good place to read about these kinds of tuning parameters?


zfsonlinux?
google?



Re: [gentoo-user] ZFS

2013-09-17 Thread Stefan G. Weichinger
Am 17.09.2013 19:34, schrieb Tanstaafl:
 On 2013-09-17 1:07 PM, Michael Orlitzky mich...@orlitzky.com wrote:
 On 09/17/2013 01:00 PM, Tanstaafl wrote:

 But not 6-drive RAID w/ hot spare... ;) Anyone who can't afford to add a
 single additional drive for the peace of mind has no business buying the
 RAID card to begin with...

 Most of our servers only come with 6 drive bays -- that's why I have
 this speech already rehearsed!
 
 Ahh...
 

So what would be the recommended setup with ZFS and 6 drives?

I have to set up a server w/ 8x 1TB in about 2 weeks and consider ZFS as
well, at least for data. So root-fs would go onto 2x 1TB hdds with
conventional partitioning and something like ext4.

6x 1TB would be available for data ... on one hand for a file-server
part ... on the other hand for VMs based on KVM.

The server has 64 gigs of RAM so that won't be a problem here.

I still wonder if the virtual disks for the VMs will run fine on ZFS ...
no way to test it until I am there and set the box up.

S



Re: [gentoo-user] ZFS

2013-09-17 Thread Tanstaafl

On 2013-09-17 1:07 PM, Michael Orlitzky mich...@orlitzky.com wrote:

On 09/17/2013 01:00 PM, Tanstaafl wrote:


But not 6-drive RAID w/ hot spare... ;) Anyone who can't afford to add a
single additional drive for the peace of mind has no business buying the
RAID card to begin with...


Most of our servers only come with 6 drive bays -- that's why I have
this speech already rehearsed!


Ahh...



Re: [gentoo-user] ZFS

2013-09-17 Thread Grant
 Any controller that claims RAID10 on a server with 6 drive bays should
 be able to put all six drives in an array. But you'll get a three-way
 stripe (better performance) instead of a three-way mirror (better fault
 tolerance).

 I forget why I even brought it up. I think it was in order to argue that
 4 drives w/ spare is more tolerant than 6 drives in RAID10. To make that
 argument, we need to be clear about what RAID10 means.

I'm extremely glad you did.  Otherwise I would have booted my new
hardware RAID server and been very disappointed.

- Grant



Re: [gentoo-user] ZFS

2013-09-17 Thread Grant
 Performance doesn't seem to be one of ZFS's strong points.  Is it
 considered suitable for a high-performance server?

 ZFS is one of the fastest FS I am aware of (if not the fastest).
 You need a sufficient amount of RAM to make the ARC useful.

 How much RAM is that?

 1G of RAM per 1TB of data is the recommendation.

 For de-duped data, it is considerably more, something on the order of 6G
 of RAM per 1TB of data.

Well, my entire server uses only about 50GB so I guess I'm OK with the
host's minimum of 16GB RAM.

- Grant



Re: [gentoo-user] ZFS

2013-09-17 Thread Grant
 I have to set up a server w/ 8x 1TB in about 2 weeks and consider ZFS as
 well, at least for data. So root-fs would go onto 2x 1TB hdds with
 conventional partitioning and something like ext4.

Is a layout like this with the data on ZFS and the root-fs on ext4 a
better choice than ZFS all around?

- Grant



Re: [gentoo-user] ZFS

2013-09-17 Thread Grant
 Besides performance, are there any drawbacks to ZFS compared to ext4?

 do yourself three favours:
 
 use ECC RAM. Lots of it. 16GB of DDR3 1600 ECC RAM costs you less than
 170€, and it is worth it. ZFS showed me just how many silent corruptions
 can happen on a 'stable' system - errors never seen nor detected thanks
 to using 'standard' RAM.
 
 turn off readahead. ZFS' own readahead and the kernel's clash - badly.
 Turn off the kernel's readahead for a visible performance boost.
 
 use noop as the I/O scheduler.

Thank you, I'm taking notes.  Please feel free to toss out any more tips.

- Grant



Re: [gentoo-user] ZFS

2013-09-17 Thread Grant
 Besides performance, are there any drawbacks to ZFS compared to ext4?

How about hardened?  Does ZFS have any problems interacting with
grsecurity or a hardened profile?

- Grant



Re: [gentoo-user] ZFS

2013-09-17 Thread Bruce Hill
On Tue, Sep 17, 2013 at 02:11:33PM -0400, Tanstaafl wrote:
 
 Is there a good place to read about these kinds of tuning parameters?

Just wondering if anyone experienced running ZFS on Gentoo finds this wiki
article worthy of use: http://wiki.gentoo.org/wiki/ZFS
-- 
Happy Penguin Computers   ')
126 Fenco Drive   ( \
Tupelo, MS 38801   ^^
supp...@happypenguincomputers.com
662-269-2706 662-205-6424
http://happypenguincomputers.com/

A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?

Don't top-post: http://en.wikipedia.org/wiki/Top_post#Top-posting