Re: [gentoo-user] zfs repair needed (due to fingers being faster than brain)

2021-03-01 Thread Grant Taylor

On 3/1/21 3:25 PM, John Blinka wrote:

HI, Gentooers!


Hi,

So, I typed dd if=/dev/zero of=/dev/sd, and despite 
hitting ctrl-c quite quickly, zeroed out some portion of the initial 
part of a disk.  Which did this to my zfs raidz3 array:


OOPS!!!


 NAME STATE READ WRITE CKSUM
 zfs  DEGRADED 0 0 0
   raidz3-0   DEGRADED 0 0 0
 ata-HGST_HUS724030ALE640_PK1234P8JJJVKP  ONLINE   0 0 0
 ata-HGST_HUS724030ALE640_PK1234P8JJP3AP  ONLINE   0 0 0
 ata-ST4000NM0033-9ZM170_Z1Z80P4C ONLINE   0 0 0
 ata-ST4000NM0033-9ZM170_Z1ZAZ8F1 ONLINE   0 0 0
 14296253848142792483 UNAVAIL  0 0 0  was /dev/disk/by-id/ata-ST4000NM0033-9ZM170_Z1ZAZDJ0-part1
 ata-ST4000NM0033-9ZM170_Z1Z80KG0 ONLINE   0 0 0


Okay.  So the pool is online and the data is accessible.  That's 
actually better than I originally thought.  --  I thought you had 
accidentally damaged part of the ZFS partition that existed on a single 
disk.  --  I've been able to repair this with minimal data loss (zeros) 
with Oracle's help on Solaris in the past.


Aside:  My understanding is that ZFS stores multiple copies of its 
metadata on the disk (assuming single disk) and that it is possible to 
recover a pool if any one of them (or maybe two, for consistency checks) 
is viable.  Though doing so is further into the weeds than you normally 
want to be.


Could have been worse.  I do have backups, and it is raidz3, so all I've 
injured is my pride, but I do want to fix things.  I'd appreciate 
some guidance before I attempt doing this - I have no experience at 
it myself.


First, your pool / its raidz3 is only 'DEGRADED', which means that the 
data is still accessible.  'OFFLINE' would be more problematic.



The steps I envision are

1) zpool offline zfs 14296253848142792483 (What's that number?)


I'm guessing it's ZFS's internal GUID for the device (what zpool status 
falls back to when it can't match the device by path).  You will 
probably need to reference it.


I see no reason to take the pool offline.


2) do something to repair the damaged disk


I don't think you need to do anything at the individual disk level yet.


3) zpool online zfs 


I think you can fix this with the pool online.
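
Roughly, and only as a sketch (check zpool status before and after each 
step): once the partition table is restored, something like

   zpool online zfs 14296253848142792483
   zpool scrub zfs

should bring the device back and verify it.  If ZFS refuses the old 
label, a replace-in-place

   zpool replace zfs 14296253848142792483 /dev/disk/by-id/ata-ST4000NM0033-9ZM170_Z1ZAZDJ0

would trigger a full resilver instead.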

Right now, the device name for the damaged disk is /dev/sda. 
Gdisk says this about it:


Caution: invalid main GPT header,


This is to be expected.


but valid backup; regenerating main header from backup!


This looks promising.


Warning: Invalid CRC on main header data; loaded backup partition table.
Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery & transformation menu to examine the two tables.


I'm assuming that the main partition table is at the start of the disk 
and that it's what got wiped out.


So I'd think that you can look at the 'c' and 'e' options on the 
recovery & transformation menu for options to repair the main partition 
table.
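
If I remember gdisk's menus correctly (verify against your version 
before writing anything), the session would look roughly like:

   # gdisk /dev/sda
   Command (? for help): r    <- recovery & transformation menu
   Recovery/transformation command (? for help): b    <- use backup GPT header to rebuild main
   Recovery/transformation command (? for help): c    <- load backup partition table, rebuilding main
   Recovery/transformation command (? for help): v    <- verify disk
   Recovery/transformation command (? for help): w    <- write table to disk and exit

Nothing is written to the disk until you confirm at 'w'.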



Warning! Main partition table CRC mismatch! Loaded backup partition table
instead of main partition table!


I know.  Thank you for using the backup partition table.


Warning! One or more CRCs don't match. You should repair the disk!


I'm guessing that this is a direct result of the dd oops.  I would want 
more evidence to support it being a larger problem.


The CRC may be calculated over a partially zeroed chunk of disk.  (Chunk 
because I don't know what term is best here and I want to avoid implying 
anything specific or incorrectly.)



Main header: ERROR
Backup header: OK
Main partition table: ERROR
Backup partition table: OK


ACK


Partition table scan:
   MBR: not present
   BSD: not present
   APM: not present
   GPT: damaged

Found invalid MBR and corrupt GPT. What do you want to do? (Using the
GPT MAY permit recovery of GPT data.)
  1 - Use current GPT
  2 - Create blank GPT

Your answer: ( I haven't given one yet)


I'd assume #1, Use current GPT.


I'm not exactly sure what this is telling me.  But I'm guessing it
means that the main partition table is gone, but there's a good
backup.


That's my interpretation too.

It jibes with the description of what happened.


In addition, some, but not all disk id info is gone:
1) /dev/disk/by-id still shows ata-ST4000NM0033-9ZM170_Z1ZAZDJ0 
(the damaged disk) but none of its former partitions


The disk ID still being there may be a symptom / side effect of when 
udev creates the links.  I would expect it to not be there post-reboot.


Well, maybe.  The disk serial number is independent of any data on the disk.

Partitions by ID would probably be gone post reboot (or eject and 
re-insertion).


2) /dev/disk/by-partlabel shows entries for the undamaged disks in 
the pool, but not the damaged one


Okay.  That means that udev is recognizing the change faster than I 
would have expected.



Re: [gentoo-user] zfs repair needed (due to fingers being faster than brain)

2021-03-01 Thread antlists
Firstly, I'll say I'm not experienced, but knowing a fair bit about raid 
and recovering corrupted arrays ...


On 01/03/2021 22:25, John Blinka wrote:

HI, Gentooers!

So, I typed dd if=/dev/zero of=/dev/sd, and despite
hitting ctrl-c quite quickly, zeroed out some portion of the initial
part of a disk.  Which did this to my zfs raidz3 array:

 NAME STATE READ WRITE CKSUM
 zfs  DEGRADED 0 0 0
   raidz3-0   DEGRADED 0 0 0
 ata-HGST_HUS724030ALE640_PK1234P8JJJVKP  ONLINE   0 0 0
 ata-HGST_HUS724030ALE640_PK1234P8JJP3AP  ONLINE   0 0 0
 ata-ST4000NM0033-9ZM170_Z1Z80P4C ONLINE   0 0 0
 ata-ST4000NM0033-9ZM170_Z1ZAZ8F1 ONLINE   0 0 0
 14296253848142792483 UNAVAIL  0 0 0  was /dev/disk/by-id/ata-ST4000NM0033-9ZM170_Z1ZAZDJ0-part1
 ata-ST4000NM0033-9ZM170_Z1Z80KG0 ONLINE   0 0 0

Could have been worse.  I do have backups, and it is raidz3, so all
I've injured is my pride, but I do want to fix things.  I'd
appreciate some guidance before I attempt doing this - I have no
experience at it myself.

The steps I envision are

1) zpool offline zfs 14296253848142792483 (What's that number?)
2) do something to repair the damaged disk
3) zpool online zfs 

Right now, the device name for the damaged disk is /dev/sda.  Gdisk
says this about it:

Caution: invalid main GPT header, but valid backup; regenerating main header
from backup!


The GPT is stored at least twice; this is telling you the primary 
copy is trashed, but the backup seems okay ...


Warning: Invalid CRC on main header data; loaded backup partition table.
Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery & transformation menu to examine the two tables.

Warning! Main partition table CRC mismatch! Loaded backup partition table
instead of main partition table!

Warning! One or more CRCs don't match. You should repair the disk!
Main header: ERROR
Backup header: OK
Main partition table: ERROR
Backup partition table: OK

Partition table scan:
   MBR: not present
   BSD: not present
   APM: not present
   GPT: damaged

Found invalid MBR and corrupt GPT. What do you want to do? (Using the
GPT MAY permit recovery of GPT data.)
  1 - Use current GPT
  2 - Create blank GPT

Your answer: ( I haven't given one yet)

I'm not exactly sure what this is telling me.  But I'm guessing it
means that the main partition table is gone, but there's a good
backup.


Yup. I don't understand that prompt, but I THINK it's saying that if you 
do choose option 1, it will recover your partition table for you.



 In addition, some, but not all disk id info is gone:
1) /dev/disk/by-id still shows ata-ST4000NM0033-9ZM170_Z1ZAZDJ0 (the
damaged disk) but none of its former partitions


Because this is the disk itself; you've only damaged its contents, so 
this entry is completely unaffected.



2) /dev/disk/by-partlabel shows entries for the undamaged disks in the
pool, but not the damaged one
3) /dev/disk/by-partuuid similar to /dev/disk/by-partlabel


For both of these, "part" is short for partition, and you've just 
trashed them ...



4) /dev/disk/by-uuid does not show the damaged disk


Because the uuid is part of the partition table.


This particular disk is from a batch of 4 I bought with the same make
and specification and very similar ids (/dev/disk/by-id).  Can I
repair this disk by copying something off one of those other disks
onto this one? 


GOD NO! You'll start copying uuids, so they'll no longer be unique, and 
things really will be broken!



Is repair just repartitioning - as in the Gentoo
handbook?  Is it as simple as running gdisk and typing 1 to accept
gdisk's attempt at recovering the gpt?  Is running gdisk's recovery
and transformation facilities the way to go (the b option looks like
it's made for exactly this situation)?

Anybody experienced at this and willing to guide me?

Make sure that option 1 really does recover the GPT, then use it. Of 
course, the question then becomes what further damage will rear its head.


You need to make sure that your raidz3 array can recover from a corrupt 
disk. THIS IS IMPORTANT. If you tried to recover an md-raid-5 array from 
this situation you'd almost certainly trash it completely.



Actually, if your setup is raid, I'd just blow out the trashed disk 
completely. Take it out of your system, replace it, and let zfs repair 
itself onto the new disk.


You can then zero out the old disk and it's now a spare.
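
As a sketch (the replacement's by-id name is made up here; use the 
real one):

   zpool replace zfs 14296253848142792483 /dev/disk/by-id/ata-NEWDISK_SERIAL

then watch the resilver finish with zpool status.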

Just be careful here, because I don't know what zfs does, but btrfs by 
default mirrors metadata but not data, so with that you'd think a 
mirrored filesystem could repair itself but it can't ... if you want to 
repair the filesystem without rebuilding from scratch, you need ...

Re: [gentoo-user] zfs emerge failure (solved)

2017-09-26 Thread John Blinka
Rich Freeman had the right clue.

Some time ago, after successfully installing zfs, I changed root's
umask to 0027.  This had the effect of changing the permissions on
/lib/modules/X.Y.Z-gentoo to drwxr-x--- on a subsequent kernel
upgrade.  This prevents emerge (once it switches to user:group
portage:portage) from being able to explore the contents of
/lib/modules/X.Y.Z-gentoo.  Unfortunately for me, spl's configure
script locates the current kernel source by following the
/lib/modules/X.Y.Z-gentoo/build soft link.  And it couldn't do that
with the overly restrictive umask.  The solution was simple: eliminate
the 0027 umask for root, and chmod o+rx /lib/modules/X.Y.Z-gentoo.
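
In concrete terms (X.Y.Z standing in for the kernel version, as above):

   # drop the umask 0027 line from root's shell profile, then:
   chmod o+rx /lib/modules/X.Y.Z-gentoo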

Thanks for all the suggestions.  They all helped.

John Blinka



Re: [gentoo-user] zfs emerge failure

2017-08-23 Thread John Blinka
On Tue, Aug 15, 2017 at 7:13 PM, John Blinka  wrote:
> On Tue, Aug 15, 2017 at 6:54 PM, John Covici  wrote:
>
>> What is your umask?   I had troubles like this when I had too
>> aggressive umask of I think 027 rather than 022.
>
> It is indeed 027, and I wondered whether that might have been what was
> behind the error, hence I tried chmod -R 777 the entire kernel tree.
> But maybe that mask is doing something nasty during the actual config
> step apart from the kernel tree.  I'll try backing off the umask.
> Thanks!
>
> John

Back at debugging the spl configuration failure after a hiatus.  Tried
a umask of 022.  No change in failed spl configuration.

John



Re: [gentoo-user] zfs emerge failure

2017-08-23 Thread John Blinka
On Tue, Aug 15, 2017 at 7:14 PM, John Blinka  wrote:
> On Tue, Aug 15, 2017 at 6:51 PM, Rich Freeman  wrote:
>>
>> Yes, and in fact it is in the output when emerge fails:
>>  /var/tmp/portage/sys-kernel/spl-0.7.1/work/spl-0.7.1/config.log
>

Digging into config.log after a hiatus to attend to other demands of
life.  Comparing config.log output to the code in the corresponding
"configure" script was a little enlightening - at least it was clear
what the configure script was trying to do when it failed.   In
anticipation of throwing some echo statements into a modified script
to help debug further, I tried to see if the configure script could be
invoked using the command line arguments documented in config.log.  To
my surprise, when invoking configure that way, the script proceeded to
completion without any problems.  There's a clue.  Executing on the
command line as user root and group root leads to success, and
executing through portage as portage:portage (judging from the
ownership of files in
/var/tmp/portage/sys-kernel/spl-0.7.1/work/spl-0.7.1) leads to
failure.
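
(One way to test that hypothesis directly, assuming the portage user and
the work directory quoted above:

   cd /var/tmp/portage/sys-kernel/spl-0.7.1/work/spl-0.7.1
   su -s /bin/sh -c './configure' portage    # plus the arguments recorded in config.log

If that reproduces the failure, it points at unreadable files under
/usr/src or /lib/modules.)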

Thanks for the hint.  Back to debugging.

John



Re: [gentoo-user] zfs emerge failure

2017-08-15 Thread John Blinka
On Tue, Aug 15, 2017 at 6:51 PM, Rich Freeman  wrote:
>
> Yes, and in fact it is in the output when emerge fails:
>  /var/tmp/portage/sys-kernel/spl-0.7.1/work/spl-0.7.1/config.log


Ah-ha!  I see it now. That's valuable, and I'll take a closer look.  Thanks!

John



Re: [gentoo-user] zfs emerge failure

2017-08-15 Thread John Blinka
On Tue, Aug 15, 2017 at 6:54 PM, John Covici  wrote:

> What is your umask?   I had troubles like this when I had too
> aggressive umask of I think 027 rather than 022.

It is indeed 027, and I wondered whether that might have been what was
behind the error, hence I tried chmod -R 777 the entire kernel tree.
But maybe that mask is doing something nasty during the actual config
step apart from the kernel tree.  I'll try backing off the umask.
Thanks!

John



Re: [gentoo-user] zfs emerge failure

2017-08-15 Thread John Covici
On Tue, 15 Aug 2017 18:46:59 -0400,
John Blinka wrote:
> 
> On Tue, Aug 15, 2017 at 6:04 PM, Rich Freeman  wrote:
> 
> First, I appreciate your thoughts and comments.
> 
> >
> > I suspect your sources have gotten messed up in some way.  I've run
> > into issues like this when I do something like build a kernel with an
> > odd umask so that the portage user can't read the files it needs to
> > build a module.  Your chmod should have fixed that but there could be
> > something else going on.  It might just be that you didn't prepare the
> > sources?
> 
> Same thought occurred to me, hence the chmod.  Not sure what "prepare
> the sources" is all about; not a step I've ever used with kernels.
> But see below.
> 
> >
> > I actually do all my kernel builds in a tmpfs under /var/tmp these
> > days which keeps my /usr/src/linux pristine.  (make O=/var/tmp/linux
> > modules_install and so on)  It does involve more building during
> > upgrades but I know everything is clean, and I prefer no-issues to
> > faster-builds.
> 
> I have the same preference.  Will have to take a look at following
> your example..
> 
> >
> > In theory that isn't essential, but I would definitely just wipe out
> > /usr/src/linux and unpack clean kernel sources.  If you're using the
> > gentoo-sources package you can just rm -rf the symlink and the actual
> > tree, and just re-emerge the package and it will set up both.  If
> > you're using git then I'd probably wipe it and re-pull as I'm not sure
> > if a clean/reset will actually take care of all the permissions.
> >
> > Then you need to run at least make oldconfig and make modules_prepare
> > before you can build a module against it.  Doing a full kernel build
> > is also fine.
> 
> I think I've done that (multiple times over the past 8 months).  When
> a new kernel shows up as stable in the tree, I do (as root)
> 
> emerge -DuNv gentoo-sources
> set up symlink
> cd into /usr/src/linux
> zcat /proc/config.gz > .config
> make olddefconfig
> make menuconfig (as a sanity check)
> make
> make modules_install
> make install
> 
> I don't know what could have messed up the kernel tree other than
> whatever magic happens behind the scenes in the various make commands.
> 
> Just now tried a make modules_prepare followed by an emerge -1 spl.  Same 
> error.
> 
> Started again from scratch.  Moved the kernel tree I've been working
> with (building kernel, modules, etc.) aside, then re-emerged
> gentoo-sources.  Kernel tree should be pristine now, right?  Then
> copied the config from my running kernel (same version 4.12.5) into
> /usr/src/linux.  Then did a make modules_prepare.  Finally did an
> emerge -1 spl.  Same error as always.  So, as attractive as the idea
> of a messed up kernel tree is to me, I don't think that's the source
> of the problem.
> 
> I think it would be informative if I could somehow see exactly what
> commands are being run when the error occurs.  Is there a way of doing
> that?

What is your umask?   I had troubles like this when I had too
aggressive umask of I think 027 rather than 022.

-- 
Your life is like a penny.  You're going to lose it.  The question is:
How do
you spend it?

 John Covici
 cov...@ccs.covici.com



Re: [gentoo-user] zfs emerge failure

2017-08-15 Thread Rich Freeman
On Tue, Aug 15, 2017 at 3:46 PM, John Blinka  wrote:
>
> I think it would be informative if I could somehow see exactly what
> commands are being run when the error occurs.  Is there a way of doing
> that?
>

Yes, and in fact it is in the output when emerge fails:
 /var/tmp/portage/sys-kernel/spl-0.7.1/work/spl-0.7.1/config.log

-- 
Rich



Re: [gentoo-user] zfs emerge failure

2017-08-15 Thread John Blinka
On Tue, Aug 15, 2017 at 6:04 PM, Rich Freeman  wrote:

First, I appreciate your thoughts and comments.

>
> I suspect your sources have gotten messed up in some way.  I've run
> into issues like this when I do something like build a kernel with an
> odd umask so that the portage user can't read the files it needs to
> build a module.  Your chmod should have fixed that but there could be
> something else going on.  It might just be that you didn't prepare the
> sources?

Same thought occurred to me, hence the chmod.  Not sure what "prepare
the sources" is all about; not a step I've ever used with kernels.
But see below.

>
> I actually do all my kernel builds in a tmpfs under /var/tmp these
> days which keeps my /usr/src/linux pristine.  (make O=/var/tmp/linux
> modules_install and so on)  It does involve more building during
> upgrades but I know everything is clean, and I prefer no-issues to
> faster-builds.

I have the same preference.  Will have to take a look at following
your example..

>
> In theory that isn't essential, but I would definitely just wipe out
> /usr/src/linux and unpack clean kernel sources.  If you're using the
> gentoo-sources package you can just rm -rf the symlink and the actual
> tree, and just re-emerge the package and it will set up both.  If
> you're using git then I'd probably wipe it and re-pull as I'm not sure
> if a clean/reset will actually take care of all the permissions.
>
> Then you need to run at least make oldconfig and make modules_prepare
> before you can build a module against it.  Doing a full kernel build
> is also fine.

I think I've done that (multiple times over the past 8 months).  When
a new kernel shows up as stable in the tree, I do (as root)

emerge -DuNv gentoo-sources
set up symlink
cd into /usr/src/linux
zcat /proc/config.gz > .config
make olddefconfig
make menuconfig (as a sanity check)
make
make modules_install
make install

I don't know what could have messed up the kernel tree other than
whatever magic happens behind the scenes in the various make commands.

Just now tried a make modules_prepare followed by an emerge -1 spl.  Same error.

Started again from scratch.  Moved the kernel tree I've been working
with (building kernel, modules, etc.) aside, then re-emerged
gentoo-sources.  Kernel tree should be pristine now, right?  Then
copied the config from my running kernel (same version 4.12.5) into
/usr/src/linux.  Then did a make modules_prepare.  Finally did an
emerge -1 spl.  Same error as always.  So, as attractive as the idea
of a messed up kernel tree is to me, I don't think that's the source
of the problem.

I think it would be informative if I could somehow see exactly what
commands are being run when the error occurs.  Is there a way of doing
that?

John



Re: [gentoo-user] zfs emerge failure

2017-08-15 Thread Rich Freeman
On Tue, Aug 15, 2017 at 5:19 PM, John Blinka  wrote:
>
> Hope someone can shed some light on continuing emerge failures for zfs
> since gentoo-sources-4.4.39 and zfs-0.6.5.8.  I was able to install
> that version of zfs with that kernel last November on one of my
> machines, but have been unable to upgrade zfs since then, or to
> install it in any newer kernel, or even to re-install the same version
> on the same kernel.

I've been running various zfs+4.4.y versions without issue on a stable
amd64 config (using upstream kernels).

Currently I'm on 0.7.1+4.4.82.

> checking kernel source version... Not found
> configure: error: *** Cannot find UTS_RELEASE definition.
>
...
>
> Googling around for the "Cannot find UTS_RELEASE" complaint reveals
> that a few people have encountered this problem over the years.  It
> appeared in those cases to be attributable to the user running the
> configuration script not having sufficient authority to read
> ./include/generated/utsrelease.h in the kernel tree.

I suspect your sources have gotten messed up in some way.  I've run
into issues like this when I do something like build a kernel with an
odd umask so that the portage user can't read the files it needs to
build a module.  Your chmod should have fixed that but there could be
something else going on.  It might just be that you didn't prepare the
sources?

I actually do all my kernel builds in a tmpfs under /var/tmp these
days which keeps my /usr/src/linux pristine.  (make O=/var/tmp/linux
modules_install and so on)  It does involve more building during
upgrades but I know everything is clean, and I prefer no-issues to
faster-builds.

In theory that isn't essential, but I would definitely just wipe out
/usr/src/linux and unpack clean kernel sources.  If you're using the
gentoo-sources package you can just rm -rf the symlink and the actual
tree, and just re-emerge the package and it will set up both.  If
you're using git then I'd probably wipe it and re-pull as I'm not sure
if a clean/reset will actually take care of all the permissions.

Then you need to run at least make oldconfig and make modules_prepare
before you can build a module against it.  Doing a full kernel build
is also fine.

-- 
Rich



Re: [gentoo-user] zfs io scheduler

2015-02-26 Thread Volker Armin Hemmann
On 23.02.2015 at 22:57, lee wrote:
 Hi,

 is zfs setting the io scheduler to noop for the disks in the pool?

no?

I have it set in an init script.



 I'm currently finding that the IO performance is horrible with a pool
 made from two mirrored disks ...



then set it to noop.
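
For example (sda standing in for each whole disk in the pool; on Gentoo
a file under /etc/local.d/ is one place for it):

   echo noop > /sys/block/sda/queue/scheduler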



Re: [gentoo-user] ZFS on Linux (spl build error)

2013-12-13 Thread Michael Rühmann

On 13.12.2013 18:34, Michael Rühmann wrote:

Hi all,

had some trouble building sys-kernel/spl-0.6.2-r2.

snip
 Emerging (4 of 6) sys-kernel/spl-0.6.2-r2
 * spl-0.6.2.tar.gz SHA256 SHA512 WHIRLPOOL size ;-) 
...[ ok ]
 * spl-0.6.2-p1.tar.xz SHA256 SHA512 WHIRLPOOL size ;-) 
... [ ok ]

 * Determining the location of the kernel source code
 * Found kernel source directory:
 * /usr/src/linux
 * Found kernel object directory:
 * /lib/modules/3.10.17-gentoo/build
 * Found sources for kernel version:
 * 3.10.17-gentoo
 * Checking for suitable kernel configuration options...
 *   CONFIG_ZLIB_DEFLATE:is not set when it should be.
 * Please check to make sure these options are set correctly.
 * Failure to do so may cause unexpected problems.
 * Once you have satisfied these options, please try merging
 * this package again.
 * ERROR: sys-kernel/spl-0.6.2-r2::gentoo failed (setup phase):
 *   Incorrect kernel configuration options
/snap

The problem is now: How do i set CONFIG_ZLIB_DEFLATE in menuconfig?
Maybe i'm completely blind...


Thanks in advance for any help,
Mosh


lol, done!
As i thought...i was blind :D



Re: [gentoo-user] ZFS on Linux (spl build error)

2013-12-13 Thread hasufell
On 12/13/2013 06:48 PM, Michael Rühmann wrote:
 On 13.12.2013 18:34, Michael Rühmann wrote:
 Hi all,

 had some trouble building sys-kernel/spl-0.6.2-r2.

 snip
  Emerging (4 of 6) sys-kernel/spl-0.6.2-r2
  * spl-0.6.2.tar.gz SHA256 SHA512 WHIRLPOOL size ;-)
 ...[ ok ]
  * spl-0.6.2-p1.tar.xz SHA256 SHA512 WHIRLPOOL size ;-)
 ... [ ok ]
  * Determining the location of the kernel source code
  * Found kernel source directory:
  * /usr/src/linux
  * Found kernel object directory:
  * /lib/modules/3.10.17-gentoo/build
  * Found sources for kernel version:
  * 3.10.17-gentoo
  * Checking for suitable kernel configuration options...
  *   CONFIG_ZLIB_DEFLATE:is not set when it should be.
  * Please check to make sure these options are set correctly.
  * Failure to do so may cause unexpected problems.
  * Once you have satisfied these options, please try merging
  * this package again.
  * ERROR: sys-kernel/spl-0.6.2-r2::gentoo failed (setup phase):
  *   Incorrect kernel configuration options
 /snap

 The problem is now: How do i set CONFIG_ZLIB_DEFLATE in menuconfig?
 Maybe i'm completely blind...


 Thanks in advance for any help,
 Mosh

 lol, done!
 As i thought...i was blind :D
 

You could at least say how you did it. *sigh*

maybe even add the kernel part to https://wiki.gentoo.org/wiki/ZFS



Re: [gentoo-user] ZFS on Linux (spl build error)

2013-12-13 Thread Bruce Hill
On Fri, Dec 13, 2013 at 07:59:41PM +0100, hasufell wrote:
 
  The problem is now: How do i set CONFIG_ZLIB_DEFLATE in menuconfig?
  Maybe i'm completely blind...
 
 
  Thanks in advance for any help,
  Mosh
 
  lol, done!
  As i thought...i was blind :D
  
 
 You could at least say how you did it. *sigh*
 
 maybe even add the kernel part to https://wiki.gentoo.org/wiki/ZFS

mingdao@baruch ~ $ zgrep CONFIG_ZLIB_DEFLATE /proc/config.gz 
CONFIG_ZLIB_DEFLATE=y

What *is* so difficult about that?
-- 
Happy Penguin Computers   ')
126 Fenco Drive   ( \
Tupelo, MS 38801   ^^
supp...@happypenguincomputers.com
662-269-2706 662-205-6424
http://happypenguincomputers.com/

A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?

Don't top-post: http://en.wikipedia.org/wiki/Top_post#Top-posting



Re: [gentoo-user] ZFS on Linux (spl build error)

2013-12-13 Thread Volker Armin Hemmann
On 13.12.2013 20:21, Bruce Hill wrote:
 On Fri, Dec 13, 2013 at 07:59:41PM +0100, hasufell wrote:
 The problem is now: How do i set CONFIG_ZLIB_DEFLATE in menuconfig?
 Maybe i'm completely blind...


 Thanks in advance for any help,
 Mosh

 lol, done!
 As i thought...i was blind :D

 You could at least say how you did it. *sigh*

 maybe even add the kernel part to https://wiki.gentoo.org/wiki/ZFS
 mingdao@baruch ~ $ zgrep CONFIG_ZLIB_DEFLATE /proc/config.gz 
 CONFIG_ZLIB_DEFLATE=y

 What *is* so difficult about that?

well, you won't find it in menuconfig. Or at least I couldn't. You can
reach that option in xconfig.

On the other hand ZLIB_DEFLATE is turned on by a dozen other
options, so it is VERY probable you never have to touch it.



Re: [gentoo-user] ZFS on Linux (spl build error)

2013-12-13 Thread hasufell
On 12/13/2013 08:21 PM, Bruce Hill wrote:
 
 What *is* so difficult about that?
 

Nothing.



Re: [gentoo-user] ZFS on Linux (spl build error)

2013-12-13 Thread Neil Bothwick
On Fri, 13 Dec 2013 13:21:42 -0600, Bruce Hill wrote:

  You could at least say how you did it. *sigh*
  
  maybe even add the kernel part to https://wiki.gentoo.org/wiki/ZFS  
 
 mingdao@baruch ~ $ zgrep CONFIG_ZLIB_DEFLATE /proc/config.gz 
 CONFIG_ZLIB_DEFLATE=y
 
 What *is* so difficult about that?

Nothing, but you've not answered the question. You have only shown that
you do have the option set, not how you set it.


-- 
Neil Bothwick

There's a fine line between fishing and standing on the shore looking
like an idiot.




Re: [gentoo-user] ZFS on Linux (spl build error)

2013-12-13 Thread Michael Rühmann

On 13.12.2013 21:08, Volker Armin Hemmann wrote:

On 13.12.2013 20:21, Bruce Hill wrote:

On Fri, Dec 13, 2013 at 07:59:41PM +0100, hasufell wrote:

The problem is now: How do i set CONFIG_ZLIB_DEFLATE in menuconfig?
Maybe i'm completely blind...


Thanks in advance for any help,
Mosh


lol, done!
As i thought...i was blind :D


You could at least say how you did it. *sigh*

maybe even add the kernel part to https://wiki.gentoo.org/wiki/ZFS

mingdao@baruch ~ $ zgrep CONFIG_ZLIB_DEFLATE /proc/config.gz
CONFIG_ZLIB_DEFLATE=y

What *is* so difficult about that?

well, you won't find it in menuconfig. Or at least I couldn't. You can
reach that option in xconfig.

On the other hand ZLIB_DEFLATE is turned on by a dozen other
options, so it is VERY probable you never have to touch it.


Exactly ... I couldn't find it in menuconfig.
The answer was to set CONFIG_CRYPTO_DEFLATE=y manually in .config.
After building the new kernel, CONFIG_ZLIB_DEFLATE was pulled in and spl 
compiled without any problem.


There is nothing difficult about that :-)



Re: [gentoo-user] ZFS on Linux (spl build error)

2013-12-13 Thread Bruce Hill
On Fri, Dec 13, 2013 at 09:08:54PM +0100, Volker Armin Hemmann wrote:
 
 well, you won't find it in menuconfig. Or at least I couldn't. You can
 reach that option in xconfig.
 
 On the other hand ZLIB_DEFLATE is turned on by a dozen other
 options, so it is VERY probable you never have to touch it.

xconfig doesn't turn on options that aren't there in menuconfig ... you just
might be able to navigate xconfig's interface better.

Any time you can't see how to enable a kernel option, just search for it and
look at the Selected By field to see what you need to turn it on:

Symbol: ZLIB_DEFLATE [=y]
Type  : tristate
  Defined at lib/Kconfig:198
  Selected by: PPP_DEFLATE [=n] && NETDEVICES [=y] && PPP [=n] || BTRFS_FS [=n] 
&& BLOCK [=y] || JFFS2_ZLIB [=n] && MISC_FILESYSTEMS [=y] && JFFS2_FS [=n] || 
LOGFS [=n] && MISC_FILESYSTEMS [=y] && (MTD [=n] || BLOCK [=y]

My personal preference is nconfig ... easy to navigate, nice colors on black 
bgd.
-- 
Happy Penguin Computers   ')
126 Fenco Drive   ( \
Tupelo, MS 38801   ^^
supp...@happypenguincomputers.com
662-269-2706 662-205-6424
http://happypenguincomputers.com/

A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?

Don't top-post: http://en.wikipedia.org/wiki/Top_post#Top-posting



Re: [gentoo-user] ZFS on Linux (spl build error)

2013-12-13 Thread Bruce Hill
On Sat, Dec 14, 2013 at 12:53:39AM +0100, Michael Rühmann wrote:

  mingdao@baruch ~ $ zgrep CONFIG_ZLIB_DEFLATE /proc/config.gz
  CONFIG_ZLIB_DEFLATE=y
 
  What *is* so difficult about that?
  well, you won't find it in menuconfig. Or at least I couldn't. You can
  reach that option in xconfig.
 
  On the other hand ZLIB_DEFLATE is turned on by a dozen other
  options, so it is VERY probable you never have to touch it.
 
 exactly.. i couldn't find it in menuconfig.
 The answer was to set CONFIG_CRYPTO_DEFLATE=y manually in .config.
 After building the new kernel, CONFIG_ZLIB_DEFLATE was pulled in and spl 
 compiled without any problem.

YDIW ... it's never a good idea to edit .config by hand. Always use one of the
make someconfig commands.
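
For instance (menu labels from memory, so double-check in your tree):

  make menuconfig
    /                # search
    CRYPTO_DEFLATE   # shows where it lives and what selects it
    # then enable: Cryptographic API ---> <*> Deflate compression algorithm

and ZLIB_DEFLATE gets selected for you.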

 There is nothing difficult about that :-)
-- 
Happy Penguin Computers   ')
126 Fenco Drive   ( \
Tupelo, MS 38801   ^^
supp...@happypenguincomputers.com
662-269-2706 662-205-6424
http://happypenguincomputers.com/

A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?

Don't top-post: http://en.wikipedia.org/wiki/Top_post#Top-posting



Re: [gentoo-user] ZFS on Linux (spl build error)

2013-12-13 Thread Volker Armin Hemmann
On 14.12.2013 01:04, Bruce Hill wrote:
 On Fri, Dec 13, 2013 at 09:08:54PM +0100, Volker Armin Hemmann wrote:
 well, you won't find it in menuconfig. Or at least I couldn't. You can
 reach that option in xconfig.

 On the other hand ZLIB_DEFLATE is turned on by a dozen other
 options, so it is VERY probable you never have to touch it.
 xconfig doesn't turn on options that aren't there in menuconfig ... you just
 might be able to navigate xconfig's interface better.

I saw the option in xconfig. I did not see it in menuconfig.

xconfig has a setting to show options that are only enabled by other
options.

Show normal options: ZLIB_DEFLATE is hidden
Show all options: ZLIB_DEFLATE is visible and can be changed.


 Any time you can't see how to enable a kernel option, just search for it and
 look at the Selected By field to see what you need to turn it on:

 Symbol: ZLIB_DEFLATE [=y]
 Type  : tristate
   Defined at lib/Kconfig:198
   Selected by: PPP_DEFLATE [=n] && NETDEVICES [=y] && PPP [=n] || BTRFS_FS 
 [=n] && BLOCK [=y] || JFFS2_ZLIB [=n] && MISC_FILESYSTEMS [=y] && JFFS2_FS 
 [=n] || LOGFS [=n] && MISC_FILESYSTEMS [=y] && (MTD [=n] || BLOCK [=y]

and you are missing half of it:
Selected by: PPP_DEFLATE [=n] && NETDEVICES [=y] && PPP [=n] || BTRFS_FS
[=n] && BLOCK [=y] || JFFS2_ZLIB [=n] && MISC_FILESYSTEMS [=y] &&
JFFS2_FS [=n] || LOGFS [=n] && MISC_FILESYSTEMS [=y] && (MTD [=n] ||
BLOCK [=y]) || CRYPTO_DEFLATE [=y] && CRYPTO [=y] || CRYPTO_ZLIB [=m] &&
CRYPTO [=y]

oh look: crypto_zlib turns it on too.

 My personal preference is nconfig ... easy to navigate, nice colors on black 
 bgd.

but it seems that nconfig is hiding information from you that xconfig
delivers.



Re: [gentoo-user] ZFS on Linux (spl build error)

2013-12-13 Thread Michael Rühmann

On 14.12.2013 01:04, Bruce Hill wrote:

On Fri, Dec 13, 2013 at 09:08:54PM +0100, Volker Armin Hemmann wrote:

well, you won't find it in menuconfig. Or at least I couldn't. You can
reach that option in xconfig.

On the other hand ZLIB_DEFLATE is turned on by a dozen other
options, so it is VERY probable you never have to touch it.

xconfig doesn't turn on options that aren't there in menuconfig ... you just
might be able to navigate xconfig's interface better.

Any time you can't see how to enable a kernel option, just search for it and
look at the Selected By field to see what you need to turn it on:

Symbol: ZLIB_DEFLATE [=y]
Type  : tristate
   Defined at lib/Kconfig:198
   Selected by: PPP_DEFLATE [=n] && NETDEVICES [=y] && PPP [=n] || BTRFS_FS [=n] && BLOCK [=y] || JFFS2_ZLIB 
[=n] && MISC_FILESYSTEMS [=y] && JFFS2_FS [=n] || LOGFS [=n] && MISC_FILESYSTEMS [=y] && (MTD [=n] || 
BLOCK [=y]

My personal preference is nconfig ... easy to navigate, nice colors on black 
bgd.


There's always a lot to learn :D
I will have a look at nconfig and give it a try in the future.

Many thanks for the tips, Bruce



Re: [gentoo-user] ZFS on Linux (spl build error)

2013-12-13 Thread Bruce Hill
On Fri, Dec 13, 2013 at 09:47:44PM +, Neil Bothwick wrote:
 On Fri, 13 Dec 2013 13:21:42 -0600, Bruce Hill wrote:
 
   You could at least say how you did it. *sigh*
   
   maybe even add the kernel part to https://wiki.gentoo.org/wiki/ZFS  
  
  mingdao@baruch ~ $ zgrep CONFIG_ZLIB_DEFLATE /proc/config.gz 
  CONFIG_ZLIB_DEFLATE=y
  
  What *is* so difficult about that?
 
 Nothing, but you've not answered the question. You have only shown that
 you do have the option set, not how you set it.
 
 
 -- 
 Neil Bothwick
 
 There's a fine line between fishing and standing on the shore looking
 like an idiot.

As per another post, but my mouse paste came up short. Let me try again:

Selected by: PPP_DEFLATE [=n] && NETDEVICES [=y] && PPP [=n] || BTRFS_FS [=n] 
&& BLOCK [=y] || JFFS2_ZLIB [=n] && MISC_FILESYSTEMS [=y] && JFFS2_FS [=n] || 
LOGFS [=n] && MISC_FILESYSTEMS [=y] && (MTD [=n] || BLOCK [=y] ) || PSTORE [=n] 
&& MISC_FILESYSTEMS [=y] || CRYPTO_DEFLATE [=y] && CRYPTO [=y] || CRYPTO_ZLIB 
[=y] && CRYPTO [=y]

Which combination depends upon your use case.
-- 
Happy Penguin Computers   ')
126 Fenco Drive   ( \
Tupelo, MS 38801   ^^
supp...@happypenguincomputers.com
662-269-2706 662-205-6424
http://happypenguincomputers.com/

A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?

Don't top-post: http://en.wikipedia.org/wiki/Top_post#Top-posting



Re: [gentoo-user] ZFS on Linux (spl build error)

2013-12-13 Thread Bruce Hill
On Sat, Dec 14, 2013 at 01:13:06AM +0100, Volker Armin Hemmann wrote:
 
  Any time you can't see how to enable a kernel option, just search for it and
  look at the Selected By field to see what you need to turn it on:
 
  Symbol: ZLIB_DEFLATE [=y]
  Type  : tristate
Defined at lib/Kconfig:198
  Selected by: PPP_DEFLATE [=n] && NETDEVICES [=y] && PPP [=n] || BTRFS_FS 
  [=n] && BLOCK [=y] || JFFS2_ZLIB [=n] && MISC_FILESYSTEMS [=y] && JFFS2_FS 
  [=n] || LOGFS [=n] && MISC_FILESYSTEMS [=y] && (MTD [=n] || BLOCK [=y]
 
 and you are missing half of it:
 Selected by: PPP_DEFLATE [=n] && NETDEVICES [=y] && PPP [=n] || BTRFS_FS
 [=n] && BLOCK [=y] || JFFS2_ZLIB [=n] && MISC_FILESYSTEMS [=y] &&
 JFFS2_FS [=n] || LOGFS [=n] && MISC_FILESYSTEMS [=y] && (MTD [=n] ||
 BLOCK [=y]) || CRYPTO_DEFLATE [=y] && CRYPTO [=y] || CRYPTO_ZLIB [=m] &&
 CRYPTO [=y]
 
 oh look: crypto_zlib turns it on too.
 
  My personal preference is nconfig ... easy to navigate, nice colors on 
  black bgd.
 
 but it seems that nconfig is hiding information from you, that xconfig
 delivers.

No, it was 100% user error. Trying to do 14 things at once, and no matter what
any human says, when we multi-task we don't do *any* of the 14 tasks as well
as we do exec task1 ; exec task2 ; exec task3 ; done

Here's what I should have pasted:

Selected by: PPP_DEFLATE [=n] && NETDEVICES [=y] && PPP [=n] || BTRFS_FS [=n] 
&& BLOCK [=y] || JFFS2_ZLIB [=n] && MISC_FILESYSTEMS [=y] && JFFS2_FS [=n] || 
LOGFS [=n] && MISC_FILESYSTEMS [=y] && (MTD [=n] || BLOCK [=y]) || PSTORE [=n] 
&& MISC_FILESYSTEMS [=y] || CRYPTO_DEFLATE [=y] && CRYPTO [=y] || CRYPTO_ZLIB 
[=y] && CRYPTO [=y]
-- 
Happy Penguin Computers   ')
126 Fenco Drive   ( \
Tupelo, MS 38801   ^^
supp...@happypenguincomputers.com
662-269-2706 662-205-6424
http://happypenguincomputers.com/

A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?

Don't top-post: http://en.wikipedia.org/wiki/Top_post#Top-posting



Re: [gentoo-user] ZFS formating

2013-11-01 Thread Douglas J Hunley
On Fri, Nov 1, 2013 at 9:48 AM, James wirel...@tampabay.rr.com wrote:

 Is the latest version of SystemRescue the best media to use to format
 disks with ZFS? Caveats?


the latest gentoo live image has full zfs support on it


-- 
Douglas J Hunley (doug.hun...@gmail.com)
Twitter: @hunleyd   Web:
douglasjhunley.com
G+: http://google.com/+DouglasHunley


Re: [gentoo-user] ZFS

2013-09-21 Thread thegeezer
On 09/17/2013 08:20 AM, Grant wrote:
 I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
 running.  I'd also like to stripe for performance, resulting in
 RAID10.  It sounds like most hardware controllers do not support
 6-disk RAID10 so ZFS looks very interesting.

 Can I operate ZFS RAID without a hardware RAID controller?

 From a RAID perspective only, is ZFS a better choice than conventional
 software RAID?

 ZFS seems to have many excellent features and I'd like to ease into
 them slowly (like an old man into a nice warm bath).  Does ZFS allow
 you to set up additional features later (e.g. snapshots, encryption,
 deduplication, compression) or is some forethought required when first
 making the filesystem?

 It looks like there are comprehensive ZFS Gentoo docs
 (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
 world about how much extra difficulty/complexity is added to
 installation and ongoing administration when choosing ZFS over ext4?

 Performance doesn't seem to be one of ZFS's strong points.  Is it
 considered suitable for a high-performance server?

 http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA

 Besides performance, are there any drawbacks to ZFS compared to ext4?

 - Grant

Howdy,
been reading this thread and am pretty intrigued; ZFS is much more than
I thought it was.
I was wondering though: does ZFS work as a multiple-client single-storage
cluster such as GFS/OCFS/VMFS/OrangeFS ?
I was also wondering if anyone could share their experience with ZFS on
iscsi - especially considering the readahead /proc changes required on
the same system ?
thanks!




Re: [gentoo-user] ZFS

2013-09-21 Thread Pandu Poluan
On Sep 21, 2013 7:54 PM, thegeezer thegee...@thegeezer.net wrote:

 On 09/17/2013 08:20 AM, Grant wrote:
  I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
  running.  I'd also like to stripe for performance, resulting in
  RAID10.  It sounds like most hardware controllers do not support
  6-disk RAID10 so ZFS looks very interesting.
 
  Can I operate ZFS RAID without a hardware RAID controller?
 
  From a RAID perspective only, is ZFS a better choice than conventional
  software RAID?
 
  ZFS seems to have many excellent features and I'd like to ease into
  them slowly (like an old man into a nice warm bath).  Does ZFS allow
  you to set up additional features later (e.g. snapshots, encryption,
  deduplication, compression) or is some forethought required when first
  making the filesystem?
 
  It looks like there are comprehensive ZFS Gentoo docs
  (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
  world about how much extra difficulty/complexity is added to
  installation and ongoing administration when choosing ZFS over ext4?
 
  Performance doesn't seem to be one of ZFS's strong points.  Is it
  considered suitable for a high-performance server?
 
  http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA
 
  Besides performance, are there any drawbacks to ZFS compared to ext4?
 
  - Grant
 
 Howdy,
 been reading this thread and am pretty intrigued, ZFS is much more than
 i thought it was.
 I was wondering though does ZFS work as a multiple client single storage
 cluster such as GFS/OCFS/VMFS/OrangeFS ?

Well... not really.

Of course you could run ZFS over DRBD, or run any of those filesystems on
top of a zvol...

But I'll say, ZFS is not (yet?) a clustered filesystem.

 I was also wondering if anyone could share their experience with ZFS on
 iscsi - especially considering the readahead /proc changes required on
 same system ?
 thanks!


Although I have no experience of ZFS over iSCSI, I don't think that's any
problem.

As long as ZFS can 'see' the block device when the time comes for it to
mount the pool and all 'child' datasets (or zvols), all should be well.

In this case, however, you would want the iSCSI target to not perform
readahead of its own.  Let ZFS 'instruct' the iSCSI target on which
sectors to read.

Rgds,
--


Re: [gentoo-user] ZFS

2013-09-21 Thread Dale
Joerg Schilling wrote:
 Dale rdalek1...@gmail.com wrote:

 Why do you believe it has forked?
 This project does not even have a source code repository and the fact that
 they refer to illumos for sources makes me wonder whether it is open for 
 contributing.

 Jörg

 Well, it seemed to me that it either changed its name or forked or
 something.  I was hoping that whatever the reason for this, it would
 eventually be in the kernel like ext* and others.  It seems that is not
 the case.  That's why I was asking questions. 
 It is in the Kernel...

 It may not be in the Linux kernel ;-)

 It seems that they just came out of their caves and created a web page.
 Note that until recently, they used secret mailing lists.

 Jörg


Well, I only use the Linux kernel.  When I mention the kernel, I'm only
concerned with the Linux one which I use. 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] ZFS

2013-09-20 Thread Joerg Schilling
Douglas J Hunley doug.hun...@gmail.com wrote:

 1TB drives are right on the border of switching from RAIDZ to RAIDZ2.
 You'll see people argue for both sides at this size, but the 'saner
 default' would be to use RAIDZ2. You're going to lose storage space, but
 gain an extra parity drive (think RAID6). Consumer grade hard drives are
 /going/ to fail during a resilver (Murphy's Law) and that extra parity
 drive is going to save your bacon.

The main advantage of RAIDZ2 is that you can remove one disk and the RAID is 
still operative. Now you put in a bigger disk.  Repeat until you have replaced 
all disks, and you have grown your storage.
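
A sketch of that procedure, with invented pool/device names:

   zpool set autoexpand=on tank
   zpool replace tank old-small-disk new-big-disk    # wait for the resilver
   # ... repeat for every member ...

The extra capacity only shows up after the last disk is swapped (and with
autoexpand on, or after zpool online -e on each device).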

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



Re: [gentoo-user] ZFS

2013-09-20 Thread Tanstaafl
On 2013-09-20 5:17 AM, Joerg Schilling 
joerg.schill...@fokus.fraunhofer.de wrote:

Douglas J Hunley doug.hun...@gmail.com wrote:


1TB drives are right on the border of switching from RAIDZ to RAIDZ2.
You'll see people argue for both sides at this size, but the 'saner
default' would be to use RAIDZ2. You're going to lose storage space, but
gain an extra parity drive (think RAID6). Consumer grade hard drives are
/going/ to fail during a resilver (Murphy's Law) and that extra parity
drive is going to save your bacon.


The main advantage of RAIDZ2 is that you can remove one disk and the RAID is
still operative. Now you put in a bigger disk.  Repeat until you have replaced
all disks, and you have grown your storage.


Interesting, thanks... :)



Re: [gentoo-user] ZFS

2013-09-20 Thread Volker Armin Hemmann
On 19.09.2013 06:47, Grant wrote:
 turn off readahead. ZFS' own readahead and the kernel's clash - badly.
 Turn off kernel's readahead for a visible performance boon.
 You are probably not talking about ZFS readahead but about the ARC.
 which does prefetching. So yes.
 I'm taking notes on this so I want to clarify, when using ZFS,
 readahead in the kernel should be disabled by using blockdev to set it
 to 8?

 - Grant

 .

you can't turn it off (afaik) but 8 is a good value - because it is just
a 4k block.
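
That is, for each disk in the pool (sda as a stand-in):

   blockdev --setra 8 /dev/sda    # 8 x 512-byte sectors = one 4k block
   blockdev --getra /dev/sda      # verify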



Re: [gentoo-user] ZFS

2013-09-20 Thread Grant
 How about hardened?  Does ZFS have any problems interacting with
 grsecurity or a hardened profile?

Has anyone tried hardened and ZFS together?

- Grant



Re: [gentoo-user] ZFS

2013-09-20 Thread Hinnerk van Bruinehsen
On Fri, Sep 20, 2013 at 11:20:53AM -0700, Grant wrote:
  How about hardened?  Does ZFS have any problems interacting with
  grsecurity or a hardened profile?

 Has anyone tried hardened and ZFS together?


Hi,

I did - I had some problems, but I'm not sure if they were caused by the
combination of ZFS and hardened. There were some issues updating kernel and ZFS
(most likely due to ZFS on root and me using ~arch hardened-sources and the
live ebuild for zfs).
There are some hardened options that are known to be not working (constify was
one of them but that should be patched now). I think another one was HIDESYM.

There is a (more or less regularly updated) blogpost by prometheanfire
(installation guide zfs+hardened+luks [1]).
So you could ask him or ryao (he seems to support hardened+zfs at least to
a certain degree).

WKR
Hinnerk


[1] 
https://mthode.org/posts/2013/Sep/gentoo-hardened-zfs-rootfs-with-dm-cryptluks-062/
 




Re: [gentoo-user] ZFS

2013-09-20 Thread Hinnerk van Bruinehsen
On Thu, Sep 19, 2013 at 06:41:47PM -0400, Douglas J Hunley wrote:

 On Tue, Sep 17, 2013 at 12:32 PM, cov...@ccs.covici.com wrote:

 So do I need that overlay at all, or just emerge zfs and its module?


 You do *not* need the overlay. Everything you need is in portage nowadays


Afaik the overlay even comes with a warning from ryao not to use it unless
being told by him to do so (since it's very experimental and includes patches
that were not reviewed). Unless you want to do heavy testing (best while
communicating with ryao) you should use the ebuilds from portage.

WKR
Hinnerk




Re: [gentoo-user] ZFS

2013-09-20 Thread Grant
  How about hardened?  Does ZFS have any problems interacting with
  grsecurity or a hardened profile?

 Has anyone tried hardened and ZFS together?

 I did - I had some problems, but I'm not sure if they were caused by the
 combination of ZFS and hardened. There were some issues updating kernel and 
 ZFS
 (most likely due to ZFS on root and me using ~arch hardened-sources and the
 live ebuild for zfs).
 There are some hardened options that are known to be not working (constify was
 one of them but that should be patched now). I think another one was HIDESYM.

 There is a (more or less regularly updated blogpost by prometheanfire
 (installation guide zfs+hardened+luks [1]).
 So you could ask him or ryao (he seems to support hardened+zfs at least to
 a certain degree).
 [1] 
 https://mthode.org/posts/2013/Sep/gentoo-hardened-zfs-rootfs-with-dm-cryptluks-062/

Thanks for the link.  It doesn't look too bad.

- Grant



Re: [gentoo-user] ZFS

2013-09-19 Thread Dale
Grant wrote:
 Interesting news related to ZFS:

 http://open-zfs.org/wiki/Main_Page
 I wonder if this will be added to the kernel at some point in the
 future?  May even be their intention?
 I think the CDDL license is what's keeping ZFS out of the kernel,
 although some argue that it should be integrated anyway.  OpenZFS
 retains the same license.

 - Grant

 .


Then I wonder why it seems to have forked?  scratches head 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] ZFS

2013-09-19 Thread Pandu Poluan
On Thu, Sep 19, 2013 at 2:40 PM, Dale rdalek1...@gmail.com wrote:
 Grant wrote:
 Interesting news related to ZFS:

 http://open-zfs.org/wiki/Main_Page
 I wonder if this will be added to the kernel at some point in the
 future?  May even be their intention?
 I think the CDDL license is what's keeping ZFS out of the kernel,
 although some argue that it should be integrated anyway.  OpenZFS
 retains the same license.

 - Grant

 .


 Then I wonder why it seems to have forked?  scratches head 


At the moment, only to 'decouple' ZFS development from Illumos development.

Changing a license requires the approval of all rightsholders, and that
takes time.

At least, with a decoupling, ZFS can quickly improve to fulfill the
needs of its users, no longer depending on Illumos' dev cycle.


Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] ZFS

2013-09-19 Thread Joerg Schilling
Grant emailgr...@gmail.com wrote:

  Interesting news related to ZFS:
 
  http://open-zfs.org/wiki/Main_Page
 
  I wonder if this will be added to the kernel at some point in the
  future?  May even be their intention?

 I think the CDDL license is what's keeping ZFS out of the kernel,
 although some argue that it should be integrated anyway.  OpenZFS
 retains the same license.

As long as there are people that claim ZFS was derived from the Linux kernel 
(i.e. is a derived work from GPL code and thus needs to be put under GPL), 
there seems to be a problem.

I am not sure whether it is possible to educate these people...

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



Re: [gentoo-user] ZFS

2013-09-19 Thread Joerg Schilling
Dale rdalek1...@gmail.com wrote:

 Grant wrote:
  Interesting news related to ZFS:
 
  http://open-zfs.org/wiki/Main_Page
  I wonder if this will be added to the kernel at some point in the
  future?  May even be their intention?
  I think the CDDL license is what's keeping ZFS out of the kernel,
  although some argue that it should be integrated anyway.  OpenZFS
  retains the same license.
 
  - Grant
 
  .
 

 Then I wonder why it seems to have forked?  scratches head 

Why do you believe it has forked?
This project does not even have a source code repository and the fact that
they refer to illumos for sources makes me wonder whether it is open for 
contributing.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



Re: [gentoo-user] ZFS

2013-09-19 Thread Dale
Joerg Schilling wrote:
 Dale rdalek1...@gmail.com wrote:

 Grant wrote:
 Interesting news related to ZFS:

 http://open-zfs.org/wiki/Main_Page
 I wonder if this will be added to the kernel at some point in the
 future?  May even be their intention?
 I think the CDDL license is what's keeping ZFS out of the kernel,
 although some argue that it should be integrated anyway.  OpenZFS
 retains the same license.

 - Grant

 .

 Then I wonder why it seems to have forked?  scratches head 
 Why do you believe it has forked?
 This project does not even have a source code repository and the fact that
 they refer to illumos for sources makes me wonder whether it is open for 
 contributing.

 Jörg


Well, it seemed to me that it either changed its name or forked or
something.  I was hoping that whatever the reason for this, it would
eventually be in the kernel like ext* and others.  It seems that is not
the case.  That's why I was asking questions. 

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] ZFS

2013-09-19 Thread Joerg Schilling
Dale rdalek1...@gmail.com wrote:

  Why do you believe it has forked?
  This project does not even have a source code repository and the fact that
  they refer to illumos for sources makes me wonder whether it is open for 
  contributing.
 
  Jörg
 

 Well, it seemed to me that it either changed its name or forked or
 something.  I was hoping that whatever the reason for this, it would
 eventually be in the kernel like ext* and others.  It seems that is not
 the case.  That's why I was asking questions. 

It is in the Kernel...

It may not be in the Linux kernel ;-)

It seems that they just came out of their caves and created a web page.
Note that until recently, they used secret mailing lists.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



Re: [gentoo-user] ZFS

2013-09-19 Thread Douglas J Hunley
On Tue, Sep 17, 2013 at 12:32 PM, cov...@ccs.covici.com wrote:

 So do I need that overlay at all, or just emerge zfs and its module?


You do *not* need the overlay. Everything you need is in portage nowadays


-- 
Douglas J Hunley (doug.hun...@gmail.com)
Twitter: @hunleyd   Web:
douglasjhunley.com
G+: http://goo.gl/sajR3


Re: [gentoo-user] ZFS

2013-09-19 Thread Douglas J Hunley
On Tue, Sep 17, 2013 at 1:54 PM, Stefan G. Weichinger li...@xunil.at wrote:

 I have to set up a server w/ 8x 1TB in about 2 weeks and consider ZFS as
 well, at least for data. So root-fs would go onto 2x 1TB hdds with
 conventional partitioning and something like ext4.

 6x 1TB would be available for data ... on one hand for a file-server
 part ... on the other hand for VMs based on KVM.


1TB drives are right on the border of switching from RAIDZ to RAIDZ2.
You'll see people argue for both sides at this size, but the 'saner
default' would be to use RAIDZ2. You're going to lose storage space, but
gain an extra parity drive (think RAID6). Consumer grade hard drives are
/going/ to fail during a resilver (Murphy's Law) and that extra parity
drive is going to save your bacon.
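
For the 6x 1TB data disks that would be something like (device names
invented; /dev/disk/by-id paths are safer in practice):

   zpool create tank raidz2 disk1 disk2 disk3 disk4 disk5 disk6

i.e. roughly 4TB usable, and any two disks can fail.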

I create ...

-- 
Douglas J Hunley (doug.hun...@gmail.com)
Twitter: @hunleyd   Web:
douglasjhunley.com
G+: http://goo.gl/sajR3


Re: [gentoo-user] ZFS

2013-09-19 Thread Douglas J Hunley
On Tue, Sep 17, 2013 at 12:32 PM, cov...@ccs.covici.com wrote:

 So do I need that overlay at all, or just emerge zfs and its module?


You do *not* need the overlay. Everything you need is in portage nowadays


-- 
Douglas J Hunley (doug.hun...@gmail.com)
Twitter: @hunleyd   Web:
douglasjhunley.com
G+: http://goo.gl/sajR3


Re: [gentoo-user] ZFS

2013-09-18 Thread Stefan G. Weichinger
On 18.09.2013 06:11, Grant wrote:
 I have to set up a server w/ 8x 1TB in about 2 weeks and consider ZFS as
 well, at least for data. So root-fs would go onto 2x 1TB hdds with
 conventional partitioning and something like ext4.
 
 Is a layout like this with the data on ZFS and the root-fs on ext4 a
 better choice than ZFS all around?

Not better ... I just suggested this being conservative and cautious.

With a classic root-fs things would be split ... if the root-fs
breaks or I need to use some live-media to fix things this would all be
non-zfs-related operations.

In the specific case I am still unsure if I want to use zfs at all. And
I could suggest the customer a test-phase ... if it is not working as
intended I could easily roll back the 6 disks to an LVM-based software
RAID etc (moving data aside for the conversion).

I am hesitating because I don't have zfs anywhere productive at
customers ... only for my own purposes in the basement where there is no
real performance issue.

And the customer in case wants reliability ... ok that would be provided
by zfs but I am not as used to admin that as I am with native linux
file systems. It also leads to other topics ... I can only backup VMs
via LVM-based-snapshots (virt-backup.pl) when I use LVM, for example.

rootfs on ZFS or everything on ZFS would have advantages, sure. No
partitioning at all, resizeable zfs-filesystems for everything,
checksums for everything ... you name it.

In my case I have to decide by Sep 25th - installation day ;-)

Stefan




Re: [gentoo-user] ZFS

2013-09-18 Thread Neil Bothwick
On Tue, 17 Sep 2013 23:22:29 -0500, Bruce Hill wrote:

 Just wondering if anyone experienced running ZFS on Gentoo finds this
 wiki article worthy of use: http://wiki.gentoo.org/wiki/ZFS

Yes, it is useful. However, I have recently stopped using the option to
build ZFS into the kernel, as I ran into problems with vdevs reported as
corrupt on the system I was trying this on. They weren't corrupt and
mounted fine in System Rescue CD with modules, and the problem
disappeared when I switched to modules. So use caution and plenty of
testing if you want to go this route. I haven't had a chance to try to
find the exact cause yet.


-- 
Neil Bothwick

Am I ignorant or apathetic? I don't know and don't care!




Re: [gentoo-user] ZFS

2013-09-18 Thread Joerg Schilling
Volker Armin Hemmann volkerar...@googlemail.com wrote:

 turn off readahead. ZFS' own readahead and the kernel's clash - badly.
 Turn off kernel's readahead for a visible performance boon.

You are probably not talking about ZFS readahead but about the ARC.

Jörg

-- 
 EMail: jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
        j...@cs.tu-berlin.de (uni)
        joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL:   http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



Re: [gentoo-user] ZFS

2013-09-18 Thread Stefan G. Weichinger
On 18.09.2013 09:26, Stefan G. Weichinger wrote:

 rootfs on ZFS or everything on ZFS would have advantages, sure. No
 partitioning at all, resizeable zfs-filesystems for everything,
 checksums for everything ... you name it.
 
 In my case I have to decide by Sep 25th - installation day ;-)

Playing around now with a Gentoo guest on a ZFS mirror ... with
raw format via virtio ... nice so far.




Re: [gentoo-user] ZFS

2013-09-18 Thread Volker Armin Hemmann
On 18.09.2013 11:56, Joerg Schilling wrote:
 Volker Armin Hemmann volkerar...@googlemail.com wrote:

 turn off readahead. ZFS' own readahead and the kernel's clash - badly.
 Turn off kernel's readahead for a visible performance boon.
 You are probably not talking about ZFS readahead but about the ARC.

 Jörg


which does prefetching. So yes.



Re: [gentoo-user] ZFS

2013-09-18 Thread Dale
Stefan G. Weichinger wrote:
 Interesting news related to ZFS:

 http://open-zfs.org/wiki/Main_Page



I wonder if this will be added to the kernel at some point in the
future?  Maybe that's even their intention?

Dale

:-)  :-) 

-- 
I am only responsible for what I said ... Not for what you understood or how 
you interpreted my words!




Re: [gentoo-user] ZFS

2013-09-18 Thread Grant
 Interesting news related to ZFS:

 http://open-zfs.org/wiki/Main_Page

 I wonder if this will be added to the kernel at some point in the
 future?  Maybe that's even their intention?

I think the CDDL license is what's keeping ZFS out of the kernel,
although some argue that it should be integrated anyway.  OpenZFS
retains the same license.

- Grant



Re: [gentoo-user] ZFS

2013-09-18 Thread Grant
 turn off readahead. ZFS' own readahead and the kernel's clash - badly.
 Turn off kernel's readahead for a visible performance boon.
 You are probably not talking about ZFS readahead but about the ARC.

 which does prefetching. So yes.

I'm taking notes on this, so I want to clarify: when using ZFS,
readahead in the kernel should be disabled by using blockdev to set it
to 8?

- Grant



Re: [gentoo-user] ZFS

2013-09-17 Thread Marc Stürmer

On 17.09.2013 09:20, Grant wrote:


Performance doesn't seem to be one of ZFS's strong points.  Is it
considered suitable for a high-performance server?


A high performance server for what?

But you've already given yourself the answer: if high performance is
what you are aiming for, it depends on your performance needs, and
ZFS on Linux is probably not going to meet those - yet. It is still evolving.


Of course benchmarks are static; real-world usage is another cup of coffee.


Besides performance, are there any drawbacks to ZFS compared to ext4?


Well, it only comes as a kernel module at the moment. Some people
dislike that.




Re: [gentoo-user] ZFS

2013-09-17 Thread Pandu Poluan
On Tue, Sep 17, 2013 at 2:20 PM, Grant emailgr...@gmail.com wrote:
 I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
 running.  I'd also like to stripe for performance, resulting in
 RAID10.  It sounds like most hardware controllers do not support
 6-disk RAID10 so ZFS looks very interesting.

 Can I operate ZFS RAID without a hardware RAID controller?


Yes. In fact, that's ZFS' preferred mode of operation (i.e., it
handles all redundancy by itself).

 From a RAID perspective only, is ZFS a better choice than conventional
 software RAID?


Yes.

ZFS checksums all blocks during writes, and verifies those checksums
during reads.

It is possible to have 2 bits flipped at the same time across 2 hard
disks. In such a case, a RAID controller will never see the bitflips.
But ZFS will.
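
You can also make ZFS walk the pool and verify (and, where there is
redundancy, repair) every checksum on demand - assuming a pool named
'tank':

  zpool scrub tank       # read everything, verify checksums, self-heal
  zpool status -v tank   # scrub progress plus any unrecoverable files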

 ZFS seems to have many excellent features and I'd like to ease into
 them slowly (like an old man into a nice warm bath).  Does ZFS allow
 you to set up additional features later (e.g. snapshots, encryption,
 deduplication, compression) or is some forethought required when first
 making the filesystem?


Snapshots are built in from the beginning. All you have to do is create
one when you want it.

Deduplication can be turned on and off at will -- but be warned: you
need a HUGE amount of RAM.

Compression can be turned on and off at will. Previously-compressed
data won't become uncompressed unless you modify it.
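
Illustrative commands (pool and dataset names are made up):

  zfs snapshot tank/data@before-upgrade   # instant, no space used up front
  zfs set dedup=on tank/data              # affects newly written blocks only
  zfs set compression=on tank/data        # likewise only applies to new writes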

 It looks like there are comprehensive ZFS Gentoo docs
 (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
 world about how much extra difficulty/complexity is added to
 installation and ongoing administration when choosing ZFS over ext4?


Very, very minimal. So minimal, in fact, that if you don't plan to use
ZFS as a root filesystem, it's laughably simple. You don't even have
to edit /etc/fstab.
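
Mounting is driven by each dataset's mountpoint property instead.
A hypothetical example:

  zfs create tank/data                     # inherits the pool's mountpoint
  zfs set mountpoint=/srv/data tank/data   # put it wherever you like
  zfs mount -a                             # mount all datasets, no fstab entries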

 Performance doesn't seem to be one of ZFS's strong points.  Is it
 considered suitable for a high-performance server?

 http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA


Several points:

1. The added steps of checksumming (and verifying the checksums)
*will* give a performance penalty.

2. When comparing the performance of 1 (one) drive, of course ZFS will
lose. But when you build a ZFS pool out of 3 pairs of mirrored drives,
throughput will increase significantly, as ZFS has the ability to do
'load-balancing' among mirror pairs (or, in ZFS parlance, mirrored
vdevs).
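
A sketch of such a pool built from 3 mirrored pairs (placeholder device
names again); ZFS spreads reads and writes across all three vdevs:

  zpool create tank \
      mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB \
      mirror /dev/disk/by-id/diskC /dev/disk/by-id/diskD \
      mirror /dev/disk/by-id/diskE /dev/disk/by-id/diskF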

Go directly to this post:
http://phoronix.com/forums/showthread.php?79922-Benchmarks-Of-The-New-ZFS-On-Linux-EXT4-Wins&p=326838#post326838

Notice how ZFS won against ext4 in 8 scenarios out of 9. (The only
scenario where ZFS lost is in the single-client RAID-1 scenario)

 Besides performance, are there any drawbacks to ZFS compared to ext4?


1. You need a huge amount of RAM to let ZFS do its magic. But RAM is
cheap nowadays. Data... possibly priceless.

2. Be careful when using ZFS on a server on which processes rapidly
spawn and terminate. ZFS doesn't like memory fragmentation.

For point #2, I can give you a real-life example:

My mail server, for some reason, chokes if too many TLS errors happen.
So, I placed Perdition in front to capture all POP3 connections and
'un-TLS' them. Perdition spawns a new process for *every* connection.
My mail server has 2000 users; I regularly see more than 100 Perdition
child processes, many very ephemeral (i.e., existing for less than 5
seconds). The RAM is undoubtedly *extremely* fragmented. ZFS cries
murder when it cannot allocate a contiguous SLAB of memory to grow
its ARC cache.

OTOH, on another very busy server (a mail archiving server using
MailArchiva, handling 2000+ emails per hour), ZFS runs flawlessly. No
incident _at_all_. Undoubtedly because MailArchiva uses one single huge
process (Java-based) to handle all transactions, so there is no RAM
fragmentation here.


Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] ZFS

2013-09-17 Thread Alan McKinnon
On 17/09/2013 10:05, Pandu Poluan wrote:
 On Tue, Sep 17, 2013 at 2:20 PM, Grant emailgr...@gmail.com wrote:
 I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
 running.  I'd also like to stripe for performance, resulting in
 RAID10.  It sounds like most hardware controllers do not support
 6-disk RAID10 so ZFS looks very interesting.

 Can I operate ZFS RAID without a hardware RAID controller?

 
 Yes. In fact, that's ZFS' preferred mode of operation (i.e., it
 handles all redundancy by itself).

I would take it a step further and say that a hardware RAID controller
actively interferes with ZFS and gets in the way. It gets in the way so
much that one should not do it at all.

Running the controller in JBOD mode is not just a good idea; I'd say
it's a requirement.





-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] ZFS

2013-09-17 Thread Grant
 It looks like there are comprehensive ZFS Gentoo docs
 (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
 world about how much extra difficulty/complexity is added to
 installation and ongoing administration when choosing ZFS over ext4?

 Very very minimal. So minimal, in fact, that if you don't plan to use
 ZFS as a root filesystem, it's laughably simple. You don't even have
 to edit /etc/fstab

I do plan to use it as the root filesystem but it sounds like I
shouldn't worry about extra headaches.

 Performance doesn't seem to be one of ZFS's strong points.  Is it
 considered suitable for a high-performance server?

 http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA

 Go directly to this post:
 http://phoronix.com/forums/showthread.php?79922-Benchmarks-Of-The-New-ZFS-On-Linux-EXT4-Wins&p=326838#post326838

 Notice how ZFS won against ext4 in 8 scenarios out of 9. (The only
 scenario where ZFS lost is in the single-client RAID-1 scenario)

Very encouraging.  I'll let that assuage my performance concerns.

 Besides performance, are there any drawbacks to ZFS compared to ext4?

 1. You need a huge amount of RAM to let ZFS do its magic. But RAM is
 cheap nowadays. Data... possibly priceless.

Is this a requirement for deduplication, or for ZFS in general?

How can you determine how much RAM you'll need?

 2. Be careful when using ZFS on a server on which processes rapidly
 spawn and terminate. ZFS doesn't like memory fragmentation.

I don't think I have that sort of scenario on my server.  Is there a
way to check for memory fragmentation to be sure?
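
For what it's worth, one rough way to eyeball physical-memory
fragmentation on Linux is:

  cat /proc/buddyinfo   # free blocks per allocation order; few high-order
                        # blocks remaining = fragmented memory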

 For point #2, I can give you a real-life example:

 My mail server, for some reason, chokes if too many TLS errors happen.
 So, I placed Perdition in front to capture all POP3 connections and
 'un-TLS' them. Perdition spawns a new process for *every* connection.
 My mail server has 2000 users; I regularly see more than 100 Perdition
 child processes, many very ephemeral (i.e., existing for less than 5
 seconds). The RAM is undoubtedly *extremely* fragmented. ZFS cries
 murder when it cannot allocate a contiguous SLAB of memory to grow
 its ARC cache.

Did you have to switch to a different filesystem on that server?

- Grant



Re: [gentoo-user] ZFS

2013-09-17 Thread Grant
 I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
 running.  I'd also like to stripe for performance, resulting in
 RAID10.  It sounds like most hardware controllers do not support
 6-disk RAID10 so ZFS looks very interesting.

 Can I operate ZFS RAID without a hardware RAID controller?

 Yes. In fact, that's ZFS' preferred mode of operation (i.e., it
 handles all redundancy by itself).

 I would take it a step further and say that a hardware RAID controller
 actively interferes with ZFS and gets in the way. It gets in the way so
 much that one should not do it at all.

 Running the controller in JBOD mode is not just a good idea; I'd say
 it's a requirement.

If I go with ZFS I won't have a RAID controller installed at all.  One
less point of hardware failure too.

- Grant



Re: [gentoo-user] ZFS

2013-09-17 Thread Joerg Schilling
Grant emailgr...@gmail.com wrote:

 Performance doesn't seem to be one of ZFS's strong points.  Is it
 considered suitable for a high-performance server?

ZFS is one of the fastest FS I am aware of (if not the fastest).
You need a sufficient amount of RAM to make the ARC useful.
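
On ZFS on Linux the ARC sizes itself from available RAM; if you need to
cap it, the zfs_arc_max module parameter does that (value in bytes - the
8 GiB here is only an example):

  # /etc/modprobe.d/zfs.conf
  options zfs zfs_arc_max=8589934592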

The only problem I am aware of with ZFS is that if you ask it to
guarantee consistency for a specific file at a specific time, you force
it to become slow.

Jörg

-- 
 EMail: jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
        j...@cs.tu-berlin.de (uni)
        joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL:   http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



Re: [gentoo-user] ZFS

2013-09-17 Thread Tanstaafl

On 2013-09-17 4:05 AM, Pandu Poluan pa...@poluan.info wrote:

2. When comparing the performance of 1 (one) drive, of course ZFS will
lose. But when you build a ZFS pool out of 3 pairs of mirrored drives,
throughput will increase significantly, as ZFS has the ability to do
'load-balancing' among mirror pairs (or, in ZFS parlance, mirrored
vdevs).


Hmmm...

If conventional wisdom is to run a hardware RAID card in JBOD mode, how 
can you also set it up with mirrored pairs at the same time?


So, for best performance & reliability, which is it? JBOD mode? Or
mirrored vdevs?




Re: [gentoo-user] ZFS

2013-09-17 Thread Tanstaafl

On 2013-09-17 3:20 AM, Grant emailgr...@gmail.com wrote:

It sounds like most hardware controllers do not support
6-disk RAID10 so ZFS looks very interesting.


?? RAID 10 simply requires an even number of drives with a minimum of 4.

So, you certainly can have a 6 disk RAID10 - I've got a system with one 
right now in fact.



Can I operate ZFS RAID without a hardware RAID controller?


Yes.





Re: [gentoo-user] ZFS

2013-09-17 Thread Grant
 It sounds like most hardware controllers do not support
 6-disk RAID10 so ZFS looks very interesting.

 ?? RAID 10 simply requires an even number of drives with a minimum of 4.

OK, there seems to be some disagreement on this.  Michael?

- Grant



Re: [gentoo-user] ZFS

2013-09-17 Thread Grant
 Performance doesn't seem to be one of ZFS's strong points.  Is it
 considered suitable for a high-performance server?

 ZFS is one of the fastest FS I am aware of (if not the fastest).
 You need a sufficient amount of RAM to make the ARC useful.

How much RAM is that?

- Grant



Re: [gentoo-user] ZFS

2013-09-17 Thread Joerg Schilling
Grant emailgr...@gmail.com wrote:

  Performance doesn't seem to be one of ZFS's strong points.  Is it
  considered suitable for a high-performance server?
 
  ZFS is one of the fastest FS I am aware of (if not the fastest).
  You need a sufficient amount of RAM to make the ARC useful.

 How much RAM is that?

How much do you have?

File servers usually have at least 20 GB, but 64+ GB is usual...

Jörg

-- 
 EMail: jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
        j...@cs.tu-berlin.de (uni)
        joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL:   http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



Re: [gentoo-user] ZFS

2013-09-17 Thread Michael Orlitzky
On 09/17/2013 09:21 AM, Grant wrote:
 It sounds like most hardware controllers do not support
 6-disk RAID10 so ZFS looks very interesting.

 ?? RAID 10 simply requires an even number of drives with a minimum of 4.
 
 OK, there seems to be some disagreement on this.  Michael?
 

Any controller that claims RAID10 on a server with 6 drive bays should
be able to put all six drives in an array. But you'll get a three-way
stripe (better performance) instead of a three-way mirror (better fault
tolerance).

So,

  A B C
  A B C

and not,

  A B
  A B
  A B

The former gives you more space but slightly less fault tolerance than
four drives with a hot spare.




Re: [gentoo-user] ZFS

2013-09-17 Thread Michael Orlitzky
On 09/17/2013 11:40 AM, Tanstaafl wrote:
 On 2013-09-17 11:18 AM, Michael Orlitzky mich...@orlitzky.com wrote:
 Any controller that claims RAID10 on a server with 6 drive bays should
 be able to put all six drives in an array. But you'll get a three-way
 stripe (better performance) instead of a three-way mirror (better fault
 tolerance).

 So,

A B C
A B C

 and not,

A B
A B
A B

 The former gives you more space but slightly less fault tolerance than
 four drives with a hot spare.
 
 Sorry, don't understand what you're saying.
 
 Are you talking about the difference between RAID1+0 and RAID0+1?

Nope. Both of my examples above are stripes of mirrors, i.e. 1 + 0.


 If not, then please point to *authoritative* docs on what you mean.

http://www.snia.org/tech_activities/standards/curr_standards/ddf


 Googling on just RAID10 doesn't confuse the issues like you seem to be 
 doing (probably my ignorance though)...
 

It's not my fault, the standard confuses the issue =)

Controllers that can do multi-mirroring are next to nonexistent, so they
produce few Google results. You can generally assume that RAID10 with 6
drives is going to give you,

  A B C
  A B C

so you don't get much more fault tolerance by throwing more drives at
it. The controller in Grant's server can do this, I'm sure.

For maximum fault tolerance, what you really want is,

  A B
  A B
  A B

but, like I said, it's hard to find in hardware. The standard I linked
to calls both of these RAID10, thus the confusion.

I forget why I even brought it up. I think it was in order to argue that
4 drives w/ spare is more tolerant than 6 drives in RAID10. To make that
argument, we need to be clear about what RAID10 means.




Re: [gentoo-user] ZFS

2013-09-17 Thread Michael Orlitzky
On 09/17/2013 01:00 PM, Tanstaafl wrote:
 
 But not 6-drive RAID w/ hot spare... ;) Anyone who can't afford to add a
 single additional drive for the peace of mind has no business buying the
 RAID card to begin with...

Most of our servers only come with 6 drive bays -- that's why I have
this speech already rehearsed!





Re: [gentoo-user] ZFS

2013-09-17 Thread Tanstaafl

On 2013-09-17 12:34 PM, Michael Orlitzky mich...@orlitzky.com wrote:

For maximum fault tolerance, what you really want is,

   A B
   A B
   A B

but, like I said, it's hard to find in hardware. The standard I linked
to calls both of these RAID10, thus the confusion.


OK, I see where my confusion came in... when you first referred to this,
you said that the *latter* was the more common version, but I guess you
meant the former (since you're now saying the latter is 'hard to find in
hardware')...



I forget why I even brought it up. I think it was in order to argue that
4 drives w/ spare is more tolerant than 6 drives in RAID10.


But not 6-drive RAID w/ hot spare... ;) Anyone who can't afford to add a
single additional drive for the peace of mind has no business buying the
RAID card to begin with...





Re: [gentoo-user] ZFS

2013-09-17 Thread Alan McKinnon
On 17/09/2013 15:22, Grant wrote:
 Performance doesn't seem to be one of ZFS's strong points.  Is it
 considered suitable for a high-performance server?

 ZFS is one of the fastest FS I am aware of (if not the fastest).
 You need a sufficient amount of RAM to make the ARC useful.
 
 How much RAM is that?
 
 - Grant
 


1G of RAM per 1TB of data is the recommendation.

For de-duped data, it is considerably more, something on the order of 6G
of RAM per 1TB of data.

The first guideline is actually not too onerous. It *seems* like a huge
amount of RAM, but

a) Most modern motherboards can handle that with ease
b) RAM is comparatively cheap
c) It's a once-off purchase
d) RAM is very reliable so once-off really does mean once-off




-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] ZFS

2013-09-17 Thread covici
Pandu Poluan pa...@poluan.info wrote:

 On Tue, Sep 17, 2013 at 2:20 PM, Grant emailgr...@gmail.com wrote:
  I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
  running.  I'd also like to stripe for performance, resulting in
  RAID10.  It sounds like most hardware controllers do not support
  6-disk RAID10 so ZFS looks very interesting.
 
  Can I operate ZFS RAID without a hardware RAID controller?
 
 
 Yes. In fact, that's ZFS' preferred mode of operation (i.e., it
 handles all redundancy by itself).
 
  From a RAID perspective only, is ZFS a better choice than conventional
  software RAID?
 
 
 Yes.
 
 ZFS checksums all blocks during writes, and verifies those checksums
 during reads.
 
 It is possible to have 2 bits flipped at the same time across 2 hard
 disks. In such a case, a RAID controller will never see the bitflips.
 But ZFS will.
 
  ZFS seems to have many excellent features and I'd like to ease into
  them slowly (like an old man into a nice warm bath).  Does ZFS allow
  you to set up additional features later (e.g. snapshots, encryption,
  deduplication, compression) or is some forethought required when first
  making the filesystem?
 
 
 Snapshots are built in from the beginning. All you have to do is create
 one when you want it.
 
 Deduplication can be turned on and off at will -- but be warned: you
 need a HUGE amount of RAM.
 
 Compression can be turned on and off at will. Previously-compressed
 data won't become uncompressed unless you modify it.
 
  It looks like there are comprehensive ZFS Gentoo docs
  (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
  world about how much extra difficulty/complexity is added to
  installation and ongoing administration when choosing ZFS over ext4?
 
 
 Very very minimal. So minimal, in fact, that if you don't plan to use
 ZFS as a root filesystem, it's laughably simple. You don't even have
 to edit /etc/fstab
 
  Performance doesn't seem to be one of ZFS's strong points.  Is it
  considered suitable for a high-performance server?
 
  http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA
 
 
 Several points:
 
 1. The added steps of checksumming (and verifying the checksums)
 *will* give a performance penalty.
 
 2. When comparing the performance of 1 (one) drive, of course ZFS will
 lose. But when you build a ZFS pool out of 3 pairs of mirrored drives,
 throughput will increase significantly, as ZFS has the ability to do
 'load-balancing' among mirror pairs (or, in ZFS parlance, mirrored
 vdevs).
 
 Go directly to this post:
 http://phoronix.com/forums/showthread.php?79922-Benchmarks-Of-The-New-ZFS-On-Linux-EXT4-Wins&p=326838#post326838
 
 Notice how ZFS won against ext4 in 8 scenarios out of 9. (The only
 scenario where ZFS lost is in the single-client RAID-1 scenario)
 
  Besides performance, are there any drawbacks to ZFS compared to ext4?
 
 
 1. You need a huge amount of RAM to let ZFS do its magic. But RAM is
 cheap nowadays. Data... possibly priceless.
 
 2. Be careful when using ZFS on a server on which processes rapidly
 spawn and terminate. ZFS doesn't like memory fragmentation.
 
 For point #2, I can give you a real-life example:
 
 My mail server, for some reason, chokes if too many TLS errors happen.
 So, I placed Perdition in front to capture all POP3 connections and
 'un-TLS' them. Perdition spawns a new process for *every* connection.
 My mail server has 2000 users; I regularly see more than 100 Perdition
 child processes, many very ephemeral (i.e., existing for less than 5
 seconds). The RAM is undoubtedly *extremely* fragmented. ZFS cries
 murder when it cannot allocate a contiguous SLAB of memory to grow
 its ARC cache.
 
 OTOH, on another very busy server (a mail archiving server using
 MailArchiva, handling 2000+ emails per hour), ZFS runs flawlessly. No
 incident _at_all_. Undoubtedly because MailArchiva uses one single huge
 process (Java-based) to handle all transactions, so there is no RAM
 fragmentation here.
So do I need that overlay at all, or just emerge zfs and its module?
Also, I now have LVM volumes, including root but not boot; how do I
convert, and do I have to do anything to my initramfs?

-- 
Your life is like a penny.  You're going to lose it.  The question is:
How do
you spend it?

 John Covici
 cov...@ccs.covici.com



Re: [gentoo-user] ZFS

2013-09-17 Thread covici
Volker Armin Hemmann volkerar...@googlemail.com wrote:

 On 17.09.2013 09:20, Grant wrote:
  I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
  running.  I'd also like to stripe for performance, resulting in
  RAID10.  It sounds like most hardware controllers do not support
  6-disk RAID10 so ZFS looks very interesting.
 
  Can I operate ZFS RAID without a hardware RAID controller?
 
  From a RAID perspective only, is ZFS a better choice than conventional
  software RAID?
 
  ZFS seems to have many excellent features and I'd like to ease into
  them slowly (like an old man into a nice warm bath).  Does ZFS allow
  you to set up additional features later (e.g. snapshots, encryption,
  deduplication, compression) or is some forethought required when first
  making the filesystem?
 
  It looks like there are comprehensive ZFS Gentoo docs
  (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
  world about how much extra difficulty/complexity is added to
  installation and ongoing administration when choosing ZFS over ext4?
 
  Performance doesn't seem to be one of ZFS's strong points.  Is it
  considered suitable for a high-performance server?
 
  http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA
 
  Besides performance, are there any drawbacks to ZFS compared to ext4?
 
 do yourself three favours:
 
 use ECC RAM. Lots of it. 16GB of DDR3-1600 ECC RAM costs you less than 170€.
 And it is worth it. ZFS showed me just how many silent corruptions can
 happen on a 'stable' system - errors never seen nor detected, thanks
 to using 'standard' RAM.
 
 turn off readahead. ZFS' own readahead and the kernel's clash - badly.
 Turn off kernel's readahead for a visible performance boon.
 
 use noop as io-scheduler.

How do you turn off readahead?

-- 
Your life is like a penny.  You're going to lose it.  The question is:
How do
you spend it?

 John Covici
 cov...@ccs.covici.com



Re: [gentoo-user] ZFS

2013-09-17 Thread Volker Armin Hemmann
On 17.09.2013 09:20, Grant wrote:
 I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
 running.  I'd also like to stripe for performance, resulting in
 RAID10.  It sounds like most hardware controllers do not support
 6-disk RAID10 so ZFS looks very interesting.

 Can I operate ZFS RAID without a hardware RAID controller?

 From a RAID perspective only, is ZFS a better choice than conventional
 software RAID?

 ZFS seems to have many excellent features and I'd like to ease into
 them slowly (like an old man into a nice warm bath).  Does ZFS allow
 you to set up additional features later (e.g. snapshots, encryption,
 deduplication, compression) or is some forethought required when first
 making the filesystem?

 It looks like there are comprehensive ZFS Gentoo docs
 (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
 world about how much extra difficulty/complexity is added to
 installation and ongoing administration when choosing ZFS over ext4?

 Performance doesn't seem to be one of ZFS's strong points.  Is it
 considered suitable for a high-performance server?

 http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA

 Besides performance, are there any drawbacks to ZFS compared to ext4?

do yourself three favours:

use ECC RAM. Lots of it. 16GB of DDR3-1600 ECC RAM costs you less than 170€.
And it is worth it. ZFS showed me just how many silent corruptions can
happen on a 'stable' system - errors never seen nor detected, thanks
to using 'standard' RAM.

turn off readahead. ZFS' own readahead and the kernel's clash - badly.
Turn off kernel's readahead for a visible performance boon.

use noop as io-scheduler.
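
For the record, the scheduler can be switched per disk at runtime (sda
is an example; persist it via the elevator=noop kernel parameter or an
udev rule):

  cat /sys/block/sda/queue/scheduler           # current one shown in [brackets]
  echo noop > /sys/block/sda/queue/scheduler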



Re: [gentoo-user] ZFS

2013-09-17 Thread Tanstaafl
On 2013-09-17 2:00 PM, Volker Armin Hemmann volkerar...@googlemail.com 
wrote:

use ECC RAM. Lots of it. 16GB of DDR3-1600 ECC RAM costs you less than 170€.
And it is worth it. ZFS showed me just how many silent corruptions can
happen on a 'stable' system - errors never seen nor detected, thanks
to using 'standard' RAM.

turn off readahead. ZFS' own readahead and the kernel's clash - badly.
Turn off kernel's readahead for a visible performance boon.

use noop as io-scheduler.


Is there a good place to read about these kinds of tuning parameters?



Re: [gentoo-user] ZFS

2013-09-17 Thread Tanstaafl

On 2013-09-17 11:18 AM, Michael Orlitzky mich...@orlitzky.com wrote:

Any controller that claims RAID10 on a server with 6 drive bays should
be able to put all six drives in an array. But you'll get a three-way
stripe (better performance) instead of a three-way mirror (better fault
tolerance).

So,

   A B C
   A B C

and not,

   A B
   A B
   A B

The former gives you more space but slightly less fault tolerance than
four drives with a hot spare.


Sorry, don't understand what you're saying.

Are you talking about the difference between RAID1+0 and RAID0+1?

If not, then please point to *authoritative* docs on what you mean.

Googling on just RAID10 doesn't confuse the issues like you seem to be 
doing (probably my ignorance though)...




Re: [gentoo-user] ZFS

2013-09-17 Thread Volker Armin Hemmann
On 17.09.2013 20:11, cov...@ccs.covici.com wrote:
 Volker Armin Hemmann volkerar...@googlemail.com wrote:

 On 17.09.2013 09:20, Grant wrote:
 I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
 running.  I'd also like to stripe for performance, resulting in
 RAID10.  It sounds like most hardware controllers do not support
 6-disk RAID10 so ZFS looks very interesting.

 Can I operate ZFS RAID without a hardware RAID controller?

 From a RAID perspective only, is ZFS a better choice than conventional
 software RAID?

 ZFS seems to have many excellent features and I'd like to ease into
 them slowly (like an old man into a nice warm bath).  Does ZFS allow
 you to set up additional features later (e.g. snapshots, encryption,
 deduplication, compression) or is some forethought required when first
 making the filesystem?

 It looks like there are comprehensive ZFS Gentoo docs
 (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
 world about how much extra difficulty/complexity is added to
 installation and ongoing administration when choosing ZFS over ext4?

 Performance doesn't seem to be one of ZFS's strong points.  Is it
 considered suitable for a high-performance server?

 http://www.phoronix.com/scan.php?page=news_item&px=MTM1NTA

 Besides performance, are there any drawbacks to ZFS compared to ext4?

 do yourself three favours:

 use ECC RAM. Lots of it. 16GB of DDR3-1600 ECC RAM costs you less than 170€.
 And it is worth it. ZFS showed me just how many silent corruptions can
 happen on a 'stable' system - errors never seen nor detected, thanks
 to using 'standard' RAM.

 turn off readahead. ZFS' own readahead and the kernel's clash - badly.
 Turn off kernel's readahead for a visible performance boon.

 use noop as io-scheduler.
 How do you turn off readahead?

Set it with blockdev to 8 (for example). That doesn't turn it off; it
just makes it non-obtrusive.
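
Something like this, per member disk (device name is an example):

  blockdev --getra /dev/sda     # current readahead, in 512-byte sectors
  blockdev --setra 8 /dev/sda   # small enough to stay out of the ARC's way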



Re: [gentoo-user] ZFS

2013-09-17 Thread Volker Armin Hemmann
On 17.09.2013 20:11, Tanstaafl wrote:
 On 2013-09-17 2:00 PM, Volker Armin Hemmann
 volkerar...@googlemail.com wrote:
 use ECC RAM. Lots of it. 16GB of DDR3-1600 ECC RAM costs you less than 170€.
 And it is worth it. ZFS showed me just how many silent corruptions can
 happen on a 'stable' system - errors never seen nor detected, thanks
 to using 'standard' RAM.

 turn off readahead. ZFS' own readahead and the kernel's clash - badly.
 Turn off kernel's readahead for a visible performance boon.

 use noop as io-scheduler.

 Is there a good place to read about these kinds of tuning parameters?


zfsonlinux?
google?



Re: [gentoo-user] ZFS

2013-09-17 Thread Stefan G. Weichinger
On 17.09.2013 19:34, Tanstaafl wrote:
 On 2013-09-17 1:07 PM, Michael Orlitzky mich...@orlitzky.com wrote:
 On 09/17/2013 01:00 PM, Tanstaafl wrote:

 But not 6-drive RAID w/ hot spare... ;) Anyone who can't afford to add a
 single additional drive for the peace of mind has no business buying the
 RAID card to begin with...

 Most of our servers only come with 6 drive bays -- that's why I have
 this speech already rehearsed!
 
 Ahh...
 

So what would be the recommended setup with ZFS and 6 drives?

I have to set up a server w/ 8x 1TB in about 2 weeks and consider ZFS as
well, at least for data. So root-fs would go onto 2x 1TB hdds with
conventional partitioning and something like ext4.

6x 1TB would be available for data ... on one hand for a file-server
part ... on the other hand for VMs based on KVM.

The server has 64 gigs of RAM so that won't be a problem here.

I still wonder if the virtual disks for the VMs will run fine on ZFS ...
no way to test it until I am there and set the box up.

S



Re: [gentoo-user] ZFS

2013-09-17 Thread Tanstaafl

On 2013-09-17 1:07 PM, Michael Orlitzky mich...@orlitzky.com wrote:

On 09/17/2013 01:00 PM, Tanstaafl wrote:


But not 6-drive RAID w/ hot spare... ;) Anyone who can't afford to add a
single additional drive for the peace of mind has no business buying the
RAID card to begin with...


Most of our servers only come with 6 drive bays -- that's why I have
this speech already rehearsed!


Ahh...



Re: [gentoo-user] ZFS

2013-09-17 Thread Grant
 Any controller that claims RAID10 on a server with 6 drive bays should
 be able to put all six drives in an array. But you'll get a three-way
 stripe (better performance) instead of a three-way mirror (better fault
 tolerance).

 I forget why I even brought it up. I think it was in order to argue that
 4 drives w/ spare is more tolerant than 6 drives in RAID10. To make that
 argument, we need to be clear about what RAID10 means.

I'm extremely glad you did.  Otherwise I would have booted my new
hardware RAID server and been very disappointed.

- Grant



Re: [gentoo-user] ZFS

2013-09-17 Thread Grant
 Performance doesn't seem to be one of ZFS's strong points.  Is it
 considered suitable for a high-performance server?

 ZFS is one of the fastest FS I am aware of (if not the fastest).
 You need a sufficient amount of RAM to make the ARC useful.

 How much RAM is that?

 1G of RAM per 1TB of data is the recommendation.

 For de-duped data, it is considerably more, something on the order of 6G
 of RAM per 1TB of data.

Well, my entire server uses only about 50GB so I guess I'm OK with the
host's minimum of 16GB RAM.

- Grant



Re: [gentoo-user] ZFS

2013-09-17 Thread Grant
 I have to set up a server w/ 8x 1TB in about 2 weeks and consider ZFS as
 well, at least for data. So root-fs would go onto 2x 1TB hdds with
 conventional partitioning and something like ext4.

Is a layout like this with the data on ZFS and the root-fs on ext4 a
better choice than ZFS all around?

- Grant



Re: [gentoo-user] ZFS

2013-09-17 Thread Grant
 Besides performance, are there any drawbacks to ZFS compared to ext4?

 do yourself three favours:

 use ECC RAM. Lots of it. 16GB of DDR3-1600 ECC RAM costs you less than 170€.
 And it is worth it. ZFS showed me just how many silent corruptions can
 happen on a 'stable' system - errors never seen nor detected, thanks
 to using 'standard' RAM.

 turn off readahead. ZFS' own readahead and the kernel's clash - badly.
 Turn off kernel's readahead for a visible performance boon.

 use noop as io-scheduler.

Thank you, I'm taking notes.  Please feel free to toss out any more tips.

- Grant



Re: [gentoo-user] ZFS

2013-09-17 Thread Grant
 Besides performance, are there any drawbacks to ZFS compared to ext4?

How about hardened?  Does ZFS have any problems interacting with
grsecurity or a hardened profile?

- Grant



Re: [gentoo-user] ZFS

2013-09-17 Thread Bruce Hill
On Tue, Sep 17, 2013 at 02:11:33PM -0400, Tanstaafl wrote:
 
 Is there a good place to read about these kinds of tuning parameters?

Just wondering if anyone experienced in running ZFS on Gentoo finds this wiki
article worthy of use: http://wiki.gentoo.org/wiki/ZFS
-- 
Happy Penguin Computers   ')
126 Fenco Drive   ( \
Tupelo, MS 38801   ^^
supp...@happypenguincomputers.com
662-269-2706 662-205-6424
http://happypenguincomputers.com/

A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?

Don't top-post: http://en.wikipedia.org/wiki/Top_post#Top-posting



Re: [gentoo-user] zfs-fuse

2010-06-03 Thread Stefan G. Weichinger
On 30.05.2010 22:49, Stefan G. Weichinger wrote:

 http://bugs.gentoo.org/show_bug.cgi?id=291540

New stable release 0.6.9 is out today.
The ebuild is also in the mentioned bug.