Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2018-01-06 Thread Richard Hector
On 03/01/18 14:02, Gene Heskett wrote:
> ... so 
> used to winslow ...

...

> Will it actually happen? Chances are I'd have better results offering a 
> bridge in Sun City AZ for sale...

Is someone used to Winslow likely to be confused in Sun City?

(I've never been to either (or, within my memory, the US at all))

Richard





Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2018-01-04 Thread deloptes
Pascal Hambourg wrote:

> If old things are running, then they have already been set up a long
> time ago, and there is no need to tell what's best.

ok, I take back the "best" - you are really persistent



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2018-01-04 Thread Pascal Hambourg

Le 04/01/2018 à 08:55, deloptes a écrit :

Pascal Hambourg wrote:


How is it better than using an initramfs ?


By the way, if you compile md in the kernel, you should also compile all
necessary host controller and disk drivers in. And expect failure with
current drivers which do not guarantee that a given disk gets the same
device name at each boot.


While your statement is true, I personally use UUID (/etc/fstab) and have
no problem with it at all.


The kernel cannot use UUIDs to mount the root filesystem. Using UUIDs
requires an initramfs.


It is not about not using an initramfs, it is about keeping old things running.


If old things are running, then they have already been set up a long 
time ago, and there is no need to tell what's best.




Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2018-01-04 Thread deloptes
Pascal Hambourg wrote:

> The kernel cannot use UUIDs to mount the root filesystem. Using UUIDs
> requires an initramfs.

I forgot to mention that UUID is not meant to be used (only) by the kernel
or the initrd, but by GRUB - to find the boot md - no idea how it works but
it works.
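
For illustration (UUID made up), the grub.cfg that grub-mkconfig writes on
such a box usually contains something along these lines:

  insmod mdraid09        # or mdraid1x, depending on the superblock format
  insmod ext2
  search --no-floppy --fs-uuid --set=root 0a1b2c3d-5e6f-7a8b-9c0d-1e2f3a4b5c6d

i.e. GRUB assembles the md device with its own raid module and then locates
the boot filesystem by UUID, independent of /dev/sdX naming.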

Those machines were running lilo and grub1 and now grub2 - I never looked
into it. There is a process to upgrade kernel and it works fine.
Perhaps you are right that we need to plan doing it "the modern" way, but I
doubt that one would agree to change partition types etc

regards



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2018-01-03 Thread deloptes
Pascal Hambourg wrote:

> How is it better than using an initramfs ?
> 
>>> By the way, if you compile md in the kernel, you should also compile all
>>> necessary host controller and disk drivers in. And expect failure with
>>> current drivers which do not guarantee that a given disk gets the same
>>> device name at each boot.
>> 
>> While your statement is true, I personally use UUID (/etc/fstab) and have
>> no problem with it at all.
> 
> The kernel cannot use UUIDs to mount the root filesystem. Using UUIDs
> requires an initramfs.


It is not about not using an initramfs, it is about keeping old things running.



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2018-01-03 Thread Pascal Hambourg

Le 03/01/2018 à 00:52, deloptes a écrit :

Pascal Hambourg wrote:


Best for what ?


for booting of raid


How is it better than using an initramfs ?


By the way, if you compile md in the kernel, you should also compile all
necessary host controller and disk drivers in. And expect failure with
current drivers which do not guarantee that a given disk gets the same
device name at each boot.


While your statement is true, I personally use UUID (/etc/fstab) and have no
problem with it at all.


The kernel cannot use UUIDs to mount the root filesystem. Using UUIDs 
requires an initramfs.
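
(Illustration with a made-up UUID: an initramfs can resolve a command line
like

  root=UUID=0a1b2c3d-5e6f-7a8b-9c0d-1e2f3a4b5c6d ro

because its scripts scan the block devices, whereas the bare kernel only
accepts a device name or number for root, e.g. root=/dev/md0.)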




Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2018-01-02 Thread David Christensen

On 01/02/18 15:45, Darac Marjal wrote:

On 02/01/18 23:02, David Christensen wrote:

This is the second incorrect attribution to myself I've seen in the
recent past...


Really? It looks like you [wrote the mis-attributed text].


Yes, I know.



So, what exactly are you complaining about?


I'm not complaining; just pointing it out in case someone wants to 
respond to the mis-attributed text.



I think Gene Heskett is right -- somebody (and/or their mailer) boogered 
the indentation levels along the way once the levels got too deep.  (I 
have to pay careful attention not to do exactly that when I'm editing 
down previous content in a reply.)



David



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2018-01-02 Thread Andy Smith
Hello,

On Mon, Jan 01, 2018 at 11:15:00AM +1300, Joel Wirāmu Pauling wrote:
> The reason Redhat dropped btrfs support is because it currently has no
> native cryptographic function.

Red Hat's former filesystem maintainer Josef Bacik said it was
simply because Red Hat now lacks engineers familiar with btrfs.

https://news.ycombinator.com/item?id=14909843

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2018-01-02 Thread Gene Heskett
On Tuesday 02 January 2018 18:45:00 Darac Marjal wrote:

> On 02/01/18 23:02, David Christensen wrote:
> > On 01/02/18 10:05, deloptes wrote:
> >> David Christensen wrote:
> >>> You can boot with your md device with the following kernel command
> >>> lines:
> >>>
> >>> for old raid arrays without persistent superblocks:
> >>> md=<md device no.>,<raid level>,<chunk size factor>,<fault level>,dev0,dev1,...,devn
> >
> > I did not write that.
> >
> >
> > This is the second incorrect attribution to myself I've seen in the
> > recent past...
>
> Really? It looks like you did.
>
> In message <7603d1fa-88db-25c0-678b-88424fec6...@holgerdanske.com>, I
> see four levels of quoting (so five levels of conversation). The
> message seems to read that you're quoting to...@tuxteam.de quoting
> David Christensen quoting Sven Hartge quoting David Christensen. About
> half way down the message there is an unquoted passage that starts
> "Yes, I saw that when I STFW. It starts with" and continues into the
> above text ("You can boot with").
>
> So, what exactly are you complaining about?
>
> That you didn't write the text originally (i.e. because the author of
> the document did)? You may not have authored the text, but you still
> wrote it into your email (perhaps you copy/pasted it. It's impossible to
> know that).
>
After 4 or so quote levels, the chances of its having come thru someone's
email agent who doesn't know about proper quoting, or worse yet is so
used to winslow and the broken quoting it's been doing since dos days,
thinks it's doing it right and won't fix it even if called to his
attention, really are quite astronomical.

After all these years, I only object if something not correct has been
mis-attributed to me. And it's a better than even bet the first person
to reply to my objection will do so with an email agent that doesn't do
it correctly. I swear the damned stuff is self-perpetuating.

Sometimes it's not pc to be so friendly you ignore the broken stuff. But
these lists would get a lot less "friendly" if the perps were called on 
it every time they post.  For a while, but once the offenders fixed 
their agents, I think they would find a new friendliness to a list that 
did enforce it.

Will it actually happen? Chances are I'd have better results offering a 
bridge in Sun City AZ for sale...

> Are you complaining that you didn't write the text as literally shown
> in deloptes' email? The bar or characters in the left column of the
> email are merely a formatting convention to show which parts of the
> email are quoted. You didn't write them, but deloptes added them to
> demarcate your text.
>
> Are you complaining that the text wasn't from an email by you, but
> from someone else? If so, who do you think it should be attributed to
> instead?
>
> > David


Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page 



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2018-01-02 Thread deloptes
Pascal Hambourg wrote:

> Best for what ?

for booting of raid

> Who still uses RAID arrays without persistent superblocks ?

historic reasons - systems aged 10y+

> Who still uses RAID assembly by the kernel instead of mdadm ?

same as above

> All this has been obsoleted by the superblock format 1.x and the use of
> an initrd or initramfs.
> 

true :)
man mdadm
"In-kernel autodetect is not recommended for new installations."

> By the way, if you compile md in the kernel, you should also compile all
> necessary host controller and disk drivers in. And expect failure with
> current drivers which do not guarantee that a given disk gets the same
> device name at each boot.

While your statement is true, I personally use UUID (/etc/fstab) and have no
problem with it at all.
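
For example (UUIDs made up), /etc/fstab entries such as

  UUID=0a1b2c3d-5e6f-7a8b-9c0d-1e2f3a4b5c6d  /      ext4  errors=remount-ro  0  1
  UUID=11223344-5566-7788-99aa-bbccddeeff00  /home  ext4  defaults           0  2

stay valid no matter which /dev/sdX name a disk gets; blkid(8) lists the UUIDs.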

Overall summary - you are right - it should not be needed nowadays
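
(The recommended way now, sketched with a made-up UUID, is a 1.2-superblock
array listed in /etc/mdadm/mdadm.conf and assembled from the initramfs:

  ARRAY /dev/md0 metadata=1.2 UUID=abcd0123:456789ab:cdef0123:456789ab

followed by "update-initramfs -u" so the initramfs picks it up.)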

regards





Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2018-01-02 Thread Darac Marjal
On 02/01/18 23:02, David Christensen wrote:
> On 01/02/18 10:05, deloptes wrote:
>> David Christensen wrote:
>>
>>> You can boot with your md device with the following kernel command
>>> lines:
>>>
>>> for old raid arrays without persistent superblocks:
>>> md=<md device no.>,<raid level>,<chunk size factor>,<fault level>,dev0,dev1,...,devn
>
> I did not write that.
>
>
> This is the second incorrect attribution to myself I've seen in the
> recent past...

Really? It looks like you did.

In message <7603d1fa-88db-25c0-678b-88424fec6...@holgerdanske.com>, I
see four levels of quoting (so five levels of conversation). The message
seems to read that you're quoting to...@tuxteam.de quoting David
Christensen quoting Sven Hartge quoting David Christensen. About half
way down the message there is an unquoted passage that starts "Yes, I
saw that when I STFW. It starts with" and continues into the above text
("You can boot with").

So, what exactly are you complaining about?

That you didn't write the text originally (i.e. because the author of
the document did)? You may not have authored the text, but you still
wrote it into your email (perhaps you copy/pasted it. It's impossible to
know that).

Are you complaining that you didn't write the text as literally shown in
deloptes' email? The bar or characters in the left column of the email
are merely a formatting convention to show which parts of the email are
quoted. You didn't write them, but deloptes added them to demarcate your
text.

Are you complaining that the text wasn't from an email by you, but from
someone else? If so, who do you think it should be attributed to instead?

>
>
> David
>






Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2018-01-02 Thread David Christensen

On 01/02/18 10:05, deloptes wrote:

David Christensen wrote:


You can boot with your md device with the following kernel command
lines:

for old raid arrays without persistent superblocks:
md=<md device no.>,<raid level>,<chunk size factor>,<fault level>,dev0,dev1,...,devn


I did not write that.


This is the second incorrect attribution to myself I've seen in the 
recent past...



David



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2018-01-02 Thread Pascal Hambourg

Le 02/01/2018 à 19:05, deloptes a écrit :

David Christensen wrote (quoting md.txt from the kernel documentation) :


You can boot with your md device with the following kernel command
lines:

for old raid arrays without persistent superblocks:
md=<md device no.>,<raid level>,<chunk size factor>,<fault level>,dev0,dev1,...,devn


yes and best is you compile raid in, so that boot can be also raided


Best for what ?
Who still uses RAID arrays without persistent superblocks ?
Who still uses RAID assembly by the kernel instead of mdadm ?
All this has been obsoleted by the superblock format 1.x and the use of 
an initrd or initramfs.


By the way, if you compile md in the kernel, you should also compile all 
necessary host controller and disk drivers in. And expect failure with 
current drivers which do not guarantee that a given disk gets the same 
device name at each boot.




Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2018-01-02 Thread deloptes
David Christensen wrote:

> You can boot with your md device with the following kernel command
> lines:
> 
> for old raid arrays without persistent superblocks:
> md=<md device no.>,<raid level>,<chunk size factor>,<fault level>,dev0,dev1,...,devn

yes and best is you compile raid in, so that boot can be also raided

regards



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2018-01-02 Thread David Christensen

On 01/02/18 02:29, to...@tuxteam.de wrote:


On Mon, Jan 01, 2018 at 06:01:20PM -0800, David Christensen wrote:

On 12/31/17 14:45, Sven Hartge wrote:

David Christensen  wrote:

 $ man 4 md



 SCRUBBING AND MISMATCHES
 ...
If check was used, then no action is taken to handle the mismatch,  it
is  simply  recorded.   If  repair  was  used,  then a mismatch will be
repaired in the same way that resync repairs arrays.   For RAID5/RAID6
new parity blocks are written.  For RAID1/RAID10, all but one block are
overwritten with the content of that one block.




I wonder how md picks "that one block"?


Only if one drive reports an error. Then data from the good block is
used to overwrite the bad block, hoping the drive remaps the sector and
everything is fine again.

If both devices report no error but differing data has been read,
MD-RAID1 can't know which block is good.

MD-RAID5/6 could calculate all parity combinations and use the data a
majority agrees upon. (I don't know if it does it, though).

I tried looking at the Kernel RAID code, but I must admit: it is all
Esperanto to me, the code is far too low level for me to understand.


That's why "programming systems product" [1] includes architectural,
functional, design, construction, etc., documentation.


FreeBSD is better in this regard [2].


Look for documentation in the right shelf. Hint: it's called
Documentation (there are books in there, not nuts and bolts).

As an example: I'm on 4.9.0 (plus some assorted Debian-specific
patches). If I don't want to download the whole kaboodle (although
that would be a good idea), I might be tempted to use the nice
gitweb interface at git.kernel.org:

   
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/?h=v4.9

In the "tree" view (provided above for your convenience, but you
might be able to click your way through), you might perhaps find
docs in or around Documentation/md.txt.


Yes, I saw that when I STFW.  It starts with:

Boot time assembly of RAID arrays
-

You can boot with your md device with the following kernel command
lines:

for old raid arrays without persistent superblocks:
  md=<md device no.>,<raid level>,<chunk size factor>,<fault level>,dev0,dev1,...,devn



That is the crumb under the elephant's little toe on its front left foot.


I am looking for a narrative that starts with "We are talking about a 
creature called an 'Elephant'.  This is what the whole animal looks 
like" (and presents a picture).  It should proceed to divide and expand 
that narrative, paying attention to conceptual dependencies, with 
increasing detail (more pictures and diagrams), down to file structures, 
data structures, and algorithms -- e.g. explains the "what" in proper 
English using computer science terms.  And, most importantly: it must 
also explain the "why" at every level.




Reading source only helps if you have a rough idea on what's going
on.


Yes.

"Show me your algorithms, and I will be confused.  Show me your data
structures, and your algorithms will be obvious."
-- Edsger W. Dijkstra (?) (I can't find a citation)


David



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2018-01-02 Thread tomas
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Mon, Jan 01, 2018 at 06:01:20PM -0800, David Christensen wrote:
> On 12/31/17 14:45, Sven Hartge wrote:
> >David Christensen  wrote:
> >> $ man 4 md
> >
> >> SCRUBBING AND MISMATCHES
> >> ...
> >>    If check was used, then no action is taken to handle the mismatch, it
> >>    is simply recorded.  If repair was used, then a mismatch will be
> >>    repaired in the same way that resync repairs arrays.  For RAID5/RAID6
> >>    new parity blocks are written.  For RAID1/RAID10, all but one block are
> >>    overwritten with the content of that one block.
> >
> >
> >>I wonder how md picks "that one block"?
> >
> >Only if one drive reports an error. Then data from the good block is
> >used to overwrite the bad block, hoping the drive remaps the sector and
> >everything is fine again.
> >
> >If both devices report no error but differing data has been read,
> >MD-RAID1 can't know which block is good.
> >
> >MD-RAID5/6 could calculate all parity combinations and use the data a
> >majority agrees upon. (I don't know if it does it, though).
> >
> >I tried looking at the Kernel RAID code, but I must admit: it is all
> >Esperanto to me, the code is far too low level for me to understand.
> 
> That's why "programming systems product" [1] includes architectural,
> functional, design, construction, etc., documentation.
> 
> 
> FreeBSD is better in this regard [2].

Look for documentation in the right shelf. Hint: it's called
Documentation (there are books in there, not nuts and bolts).

As an example: I'm on 4.9.0 (plus some assorted Debian-specific
patches). If I don't want to download the whole kaboodle (although
that would be a good idea), I might be tempted to use the nice
gitweb interface at git.kernel.org:

  
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/?h=v4.9

In the "tree" view (provided above for your convenience, but you
might be able to click your way through), you might perhaps find
docs in or around Documentation/md.txt.

Reading source only helps if you have a rough idea on what's going
on.

Cheers
- -- t
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.12 (GNU/Linux)

iEYEARECAAYFAlpLXwUACgkQBcgs9XrR2kbhgACeJKSDGa3qKVFnu24d7me2YR2v
sGYAn310MNqQUtRUQf3YA4UMziMDx4M6
=vVpK
-END PGP SIGNATURE-



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2018-01-01 Thread David Christensen

On 12/31/17 14:45, Sven Hartge wrote:

David Christensen  wrote:

 $ man 4 md



 SCRUBBING AND MISMATCHES
 ...
If check was used, then no action is taken to handle the mismatch,  it
is  simply  recorded.   If  repair  was  used,  then a mismatch will be
repaired in the same way that resync repairs arrays.   For RAID5/RAID6
new parity blocks are written.  For RAID1/RAID10, all but one block are
overwritten with the content of that one block.




I wonder how md picks "that one block"?


Only if one drive reports an error. Then data from the good block is
used to overwrite the bad block, hoping the drive remaps the sector and
everything is fine again.

If both devices report no error but differing data has been read,
MD-RAID1 can't know which block is good.

MD-RAID5/6 could calculate all parity combinations and use the data a
majority agrees upon. (I don't know if it does it, though).

I tried looking at the Kernel RAID code, but I must admit: it is all
Esperanto to me, the code is far too low level for me to understand.


That's why "programming systems product" [1] includes architectural, 
functional, design, construction, etc., documentation.



FreeBSD is better in this regard [2].


David


[1] 
https://www.pearson.com/us/higher-education/program/Brooks-Mythical-Man-Month-The-Essays-on-Software-Engineering-Anniversary-Edition-2nd-Edition/PGM172844.html


[2] 
https://www.pearson.com/us/higher-education/program/Mc-Kusick-Design-and-Implementation-of-the-Free-BSD-Operating-System-The-2nd-Edition/PGM224032.html




Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-31 Thread Sven Hartge
David Christensen  wrote:
> On 12/31/17 09:44, Sven Hartge wrote:
>> David Christensen  wrote:
>>> On 12/30/17 14:38, Matthew Crews wrote:

 The main issue I see with using BTRFS with MDADM is that you lose
 the benefit of bit-rot repair. MDADM can't correct bit rot, but
 BTRFS-Raid (and ZFS raid arrays) can, but only with native raid
 configurations.
>> 
>>> AFAIK:
>> 
>>> 1.  mdadm RAID1 can fix bit rot, so long as one drive has a good
>>> block to fix the others.
 
>> Yes, but it can't fix silent bit-rot, where incorrect bytes are read
>> from the drive without the drive noticing. In that case the Kernel
>> has no way of knowing which bytes are the correct ones, you need some
>> sort of checksum for that.

> My bad -- the only way for md to detect bit-rot is via scrubbing:

> $ man 4 md

> SCRUBBING AND MISMATCHES
> ...
>If check was used, then no action is taken to handle the mismatch,  it
>is  simply  recorded.   If  repair  was  used,  then a mismatch will be
>repaired in the same way that resync repairs arrays.   For RAID5/RAID6
>new parity blocks are written.  For RAID1/RAID10, all but one block are
>overwritten with the content of that one block.


> I wonder how md picks "that one block"?

Only if one drive reports an error. Then data from the good block is
used to overwrite the bad block, hoping the drive remaps the sector and
everything is fine again.

If both devices report no error but differing data has been read,
MD-RAID1 can't know which block is good. 

MD-RAID5/6 could calculate all parity combinations and use the data a
majority agrees upon. (I don't know if it does it, though).

I tried looking at the Kernel RAID code, but I must admit: it is all
Esperanto to me, the code is far too low level for me to understand.

S°

-- 
Sigmentation fault. Core dumped.



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-31 Thread Joel Wirāmu Pauling
The reason Redhat dropped btrfs support is because it currently has no
native cryptographic function. And from the various threads I've read on
the topic there is no easy answer to the problem.

On 1 January 2018 at 06:44, Sven Hartge  wrote:

> David Christensen  wrote:
> > On 12/30/17 14:38, Matthew Crews wrote:
>
> >> The main issue I see with using BTRFS with MDADM is that you lose the
> >> benefit of bit-rot repair. MDADM can't correct bit rot, but
> >> BTRFS-Raid (and ZFS raid arrays) can, but only with native raid
> >> configurations.
>
> > AFAIK:
>
> > 1.  mdadm RAID1 can fix bit rot, so long as one drive has a good block
> > to fix the others.
>
> Yes, but it can't fix silent bit-rot, where incorrect bytes are read
> from the drive without the drive noticing. In that case the Kernel has
> no way of knowing which bytes are the correct ones, you need some sort
> of checksum for that.
>
> Grüße,
> Sven.
>
> --
> Sigmentation fault. Core dumped.
>
>


Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-31 Thread David Christensen

On 12/31/17 09:44, Sven Hartge wrote:

David Christensen  wrote:

On 12/30/17 14:38, Matthew Crews wrote:



The main issue I see with using BTRFS with MDADM is that you lose the
benefit of bit-rot repair. MDADM can't correct bit rot, but
BTRFS-Raid (and ZFS raid arrays) can, but only with native raid
configurations.



AFAIK:



1.  mdadm RAID1 can fix bit rot, so long as one drive has a good block
to fix the others.


Yes, but it can't fix silent bit-rot, where incorrect bytes are read
from the drive without the drive noticing. In that case the Kernel has
no way of knowing which bytes are the correct ones, you need some sort
of checksum for that.


My bad -- the only way for md to detect bit-rot is via scrubbing:

$ man 4 md

SCRUBBING AND MISMATCHES
...
   If check was used, then no action is taken to handle the mismatch, it
   is simply recorded.  If repair was used, then a mismatch will be
   repaired in the same way that resync repairs arrays.  For RAID5/RAID6
   new parity blocks are written.  For RAID1/RAID10, all but one block are
   overwritten with the content of that one block.


I wonder how md picks "that one block"?


I tried to STFW for an md design document -- e.g. one that explains what 
file structures, data structures, algorithms, etc., are used by md and 
why -- nope.
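
For what it's worth, the user-visible side of the scrubbing described above
is just a couple of sysfs knobs (array name md0 assumed):

  echo check  > /sys/block/md0/md/sync_action   # read-only pass, mismatches only counted
  cat /sys/block/md0/md/mismatch_cnt
  echo repair > /sys/block/md0/md/sync_action   # rewrite mismatches as described above

Debian's mdadm package also ships /usr/share/mdadm/checkarray, which the
packaged cron job uses to run such a check periodically.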



David



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-31 Thread Sven Hartge
David Christensen  wrote:
> On 12/30/17 14:38, Matthew Crews wrote:

>> The main issue I see with using BTRFS with MDADM is that you lose the
>> benefit of bit-rot repair. MDADM can't correct bit rot, but
>> BTRFS-Raid (and ZFS raid arrays) can, but only with native raid
>> configurations.

> AFAIK:

> 1.  mdadm RAID1 can fix bit rot, so long as one drive has a good block 
> to fix the others.

Yes, but it can't fix silent bit-rot, where incorrect bytes are read
from the drive without the drive noticing. In that case the Kernel has
no way of knowing which bytes are the correct ones, you need some sort
of checksum for that.

Grüße,
Sven.

-- 
Sigmentation fault. Core dumped.



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-30 Thread David Christensen

On 12/30/17 14:38, Matthew Crews wrote:

The main issue I see with using BTRFS with MDADM is that you lose the benefit 
of bit-rot repair. MDADM can't correct bit rot, but BTRFS-Raid (and ZFS raid 
arrays) can, but only with native raid configurations.


AFAIK:

1.  mdadm RAID1 can fix bit rot, so long as one drive has a good block 
to fix the others.


2.  btrfs on top of mdadm RAID1 will not see the bit rot that mdadm 
RAID1 fixes.
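
For comparison, a rough sketch (mount point made up) of how btrfs's own
checksum verification is driven:

  btrfs scrub start /srv/library
  btrfs scrub status /srv/library

On native btrfs RAID1 a scrub can repair from the good copy; on top of md it
can only report the checksum error, since there is no second btrfs copy.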



David









Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-30 Thread Matthew Crews
> Original Message 
>Subject: Re: Experiences with BTRFS -- is it mature enough for enterprise use?
>Local Time: December 29, 2017 5:37 PM
>UTC Time: December 30, 2017 12:37 AM
>From: j...@jvales.net
> The problem with btrfs-raid10 (with 6 disks): it self-destructed
> on our soon-to-be production server twice in December.

That's a very unusual setup for raid10. I'm trying to understand how such a 
setup actually works. I know BTRFS Raid1 works with 3 disks, where it acts like 
a pseudo-raid5 (but not a true raid5). So with 6 disks in a raid10 setup, would 
it act like a pseudo-raid50?

Side note, with 6 disks, it might be better to use a Raid6 array instead of 
Raid10 or Raid5/50.

The main issue I see with using BTRFS with MDADM is that you lose the benefit 
of bit-rot repair. MDADM can't correct bit rot, but BTRFS-Raid (and ZFS raid 
arrays) can, but only with native raid configurations.



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-30 Thread Pascal Hambourg

Le 30/12/2017 à 00:48, Jan Vales a écrit :


You still can go md-raid + btrfs, if you want some btrfs features.
Snapshots (and send/receive) are what I really love on my laptop and
could not live without anymore.
(fulldisk encryption may be mandatory, as btrfs at least some time ago,
had the tendency to brick itself, if it sees its uuid on multiple disks
at the same time (md-raid1))


How could it see its UUID on multiple disks with the default md 
superblock format 1.2 ?




Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-29 Thread Jan Vales
On 12/30/17 01:26, Matthew Crews wrote:
>>  Original Message 
>> Subject: Re: Experiences with BTRFS -- is it mature enough for enterprise 
>> use?
>> Local Time: December 29, 2017 4:48 PM
>> UTC Time: December 29, 2017 11:48 PM
>> From: j...@jvales.net
> 
>> You still can go md-raid + btrfs, if you want some btrfs features.
> 
> If you're using Raid1 or Raid10, md-raid + btrfs is probably worse than 
> native btrfs Raid1 + Raid10.
> 

The problem with btrfs-raid10 (with 6 disks): it self-destructed
on our soon-to-be production server twice in December.

So no way btrfs-raid is going to see production on our machines anytime
soon.
It seems to work without issues on md-raid6 + luks + btrfs so far...
So hopefully we will at least have snapshots <3

It's just a file/backup-server and nothing that would need the best disk
performance - if we wanted that, we would have gone for ssd's. :)

br,
Jan Vales
--
I only read plaintext emails.





Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-29 Thread Matthew Crews
> Original Message 
>Subject: Re: Experiences with BTRFS -- is it mature enough for enterprise use?
>Local Time: December 29, 2017 4:48 PM
>UTC Time: December 29, 2017 11:48 PM
>From: j...@jvales.net

> You still can go md-raid + btrfs, if you want some btrfs features.

If you're using Raid1 or Raid10, md-raid + btrfs is probably worse than native 
btrfs Raid1 + Raid10.



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-29 Thread Jan Vales
On 12/29/17 00:55, Andy Smith wrote:
> The killer feature of ZFS is its checksumming of all data and
> metadata to protect against bitrot and other forms of data
> corruption. The only other filesystem offering this on Linux is
> btrfs, hence the many mentions of ZFS in this thread. Putting the
> filesystem under MD RAID (or hardware RAID) and scrubbing it will
> detect corruption but cannot fix it.
> 

md-raid6 can fix most few-byte issues online.

# dd if=/dev/zero of=/dev/sdb1 bs=1 count=1 seek=1234
md-raid6 scrub will fix that byte.
Remember to always flush disc caches when testing!

But unlike btrfs-raid10, md-raid6 cannot recover a whole disk or
partition getting fully zero-dd'd - with or without a reboot.

I didn't actually test how much of a disk must be zeroed or random'ed
before md-raid6 scrub starts to fail. A few calls to the above dd with
different seek= values and then scrubbing will fix the corrupted bytes every time.
A full zero-dd will not.
-> the drive will be "failed" and you need to re-add it.
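
A rough sketch of such a test run (array/member names made up), after one of
the dd pokes above:

  sync; echo 3 > /proc/sys/vm/drop_caches       # make sure the byte is re-read from disk
  echo repair > /sys/block/md0/md/sync_action   # scrub the array
  cat /sys/block/md0/md/mismatch_cnt

After a full zero-dd the kicked member has to be removed and added back with
mdadm (--remove, then --add), which triggers a full resync.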

Which was the reason why we initially gave btrfs-raid10 a try...
... it would be a really cool FS, if it was as stable as it is on my
laptop (and I really dislike btrfs-raid definitions)

You still can go md-raid + btrfs, if you want some btrfs features.
Snapshots (and send/receive) are what I really love on my laptop and
could not live without anymore.
(fulldisk encryption may be mandatory, as btrfs at least some time ago,
had the tendency to brick itself, if it sees its uuid on multiple disks
at the same time (md-raid1))
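
The snapshot + send/receive workflow I mean, roughly (paths made up):

  btrfs subvolume snapshot -r /home /home/.snapshots/home-2017-12-29
  btrfs send /home/.snapshots/home-2017-12-29 | btrfs receive /mnt/backup

Read-only snapshots made like that can be shipped to another btrfs filesystem,
or incrementally with "btrfs send -p" against an older snapshot.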

br,
Jan Vales
--
I only read plaintext emails.






Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-28 Thread Igor Cicimov
On 27 Dec 2017 6:45 am, "Rick Thomas"  wrote:


Is btrfs mature enough to use in enterprise applications?

If you are using it, I’d like to hear from you about your experiences —
good or bad.

My proposed application is for a small community radio station music
library.
We currently have about 5TB of data in a RAID10 using four 3TB drives, with
ext4 over the RAID.  So we’re about 75% full, growing at the rate of about
1TB/year, so we’ll run out of space by the end of 2018.

I’m proposing to go to three 6TB drives in a btrfs/RAID5 configuration.
This would give us 12TB of usable data space and hold off til the end of
2024 before needing the next upgrade.

Will it work?  Would I be safer with ext4 over RAID5?

Thanks in advance!
Rick


For production I would stick with RAID10 over RAID5 and XFS over ext4, since
it's made for large files, which suits your media storage use case. It also
provides for filesystem backup and restore via xfsdump/xfsrestore, and I
also find xfs_freeze useful for a consistent backup in the case of a secondary
db backup, let's say. Basically it has everything that ext4 has and some more.
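
For example (paths made up):

  xfs_freeze -f /srv/media                       # quiesce while the secondary backup runs
  xfs_freeze -u /srv/media                       # thaw it again
  xfsdump -l 0 -f /backup/media.dump /srv/media  # level-0 (full) dump
  xfsrestore -f /backup/media.dump /srv/media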

As others have mentioned, you should consider ZFS too; it's been stable for
the past year on Linux, and features like compression, dedup, snapshots,
various types of storage pools and even built-in NFS sharing are hard to overlook.


Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-28 Thread Andy Smith
Hello,

On Thu, Dec 28, 2017 at 07:35:27PM +, Glenn English wrote:
> Is there something wrong with ext4 in a RAID1?

Not if you don't need any of the features of ZFS that ext4 lacks,
no. But if you do, then ext4 is not an option.

The killer feature of ZFS is its checksumming of all data and
metadata to protect against bitrot and other forms of data
corruption. The only other filesystem offering this on Linux is
btrfs, hence the many mentions of ZFS in this thread. Putting the
filesystem under MD RAID (or hardware RAID) and scrubbing it will
detect corruption but cannot fix it.

But like everything else, ZFS has its downsides too, so it is a
matter of requirements.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-28 Thread Dan Ritter
On Thu, Dec 28, 2017 at 07:29:06PM +0200, Eero Volotinen wrote:
> That really doesn't sound like critical production use.

It's critical to me. What's your definition?

-dsr-



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-28 Thread Matthew Crews
> Original Message 
>From: ghe2...@gmail.com
>On Thu, Dec 28, 2017 at 5:29 PM, Eero Volotinen eero.voloti...@iki.fi wrote:
>>That really doesn't sound like critical production use.
>>I really cannot recommend zfs on linux for production use. It works better
>> on FreeBSD and it's not included in standard dist due to licence issues.
>>
> Is there something wrong with ext4 in a RAID1?

Nothing wrong per se, just that ZFS or BTRFS are better in many cases.



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-28 Thread Glenn English
On Thu, Dec 28, 2017 at 5:29 PM, Eero Volotinen  wrote:
> That really doesn't sound like critical production use.
>
> I really cannot recommend zfs on linux for production use. It works better
> on FreeBSD and it's not included in standard dist due to licence issues.

Is there something wrong with ext4 in a RAID1?

--
Glenn English



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-28 Thread Eero Volotinen
That really doesn't sound like critical production use.

I really cannot recommend zfs on linux for production use. It works better
on FreeBSD and it's not included in standard dist due to licence issues.

--
Eero

2017-12-28 17:53 GMT+02:00 Dan Ritter :

> On Thu, Dec 28, 2017 at 08:01:44AM +0200, Eero Volotinen wrote:
> > Are you really using it in production?
> >
>
> I'm using ZFS at home (3 pools, including my main server's
> /home and two backup pools) and at work (my desktop machine's
> root and /home, some of the backup servers).
>
> It is much more stable than btrfs, which I used to use at home.
>
> I stick to RAID1 and RAID10 layouts everywhere.
>
> -dsr-
>


Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-28 Thread deloptes
Eero Volotinen wrote:

> Are you really using it in production?

many solaris machines in the past years - good



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-28 Thread Dan Ritter
On Thu, Dec 28, 2017 at 08:01:44AM +0200, Eero Volotinen wrote:
> Are you really using it in production?
> 

I'm using ZFS at home (3 pools, including my main server's
/home and two backup pools) and at work (my desktop machine's
root and /home, some of the backup servers).

It is much more stable than btrfs, which I used to use at home.

I stick to RAID1 and RAID10 layouts everywhere.

-dsr-



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-27 Thread Eero Volotinen
Are you really using it in production?



Eero

28.12.2017 3.12 "deloptes"  kirjoitti:

> Rick Thomas wrote:
>
> > Since it doesn't look like I'll be using BTRFS for my application, I too
> > would appreciate hearing about experiences with ZFS as an alternative.
> > Unfortunately, the application we're using is only available for
> CentOS-6,
> > so we'll have to pressure the developer to release his CentOS-7 code, but
> > we've got a year to do it, so it's probably do-able.
>
> +1 for ZFS
>
>


Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-27 Thread deloptes
Rick Thomas wrote:

> Since it doesn't look like I'll be using BTRFS for my application, I too
> would appreciate hearing about experiences with ZFS as an alternative. 
> Unfortunately, the application we're using is only available for CentOS-6,
> so we'll have to pressure the developer to release his CentOS-7 code, but
> we've got a year to do it, so it's probably do-able.

+1 for ZFS



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-27 Thread Rick Thomas
On Wed, Dec 27, 2017, at 1:46 PM, Tom Dial wrote:
> 
> 
> On 12/27/2017 04:57 AM, Matthew Crews wrote:
> > I wouldn't trust BTRFS in an enterprise environment, but I have good 
> > experience in a personal environment. Make sure you are using modern 
> > kernels though (I wouldn't use anything earlier than 4.4, and realistically 
> > I would use 4.9 or 4.13 or higher), and I definitely would not use RAID5/6.
> > 
> > For an enterprise environment, ZFS wins, hands down.
> 
> Based on prior experience with ZFS under Solaris and FreeBSD, I've been
> considering the possibility of using it with Debian now that it is
> pretty much a first class file system.
> 
> Reports of actual use, pointers, and gotchas, if any, would be useful to
> me and probably others.
> 
> Thanks,
> Tom Dial

Since it doesn't look like I'll be using BTRFS for my application, I too would 
appreciate hearing about experiences with ZFS as an alternative.  
Unfortunately, the application we're using is only available for CentOS-6, so 
we'll have to pressure the developer to release his CentOS-7 code, but we've 
got a year to do it, so it's probably do-able.

Thanks in advance!
Rick



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-27 Thread Tom Dial


On 12/27/2017 04:57 AM, Matthew Crews wrote:
> I wouldn't trust BTRFS in an enterprise environment, but I have good 
> experience in a personal environment. Make sure you are using modern kernels 
> though (I wouldn't use anything earlier than 4.4, and realistically I would 
> use 4.9 or 4.13 or higher), and I definitely would not use RAID5/6.
> 
> For an enterprise environment, ZFS wins, hands down.

Based on prior experience with ZFS under Solaris and FreeBSD, I've been
considering the possibility of using it with Debian now that it is
pretty much a first class file system.

Reports of actual use, pointers, and gotchas, if any, would be useful to
me and probably others.

Thanks,
Tom Dial
> 



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-27 Thread Matthew Crews
I wouldn't trust BTRFS in an enterprise environment, but I have good experience 
in a personal environment. Make sure you are using modern kernels though (I 
wouldn't use anything earlier than 4.4, and realistically I would use 4.9 or 
4.13 or higher), and I definitely would not use RAID5/6.

For an enterprise environment, ZFS wins, hands down.



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-26 Thread David Christensen

On 12/26/17 11:37, Rick Thomas wrote:

Is btrfs mature enough to use in enterprise applications?

If you are using it, I’d like to hear from you about your experiences — good or 
bad.

My proposed application is for a small community radio station music library.
We currently have about 5TB of data in a RAID10 using four 3TB drives, with 
ext4 over the RAID.  So we’re about 75% full, growing at the rate of about 
1TB/year, so we’ll run out of space by the end of 2018.

I’m proposing to go to three 6TB drives in a btrfs/RAID5 configuration.  This 
would give us 12TB of usable data space and hold off til the end of 2024 before 
needing the next upgrade.

Will it work?  Would I be safer with ext4 over RAID5?


Take a look at ZFS before you decide -- either ZFS on Linux or FreeBSD:

https://packages.debian.org/stretch/zfs-dkms

https://www.freebsd.org/


I would suggest building a ZFS pool with two 8 TB mirrored drives.  When 
you want more space, add another mirrored pair.
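
Something like this (pool and device names made up):

  zpool create tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B
  # later, when space gets tight, grow the pool with a second mirrored pair:
  zpool add tank mirror /dev/disk/by-id/ata-DISK_C /dev/disk/by-id/ata-DISK_D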



Live compression and live de-duplication are useful features of ZFS, but 
the killer features are live snapshots and live replication.
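
For example (dataset and host names made up):

  zfs snapshot tank/library@2017-12-26
  zfs send tank/library@2017-12-26 | ssh backuphost zfs receive backup/library

Later snapshots can be replicated incrementally with "zfs send -i".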



David



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-26 Thread Michael Stone

On Tue, Dec 26, 2017 at 05:21:10PM -0500, Roberto C. Sánchez wrote:

I second XFS for your application.  Another to consider might be JFS.
Either one of those would be very mature and suitable for the
enterprise.  I don't have any real experience with JFS


I can't think of any reason to go with JFS. Among other things, XFS is 
actively maintained...


Mike Stone



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-26 Thread Sven Hartge
Roberto C. Sánchez  wrote:
> On Tue, Dec 26, 2017 at 09:48:09PM +0200, Eero Volotinen wrote:

>>use XFS, it's mature and suitable for big storage. (or gluster or
>>ceph?)

> I second XFS for your application.  Another to consider might be JFS.

I believe development of JFS stopped several years ago and I don't
think it is suitable for new deployments into production.

Grüße,
Sven.

-- 
Sigmentation fault. Core dumped.



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-26 Thread Roberto C . Sánchez
On Tue, Dec 26, 2017 at 09:48:09PM +0200, Eero Volotinen wrote:
>use XFS, it's mature and suitable for big storage. (or gluster or ceph?)

I second XFS for your application.  Another to consider might be JFS.
Either one of those would be very mature and suitable for the
enterprise.  I don't have any real experience with JFS, but I used XFS
for many years starting in 2002 or 2003.  These days the sorts of
applications I work with don't really call for delving deeply enough to
warrant worrying about the filesystem, but it sounds like something that
merits evaluation in your case.

In one of the early applications that I dealt with we had very large
files (on the order of 10s to 100s of GB), which was rather large in
those days.  On ext3, a file removal could take many minutes.  The
performance for ext2 was a bit better, but of course you take on added
risk with a non-journaled filesystem.  XFS, on the other hand, could
delete a multi-100 GB file just as quickly as it could delete a 1 KB file.

Regards,

-Roberto

-- 
Roberto C. Sánchez



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-26 Thread Andy Smith
Hi Rick,

On Tue, Dec 26, 2017 at 11:37:32AM -0800, Rick Thomas wrote:
> Is btrfs mature enough to use in enterprise applications?

Not in my opinion. I've dabbled with it at home and based on those
experiences I will not be using it professionally any time soon.

> If you are using it, I’d like to hear from you about your experiences — good 
> or bad.

During the time of Debian wheezy I made use of btrfs for my home
fileserver, which is an HP Microserver with 4x 3.5" SATA drives and
an 8 bay disk chassis with 6 more 3.5" SATA HDDs in it, connected by
eSATA. It had previously been using LVM on top of Linux MD without
issue, but I'd become mindful of the amount of storage that was
being used without consistency checks (except for a weekly MD
scrub).

In order to do this I required a backports kernel and btrfs-tools
from git. I went for one of the more simple btrfs configurations
which is a raid1.

Over the next few years I didn't lose any data, but I did
experience:

- Out of space errors even when there was plenty of space

- Filesystems that went read-only on a device failure even though
  there was enough redundancy

- Filesystems that couldn't be remounted read-write after failed
  device replacement, even though the hardware is hot-swap, due to
  bugs in btrfs which required a kernel upgrade to fix (therefore a
  reboot, despite having the redundancy otherwise).

In summary, btrfs lowered the availability of the system due to
being buggy even in a relatively unexciting configuration. I could
not recommend it for serious use yet.

I am sure there will be plenty of people who've used it for years
without experiencing any issue. I am still subscribed to the btrfs
mailing list though, and unfortunately I still see people on there
reporting serious issues including data loss.

> My proposed application is for a small community radio station
> music library. We currently have about 5TB of data in a RAID10
> using four 3TB drives, with ext4 over the RAID.  So we’re about
> 75% full, growing at the rate of about 1TB/year, so we’ll run out
> of space by the end of 2018.

A year to solve this problem is nice to have, though. :)

> I’m proposing to go to three 6TB drives in a btrfs/RAID5 configuration.

If I were you I'd think very very carefully before using RAID-5 for
anything, btrfs or not.

- Four spindles to three means reduced performance.

- RAID-5 means parity means reduced performance.

- Lose one device and you're operating on just two spindles and
  recalculating parity. It will run like a dog and any further error
  means data loss, and array failure, which can be stressful and
  nerve-wracking to repair.

- I wouldn't like to chance my arm finding previously-unknown bad
  areas on two 6TB devices. When you have a three device RAID-5 and
  one device dies, you REQUIRE every sector on both the other
  devices to be readable in order to reconstruct the data onto the
  new device.

Personally I would not risk 3x6TB in any kind of RAID-5. HDDs are
pretty cheap so I really have to question the wisdom of cutting down
the number of spindles this far, and then using RAID-5.

RAID-10: okay, it "wastes" the most capacity in exchange for better
write performance. If this is a media library I get that maybe you
don't need the write performance (have you benchmarked your current
system??). RAID-10 is what you use when you can afford it, but if
you can't then compromises have to be made. In that case consider 4
or more spindles RAID-6. At least you stand a chance of being able
to replace a failed device before coming across a bad area on
another device.

Spread the extra cost of whatever you need to make it to four
devices in RAID-6 across the expected lifetime of your system (~6
years?) and does it still seem too much to pay?

Finally, the parity RAID levels in btrfs are newer than RAID-1 and
-10 and have seen a lot more bugs. Including really bad data loss
bugs.

One look at https://btrfs.wiki.kernel.org/index.php/RAID56 should be
enough.

> Would I be safer with ext4 over RAID5?

It's a bit of a frying pan / ground zero nuclear blast situation,
really.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting

"I remember the first time I made love.  Perhaps it was not love exactly but I
 made it and it still works." — The League Against Tedium



Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-26 Thread Jan Vales
tl;dr:
save yourself the hassle and dont. go for md-raid5/6 + (luks +) XFS.

long version:

Just last week we migrated our soon-to-be production server (6 disks)
from btrfs-raid10 to md-raid6+XFS, after btrfs managed to die twice in December.

As cool as btrfs-raid/filesystem-level-raid sounds, it seems equally broken atm :(
Not only did they not manage to use "common" definitions of raid - as
in md-raid being the de-facto standard - they get to define certain aspects
of raid and others have to live by that ...

READ THEIR DOCS + MAILING LIST!

For example: when people talk about raid1 with say 4 disks, they expect
that all disks have the same contents... Therefore expecting that losing
3 disks means no data loss.
Well... btrfs decided to stick with some ancient definition of raid1
which is imho more like md-raid10, than md-raid1, as there are only 2
copies of data and you can't make btrfs store more than 2 copies.
Therefore losing 2/4 disks -> btrfs irrecoverably broken.

btrfs-raid1 and btrfs-raid10 seem to be basically the same thing, with
some imho performance optimizations (not striping vs striping), which
should imho not even be user-settable as it makes no sense to go for
suboptimal performance, but then again, maybe I missed some of their
nearly nonexistent documentation pointing out the good part of
btrfs-raid1 ...

Also btrfs-raid5/6 are broken and didn't even survive our in-vm-testing,
before getting a chance on bare-metal.


Also it seems way easier to go "full-raid-encryption" with luks + md-raid.
(Our intention is to be able to send in disks to get them replaced
without having to worry that data could be easily extracted; there is
a USB stick with the luks key sticking out of every machine...)
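
(A sketch of that layering, device names and paths made up:

  mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]1
  cryptsetup luksFormat /dev/md0 /media/usbkey/luks.key
  cryptsetup open --key-file /media/usbkey/luks.key /dev/md0 md0_crypt
  mkfs.xfs /dev/mapper/md0_crypt

so the whole array sits under a single LUKS layer and the key never lives on
the raid itself.)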

If you need some btrfs-features (snapshots <3), consider going
md-raid+luks+btrfs.
* Unsure if it still holds that btrfs will fail horribly if it sees its
uuid on more than one disk, possibly making the
full-disk-encryption-layer mandatory.

br,
Jan


On 12/26/17 20:37, Rick Thomas wrote:
> 
> Is btrfs mature enough to use in enterprise applications?
> 
> If you are using it, I’d like to hear from you about your experiences — good 
> or bad.
> 
> My proposed application is for a small community radio station music library.
> We currently have about 5TB of data in a RAID10 using four 3TB drives, with 
> ext4 over the RAID.  So we’re about 75% full, growing at the rate of about 
> 1TB/year, so we’ll run out of space by the end of 2018.
> 
> I’m proposing to go to three 6TB drives in a btrfs/RAID5 configuration.  This 
> would give us 12TB of usable data space and hold off til the end of 2024 
> before needing the next upgrade.
> 
> Will it work?  Would I be safer with ext4 over RAID5?
> 
> Thanks in advance!
> Rick
> 


-- 
lg
Jan Vales
--
I only read plaintext emails.

Someone @ irc://irc.fsinf.at:6667/tuwien
webIRC: https://frost.fsinf.at/iris/





Re: Experiences with BTRFS -- is it mature enough for enterprise use?

2017-12-26 Thread Eero Volotinen
use XFS, it's mature and suitable for big storage. (or gluster or ceph?)


Eero

26.12.2017 21.45 "Rick Thomas"  kirjoitti:

>
> Is btrfs mature enough to use in enterprise applications?
>
> If you are using it, I’d like to hear from you about your experiences —
> good or bad.
>
> My proposed application is for a small community radio station music
> library.
> We currently have about 5TB of data in a RAID10 using four 3TB drives,
> with ext4 over the RAID.  So we’re about 75% full, growing at the rate of
> about 1TB/year, so we’ll run out of space by the end of 2018.
>
> I’m proposing to go to three 6TB drives in a btrfs/RAID5 configuration.
> This would give us 12TB of usable data space and hold off til the end of
> 2024 before needing the next upgrade.
>
> Will it work?  Would I be safer with ext4 over RAID5?
>
> Thanks in advance!
> Rick
>