On 03/01/18 14:02, Gene Heskett wrote:
> ... so
> used to winslow ...
...
> Will it actually happen? Chances are I'd have better results offering a
> bridge in Sun City AZ for sale...
Is someone used to Winslow likely to be confused in Sun City?
(I've never been to either (or, within my
Pascal Hambourg wrote:
> If old things are running, then they have already been set up a long
> time ago, and there is no need to tell what's best.
ok, I take back the "best" - you are really persistent
On 04/01/2018 at 08:55, deloptes wrote:
Pascal Hambourg wrote:
How is it better than using an initramfs?
By the way, if you compile md in the kernel, you should also compile all
necessary host controller and disk drivers in. And expect failure with
current drivers which do not guarantee
Pascal Hambourg wrote:
> The kernel cannot use UUIDs to mount the root filesystem. Using UUIDs
> requires an initramfs.
I forgot to mention that UUID is not meant to be used (only) by the kernel
or the initrd, but by GRUB - to find the boot md - no idea how it works but
it works.
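By way of illustration of the GRUB point above (the UUID and file names below are placeholders, not taken from this thread), a grub.cfg stanza that finds the boot filesystem by UUID looks roughly like:

```
# Hypothetical grub.cfg fragment; UUID and paths are made up.
# GRUB's "search" locates the filesystem by UUID at boot time;
# the root=UUID= argument is later resolved by the initramfs,
# not by the kernel itself.
search --no-floppy --fs-uuid --set=root 1234abcd-5678-ef90-1234-abcdef012345
linux /vmlinuz root=UUID=1234abcd-5678-ef90-1234-abcdef012345 ro
initrd /initrd.img
```

This is consistent with Pascal's point: without an initramfs, `root=UUID=...` cannot be resolved, even though GRUB itself can find its files by UUID.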
Those machines
Pascal Hambourg wrote:
> How is it better than using an initramfs?
>
>>> By the way, if you compile md in the kernel, you should also compile all
>>> necessary host controller and disk drivers in. And expect failure with
>>> current drivers which do not guarantee that a given disk gets the same
On 03/01/2018 at 00:52, deloptes wrote:
Pascal Hambourg wrote:
Best for what?
for booting off RAID
How is it better than using an initramfs?
By the way, if you compile md in the kernel, you should also compile all
necessary host controller and disk drivers in. And expect failure with
On 01/02/18 15:45, Darac Marjal wrote:
On 02/01/18 23:02, David Christensen wrote:
This is the second incorrect attribution to me I've seen in the
recent past...
Really? It looks like you [wrote the mis-attributed text].
Yes, I know.
So, what exactly are you complaining about?
I'm
Hello,
On Mon, Jan 01, 2018 at 11:15:00AM +1300, Joel Wirāmu Pauling wrote:
> The reason Red Hat dropped btrfs support is that it currently has no
> native cryptographic function.
Red Hat's former filesystem maintainer Josef Bacik said it was
simply because Red Hat now lacks engineers familiar
On Tuesday 02 January 2018 18:45:00 Darac Marjal wrote:
> On 02/01/18 23:02, David Christensen wrote:
> > On 01/02/18 10:05, deloptes wrote:
> >> David Christensen wrote:
> >>> You can boot with your md device with the following kernel command
> >>> lines:
> >>>
> >>> for old raid arrays without
Pascal Hambourg wrote:
> Best for what?
for booting off RAID
> Who still uses RAID arrays without persistent superblocks?
historic reasons - systems aged 10y+
> Who still uses RAID assembly by the kernel instead of mdadm?
same as above
> All this has been obsoleted by the superblock
On 02/01/18 23:02, David Christensen wrote:
> On 01/02/18 10:05, deloptes wrote:
>> David Christensen wrote:
>>
>>> You can boot with your md device with the following kernel command
>>> lines:
>>>
>>> for old raid arrays without persistent superblocks:
>>> md=<md device no.>,<raid level>,<chunk size factor>,<fault level>,dev0,dev1,...,devn
>
> I
On 01/02/18 10:05, deloptes wrote:
David Christensen wrote:
You can boot with your md device with the following kernel command
lines:
for old raid arrays without persistent superblocks:
md=<md device no.>,<raid level>,<chunk size factor>,<fault level>,dev0,dev1,...,devn
I did not write that.
This is the second incorrect attribution to me I've
On 02/01/2018 at 19:05, deloptes wrote:
David Christensen wrote (quoting md.txt from the kernel documentation):
You can boot with your md device with the following kernel command
lines:
for old raid arrays without persistent superblocks:
md=<md device no.>,<raid level>,<chunk size factor>,<fault level>,dev0,dev1,...,devn
yes and best is you
David Christensen wrote:
> You can boot with your md device with the following kernel command
> lines:
>
> for old raid arrays without persistent superblocks:
> md=<md device no.>,<raid level>,<chunk size factor>,<fault level>,dev0,dev1,...,devn
yes, and best is to compile RAID into the kernel, so that boot can also be RAIDed
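As a sketch of what such a command line can look like (device names and array numbers are illustrative, not from the thread; syntax per the kernel's md documentation quoted above):

```
# Old arrays WITHOUT persistent superblocks (levels linear/0 only):
#   md=<md device no.>,<raid level>,<chunk size factor>,<fault level>,dev0,...,devn
# Arrays WITH persistent superblocks, e.g. a RAID1 boot array
# assembled by a kernel with md compiled in:
md=0,/dev/sda1,/dev/sdb1 root=/dev/md0 ro
```

Note Pascal's caveat still applies: this only works if the host controller and disk drivers are built in as well, and if the device names are stable across boots.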
regards
On 01/02/18 02:29, to...@tuxteam.de wrote:
On Mon, Jan 01, 2018 at 06:01:20PM -0800, David Christensen wrote:
On 12/31/17 14:45, Sven Hartge wrote:
David Christensen wrote:
$ man 4 md
SCRUBBING AND MISMATCHES
...
If check was used, then
On Mon, Jan 01, 2018 at 06:01:20PM -0800, David Christensen wrote:
> On 12/31/17 14:45, Sven Hartge wrote:
> >David Christensen wrote:
> >> $ man 4 md
> >
> >> SCRUBBING AND MISMATCHES
> >> ...
> >>If
On 12/31/17 14:45, Sven Hartge wrote:
David Christensen wrote:
$ man 4 md
SCRUBBING AND MISMATCHES
...
If check was used, then no action is taken to handle the mismatch, it
is simply recorded. If repair was used, then a
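The check/repair actions quoted from md(4) above are driven through sysfs; a sketch of the usual procedure (requires root and an existing array, here assumed to be md0):

```shell
# Start a read-only scrub on md0 (assumes /dev/md0 exists; run as root):
echo check > /sys/block/md0/md/sync_action
cat /proc/mdstat                        # watch scrub progress
cat /sys/block/md0/md/mismatch_cnt      # mismatched sectors found so far
# Rewrite inconsistent blocks instead of only counting them:
echo repair > /sys/block/md0/md/sync_action
```

On Debian, /etc/cron.d/mdadm typically schedules such a check monthly.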
David Christensen wrote:
> On 12/31/17 09:44, Sven Hartge wrote:
>> David Christensen wrote:
>>> On 12/30/17 14:38, Matthew Crews wrote:
The main issue I see with using BTRFS with MDADM is that you lose
the benefit of bit-rot
The reason Red Hat dropped btrfs support is that it currently has no
native cryptographic function. And from the various threads I've read on
the topic, there is no easy answer to the problem.
On 1 January 2018 at 06:44, Sven Hartge wrote:
> David Christensen
On 12/31/17 09:44, Sven Hartge wrote:
David Christensen wrote:
On 12/30/17 14:38, Matthew Crews wrote:
The main issue I see with using BTRFS with MDADM is that you lose the
benefit of bit-rot repair. MDADM can't correct bit rot, but
BTRFS-Raid (and ZFS raid
David Christensen wrote:
> On 12/30/17 14:38, Matthew Crews wrote:
>> The main issue I see with using BTRFS with MDADM is that you lose the
>> benefit of bit-rot repair. MDADM can't correct bit rot, but
>> BTRFS-Raid (and ZFS raid arrays) can, but only with native raid
On 12/30/17 14:38, Matthew Crews wrote:
The main issue I see with using BTRFS with MDADM is that you lose the benefit
of bit-rot repair. MDADM can't correct bit rot, but BTRFS-Raid (and ZFS raid
arrays) can, but only with native raid configurations.
AFAIK:
1. mdadm RAID1 can fix bit rot,
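As a toy illustration of why per-block checksums matter here (plain shell, nothing btrfs/ZFS-specific; file names are made up): with two mirrored copies and no checksum, a mismatch only tells you the copies differ, while a checksum recorded at write time identifies which copy is still good.

```shell
set -e
tmp=$(mktemp -d)
printf 'important data' > "$tmp/copy0"      # the "mirror": two identical copies
cp "$tmp/copy0" "$tmp/copy1"
good=$(sha256sum < "$tmp/copy0" | cut -d' ' -f1)  # checksum stored at write time
printf 'imp0rtant data' > "$tmp/copy1"      # simulate bit rot on one mirror

verdict() {  # compare a copy's current checksum against the stored one
  if [ "$(sha256sum < "$1" | cut -d' ' -f1)" = "$good" ]; then
    echo intact
  else
    echo corrupt
  fi
}
s0=$(verdict "$tmp/copy0")
s1=$(verdict "$tmp/copy1")
echo "copy0: $s0, copy1: $s1"   # the checksum singles out the rotted copy
rm -r "$tmp"
```

An mdadm `check`, by contrast, can only report that the mirrors disagree, not which copy to trust.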
> Original Message
>Subject: Re: Experiences with BTRFS -- is it mature enough for enterprise use?
>Local Time: December 29, 2017 5:37 PM
>UTC Time: December 30, 2017 12:37 AM
>From: j...@jvales.net
> The problem with btrfs-raid10 (with 6 disks): it self-destructed
> on our
On 30/12/2017 at 00:48, Jan Vales wrote:
You still can go md-raid + btrfs, if you want some btrfs features.
Snapshots (and send/receive) are what I really love on my laptop and
could not live without anymore.
(full-disk encryption may be mandatory, as btrfs, at least some time ago,
had the
On 12/30/17 01:26, Matthew Crews wrote:
>> Original Message
>> Subject: Re: Experiences with BTRFS -- is it mature enough for enterprise
>> use?
>> Local Time: December 29, 2017 4:48 PM
>> UTC Time: December 29, 2017 11:48 PM
>> From: j...@jvales.net
>
>> You still can go
> Original Message
>Subject: Re: Experiences with BTRFS -- is it mature enough for enterprise use?
>Local Time: December 29, 2017 4:48 PM
>UTC Time: December 29, 2017 11:48 PM
>From: j...@jvales.net
> You still can go md-raid + btrfs, if you want some btrfs features.
If you're
On 12/29/17 00:55, Andy Smith wrote:
> The killer feature of ZFS is its checksumming of all data and
> metadata to protect against bitrot and other forms of data
> corruption. The only other filesystem offering this on Linux is
> btrfs, hence the many mentions of ZFS in this thread. Putting the
>
On 27 Dec 2017 6:45 am, "Rick Thomas" wrote:
Is btrfs mature enough to use in enterprise applications?
If you are using it, I’d like to hear from you about your experiences —
good or bad.
My proposed application is for a small community radio station music
library.
We
Hello,
On Thu, Dec 28, 2017 at 07:35:27PM +, Glenn English wrote:
> Is there something wrong with ext4 in a RAID1?
Not if you don't need any of the features of ZFS that ext4 lacks,
no. But if you do, then ext4 is not an option.
The killer feature of ZFS is its checksumming of all data and
On Thu, Dec 28, 2017 at 07:29:06PM +0200, Eero Volotinen wrote:
> That really doesn't sound like critical production use.
It's critical to me. What's your definition?
-dsr-
> Original Message
>From: ghe2...@gmail.com
>On Thu, Dec 28, 2017 at 5:29 PM, Eero Volotinen eero.voloti...@iki.fi wrote:
>>That really doesn't sound like critical production use.
>>I really cannot recommend zfs on linux for production use. It works better
>> on FreeBSD and it's
On Thu, Dec 28, 2017 at 5:29 PM, Eero Volotinen wrote:
> That really doesn't sound like critical production use.
>
> I really cannot recommend zfs on linux for production use. It works better
> on FreeBSD and it's not included in standard dist due to licence issues.
Is
That really doesn't sound like critical production use.
I really cannot recommend zfs on linux for production use. It works better
on FreeBSD and it's not included in standard dist due to licence issues.
--
Eero
2017-12-28 17:53 GMT+02:00 Dan Ritter :
> On Thu, Dec 28,
Eero Volotinen wrote:
> Are you really using it in production?
many solaris machines in the past years - good
On Thu, Dec 28, 2017 at 08:01:44AM +0200, Eero Volotinen wrote:
> Are you really using it in production?
>
I'm using ZFS at home (3 pools, including my main server's
/home and two backup pools) and at work (my desktop machine's
root and /home, some of the backup servers).
It is much more stable
Are you really using it in production?
Eero
On 28.12.2017 at 3.12, "deloptes" wrote:
> Rick Thomas wrote:
>
> > Since it doesn't look like I'll be using BTRFS for my application, I too
> > would appreciate hearing about experiences with ZFS as an alternative.
> >
Rick Thomas wrote:
> Since it doesn't look like I'll be using BTRFS for my application, I too
> would appreciate hearing about experiences with ZFS as an alternative.
> Unfortunately, the application we're using is only available for CentOS-6,
> so we'll have to pressure the developer to release
On Wed, Dec 27, 2017, at 1:46 PM, Tom Dial wrote:
>
>
> On 12/27/2017 04:57 AM, Matthew Crews wrote:
> > I wouldn't trust BTRFS in an enterprise environment, but I have good
> > experience in a personal environment. Make sure you are using modern
> > kernels though (I wouldn't use anything
On 12/27/2017 04:57 AM, Matthew Crews wrote:
> I wouldn't trust BTRFS in an enterprise environment, but I have good
> experience in a personal environment. Make sure you are using modern kernels
> though (I wouldn't use anything earlier than 4.4, and realistically I would
> use 4.9 or 4.13 or
I wouldn't trust BTRFS in an enterprise environment, but I have good experience
in a personal environment. Make sure you are using modern kernels though (I
wouldn't use anything earlier than 4.4, and realistically I would use 4.9 or
4.13 or higher), and I definitely would not use RAID5/6.
For
On 12/26/17 11:37, Rick Thomas wrote:
Is btrfs mature enough to use in enterprise applications?
If you are using it, I’d like to hear from you about your experiences — good or
bad.
My proposed application is for a small community radio station music library.
We currently have about 5TB of
On Tue, Dec 26, 2017 at 05:21:10PM -0500, Roberto C. Sánchez wrote:
I second XFS for your application. Another to consider might be JFS.
Either one of those would be very mature and suitable for the
enterprise. I don't have any real experience with JFS
I can't think of any reason to go with
Roberto C. Sánchez wrote:
> On Tue, Dec 26, 2017 at 09:48:09PM +0200, Eero Volotinen wrote:
>>use XFS, it's mature and suitable for big storage. (or gluster or
>>ceph?)
> I second XFS for your application. Another to consider might be JFS.
I believe development of
On Tue, Dec 26, 2017 at 09:48:09PM +0200, Eero Volotinen wrote:
>use XFS, it's mature and suitable for big storage. (or gluster or ceph?)
I second XFS for your application. Another to consider might be JFS.
Either one of those would be very mature and suitable for the
enterprise. I don't
Hi Rick,
On Tue, Dec 26, 2017 at 11:37:32AM -0800, Rick Thomas wrote:
> Is btrfs mature enough to use in enterprise applications?
Not in my opinion. I've dabbled with it at home and based on those
experiences I will not be using it professionally any time soon.
> If you are using it, I’d like
tl;dr:
save yourself the hassle and don't. Go for md-raid5/6 + (luks +) XFS.
long version:
Just last week we migrated our soon to be production server (6 disks)
btrfs-raid10 to md-raid6+XFS, after btrfs managed to die twice in december.
As cool as btrfs-raid/filesystem-level RAID sounds, it is just as broken
use XFS, it's mature and suitable for big storage. (or gluster or ceph?)
Eero
On 26.12.2017 at 21.45, "Rick Thomas" wrote:
>
> Is btrfs mature enough to use in enterprise applications?
>
> If you are using it, I’d like to hear from you about your experiences —
> good or bad.