On Apr 17, 2011, at 3:05 AM, Charles Polisher cpol...@surewest.net wrote:
On Wed, Apr 13, 2011 at 11:55:08PM -0400, Ross Walker wrote:
On Apr 13, 2011, at 9:40 PM, Brandon Ooi brand...@gmail.com wrote:
On Wed, Apr 13, 2011 at 6:04 PM, Ross Walker rswwal...@gmail.com wrote:
One was a
On Wed, Apr 13, 2011 at 11:55:08PM -0400, Ross Walker wrote:
On Apr 13, 2011, at 9:40 PM, Brandon Ooi brand...@gmail.com wrote:
On Wed, Apr 13, 2011 at 6:04 PM, Ross Walker rswwal...@gmail.com wrote:
One was a hardware raid over fibre channel, which silently corrupted
itself. System
On Thursday, April 14, 2011 11:26 PM, Benjamin Franz wrote:
On 04/14/2011 08:04 AM, Christopher Chan wrote:
Then try both for your use case and your hardware. We have wide raid6 setups
that do well over 500 MB/s write (that is: not all raid6 writes suck...).
/me replaces all of Peter's
On Thursday, April 14, 2011 11:30 PM, Les Mikesell wrote:
On 4/14/2011 7:32 AM, Christopher Chan wrote:
HAHAHAAAAHA
The XFS codebase is the biggest pile of mess in the Linux kernel and you
expect it not to run into mysterious problems? Remember, XFS
On 04/14/2011 09:00 PM, Christopher Chan wrote:
Wanna try that again with 64MB of cache only and tell us whether there
is a difference in performance?
There is a reason why 3ware 85xx cards were complete rubbish when used
for raid5 and which led to the 95xx/96xx series.
I don't happen to
On Thursday, April 14, 2011 05:26:41 PM Ross Walker wrote:
2011/4/14 Peter Kjellström c...@nsc.liu.se:
...
While I do concede the obvious point regarding rebuild time (raid6 takes
from long to very long to rebuild) I'd like to point out:
* If you do the math for a 12 drive raid10 vs
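The capacity side of that comparison is simple arithmetic. As an illustration
(assuming 12 x 2TB drives; the numbers are illustrative, not from the quoted post):

  raid10: 12 drives / 2 (mirrors)       = 6 data drives  -> 12TB usable
  raid6:  12 drives - 2 (parity drives) = 10 data drives -> 20TB usable

The 8TB difference is what raid10 gives up in exchange for faster rebuilds.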
On Friday, April 15, 2011 07:24 PM, Benjamin Franz wrote:
On 04/14/2011 09:00 PM, Christopher Chan wrote:
Wanna try that again with 64MB of cache only and tell us whether there
is a difference in performance?
There is a reason why 3ware 85xx cards were complete rubbish when used
for raid5
On Fri, Apr 15, 2011 at 3:05 PM, Christopher Chan
christopher.c...@bradbury.edu.hk wrote:
On Friday, April 15, 2011 07:24 PM, Benjamin Franz wrote:
On 04/14/2011 09:00 PM, Christopher Chan wrote:
Wanna try that again with 64MB of cache only and tell us whether there
is a difference in
On 04/15/2011 06:05 AM, Christopher Chan wrote:
Woohoo, next we will be seeing md raid6 also giving comparable results
if that is the case. I am not the only person on this list that thinks
cache is king for raid5/6 on hardware raid boards and that using hardware
raid + bbu cache for better
On Apr 15, 2011, at 9:17 AM, Rudi Ahlers r...@softdux.com wrote:
On Fri, Apr 15, 2011 at 3:05 PM, Christopher Chan
christopher.c...@bradbury.edu.hk wrote:
On Friday, April 15, 2011 07:24 PM, Benjamin Franz wrote:
On 04/14/2011 09:00 PM, Christopher Chan wrote:
Wanna try that again
On Fri, Apr 15, 2011 at 6:26 PM, Ross Walker rswwal...@gmail.com wrote:
On Apr 15, 2011, at 9:17 AM, Rudi Ahlers r...@softdux.com wrote:
On Fri, Apr 15, 2011 at 3:05 PM, Christopher Chan
christopher.c...@bradbury.edu.hk wrote:
On Friday, April 15, 2011
On Apr 15, 2011, at 12:32 PM, Rudi Ahlers r...@softdux.com wrote:
On Fri, Apr 15, 2011 at 6:26 PM, Ross Walker rswwal...@gmail.com wrote:
On Apr 15, 2011, at 9:17 AM, Rudi Ahlers r...@softdux.com wrote:
On Fri, Apr 15, 2011 at 3:05 PM, Christopher Chan
As a matter of interest, does anyone know how to use an SSD drive for cache
purposes on Linux software RAID drives? ZFS has this feature and it
makes a helluva difference to a storage server's performance.
You cannot. You can however use one for the external journal of ext3/4
in full
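A minimal sketch of what that looks like (assuming /dev/md0 is the md array
and /dev/sdb1 is a partition on the SSD; device names are illustrative):

  # turn the SSD partition into an external journal device
  mke2fs -O journal_dev /dev/sdb1
  # create the filesystem on the array with its journal on the SSD
  mkfs.ext4 -J device=/dev/sdb1 /dev/md0
  # mount in full data journalling mode so writes stage through the SSD
  mount -o data=journal /dev/md0 /mnt/store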
-Original Message-
From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf
Of Christopher Chan
Sent: Wednesday, April 13, 2011 4:49 PM
To: centos@centos.org
Subject: Re: [CentOS] 40TB File System Recommendations
While we are at it, disks being directly connected
On 04/13/2011 09:04 PM, Ross Walker wrote:
On Apr 13, 2011, at 7:26 PM, John Jasen jja...@realityfailure.org wrote:
snipped my stuff
Every now and then I hear these XFS horror stories. They seem almost impossible
to believe.
Nothing breaks for absolutely no reason and failure to know where
On Wednesday, April 13, 2011 04:54:01 AM Ross Walker wrote:
On Apr 12, 2011, at 8:53 AM, Rudi Ahlers r...@softdux.com wrote:
...
As a matter of interest, what hardware do you use? i.e. what CPUs, size
of RAM and RAID cards do you use on this size system?
Everyone always recommends to use
On Thursday, April 14, 2011 09:04 AM, Ross Walker wrote:
On Apr 13, 2011, at 7:26 PM, John Jasen jja...@realityfailure.org wrote:
On 04/12/2011 08:19 PM, Christopher Chan wrote:
On Tuesday, April 12, 2011 10:36 PM, John Jasen wrote:
On 04/12/2011 10:21 AM, Boris Epstein wrote:
On Tue, Apr
On Thursday, April 14, 2011 07:26 AM, John Jasen wrote:
On 04/12/2011 08:19 PM, Christopher Chan wrote:
On Tuesday, April 12, 2011 10:36 PM, John Jasen wrote:
On 04/12/2011 10:21 AM, Boris Epstein wrote:
On Tue, Apr 12, 2011 at 3:36 AM, Alain Péan
alain.p...@lpp.polytechnique.fr
On Thursday, April 14, 2011 02:54 PM, Sorin Srbu wrote:
-Original Message-
From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf
Of Christopher Chan
Sent: Wednesday, April 13, 2011 4:49 PM
To: centos@centos.org
Subject: Re: [CentOS] 40TB File System
-Original Message-
From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf
Of Christopher Chan
Sent: Thursday, April 14, 2011 2:34 PM
To: centos@centos.org
Subject: Re: [CentOS] 40TB File System Recommendations
Oh yeah, we are on PCIe and NUMA architectures now. I
On Thursday, April 14, 2011 09:04 AM, Ross Walker wrote:
On Apr 13, 2011, at 7:26 PM, John Jasen jja...@realityfailure.org
wrote:
On 04/12/2011 08:19 PM, Christopher Chan wrote:
On Tuesday, April 12, 2011 10:36 PM, John Jasen wrote:
On 04/12/2011 10:21 AM, Boris Epstein wrote:
On Tue, Apr
On Apr 14, 2011, at 6:54 AM, John Jasen jja...@realityfailure.org wrote:
On 04/13/2011 09:04 PM, Ross Walker wrote:
On Apr 13, 2011, at 7:26 PM, John Jasen jja...@realityfailure.org wrote:
snipped my stuff
Every now and then I hear these XFS horror stories. They seem almost impossible
to
On Tuesday, April 12, 2011 03:10:33 PM Lars Hecking wrote:
OTOH, gparted doesn't see my software raid array either. Gparted is
rather practical for regular plain vanilla partitions, but for more
advanced stuff and filesystems, fdisk is probably better.
For filesystems > 2TB, you're
On Thu, 14 Apr 2011, Peter Kjellström wrote:
On Tuesday, April 12, 2011 03:10:33 PM Lars Hecking wrote:
OTOH, gparted doesn't see my software raid array either. Gparted is
rather practical for regular plain vanilla partitions, but for more
advanced stuff and filesystems, fdisk is probably
-Original Message-
From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf
Of Peter Kjellström
Sent: Thursday, April 14, 2011 3:31 PM
To: centos@centos.org
Subject: Re: [CentOS] 40TB File System Recommendations
On Tuesday, April 12, 2011 03:10:33 PM Lars Hecking wrote
On Thursday, April 14, 2011 08:55 PM, Simon Matter wrote:
On Thursday, April 14, 2011 09:04 AM, Ross Walker wrote:
On Apr 13, 2011, at 7:26 PM, John Jasen jja...@realityfailure.org
wrote:
On 04/12/2011 08:19 PM, Christopher Chan wrote:
On Tuesday, April 12, 2011 10:36 PM, John Jasen wrote:
On Tuesday, April 12, 2011 02:56:54 PM rai...@ultra-secure.de wrote:
...
Steve,
I'm managing machines with 30TB of storage for more than two years. And
with
good reporting and reaction we have never had to run fsck.
That's not the issue.
The issue is rebuild-time.
The longer it takes,
On Wednesday, April 13, 2011 09:29:29 AM Matthew Feinberg wrote:
Thank you everyone for the advice and great information. From what I am
gathering XFS is the way to go.
A couple more questions.
What partitioning utility is suggested? parted and fdisk do not seem to
be doing the job.
My
On Thursday, April 14, 2011 10:37:15 AM Christopher Chan wrote:
I used XFS extensively when I was running mail server farms for
the mail queue filesystem and I only remember one or two incidents when
the filesystem was marked read-only for no reason (seemingly - never had
the time to find
On Thursday, April 14, 2011 10:54 PM, Lamar Owen wrote:
On Thursday, April 14, 2011 10:37:15 AM Christopher Chan wrote:
I used XFS extensively when I was running mail server farms for
the mail queue filesystem and I only remember one or two incidents when
the filesystem was marked read-only
On Thursday, April 14, 2011 10:47 PM, Peter Kjellström wrote:
On Wednesday, April 13, 2011 09:29:29 AM Matthew Feinberg wrote:
Thank you everyone for the advice and great information. From what I am
gathering XFS is the way to go.
A couple more questions.
What partitioning utility is
On 4/14/2011 9:54 AM, Lamar Owen wrote:
On Thursday, April 14, 2011 10:37:15 AM Christopher Chan wrote:
I used XFS extensively when I was running mail server farms for
the mail queue filesystem and I only remember one or two incidents when
the filesystem was marked read-only for no reason
2011/4/14 Peter Kjellström c...@nsc.liu.se:
On Tuesday, April 12, 2011 02:56:54 PM rai...@ultra-secure.de wrote:
...
Steve,
I'm managing machines with 30TB of storage for more than two years. And
with
good reporting and reaction we have never had to run fsck.
That's not the issue.
The
On 04/14/2011 08:04 AM, Christopher Chan wrote:
Then try both for your use case and your hardware. We have wide raid6 setups
that do well over 500 MB/s write (that is: not all raid6 writes suck...).
/me replaces all of Peter's cache with 64MB modules.
Let's try again.
If you are trying
On 4/14/2011 7:32 AM, Christopher Chan wrote:
HAHAHAAAAHA
The XFS codebase is the biggest pile of mess in the Linux kernel and you
expect it not to run into mysterious problems? Remember, XFS was
PORTED over to Linux. It is not a 'native' thing to
On Thursday, April 14, 2011 04:13:19 PM Steve Brooks wrote:
On Thu, 14 Apr 2011, Peter Kjellström wrote:
On Tuesday, April 12, 2011 03:10:33 PM Lars Hecking wrote:
OTOH, gparted doesn't see my software raid array either. Gparted is
rather practical for regular plain vanilla partitions, but
On Thursday, April 14, 2011 04:15:10 PM Sorin Srbu wrote:
-Original Message-
From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On
Behalf Of Peter Kjellström
Sent: Thursday, April 14, 2011 3:31 PM
To: centos@centos.org
Subject: Re: [CentOS] 40TB File System
2011/4/14 Peter Kjellström c...@nsc.liu.se:
On Thursday, April 14, 2011 04:13:19 PM Steve Brooks wrote:
On Thu, 14 Apr 2011, Peter Kjellström wrote:
On Tuesday, April 12, 2011 03:10:33 PM Lars Hecking wrote:
OTOH, gparted doesn't see my software raid array either. Gparted is
rather
On Thursday, April 14, 2011 04:54:34 PM Lamar Owen wrote:
On Thursday, April 14, 2011 10:37:15 AM Christopher Chan wrote:
I used XFS extensively when I was running mail server farms for
the mail queue filesystem and I only remember one or two incidents when
the filesystem was marked
On Thursday, April 14, 2011 11:20:23 AM Les Mikesell wrote:
Same here, CentOS5 and ext3. Rare and random across identical hardware.
So far I've blamed the hardware.
I don't have that luxury. This is one VM on a VMware ESX 3.5U5 host, and the
storage is EMC Clariion fibre-channel, with the
Lamar Owen wrote:
On Thursday, April 14, 2011 10:37:15 AM Christopher Chan wrote:
I used XFS extensively when I was running mail server farms for the
mail queue filesystem and I only remember one or two incidents when the
filesystem was marked read-only for no reason (seemingly - never had
the
On Apr 14, 2011, at 6:43 AM, Ross Walker wrote:
On Apr 14, 2011, at 6:54 AM, John Jasen jja...@realityfailure.org
wrote:
On 04/13/2011 09:04 PM, Ross Walker wrote:
On Apr 13, 2011, at 7:26 PM, John Jasen
jja...@realityfailure.org wrote:
snipped my stuff
Every now and then I hear
On Thursday, April 14, 2011 02:17:41 PM aurfal...@gmail.com wrote:
However if you like XFS, I'll assume you like IRIX so check the 5dwm
project which is the IRIX desktop for Linux.
Cool. Now if they ported the Audio DAT ripping program for IRIX to Linux, I'd
be able to get rid of my O2.
On Apr 14, 2011, at 12:43 PM, Lamar Owen wrote:
On Thursday, April 14, 2011 02:17:41 PM aurfal...@gmail.com wrote:
However if you like XFS, I'll assume you like IRIX so check the 5dwm
project which is the IRIX desktop for Linux.
Cool. Now if they ported the Audio DAT ripping program for
aurfal...@gmail.com wrote:
On Apr 14, 2011, at 12:43 PM, Lamar Owen wrote:
On Thursday, April 14, 2011 02:17:41 PM aurfal...@gmail.com wrote:
However if you like XFS, I'll assume you like IRIX so check the 5dwm
project which is the IRIX desktop for Linux.
Cool. Now if they ported the Audio
On Wed, Apr 13, 2011 at 07:18:23PM -0400, John Jasen wrote:
On 04/12/2011 11:30 AM, Les Mikesell wrote:
On 4/12/2011 9:36 AM, John Jasen wrote:
snipped: two recommendations for XFS
I would chime in with a dis-commendation for XFS. At my previous
employer, two cases involving XFS
One was 32 bit, the other 64 bit.
Christopher Chan christopher.c...@bradbury.edu.hk wrote:
On Thursday, April 14, 2011 07:26 AM, John Jasen wrote:
On 04/12/2011 08:19 PM, Christopher Chan wrote:
On Tuesday, April 12, 2011 10:36 PM, John Jasen wrote:
On 04/12/2011 10:21 AM, Boris Epstein
On 4/13/11, Brandon Ooi brand...@gmail.com wrote:
centos 5 can expand raid 0/1/5. just not 6. 10 is just layered 0/1 so you
can expand it.
centos 6 will be able to expand raid6 as it was a feature in 2.6.20 or
something.
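In practice the raid5 grow looks roughly like this (a sketch, assuming
/dev/md0 is a 4-disk raid5 and /dev/sde1 is the new member; names illustrative):

  # add the new disk as a spare, then reshape onto it
  mdadm --add /dev/md0 /dev/sde1
  mdadm --grow /dev/md0 --raid-devices=5
  # after the reshape completes, grow the filesystem
  resize2fs /dev/md0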
This is where I'm getting confused. I had been reading up on mdadm,
On Wed, Apr 13, 2011 at 6:35 AM, Emmanuel Noobadmin
centos.ad...@gmail.comwrote:
On 4/12/11, Rudi Ahlers r...@softdux.com wrote:
But, our RAID10 is set up as a stripe of mirrors, i.e. sda1 + sdb1 -> md0,
sdc1 + sdd1 -> md1, then sde1 + sdf1 -> md2, and finally md0 + md1 + md2
are
striped. The
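Spelled out with mdadm, that layout is (a sketch; device names as in the quote):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1
  # stripe the three mirrors into the final raid0
  mdadm --create /dev/md3 --level=0 --raid-devices=3 /dev/md0 /dev/md1 /dev/md2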
Thank you everyone for the advice and great information. From what I am
gathering XFS is the way to go.
A couple more questions.
What partitioning utility is suggested? parted and fdisk do not seem to
be doing the job.
Raid Level. I am considering moving away from the raid6 due to possible
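Once the partitioning question is settled, the main XFS knob for an array like
this is matching the stripe geometry at mkfs time (a sketch; the su/sw values
are illustrative and must match the actual raid layout):

  # stripe unit = hardware raid chunk size, stripe width = number of data disks
  mkfs.xfs -d su=256k,sw=14 /dev/sdb1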
-Original Message-
From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf
Of Matthew Feinberg
Sent: Wednesday, April 13, 2011 9:29 AM
To: CentOS mailing list
Subject: Re: [CentOS] 40TB File System Recommendations
Hardware or software raid. Is there an advantage
On 4/13/11, Rudi Ahlers r...@softdux.com wrote:
I haven't had problems doing it this way yet.
Thanks for the confirmation. Could you please outline the general
steps to expand an existing RAID 10 with another RAID 1 device?
I'm trying to test this out but unfortunately being the noob that I
am,
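One way that outline can go, assuming the mirrors are pooled with LVM rather
than a raid0 md device (volume names are hypothetical):

  # build the new mirror pair
  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdg1 /dev/sdh1
  # hand it to LVM and stretch the volume and filesystem across it
  pvcreate /dev/md3
  vgextend vg_storage /dev/md3
  lvextend -l +100%FREE /dev/vg_storage/lv_data
  resize2fs /dev/vg_storage/lv_data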
On Wednesday, April 13, 2011 04:00 PM, Sorin Srbu wrote:
With today's CPU-performance and RAM available, software raids are not a
problem
to power.
That depends. Software raid is fine for raid1 and raid0. If you want
raid5 or raid6, you have to use hardware raid with bbu cache that
On Apr 13, 2011, at 8:45 AM, Christopher Chan
christopher.c...@bradbury.edu.hk wrote:
On Wednesday, April 13, 2011 04:00 PM, Sorin Srbu wrote:
With today's CPU-performance and RAM available, software raids are not a
problem
to power.
That depends. Software raid is fine for raid1 and
On Tuesday, April 12, 2011 06:49:08 PM Drew wrote:
Where can I get an enterprise-class 2TB drive for $100? Commodity SATA
isn't enterprise-class.
I can get Seagate's Constellation ES series SATA drives in 1TB for
$125. 2TB will run me around $225.
Yeah, those are reasonable near-line
On Wednesday, April 13, 2011 09:18 PM, Ross Walker wrote:
On Apr 13, 2011, at 8:45 AM, Christopher
Chan christopher.c...@bradbury.edu.hk wrote:
On Wednesday, April 13, 2011 04:00 PM, Sorin Srbu wrote:
With today's CPU-performance and RAM available, software raids are not a
problem
to
On Tuesday, April 12, 2011 07:00:26 PM compdoc wrote:
I've had good luck with green, 5400 rpm Samsung drives. They don't spin down
automatically and work fine in my raid 5 arrays. The cost is about $80 for
2TB drives.
And that's a good price point for a commodity drive; not something I would
-Original Message-
From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf
Of Christopher Chan
Sent: Wednesday, April 13, 2011 3:45 PM
To: centos@centos.org
Subject: Re: [CentOS] 40TB File System Recommendations
While we are at it, disks being directly connected
On Wednesday, April 13, 2011 10:32 PM, Sorin Srbu wrote:
-Original Message-
From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf
Of Christopher Chan
Sent: Wednesday, April 13, 2011 3:45 PM
To: centos@centos.org
Subject: Re: [CentOS] 40TB File System
The biggest issue isn't the spindown. Google 'WDTLER' and see the other,
bigger, issue. In a nutshell, TLER (Time-Limited Error Recovery; see
https://secure.wikimedia.org/wikipedia/en/wiki/TLER ) allows the drive to
not try to recover soft errors quite as long. The error recovery time can
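On drives that still expose the knob, the recovery limit can be read and set
through smartctl's SCT interface (a sketch; /dev/sda is illustrative):

  # show the current error recovery control values
  smartctl -l scterc /dev/sda
  # cap read and write recovery at 7 seconds (values are in tenths of a second)
  smartctl -l scterc,70,70 /dev/sda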
On 04/12/2011 11:30 AM, Les Mikesell wrote:
On 4/12/2011 9:36 AM, John Jasen wrote:
snipped: two recommendations for XFS
I would chime in with a dis-commendation for XFS. At my previous
employer, two cases involving XFS resulted in irrecoverable data
corruption. These were on RAID systems
On 04/12/2011 08:19 PM, Christopher Chan wrote:
On Tuesday, April 12, 2011 10:36 PM, John Jasen wrote:
On 04/12/2011 10:21 AM, Boris Epstein wrote:
On Tue, Apr 12, 2011 at 3:36 AM, Alain Péan
alain.p...@lpp.polytechnique.fr wrote:
snipped: two
On Apr 13, 2011, at 7:26 PM, John Jasen jja...@realityfailure.org wrote:
On 04/12/2011 08:19 PM, Christopher Chan wrote:
On Tuesday, April 12, 2011 10:36 PM, John Jasen wrote:
On 04/12/2011 10:21 AM, Boris Epstein wrote:
On Tue, Apr 12, 2011 at 3:36 AM, Alain Péan
On Wed, Apr 13, 2011 at 6:04 PM, Ross Walker rswwal...@gmail.com wrote:
One was a hardware raid over fibre channel, which silently corrupted
itself. System checked out fine, raid array checked out fine, xfs was
replaced with ext3, and the system ran without issue.
Second was multiple
On Apr 13, 2011, at 9:40 PM, Brandon Ooi brand...@gmail.com wrote:
On Wed, Apr 13, 2011 at 6:04 PM, Ross Walker rswwal...@gmail.com wrote:
One was a hardware raid over fibre channel, which silently corrupted
itself. System checked out fine, raid array checked out fine, xfs was
replaced
Hello All
I have a brand spanking new 40TB Hardware Raid6 array to play around
with. I am looking for recommendations for which filesystem to use. I am
trying not to break this up into multiple file systems as we are going
to use it for backups. Other factors are performance and reliability.
On 04/12/11 12:23 AM, Matthew Feinberg wrote:
Hello All
I have a brand spanking new 40TB Hardware Raid6 array
never mind file systems... is that one raid set? do you have any idea
how LONG rebuilding that is going to take when there are any drive
hiccups? or how painfully slow writes
On 12/04/2011 09:23, Matthew Feinberg wrote:
Hello All
I have a brand spanking new 40TB Hardware Raid6 array to play around
with. I am looking for recommendations for which filesystem to use. I am
trying not to break this up into multiple file systems as we are going
to use it for
On Tue, Apr 12, 2011 at 9:23 AM, Matthew Feinberg matt...@choopa.com wrote:
Hello All
I have a brand spanking new 40TB Hardware Raid6 array to play around
with. I am looking for recommendations for which filesystem to use. I am
trying not to break this up into multiple file systems as we are
On Tuesday 12 April 2011 10:36:54 Alain Péan wrote:
On 12/04/2011 09:23, Matthew Feinberg wrote:
Hello All
I have a brand spanking new 40TB Hardware Raid6 array to play around
with. I am looking for recommendations for which filesystem to use. I am
trying not to break this up into
On Tue, 12 Apr 2011, Marian Marinov wrote:
On Tuesday 12 April 2011 10:36:54 Alain Péan wrote:
On 12/04/2011 09:23, Matthew Feinberg wrote:
Hello All
I have a brand spanking new 40TB Hardware Raid6 array to play around
with. I am looking for recommendations for which filesystem to use. I
On Apr 12, 2011, at 3:23 AM, Matthew Feinberg wrote:
ext4 does not seem to be fully baked in 5.6 yet. parted 1.8 does not
support creating ext4 (strange)
The CentOS homepage states that ext4 is now a fully supported filesystem in 5.6.
On Tuesday 12 April 2011 15:34:21 Torres, Giovanni (NIH/NINDS) [C] wrote:
On Apr 12, 2011, at 3:23 AM, Matthew Feinberg wrote:
ext4 does not seem to be fully baked in 5.6 yet. parted 1.8 does not
support creating ext4 (strange)
The CentOS homepage states that ext4 is now a fully supported
-Original Message-
From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf
Of Torres, Giovanni (NIH/NINDS) [C]
Sent: Tuesday, April 12, 2011 2:34 PM
To: CentOS mailing list
Subject: Re: [CentOS] 40TB File System Recommendations
On Apr 12, 2011, at 3:23 AM, Matthew
On Tue, Apr 12, 2011 at 2:47 PM, Marian Marinov m...@yuhu.biz wrote:
Steve,
I'm managing machines with 30TB of storage for more than two years. And with
good reporting and reaction we have never had to run fsck.
However I'm sure that if you have to run fsck on such big file systems, it will
On Tuesday 12 April 2011 15:34:21 Torres, Giovanni (NIH/NINDS) [C] wrote:
On Apr 12, 2011, at 3:23 AM, Matthew Feinberg wrote:
ext4 does not seem to be fully baked in 5.6 yet. parted 1.8 does not
support creating ext4 (strange)
The CentOS homepage states that ext4 is now a fully supported
On Tuesday 12 April 2011 15:56:54 rai...@ultra-secure.de wrote:
On Tuesday 12 April 2011 15:34:21 Torres, Giovanni (NIH/NINDS) [C] wrote:
On Apr 12, 2011, at 3:23 AM, Matthew Feinberg wrote:
ext4 does not seem to be fully baked in 5.6 yet. parted 1.8 does not
support creating ext4
OTOH, gparted doesn't see my software raid array either. Gparted is rather
practical for regular plain vanilla partitions, but for more advanced stuff
and
filesystems, fdisk is probably better.
For filesystems > 2TB, you're better off grabbing a copy of GPT fdisk.
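GPT fdisk's scriptable front end makes the whole-disk case a one-liner
(a sketch; /dev/sdb is illustrative):

  # write a fresh GPT and create one partition spanning the disk
  sgdisk --clear --new=1:0:0 /dev/sdb
  # verify the result
  sgdisk --print /dev/sdb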
Rudi Ahlers wrote:
On Tue, Apr 12, 2011 at 2:47 PM, Marian Marinov m...@yuhu.biz wrote:
I'm managing machines with 30TB of storage for more than two years. And
with good reporting and reaction we have never had to run fsck.
However I'm sure that if you have to run fsck on such big file
-Original Message-
From: centos-boun...@centos.org [mailto:centos-boun...@centos.org] On Behalf
Of Lars Hecking
Sent: Tuesday, April 12, 2011 3:11 PM
To: centos@centos.org
Subject: Re: [CentOS] 40TB File System Recommendations
OTOH, gparted doesn't see my software raid array either
On Tuesday 12 April 2011 16:20:22 m.r...@5-cent.us wrote:
Rudi Ahlers wrote:
On Tue, Apr 12, 2011 at 2:47 PM, Marian Marinov m...@yuhu.biz wrote:
I'm managing machines with 30TB of storage for more than two years. And
with good reporting and reaction we have never had to run fsck.
On 12.4.2011 15:02, Marian Marinov wrote:
On Tuesday 12 April 2011 15:56:54
rainer-rnrd0m5o0maboiyizis...@public.gmane.org wrote:
Yes... but with such a RAID10 solution you get only half of the disk space...
so
from 10 2TB drives you get only 10TB instead of 16TB with RAID6.
From a
On Tue, Apr 12, 2011 at 3:48 PM, Markus Falb markus.f...@fasel.at wrote:
On 12.4.2011 15:02, Marian Marinov wrote:
On Tuesday 12 April 2011 15:56:54
rainer-rnrd0m5o0maboiyizis...@public.gmane.org wrote:
Yes... but with such a RAID10 solution you get only half of the disk
space... so
from
On Tuesday 12 April 2011 16:48:14 Markus Falb wrote:
On 12.4.2011 15:02, Marian Marinov wrote:
On Tuesday 12 April 2011 15:56:54
rainer-rnrd0m5o0maboiyizis...@public.gmane.org wrote:
Yes... but with such a RAID10 solution you get only half of the disk
space... so from 10 2TB drives you
On Tue, Apr 12, 2011 at 3:36 AM, Alain Péan alain.p...@lpp.polytechnique.fr
wrote:
On 12/04/2011 09:23, Matthew Feinberg wrote:
Hello All
I have a brand spanking new 40TB Hardware Raid6 array to play around
with. I am looking for recommendations for which filesystem to use. I am
On Tue, Apr 12, 2011 at 8:56 AM, rai...@ultra-secure.de wrote:
That's not the issue.
The issue is rebuild-time.
The longer it takes, the more likely is another failure in the array.
With RAID6, this does not instantly kill your RAID, as with RAID5 - but I
assume it will further decrease
On 04/12/2011 10:21 AM, Boris Epstein wrote:
On Tue, Apr 12, 2011 at 3:36 AM, Alain Péan
alain.p...@lpp.polytechnique.fr wrote:
snipped: two recommendations for XFS
I would chime in with a dis-commendation for XFS. At my previous
employer, two cases
On Tuesday 12 April 2011 17:36:39 John Jasen wrote:
On 04/12/2011 10:21 AM, Boris Epstein wrote:
On Tue, Apr 12, 2011 at 3:36 AM, Alain Péan
alain.p...@lpp.polytechnique.fr wrote:
snipped: two recommendations for XFS
I would chime in with a
- Original Message -
| On Tuesday 12 April 2011 17:36:39 John Jasen wrote:
| On 04/12/2011 10:21 AM, Boris Epstein wrote:
| On Tue, Apr 12, 2011 at 3:36 AM, Alain Péan
| alain.p...@lpp.polytechnique.fr wrote:
| snipped: two
On 4/12/2011 9:36 AM, John Jasen wrote:
snipped: two recommendations for XFS
I would chime in with a dis-commendation for XFS. At my previous
employer, two cases involving XFS resulted in irrecoverable data
corruption. These were on RAID systems running from 4 to 20 TB.
Was this on a 32 or
On Tue, Apr 12, 2011 at 06:00:57PM +0300, Marian Marinov wrote:
Can someone (who actually knows) share with us what is the state of
xfs-utils,
how stable and usable are they for recovery of broken XFS filesystems?
I have done an XFS repair once or twice on a real filesystem (~4TB) in a
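For anyone who has not run it, the usual sequence is (device name illustrative):

  # dry run: report problems without touching the filesystem
  xfs_repair -n /dev/sdb1
  # real repair, with the filesystem unmounted
  xfs_repair /dev/sdb1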
On Tue, Apr 12, 2011 at 10:36:39AM -0400, John Jasen wrote:
On 04/12/2011 10:21 AM, Boris Epstein wrote:
On Tue, Apr 12, 2011 at 3:36 AM, Alain Péan
alain.p...@lpp.polytechnique.fr wrote:
snipped: two recommendations for XFS
I would chime in with
On Apr 12, 2011, at 12:31 AM, John R Pierce wrote:
On 04/12/11 12:23 AM, Matthew Feinberg wrote:
Hello All
I have a brand spanking new 40TB Hardware Raid6 array
never mind file systems... is that one raid set? do you have any
idea
how LONG rebuilding that is going to take when there
On 04/12/11 6:02 AM, Marian Marinov wrote:
Yes... but with such a RAID10 solution you get only half of the disk space... so
from 10 2TB drives you get only 10TB instead of 16TB with RAID6.
those disks are $100 each. what's your data worth?
The rebuild time goes way up as the number of drives
On Tuesday, April 12, 2011 02:51:45 PM John R Pierce wrote:
On 04/12/11 6:02 AM, Marian Marinov wrote:
Yes... but with such a RAID10 solution you get only half of the disk space...
so
from 10 2TB drives you get only 10TB instead of 16TB with RAID6.
those disks are $100 each. what's
On Apr 12, 2011, at 1:54 PM, Lamar Owen wrote:
On Tuesday, April 12, 2011 02:51:45 PM John R Pierce wrote:
On 04/12/11 6:02 AM, Marian Marinov wrote:
Yes... but with such a RAID10 solution you get only half of the disk
space... so
from 10 2TB drives you get only 10TB instead of 16TB with
On Tue, Apr 12, 2011 at 02:01:42PM -0700, aurfal...@gmail.com wrote:
The cheapies are so-called green as they spin down often, which is not
what you want in a RAID setup.
The WD RE4-GP is a so-called ''green'' disk that's suitable for RAID
arrays. It's marketed and priced as an enterprise
On Apr 12, 2011, at 3:02 PM, Keith Keller wrote:
On Tue, Apr 12, 2011 at 02:01:42PM -0700, aurfal...@gmail.com wrote:
The cheapies are so-called green as they spin down often, which is not
what you want in a RAID setup.
The WD RE4-GP is a so-called ''green'' disk that's suitable for RAID
Where can I get an enterprise-class 2TB drive for $100? Commodity SATA isn't
enterprise-class. SAS is; FC is; SCSI is. A 500GB FC drive with EMC firmware
new is going to set you back ten times that, at least. What's your data
worth indeed, putting it on commodity disk :-)
I can get
The WD RE4-GP is a so-called ''green'' disk that's suitable for RAID
arrays. It's marketed and priced as an enterprise drive.
I've had good luck with green, 5400 rpm Samsung drives. They don't spin down
automatically and work fine in my raid 5 arrays. The cost is about $80 for
2TB drives.
I