Benjamin Franz wrote:
Robert Heller wrote:
I suspect that this is a similar case to what I did: I have a server
with 4 drives. I have several (small) RAID1 partitions (/boot, /,
/usr, /var, etc.) with 4 mirrors, and one large RAID5 with three
partitions and a hot spare (an LVM volume group, containing /home and [...]
Robert Spangler wrote:
On Thursday 25 March 2010 18:10, Robert Heller wrote:
The preferred way to go would be RAID10 (RAID1 (mirror) + RAID0 (stripe)).
Form pairs as RAID1, then stripe the pairs. With 8 disks, this would give 4
pairs, 1.5TB/pair * 4 pairs = 6TB total.
I am just starting to look into this RAID and I was [...]
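For concreteness, creating that layout with md's raid10 personality might look like this; the device names /dev/sdb1../dev/sdi1 are placeholders of mine, not from the thread:

mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[b-i]1   # 4 mirrored pairs, striped
cat /proc/mdstat                                                    # watch the initial sync complete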
Chan Chung Hang Christopher wrote:
Benjamin Franz wrote:
Yup. 8-way RAID1 for the OS, 8-way RAID6 for the data. I was hoping when
I set up the 8-way RAID1 for the OS that I would get really good read
speeds, since md is supposed to stripe reads from RAID1, but in practice
the RAID6 [...]
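One rough way to check those RAID1 read speeds (a sketch of mine, with a placeholder device name; a single sequential reader won't show the full gain, since md balances whole requests across mirrors rather than splitting one stream):

dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct   # single-stream read
hdparm -t /dev/md0                                          # quick buffered-read timing
# run several dd readers in parallel to see the mirrors share the load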
On Saturday 27 March 2010 05:07, John R Pierce wrote:
For all practical purposes it's the same thing. If it was really
stripe-then-mirror, a naive mirror handler would think it would have to
remirror both drives when one half of one of the stripesets failed and
was replaced. But in [...]
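To see that behaviour concretely, the usual md replacement sequence only resyncs the one affected member, not the whole array (device names are placeholders):

mdadm /dev/md0 --fail /dev/sdc1     # mark the dying member failed, if md hasn't already
mdadm /dev/md0 --remove /dev/sdc1   # detach it so the disk can be swapped
mdadm /dev/md0 --add /dev/sdc1      # re-add the replacement partition
cat /proc/mdstat                    # only that member's data is rebuilt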
Ross Walker wrote:
No, not yet, but I always recommend setting up your data arrays
manually, so you're intimately familiar with how they are constructed and
the mdadm command usage is fresh in your head.
Did you know with Neil's raid10 implementation you can store 3 copies
of the data, so [...]
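That three-copy layout is selected at creation time. A sketch, assuming six placeholder devices (n3 means three "near" copies, so six 1.5TB disks yield about 3TB usable):

mdadm --create /dev/md1 --level=10 --layout=n3 --raid-devices=6 /dev/sd[b-g]1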
I have one system with an 8-way RAID1 for the OS.
For real, or is that a typo? Is that in case you go on holiday for a week and
a drive dies a day?
--
Regards,
James.
http://www.jamesbensley.co.cc/
On 3/25/2010 2:24 PM, Slack-Moehrle wrote:
Can anyone provide a tutorial or advice on how to configure a software RAID 5
from the command line (since I did not install Gnome)?
I have 8 x 1.5TB drives.
Make matching partitions on each disk with fdisk, setting the type to FD
(raid autodetect).
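For example (a sketch with a placeholder device name; the old sfdisk input format takes start,size,type, so ',,fd' means one whole-disk partition of type fd):

echo ',,fd' | sfdisk /dev/sdb    # one partition spanning the disk, type fd (Linux raid autodetect)
sfdisk -l /dev/sdb               # verify, then repeat for the other seven disks

The same can be done interactively in fdisk: n (new partition), t (set type) fd, w (write).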
At Thu, 25 Mar 2010 12:24:57 -0700 (PDT), CentOS mailing list
centos@centos.org wrote:
Can anyone provide a tutorial or advice on how to configure a software RAID 5
from the command line (since I did not install Gnome)?
I have 8 x 1.5TB drives.
mdadm --create /dev/md0 --level=5 [...]
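A plausible completion of that command for 8 drives, with placeholder partition names (a sketch of mine, not the exact line from the original message):

mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[b-i]1
mkfs.ext3 /dev/md0                        # or whatever filesystem you prefer
mdadm --detail --scan >> /etc/mdadm.conf  # record the array so it assembles at boot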
I used this guide for my first RAID on an Ubuntu box; it's very
straightforward. It's all command-line based, so everything here I have used
in CentOS (apart from the writer setting the RAID flag on his drives via the
GParted GUI, but this can be done via the terminal);
On Thu, Mar 25, 2010 at 9:07 PM, Boris Epstein borepst...@gmail.com wrote:
Note: RAID5 is not really recommended for such large disks. You run the
risk of a complete data loss if one disk fails and another disk
fails during the rebuild.
Why is the size a factor here? Why would this be OK [...]
On 25.03.2010 at 22:07, Boris Epstein wrote:
Robert,
Why is the size a factor here? Why would this be OK with smaller
disks? How would you partition this instead?
Thanks.
Boris.
This has been discussed before.
The root of the problem lies in the fact that when a disk fails, you
have to read out the data from the other disks to rebuild the RAID.
Reads from disks have a certain [...]
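To put rough numbers on that (mine, not the poster's): consumer drives of that era are typically specced at one unrecoverable read error per 10^14 bits. Rebuilding a degraded 8-disk RAID5 of 1.5TB drives means reading the 7 surviving disks in full, 7 x 1.5TB = 10.5TB, or about 8.4 x 10^13 bits, so the expected number of unrecoverable read errors during the rebuild is on the order of 0.8. In other words, roughly even odds that the rebuild trips over a sector the array can no longer reconstruct.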
On Thu, Mar 25, 2010 at 5:27 PM, Hakan Koseoglu ha...@koseoglu.org wrote:
As the disks get bigger, rebuild time also increases, and the
performance of the disks doesn't increase linearly with their storage.
This means that while you are rebuilding a disk, the chance of one of
your other disks failing becomes significant. Most suggest
RAID6 these days as a minimum, [...]
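With the original poster's 8 drives, that suggestion might translate to something like this (placeholder device names; 7 active disks plus 1 hot spare leaves 5 x 1.5TB = 7.5TB usable):

mdadm --create /dev/md0 --level=6 --raid-devices=7 --spare-devices=1 /dev/sd[b-i]1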
Boris Epstein wrote:
You surely do have a point there. However, it is still not all that
likely that a disk will fail during the rebuild time in question (what
are we talking, some hours at most?).
8 disks is about the upper limit I'd suggest for a single RAID group on
any sort of system.
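Rather than guessing, you can read the rebuild estimate straight off the kernel while a resync runs, and raise the throttle if the machine is otherwise idle (the sysctl value below is an example, in KB/s per device):

cat /proc/mdstat                           # shows percent complete and a finish= estimate
sysctl dev.raid.speed_limit_min            # current rebuild floor
sysctl -w dev.raid.speed_limit_min=50000   # let the resync run faster on an idle box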
Slack-Moehrle wrote:
RAID 10 = 4 x 1.5TB = 6TB - 1.5TB for parity data = 4.5TB per stripe, then
mirror it.
No -1.5TB on that; you don't have parity when you are mirroring.
8 x 1.5TB RAID10 is simply 4 * 1.5TB = 6TB. I'd still want a hot spare.
Christopher Chan wrote:
but with RAID 10, data is safe after many types of failures.
Except for the case when a mirror dies, after which the whole thing is
toast; but in theory you can survive up to four disks going down.
If you have an 8-drive RAID1+0 and a random drive fails, you [...]
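Finishing that arithmetic (my numbers, not from the thread): after one drive in an 8-drive RAID1+0 fails, only its mirror partner is fatal, so a second random failure kills the array with probability 1/7, i.e. you survive it about 86% of the time. RAID6, by contrast, survives any two failures but no third.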