Hello linux-raid,
I have a home fileserver which used a 6-disk RAID5 array
with old disks and cheap IDE controllers (all disks are
IDE masters).
As expected, sooner or later the old hardware (and/or
cabling) began failing. The array keeps falling apart;
at the moment it has 5 working
Hi,
I'm running 2.6.23.8 x86_64 using mdadm v2.6.4.
I was adding a disk (/dev/sdf) to an existing raid5 (/dev/sd[a-e] - md0)
During that reshape (at around 4%) /dev/sdd reported read errors and
went offline.
I replaced /dev/sdd with a new drive and tried to reassemble the array
(/dev/sdd was
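For context, the operations involved look roughly like the
following (device names as above; these invocations are
illustrative, not a transcript of what I actually typed):

    # add the new disk as a spare, then reshape to 6 devices
    mdadm /dev/md0 --add /dev/sdf
    mdadm --grow /dev/md0 --raid-devices=6

    # after swapping the failed drive, try reassembling from
    # the surviving members; --force may be needed if the
    # event counts no longer agree
    mdadm --assemble --force /dev/md0 /dev/sd[abcef]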
On Thu, 22 Nov 2007 22:09:27 -0500, [EMAIL PROTECTED]
said:
[ ... ] a RAID 10 personality defined in md that can be
implemented using mdadm. If so, is it available in 2.6.20.11,
[ ... ]
'raid10' is a very good choice in general. For a single
layer, just use '-l raid10'. Run 'man mdadm'; the
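For example, a single-layer raid10 over four disks could be
created like this (device names and partition numbers are
placeholders; n2 is the default "near 2" layout):

    mdadm --create /dev/md0 --level=raid10 --raid-devices=4 \
          --layout=n2 /dev/sd[bcde]1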
Joshua Johnson wrote:
Greetings, long time listener, first time caller.
I recently replaced a disk in my existing 8-disk RAID 6 array.
Previously, all disks were PATA drives connected to the
motherboard IDE and three Promise Ultra 100/133 controllers.
I replaced one of the Promise controllers with
On Nov 24, 2007 12:20 PM, Bill Davidsen [EMAIL PROTECTED] wrote:
Does that match what's in the init files used at boot? By any chance
does the information there explicitly list partitions by name? If you
change to PARTITIONS in /etc/mdadm.conf it won't bite you until you
change the detected
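To illustrate the difference, in /etc/mdadm.conf you can
either list member devices explicitly or let mdadm scan
everything in /proc/partitions (paths and the UUID below are
placeholders):

    # fragile: breaks if device names shift after a
    # controller or kernel change
    DEVICE /dev/sda1 /dev/sdb1 /dev/sdc1

    # robust: scan all partitions, identify arrays by UUID
    DEVICE partitions
    ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx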