On Wednesday 04 May 2011 13:08:34 Evgeny Bushkov wrote:
> On 04.05.2011 11:54, Joost Roeleveld wrote:
> > On Wednesday 04 May 2011 10:07:58 Evgeny Bushkov wrote:
> >> On 04.05.2011 01:49, Florian Philipp wrote:
> >>> Am 03.05.2011 19:54, schrieb Evgeny Bushkov:
> >>>> Hi.
> >>>> How can I find out which disk is the parity disk in a software
> >>>> RAID-4 array? I couldn't find that in the mdadm manual. I know
> >>>> that RAID-4 uses a dedicated parity disk, which is usually the
> >>>> bottleneck of the array, so that disk must be as fast as possible.
> >>>> It therefore seems useful to combine a few slow disks with one
> >>>> relatively fast disk in such a RAID-4 array.
> >>>> 
> >>>> Best regards,
> >>>> Bushkov E.
> >>> 
> >>> You are seriously considering RAID4? You know, there is a reason
> >>> why it was superseded by RAID5. Given the way RAID4 operates, a
> >>> first guess for finding the parity disk in a running array would be
> >>> to look for the one with the worst SMART data: it is the parity
> >>> disk that dies the soonest.
> >>> 
> >>> From looking at the source code it seems like the last specified
> >>> disk is parity. Disclaimer: I'm no kernel hacker and I have only
> >>> inspected the code, not tried to understand the whole MD subsystem.
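> >>>
> >>> If you want to double-check on a live array, the slot numbers are
> >>> also exposed in sysfs. Something along these lines should show
> >>> which device sits in the last (highest-numbered) slot; this is
> >>> untested, and /dev/md0 is just a placeholder for your array:
> >>>
> >>> grep . /sys/block/md0/md/dev-*/slot
> >>> mdadm --detail /dev/md0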
> >>> 
> >>> Regards,
> >>> Florian Philipp
> >> 
> >> Thank you for answering... The reason I'm considering RAID-4 is
> >> that I have a few SATA/150 drives and a pair of SATA II drives.
> >> Let's look at the problem from the other side: I can create a
> >> RAID-0 (from the SATA II drives) and then add it to the RAID-4 as
> >> the parity disk. It doesn't bother me if a disk in the RAID-0
> >> fails, since that would only degrade my RAID-4 array, not disrupt
> >> it. For example:
> >> 
> >> mdadm --create /dev/md1 --level=4 -n 3 -c 128 /dev/sdb1 /dev/sdc1 missing
> >> mdadm --create /dev/md2 --level=0 -n 2 -c 128 /dev/sda1 /dev/sdd1
> >> mdadm /dev/md1 --add /dev/md2
> >> 
> >> livecd ~ # cat /proc/mdstat
> >> Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
> >> md2 : active raid0 sdd1[1] sda1[0]
> >>       20969472 blocks super 1.2 128k chunks
> >>
> >> md1 : active raid4 md2[3] sdc1[1] sdb1[0]
> >>       20969216 blocks super 1.2 level 4, 128k chunk, algorithm 0 [3/2] [UU_]
> >>       [========>............]  recovery = 43.7% (4590464/10484608) finish=1.4min speed=69615K/sec
> >> 
> >> That configuration works well, but I'm not sure whether md2 really
> >> is the parity disk here, which is why I asked. Maybe I'm wrong and
> >> RAID-5 is the only array worth using; I'm just trying to weigh all
> >> the pros and cons here.
> >> 
> >> Best regards,
> >> Bushkov E.
> > 
> > I only use RAID-0 (when I want performance and don't care about the
> > data), RAID-1 (for data I can't afford to lose) and RAID-5 (for data
> > I would like to keep). I have never bothered with RAID-4.
> > 
> > What do you see in the "dmesg" after the mdadm commands?
> > It might actually mention which is the parity disk in there.
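> >
> > Something like this should pull the relevant lines out of the log
> > (I'm only guessing at the exact wording the kernel uses):
> >
> > dmesg | grep -i -A 6 'raid conf'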
> > 
> > --
> > Joost
> 
> There's nothing special in dmesg:
> 
> md: bind<md2>
> RAID conf printout:
>  --- level:4 rd:3 wd:2
>  disk 0, o:1, dev:sdb1
>  disk 1, o:1, dev:sdc1
>  disk 2, o:1, dev:md2
> md: recovery of RAID array md1
> 
> I've run some tests with different chunk sizes. The fastest was
> RAID-10 (4 disks), with RAID-5 (3 disks) a close second. RAID-4 (4
> disks) was only about as fast as RAID-5, so I don't see any point in
> using it.
> 
> Best regards,
> Bushkov E.

What's the result of:
mdadm --misc --detail /dev/md1
?

I'm not sure what info this command will provide for a RAID-4...
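
You could also ask the member superblocks directly; with 1.2 metadata,
mdadm --examine prints a "Device Role" line, if I remember correctly.
For instance (device names taken from your earlier mail):

mdadm --examine /dev/sdb1 | grep -i role
mdadm --examine /dev/md2 | grep -i role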

--
Joost
