Problem with RAID-1, inconsistent data returned if disks get out of sync

2006-11-17 Thread Roger Lucas
Hi, I am running the 2.6.16.20 kernel on what is otherwise a Debian Sarge system. I have two identical SATA hard drives in the system. Both have an identical boot partition at the start of the disks (/dev/sda1, /dev/sda2), and the remainder of the disks is used as RAID-1, on which I have LVM for my
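
The thread subject concerns two RAID-1 members drifting out of sync. A minimal sketch of the first diagnostic step, assuming a Linux system with the md driver: read /proc/mdstat, which reports each array's member devices and resync progress, so the two mirrors' sync state can be compared.

```shell
# Minimal sketch: summarize kernel md status. Assumes Linux; falls back to a
# message on systems without /proc/mdstat (the function name is illustrative).
mdstat_summary() {
  if [ -r /proc/mdstat ]; then
    # Shows personalities, per-array member lists, and any resync in progress.
    cat /proc/mdstat
  else
    echo "no /proc/mdstat on this machine"
  fi
}
mdstat_summary
```

On a healthy mirror the array line ends in `[2/2] [UU]`; an underscore in place of a `U` indicates a missing or out-of-sync member.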

raidreconf for 5 x 320GB - 8 x 320GB

2006-11-17 Thread Timo Bernack
Hi there, I am running a 5-disk RAID5 using mdadm on a SUSE 10.1 system. As the array is running out of space, I am considering adding three more HDDs. Before I set up the current array, I made a small test with raidreconf: - build a 4-disk RAID5 /dev/md0 (with only 1.5GB for each partition) with

RE: Raid1 uses partially reconstructed drive

2006-11-17 Thread Danny Sung
Uhh... I have no idea what metadata or bitmap I was using... the default, I guess? How do I safely get that info? I didn't see anything obvious in the man page. I think what may have happened is that I had created the primary drive on hdb, then physically moved it to hda. I wish I had written
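
The metadata version and bitmap information the poster is after can be read back safely, since mdadm's query modes do not write to the array. A hedged sketch, assuming the array is /dev/md0 as in the thread (adjust the device name for your system):

```shell
# Read-only queries for metadata version and bitmap status; guarded so the
# sketch also runs on machines without such an array.
if [ -b /dev/md0 ]; then
  # "Version:" and "Intent Bitmap:" lines answer the metadata/bitmap question.
  mdadm --detail /dev/md0 2>/dev/null | grep -Ei 'version|bitmap' \
    || echo "could not query /dev/md0"
else
  echo "/dev/md0 not present on this machine"
fi
```

`mdadm --examine <member-device>` reads the superblock on an individual component disk instead, which is useful when a drive has been physically moved as described above.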

mdadm --misc --detail --test ... question

2006-11-17 Thread Russell Hammer
I'm trying to test the status of a RAID device using mdadm: # mdadm --misc --detail --test /dev/md0 However, this does not appear to work as documented. As I read the man page, the return code is supposed to reflect the status of the RAID device: MISC MODE ... --detail
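
For reference, the mdadm man page documents the exit codes for `--detail --test`: 0 means the array is functioning normally, 1 means it has at least one failed device, 2 means multiple failures have made it unusable, and 4 means there was an error getting information about the device. A small sketch of how a monitoring script might interpret them (the function name is illustrative):

```shell
# Map the documented exit codes of "mdadm --misc --detail --test" to messages.
raid_status() {
  case "$1" in
    0) echo "array is functioning normally" ;;
    1) echo "array has at least one failed device" ;;
    2) echo "array has multiple failed devices and is unusable" ;;
    *) echo "error querying array" ;;
  esac
}
# Usage (requires a real array, so shown commented out):
#   mdadm --misc --detail --test /dev/md0; raid_status $?
raid_status 1
```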

Re: raidreconf for 5 x 320GB - 8 x 320GB

2006-11-17 Thread Mike Hardy
You don't want to use raidreconf, unless I'm misunderstanding your goal. I have had success with raidreconf, but I have had data-loss failures as well (I've posted to the list about them if you search). The data-loss failures came after I had run tests that showed me it should work. raidreconf