Very small internal bitmap after recreate

2007-11-02 Thread Ralf Müller
I have a 5 disk version 1.0 superblock RAID5 which had an internal bitmap that has been reported to have a size of 299 pages in /proc/mdstat. For whatever reason I removed this bitmap (mdadm --grow --bitmap=none) and recreated it afterwards (mdadm --grow --bitmap=internal). Now it has a
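The remove-and-recreate sequence described above can be sketched as below. This is a minimal sketch, not the poster's exact session: /dev/md1 is a placeholder device name, and the explicit --bitmap-chunk value is an assumption shown only to illustrate how the bitmap granularity (and hence the page count in /proc/mdstat) can be influenced when recreating.

```shell
# Drop the existing internal write-intent bitmap (placeholder array /dev/md1).
mdadm --grow --bitmap=none /dev/md1

# Recreate it; mdadm picks a chunk size, which determines the bitmap size.
mdadm --grow --bitmap=internal /dev/md1

# To influence the resulting size, a chunk size (in KiB) can be requested
# explicitly when recreating - 4096 here is an illustrative value only.
mdadm --grow --bitmap=none /dev/md1
mdadm --grow --bitmap=internal --bitmap-chunk=4096 /dev/md1

# Inspect the resulting bitmap (pages/chunks) in the status file.
cat /proc/mdstat
```

A smaller chunk size means more bitmap pages (finer resync granularity); a larger one shrinks the bitmap at the cost of coarser tracking.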

Re: Very small internal bitmap after recreate

2007-11-02 Thread Ralf Müller
On 02.11.2007 at 12:43, Neil Brown wrote: For now, you will have to live with a smallish bitmap, which probably isn't a real problem. Ok then. Array Slot : 3 (0, 1, failed, 2, 3, 4) Array State : uuUuu 1 failed This time I'm getting nervous - Array State failed doesn't sound

Re: Very small internal bitmap after recreate

2007-11-02 Thread Ralf Müller
On 02.11.2007 at 10:22, Neil Brown wrote: On Friday November 2, [EMAIL PROTECTED] wrote: I have a 5 disk version 1.0 superblock RAID5 which had an internal bitmap that has been reported to have a size of 299 pages in /proc/mdstat. For whatever reason I removed this bitmap (mdadm --grow --

Re: Very small internal bitmap after recreate

2007-11-02 Thread Ralf Müller
On 02.11.2007 at 11:22, Ralf Müller wrote: # mdadm -E /dev/sdg1 /dev/sdg1: Magic : a92b4efc Version : 01 Feature Map : 0x1 Array UUID : e1a335a8:fc0f0626:d70687a6:5d9a9c19 Name : 1 Creation Time : Wed Oct 31 14:30:55 2007 Raid Level : raid5
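The superblock dump quoted in this preview comes from mdadm's examine mode. A sketch of the command, using the same member device named in the thread:

```shell
# Dump the version-1.0 md superblock of a single member device.
# /dev/sdg1 is the device quoted in the thread; substitute as needed.
mdadm --examine /dev/sdg1

# The output includes, among other fields:
#   Magic / Version / Feature Map   - superblock identification
#   Array UUID / Name / Raid Level  - array identity
#   Array Slot / Array State        - this device's recorded view of the array
```

Note that each member records its own view, so comparing --examine output across all members is a common way to diagnose inconsistent array state.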

RAID6 recover problem

2007-10-12 Thread Ralf Müller
disks sda2 and sdb2 all other disks report 19 dirty chunks (mdadm -X). The offline disks report 7 dirty chunks. If one needs any further data - just ask. Hoping for assistance Ralf Müller - To unsubscribe from this list: send the line unsubscribe linux-raid in the body of a message to [EMAIL
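The dirty-chunk counts mentioned above are read from each member's write-intent bitmap with mdadm -X. A minimal sketch of how to collect them, assuming the member partitions follow the /dev/sdX2 naming used in the thread:

```shell
# Print the bitmap summary (including the dirty-chunk count) for each
# RAID member partition. The /dev/sd[a-f]2 glob is an assumption based
# on the device names quoted in the thread.
for dev in /dev/sd[a-f]2; do
    echo "== $dev =="
    mdadm -X "$dev" | grep -i 'dirty'
done
```

Members whose dirty counts differ (here, 19 versus 7) stopped being updated at different times, which is useful evidence when deciding which disks to re-add after a multi-disk failure.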

Re: RAID6 recover problem

2007-10-12 Thread Ralf Müller
On 12.10.2007 at 17:51, Nagilum wrote: Then you can mark them as bad and Linux will sync to a spare. Because it is already running without redundancy - two disks marked as failed - it will simply go offline. As for your sdc, I'd test it outside of the raid (dd if=/dev/sdc

Re: Replace drive in RAID5 without losing redundancy?

2007-03-08 Thread Ralf Müller
On 07.03.2007 at 16:14, Bill Davidsen wrote: Neil Brown wrote: On Monday March 5, [EMAIL PROTECTED] wrote: Is it possible to mark a disk as to be replaced by an existing spare, then migrate to the spare disk and kick the old disk _after_ migration has been done? Or not even kick - but

Re: Replace drive in RAID5 without losing redundancy?

2007-03-06 Thread Ralf Müller
On 05.03.2007 at 23:29, Neil Brown wrote: On Monday March 5, [EMAIL PROTECTED] wrote: Is it possible to mark a disk as to be replaced by an existing spare, then migrate to the spare disk and kick the old disk _after_ migration has been done? Or not even kick - but mark as new spare.

Re: Replace drive in RAID5 without losing redundancy?

2007-03-06 Thread Ralf Müller
On 06.03.2007 at 08:37, dean gaudet wrote: On Tue, 6 Mar 2007, Neil Brown wrote: On Monday March 5, [EMAIL PROTECTED] wrote: Is it possible to mark a disk as to be replaced by an existing spare, then migrate to the spare disk and kick the old disk _after_ migration has been done? Or

Replace drive in RAID5 without losing redundancy?

2007-03-05 Thread Ralf Müller
Hi The day before I grew a 4 times 300GB disk RAID5. I replaced the 300GB drives with 750GB ones. As far as I can see the proposed way to do that is to kick a drive from the RAID and let a spare drive take over - for sensitive data this is scary - at least for me, because I lose redundancy for the
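The operation asked for in this thread - migrating to a spare before failing the old disk, so redundancy is never lost - was later added to md as "hot replace". A sketch under the assumption of a recent mdadm and kernel (roughly mdadm 3.3+ with a 3.3+ kernel); device names are placeholders:

```shell
# Add the new, larger disk as a spare (placeholder device names).
mdadm /dev/md0 --add /dev/sdg1

# Copy the old member's data onto the spare while the array stays fully
# redundant, then mark the old member faulty once the copy completes.
mdadm /dev/md0 --replace /dev/sdc1 --with /dev/sdg1

# Afterwards, remove the old disk from the array.
mdadm /dev/md0 --remove /dev/sdc1
```

In 2007, when this thread was written, this feature did not exist; the workarounds discussed in the replies (e.g. temporarily building a RAID1 on top of the member) were the only ways to avoid the redundancy gap.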