Re: Very small internal bitmap after recreate

2007-11-02 Thread Neil Brown
On Friday November 2, [EMAIL PROTECTED] wrote:
 
 On 02.11.2007 at 10:22, Neil Brown wrote:
 
  On Friday November 2, [EMAIL PROTECTED] wrote:
  I have a 5 disk version 1.0 superblock RAID5 which had an internal
  bitmap that was reported to have a size of 299 pages in /proc/mdstat.
  For whatever reason I removed this bitmap (mdadm --grow --bitmap=none)
  and recreated it afterwards (mdadm --grow --bitmap=internal). Now it
  has a reported size of 10 pages.
 
  Do I have a problem?
 
  Not a big problem, but possibly a small problem.
  Can you send
 mdadm -E /dev/sdg1
  as well?
 
 Sure:
 
 # mdadm -E /dev/sdg1
 /dev/sdg1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : e1a335a8:fc0f0626:d70687a6:5d9a9c19
           Name : 1
  Creation Time : Wed Oct 31 14:30:55 2007
     Raid Level : raid5
   Raid Devices : 5

  Used Dev Size : 625137008 (298.09 GiB 320.07 GB)
     Array Size : 2500547584 (1192.35 GiB 1280.28 GB)
      Used Size : 625136896 (298.09 GiB 320.07 GB)
   Super Offset : 625137264 sectors

So there are 256 sectors before the superblock where a bitmap could go,
or about 6 sectors afterwards.

          State : clean
    Device UUID : 95afade2:f2ab8e83:b0c764a0:4732827d

Internal Bitmap : 2 sectors from superblock

And the '6 sectors afterwards' was chosen.
6 sectors have room for 5*512*8 = 20480 bits,
and from your previous email:
   Bitmap : 19078 bits (chunks), 0 dirty (0.0%)
you have 19078 bits, which is about right (as the bitmap chunk size
must be a power of 2).
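
For anyone who wants to check that arithmetic, here is a rough Python
sketch (the numbers are copied from the -E and -X output in this
thread; treating one of the 6 sectors as reserved for the bitmap's own
superblock is an assumption on my part, chosen to match the 5*512*8
figure):

import math

super_offset  = 625137264   # sectors, from "Super Offset" in -E
used_dev_size = 625137008   # sectors, from "Used Dev Size" in -E
sync_size_kib = 312568448   # KiB, from "Sync Size" in -X
chunk_kib     = 16 * 1024   # KiB, from "Chunksize : 16 MB" in -X

# Space between the end of the data and the version-1.0 superblock.
print(super_offset - used_dev_size, "sectors free before the superblock")  # 256

# One bit is needed per bitmap chunk.
print(math.ceil(sync_size_kib / chunk_kib), "bits (chunks) needed")        # 19078

# Capacity of the ~6 sectors after the superblock, assuming one sector
# is taken by the bitmap's own superblock.
print((6 - 1) * 512 * 8, "bits fit after the superblock")                  # 20480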

So the problem is that mdadm -G is putting the bitmap after the
superblock rather than considering the space before
(checks code)

Ahh, I remember now.  There is currently no interface to tell the
kernel where to put the bitmap when creating one on an active array,
so it always puts it in the 'safe' place.  Another enhancement waiting
for time.

For now, you will have to live with a smallish bitmap, which probably
isn't a real problem.  With 19078 bits, you will still get a
several-thousand-fold increase in resync speed after a crash
(i.e. hours become seconds), and to some extent fewer bits are better,
as you have to update them less often.

I haven't made any measurements to see what size bitmap is
ideal... maybe someone should :-)
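
To put a rough number on "hours become seconds", here is a
back-of-the-envelope sketch in Python; the resync rate and the number
of dirty chunks after a crash are assumptions for illustration, not
measurements:

per_device_bytes = 625136896 * 512   # "Used Size" from -E, in bytes (~320 GB)
rate_bytes_per_s = 60e6              # assumed resync rate per device
chunk_bytes      = 16 * 1024 * 1024  # 16 MB bitmap chunk
dirty_chunks     = 20                # assumed chunks dirty at crash time

full_resync   = per_device_bytes / rate_bytes_per_s            # whole device
bitmap_resync = dirty_chunks * chunk_bytes / rate_bytes_per_s  # dirty chunks only

print("full resync   ~ %.1f hours" % (full_resync / 3600))    # ~1.5 hours
print("bitmap resync ~ %.1f seconds" % bitmap_resync)         # ~5.6 seconds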

    Update Time : Fri Nov  2 07:46:38 2007
       Checksum : 4ee307b3 - correct
         Events : 408088

         Layout : left-symmetric
     Chunk Size : 128K

     Array Slot : 3 (0, 1, failed, 2, 3, 4)
    Array State : uuUuu 1 failed
 
 This time I'm getting nervous - Array State failed doesn't sound good!

This is nothing to worry about - just a bad message from mdadm.

The superblock has recorded that there was once a device in position 2
which is now failed (see the list in Array Slot).
This is summarised as '1 failed' in Array State.

But the array is definitely working OK now.

NeilBrown


Very small internal bitmap after recreate

2007-11-02 Thread Ralf Müller
I have a 5 disk version 1.0 superblock RAID5 which had an internal
bitmap that was reported to have a size of 299 pages in /proc/mdstat.
For whatever reason I removed this bitmap (mdadm --grow --bitmap=none)
and recreated it afterwards (mdadm --grow --bitmap=internal). Now it
has a reported size of 10 pages.


Do I have a problem?

# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid5 sdd1[0] sdh1[5] sdf1[4] sdg1[3] sde1[1]
      1250273792 blocks super 1.0 level 5, 128k chunk, algorithm 2 [5/5] [UUUUU]

  bitmap: 0/10 pages [0KB], 16384KB chunk

# mdadm -X /dev/sdg1
      Filename : /dev/sdg1
         Magic : 6d746962
       Version : 4
          UUID : e1a335a8:fc0f0626:d70687a6:5d9a9c19
        Events : 408088
Events Cleared : 408088
         State : OK
     Chunksize : 16 MB
        Daemon : 5s flush period
    Write Mode : Normal
     Sync Size : 312568448 (298.09 GiB 320.07 GB)
        Bitmap : 19078 bits (chunks), 0 dirty (0.0%)

# mdadm --version
mdadm - v2.6.2 - 21st May 2007

# uname -a
Linux DatenGrab 2.6.22.9-0.4-default #1 SMP 2007/10/05 21:32:04 UTC i686 i686 i386 GNU/Linux



Regards Ralf



Re: Very small internal bitmap after recreate

2007-11-02 Thread Ralf Müller


On 02.11.2007 at 12:43, Neil Brown wrote:


For now, you will have to live with a smallish bitmap, which probably
isn't a real problem.


Ok then.


 Array Slot : 3 (0, 1, failed, 2, 3, 4)
Array State : uuUuu 1 failed

This time I'm getting nervous - Array State failed doesn't sound good!


This is nothing to worry about - just a bad message from mdadm.

The superblock has recorded that there was once a device in position 2
which is now failed (see the list in Array Slot).
This is summarised as '1 failed' in Array State.

But the array is definitely working OK now.


Good to know.

Thanks a lot
Ralf


Re: Very small internal bitmap after recreate

2007-11-02 Thread Ralf Müller


On 02.11.2007 at 10:22, Neil Brown wrote:


On Friday November 2, [EMAIL PROTECTED] wrote:

I have a 5 disk version 1.0 superblock RAID5 which had an internal
bitmap that was reported to have a size of 299 pages in /proc/mdstat.
For whatever reason I removed this bitmap (mdadm --grow --bitmap=none)
and recreated it afterwards (mdadm --grow --bitmap=internal). Now it
has a reported size of 10 pages.

Do I have a problem?


Not a big problem, but possibly a small problem.
Can you send
   mdadm -E /dev/sdg1
as well?


Sure:

# mdadm -E /dev/sdg1
/dev/sdg1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : e1a335a8:fc0f0626:d70687a6:5d9a9c19
           Name : 1
  Creation Time : Wed Oct 31 14:30:55 2007
     Raid Level : raid5
   Raid Devices : 5

  Used Dev Size : 625137008 (298.09 GiB 320.07 GB)
     Array Size : 2500547584 (1192.35 GiB 1280.28 GB)
      Used Size : 625136896 (298.09 GiB 320.07 GB)
   Super Offset : 625137264 sectors
          State : clean
    Device UUID : 95afade2:f2ab8e83:b0c764a0:4732827d

Internal Bitmap : 2 sectors from superblock
    Update Time : Fri Nov  2 07:46:38 2007
       Checksum : 4ee307b3 - correct
         Events : 408088

         Layout : left-symmetric
     Chunk Size : 128K

     Array Slot : 3 (0, 1, failed, 2, 3, 4)
    Array State : uuUuu 1 failed

This time I'm getting nervous - Array State failed doesn't sound good!

Regards
Ralf


Re: Very small internal bitmap after recreate

2007-11-02 Thread Ralf Müller


On 02.11.2007 at 11:22, Ralf Müller wrote:



# mdadm -E /dev/sdg1
/dev/sdg1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : e1a335a8:fc0f0626:d70687a6:5d9a9c19
           Name : 1
  Creation Time : Wed Oct 31 14:30:55 2007
     Raid Level : raid5
   Raid Devices : 5

  Used Dev Size : 625137008 (298.09 GiB 320.07 GB)
     Array Size : 2500547584 (1192.35 GiB 1280.28 GB)
      Used Size : 625136896 (298.09 GiB 320.07 GB)
   Super Offset : 625137264 sectors
          State : clean
    Device UUID : 95afade2:f2ab8e83:b0c764a0:4732827d

Internal Bitmap : 2 sectors from superblock
    Update Time : Fri Nov  2 07:46:38 2007
       Checksum : 4ee307b3 - correct
         Events : 408088

         Layout : left-symmetric
     Chunk Size : 128K

     Array Slot : 3 (0, 1, failed, 2, 3, 4)
    Array State : uuUuu 1 failed

This time I'm getting nervous - Array State failed doesn't sound good!


Just to make it clear - the array is still reported active in
/proc/mdstat and behaves well - no failed devices:

md1 : active raid5 sdd1[0] sdh1[5] sdf1[4] sdg1[3] sde1[1]
      1250273792 blocks super 1.0 level 5, 128k chunk, algorithm 2 [5/5] [UUUUU]

  bitmap: 0/10 pages [0KB], 16384KB chunk

Regards
Ralf