Re: raid5 grow problem

2006-08-18 Thread Justin Piszcz
Adding XFS mailing list to this e-mail to show that the grow for xfs 
worked.


On Thu, 17 Aug 2006, 舒星 wrote:


I've only tried growing a RAID5, which was the only RAID that I remember
being supported (to grow) in the kernel; I am not sure if it's possible to

I know this, but how did you grow your RAID5? What is your mdadm version?
Is any other configuration needed before creating the md device and using
mdadm -G to grow it?

grow other types of RAID arrays.




0) Compile your kernel with experimental raid5 re-sizing support.
1) Add a spare to the RAID5.
2) Grow the array.

Like this:

p34:~# mdadm --create /dev/md3 /dev/hda1 /dev/hde1 /dev/sdc1 --level=5 
--raid-disks=3

mdadm: array /dev/md3 started.
p34:~# mdadm -D /dev/md3
/dev/md3:
Version : 00.90.03
  Creation Time : Fri Jul  7 15:44:24 2006
 Raid Level : raid5
 Array Size : 781417472 (745.22 GiB 800.17 GB)
Device Size : 390708736 (372.61 GiB 400.09 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 3
Persistence : Superblock is persistent

Update Time : Fri Jul  7 15:44:24 2006
  State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

 Layout : left-symmetric
 Chunk Size : 64K

 Rebuild Status : 0% complete

   UUID : cf7a7488:64c04921:b8dfe47c:6c785fa1
 Events : 0.1

Number   Major   Minor   RaidDevice State
   0       3        1        0      active sync   /dev/hda1
   1      33        1        1      active sync   /dev/hde1
   3       8       33        2      spare rebuilding   /dev/sdc1
p34:~#

p34:~# df -h | grep /raid5
/dev/md3  746G   80M  746G   1% /raid5
p34:~# umount /dev/md3
p34:~# mdadm -D /dev/md3
/dev/md3:
Version : 00.90.03
  Creation Time : Fri Jul  7 15:44:24 2006
 Raid Level : raid5
 Array Size : 781417472 (745.22 GiB 800.17 GB)
Device Size : 390708736 (372.61 GiB 400.09 GB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 3
Persistence : Superblock is persistent

Update Time : Fri Jul  7 18:25:29 2006
  State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

 Layout : left-symmetric
 Chunk Size : 64K

   UUID : cf7a7488:64c04921:b8dfe47c:6c785fa1
 Events : 0.26

Number   Major   Minor   RaidDevice State
   0       3        1        0      active sync   /dev/hda1
   1      33        1        1      active sync   /dev/hde1
   2       8       33        2      active sync   /dev/sdc1

   3      22        1        -      spare   /dev/hdc1

p34:~# mdadm /dev/md3 --grow --raid-disks=4
mdadm: Need to backup 384K of critical section..
mdadm: ... critical section passed.
p34:~# cat /proc/mdstat
Personalities : [raid1] [raid5] [raid4]
md1 : active raid1 sdb2[1] sda2[0]
  136448 blocks [2/2] [UU]

md2 : active raid1 sdb3[1] sda3[0]
  70268224 blocks [2/2] [UU]

md3 : active raid5 hdc1[3] sdc1[2] hde1[1] hda1[0]
      781417472 blocks super 0.91 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      [>....................]  reshape =  0.0% (85120/390708736) finish=840.5min speed=7738K/sec


md0 : active raid1 sdb1[1] sda1[0]
  2200768 blocks [2/2] [UU]

unused devices: <none>
p34:~# cat /proc/mdstat
Personalities : [raid1] [raid5] [raid4]
md1 : active raid1 sdb2[1] sda2[0]
  136448 blocks [2/2] [UU]

md2 : active raid1 sdb3[1] sda3[0]
  70268224 blocks [2/2] [UU]

md3 : active raid5 hdc1[3] sdc1[2] hde1[1] hda1[0]
      781417472 blocks super 0.91 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      [>....................]  reshape =  0.0% (286284/390708736) finish=779.8min speed=8342K/sec


md0 : active raid1 sdb1[1] sda1[0]
  2200768 blocks [2/2] [UU]

unused devices: <none>
p34:~#

p34:~# mount /raid5

p34:~# xfs_growfs /raid5
meta-data=/dev/md3               isize=256    agcount=32, agsize=6104816 blks
         =                       sectsz=4096  attr=0
data     =                       bsize=4096   blocks=195354112, imaxpct=25
         =                       sunit=16     swidth=48 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=4096  sunit=1 blks
realtime =none                   extsz=196608 blocks=0, rtextents=0
data blocks changed from 195354112 to 195354368
p34:~#

p34:~# umount /raid5

p34:~# mount /raid5
p34:~# df -h
/dev/md3  746G   80M  746G   1% /raid5
p34:~#
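
For reference, the whole procedure above condenses to a handful of commands.
A minimal sketch, assuming a three-disk RAID5 on /dev/md3 with XFS mounted on
/raid5 and a new partition /dev/hdc1 to add (device names and the cache value
are illustrative, taken from the transcript, not prescriptive):

    # 1) Add the new disk as a spare.
    mdadm /dev/md3 --add /dev/hdc1

    # 2) If the chunk size is large, raise the stripe cache first
    #    (see Neil Brown's note later in this digest); 600 is an example.
    echo 600 > /sys/block/md3/md/stripe_cache_size

    # 3) Grow the array onto the spare.
    mdadm --grow /dev/md3 --raid-disks=4

    # 4) Wait for the reshape to finish, then grow the filesystem.
    cat /proc/mdstat              # watch the reshape progress
    xfs_growfs /raid5             # only effective once the reshape completes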



Re: raid5 grow problem

2006-08-17 Thread Justin Piszcz



On Thu, 17 Aug 2006, 舒星 wrote:


hello all:
  I installed mdadm 2.5.2 and compiled the 2.6.17.6 kernel. When I run the
command to grow a RAID5 array, it doesn't work. What do I need to do to
make the RAID5 grow? Thanks for your help.



It may help if you posted the error you were getting.

Justin.

Re: raid5 grow problem

2006-08-17 Thread 舒星

dear sir:
  I am trying to run a reshape command with mdadm (version 2.5.2). In a
Linux shell I run:
$ mdadm -G /dev/md5 /dev/sdd2
/dev/md5 is a RAID5 device with 4 disks, and /dev/sdd2 is a partition.
It gives the prompt "mdadm: can only add devices to linear arrays", but I
have read the source code of mdadm (v2.5.2), and it really does support
adding several disks to an array (especially RAID5) to give the array a
larger capacity.

I want to know: if I want to add (not hot-add) a disk (or a partition) to
this array, how do I issue the reshape command correctly?
I have read the mdadm help, but I still can't find the correct command
format!
   Please do me a favor!
Thank you!
   dragon


Re: raid5 grow problem

2006-08-17 Thread Justin Piszcz
I've only tried growing a RAID5, which was the only RAID that I remember
being supported (to grow) in the kernel; I am not sure if it's possible to
grow other types of RAID arrays.


On Thu, 17 Aug 2006, 舒星 wrote:


dear sir:
  I am trying to run a reshape command with mdadm (version 2.5.2). In a
Linux shell I run:
$ mdadm -G /dev/md5 /dev/sdd2
/dev/md5 is a RAID5 device with 4 disks, and /dev/sdd2 is a partition.
It gives the prompt "mdadm: can only add devices to linear arrays", but I
have read the source code of mdadm (v2.5.2), and it really does support
adding several disks to an array (especially RAID5) to give the array a
larger capacity.

I want to know: if I want to add (not hot-add) a disk (or a partition) to
this array, how do I issue the reshape command correctly?
I have read the mdadm help, but I still can't find the correct command
format!
  Please do me a favor!
   Thank you!
  dragon


Re: raid5 grow problem

2006-08-17 Thread 舒星

I've only tried growing a RAID5, which was the only RAID that I remember
being supported (to grow) in the kernel; I am not sure if it's possible to

I know this, but how did you grow your RAID5? What is your mdadm version?
Is any other configuration needed before creating the md device and using
mdadm -G to grow it?

grow other types of RAID arrays.



raid5 grow problem

2006-08-16 Thread 舒星

hello all:
   I installed mdadm 2.5.2 and compiled the 2.6.17.6 kernel. When I run the
command to grow a RAID5 array, it doesn't work. What do I need to do to
make the RAID5 grow? Thanks for your help.


Re: Kernel 2.6.17 and RAID5 Grow Problem (critical section backup)

2006-07-17 Thread Neil Brown
On Tuesday July 11, [EMAIL PROTECTED] wrote:
  Hm, what's superblock 0.91? It is not mentioned in mdadm.8.
 
  Not sure, the block version perhaps?
 
  Well yes of course, but what characteristics? The manual only lists
   0, 0.90, default
   1, 1.0, 1.1, 1.2
  No 0.91 :(
 
 
 AFAICR superblock version gets raised by 0.01 for the duration of 
 reshape, so that non-reshape aware kernels do not try to assemble it 
 (and cause data corruption).

Exactly.  The following will be in the next mdadm - unless someone
wants to re-write it for me using shorter sentences :-)

NeilBrown



diff .prev/md.4 ./md.4
--- .prev/md.4  2006-06-20 10:01:17.0 +1000
+++ ./md.4  2006-07-18 10:14:47.0 +1000
@@ -74,6 +74,14 @@ UUID
 a 128 bit Universally Unique Identifier that identifies the array that
 this device is part of.
 
+When a version 0.90 array is being reshaped (e.g. adding extra devices
+to a RAID5), the version number is temporarily set to 0.91.  This
+ensures that if the reshape process is stopped in the middle (e.g. by
+a system crash) and the machine boots into an older kernel that does
+not support reshaping, then the array will not be assembled (which
+would cause data corruption) but will be left untouched until a kernel
+that can complete the reshape processes is used.
+
 .SS ARRAYS WITHOUT SUPERBLOCKS
 While it is usually best to create arrays with superblocks so that
 they can be assembled reliably, there are some circumstances where an
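
As an aside, the temporary 0.91 version is visible from user space while a
reshape is running; a quick check might look like the sketch below (device
names are illustrative, and the exact output formatting may differ between
mdadm versions):

    # /proc/mdstat reports the raised superblock version during the reshape:
    grep 'super 0.91' /proc/mdstat

    # Examining a member device should show the same bump, e.g. 00.91.00:
    mdadm --examine /dev/hda1 | grep Version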


Re: Kernel 2.6.17 and RAID5 Grow Problem (critical section backup)

2006-07-11 Thread Jan Engelhardt
 Hm, what's superblock 0.91? It is not mentioned in mdadm.8.
 

 Not sure, the block version perhaps?

Well yes of course, but what characteristics? The manual only lists

 0, 0.90, default
Use the original 0.90 format superblock.  This format
limits arrays to 28 component devices and limits compo-
nent devices of levels 1 and greater to 2 terabytes.

 1, 1.0, 1.1, 1.2
Use the new version-1 format superblock.  This has few
restrictions.  The different sub-versions store the
superblock at different locations on the device, either
at the end (for 1.0), at the start (for 1.1) or 4K from
the start (for 1.2).

No 0.91 :(
(My mdadm is 2.2, but the problem remains in 2.5.2)


Jan Engelhardt
-- 

Re: Kernel 2.6.17 and RAID5 Grow Problem (critical section backup)

2006-07-11 Thread Petr Vyskocil

Hm, what's superblock 0.91? It is not mentioned in mdadm.8.


Not sure, the block version perhaps?


Well yes of course, but what characteristics? The manual only lists
 0, 0.90, default
 1, 1.0, 1.1, 1.2
No 0.91 :(



AFAICR superblock version gets raised by 0.01 for the duration of 
reshape, so that non-reshape aware kernels do not try to assemble it 
(and cause data corruption).


Petr



Re: Kernel 2.6.17 and RAID5 Grow Problem (critical section backup)

2006-07-10 Thread Justin Piszcz



On Sat, 8 Jul 2006, Neil Brown wrote:


On Friday July 7, [EMAIL PROTECTED] wrote:


Jul  7 08:44:59 p34 kernel: [4295845.933000] raid5: reshape: not enough
stripes.  Needed 512
Jul  7 08:44:59 p34 kernel: [4295845.962000] md: couldn't update array
info. -28

So the RAID5 reshape only works if you use a 128kb or smaller chunk size?



Neil,

Any comments?



Yes.   This is something I need to fix in the next mdadm.
You need to tell md/raid5 to increase the size of the stripe cache
before the grow can proceed.  You can do this with

 echo 600 > /sys/block/md3/md/stripe_cache_size

Then the --grow should work.  The next mdadm will do this for you.

NeilBrown
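
To put numbers on that: the kernel check quoted earlier needs roughly
(chunk_size / STRIPE_SIZE) * 4 cached stripes. A rough sketch of the
arithmetic, assuming the usual 4 KiB STRIPE_SIZE and the default cache of
256 stripes:

    # 512 KiB chunk / 4 KiB stripe * 4 = 512 stripes needed
    # default stripe_cache_size        = 256  -> reshape refused (-ENOSPC)
    #
    # Raising the cache above the required count lets the reshape start:
    echo 600 > /sys/block/md3/md/stripe_cache_size   # any value > 512 would do
    cat /sys/block/md3/md/stripe_cache_size          # verify the new setting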


md3 : active raid5 sdc1[7] sde1[6] sdd1[5] hdk1[2] hdi1[4] hde1[3] hdc1[1] hda1[0]
      2344252416 blocks super 0.91 level 5, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
      [>....................]  reshape =  0.2% (1099280/390708736) finish=1031.7min speed=6293K/sec


It is working, thanks!



Re: Kernel 2.6.17 and RAID5 Grow Problem (critical section backup)

2006-07-10 Thread Jan Engelhardt
md3 : active raid5 sdc1[7] sde1[6] sdd1[5] hdk1[2] hdi1[4] hde1[3] hdc1[1] hda1[0]
      2344252416 blocks super 0.91 level 5, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
      [>....................]  reshape =  0.2% (1099280/390708736) finish=1031.7min speed=6293K/sec

 It is working, thanks!

Hm, what's superblock 0.91? It is not mentioned in mdadm.8.


Jan Engelhardt
-- 


Re: Kernel 2.6.17 and RAID5 Grow Problem (critical section backup)

2006-07-10 Thread Justin Piszcz



On Tue, 11 Jul 2006, Jan Engelhardt wrote:


md3 : active raid5 sdc1[7] sde1[6] sdd1[5] hdk1[2] hdi1[4] hde1[3] hdc1[1] hda1[0]
      2344252416 blocks super 0.91 level 5, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
      [>....................]  reshape =  0.2% (1099280/390708736) finish=1031.7min speed=6293K/sec

It is working, thanks!


Hm, what's superblock 0.91? It is not mentioned in mdadm.8.


Jan Engelhardt
--



Not sure, the block version perhaps?

I am using:

$ mdadm -V
mdadm - v2.5 -  26 May 2006

Debian Etch.

Justin.


Kernel 2.6.17 and RAID5 Grow Problem (critical section backup)

2006-07-07 Thread Justin Piszcz

p34:~# mdadm /dev/md3 -a /dev/hde1
mdadm: added /dev/hde1

p34:~# mdadm -D /dev/md3
/dev/md3:
Version : 00.90.03
  Creation Time : Fri Jun 30 09:17:12 2006
 Raid Level : raid5
 Array Size : 1953543680 (1863.04 GiB 2000.43 GB)
Device Size : 390708736 (372.61 GiB 400.09 GB)
   Raid Devices : 6
  Total Devices : 7
Preferred Minor : 3
Persistence : Superblock is persistent

Update Time : Fri Jul  7 08:25:44 2006
  State : clean
 Active Devices : 6
Working Devices : 7
 Failed Devices : 0
  Spare Devices : 1

 Layout : left-symmetric
 Chunk Size : 512K

   UUID : e76e403c:7811eb65:73be2f3b:0c2fc2ce
 Events : 0.232940

Number   Major   Minor   RaidDevice State
   0      22        1        0      active sync   /dev/hdc1
   1      56        1        1      active sync   /dev/hdi1
   2       3        1        2      active sync   /dev/hda1
   3       8       49        3      active sync   /dev/sdd1
   4      88        1        4      active sync   /dev/hdm1
   5       8       33        5      active sync   /dev/sdc1

   6      33        1        -      spare   /dev/hde1
p34:~# mdadm --grow /dev/md3 --raid-disks=7
mdadm: Need to backup 15360K of critical section..
mdadm: Cannot set device size/shape for /dev/md3: No space left on device
p34:~# mdadm --grow /dev/md3 --bitmap=internal --raid-disks=7
mdadm: can change at most one of size, raiddisks, bitmap, and layout
p34:~# umount /dev/md3
p34:~# mdadm --grow /dev/md3 --raid-disks=7
mdadm: Need to backup 15360K of critical section..
mdadm: Cannot set device size/shape for /dev/md3: No space left on device
p34:~#

The disk only has about 350GB of 1.8TB used, any idea why I get this 
error?


I searched google but could not find anything on this issue when trying to 
grow the array?





Re: Kernel 2.6.17 and RAID5 Grow Problem (critical section backup)

2006-07-07 Thread Justin Piszcz

On Fri, 7 Jul 2006, Justin Piszcz wrote:


p34:~# mdadm /dev/md3 -a /dev/hde1
mdadm: added /dev/hde1

p34:~# mdadm -D /dev/md3
/dev/md3:
   Version : 00.90.03
 Creation Time : Fri Jun 30 09:17:12 2006
Raid Level : raid5
Array Size : 1953543680 (1863.04 GiB 2000.43 GB)
   Device Size : 390708736 (372.61 GiB 400.09 GB)
  Raid Devices : 6
 Total Devices : 7
Preferred Minor : 3
   Persistence : Superblock is persistent

   Update Time : Fri Jul  7 08:25:44 2006
 State : clean
Active Devices : 6
Working Devices : 7
Failed Devices : 0
 Spare Devices : 1

Layout : left-symmetric
Chunk Size : 512K

  UUID : e76e403c:7811eb65:73be2f3b:0c2fc2ce
Events : 0.232940

   Number   Major   Minor   RaidDevice State
  0      22        1        0      active sync   /dev/hdc1
  1      56        1        1      active sync   /dev/hdi1
  2       3        1        2      active sync   /dev/hda1
  3       8       49        3      active sync   /dev/sdd1
  4      88        1        4      active sync   /dev/hdm1
  5       8       33        5      active sync   /dev/sdc1

  6      33        1        -      spare   /dev/hde1
p34:~# mdadm --grow /dev/md3 --raid-disks=7
mdadm: Need to backup 15360K of critical section..
mdadm: Cannot set device size/shape for /dev/md3: No space left on device
p34:~# mdadm --grow /dev/md3 --bitmap=internal --raid-disks=7
mdadm: can change at most one of size, raiddisks, bitmap, and layout
p34:~# umount /dev/md3
p34:~# mdadm --grow /dev/md3 --raid-disks=7
mdadm: Need to backup 15360K of critical section..
mdadm: Cannot set device size/shape for /dev/md3: No space left on device
p34:~#

The disk only has about 350GB of 1.8TB used, any idea why I get this error?

I searched google but could not find anything on this issue when trying to 
grow the array?






Is it because I use a 512kb chunksize?

Jul  7 08:44:59 p34 kernel: [4295845.933000] raid5: reshape: not enough 
stripes.  Needed 512
Jul  7 08:44:59 p34 kernel: [4295845.962000] md: couldn't update array 
info. -28


So the RAID5 reshape only works if you use a 128kb or smaller chunk size?

Justin.


Re: Kernel 2.6.17 and RAID5 Grow Problem (critical section backup)

2006-07-07 Thread Justin Piszcz



On Fri, 7 Jul 2006, Justin Piszcz wrote:


On Fri, 7 Jul 2006, Justin Piszcz wrote:


On Fri, 7 Jul 2006, Justin Piszcz wrote:


p34:~# mdadm /dev/md3 -a /dev/hde1
mdadm: added /dev/hde1

p34:~# mdadm -D /dev/md3
/dev/md3:
   Version : 00.90.03
 Creation Time : Fri Jun 30 09:17:12 2006
Raid Level : raid5
Array Size : 1953543680 (1863.04 GiB 2000.43 GB)
   Device Size : 390708736 (372.61 GiB 400.09 GB)
  Raid Devices : 6
 Total Devices : 7
Preferred Minor : 3
   Persistence : Superblock is persistent

   Update Time : Fri Jul  7 08:25:44 2006
 State : clean
Active Devices : 6
Working Devices : 7
Failed Devices : 0
 Spare Devices : 1

Layout : left-symmetric
Chunk Size : 512K

  UUID : e76e403c:7811eb65:73be2f3b:0c2fc2ce
Events : 0.232940

   Number   Major   Minor   RaidDevice State
  0      22        1        0      active sync   /dev/hdc1
  1      56        1        1      active sync   /dev/hdi1
  2       3        1        2      active sync   /dev/hda1
  3       8       49        3      active sync   /dev/sdd1
  4      88        1        4      active sync   /dev/hdm1
  5       8       33        5      active sync   /dev/sdc1

  6      33        1        -      spare   /dev/hde1
p34:~# mdadm --grow /dev/md3 --raid-disks=7
mdadm: Need to backup 15360K of critical section..
mdadm: Cannot set device size/shape for /dev/md3: No space left on device
p34:~# mdadm --grow /dev/md3 --bitmap=internal --raid-disks=7
mdadm: can change at most one of size, raiddisks, bitmap, and layout
p34:~# umount /dev/md3
p34:~# mdadm --grow /dev/md3 --raid-disks=7
mdadm: Need to backup 15360K of critical section..
mdadm: Cannot set device size/shape for /dev/md3: No space left on device
p34:~#

The disk only has about 350GB of 1.8TB used, any idea why I get this 
error?


I searched google but could not find anything on this issue when trying to 
grow the array?






Is it because I use a 512kb chunksize?

Jul  7 08:44:59 p34 kernel: [4295845.933000] raid5: reshape: not enough 
stripes.  Needed 512
Jul  7 08:44:59 p34 kernel: [4295845.962000] md: couldn't update array 
info. -28


So the RAID5 reshape only works if you use a 128kb or smaller chunk size?

Justin.




From the source:


/* Can only proceed if there are plenty of stripe_heads.
@@ -2599,30 +2593,48 @@ static int raid5_reshape(mddev_t *mddev,
 * If the chunk size is greater, user-space should request more
 * stripe_heads first.
 */
-	if ((mddev->chunk_size / STRIPE_SIZE) * 4 > conf->max_nr_stripes) {
+	if ((mddev->chunk_size / STRIPE_SIZE) * 4 > conf->max_nr_stripes ||
+	    (mddev->new_chunk / STRIPE_SIZE) * 4 > conf->max_nr_stripes) {
 		printk(KERN_WARNING "raid5: reshape: not enough stripes.  Needed %lu\n",
 		       (mddev->chunk_size / STRIPE_SIZE)*4);
 		return -ENOSPC;
 	}

I don't see anything that mentions one needs to use a certain chunk size?

Any idea what the problem is here?

Justin.



Neil,

Any comments?

Justin.


Re: Kernel 2.6.17 and RAID5 Grow Problem (critical section backup)

2006-07-07 Thread Justin Piszcz



On Fri, 7 Jul 2006, Justin Piszcz wrote:




On Fri, 7 Jul 2006, Justin Piszcz wrote:


On Fri, 7 Jul 2006, Justin Piszcz wrote:


On Fri, 7 Jul 2006, Justin Piszcz wrote:


p34:~# mdadm /dev/md3 -a /dev/hde1
mdadm: added /dev/hde1

p34:~# mdadm -D /dev/md3
/dev/md3:
   Version : 00.90.03
 Creation Time : Fri Jun 30 09:17:12 2006
Raid Level : raid5
Array Size : 1953543680 (1863.04 GiB 2000.43 GB)
   Device Size : 390708736 (372.61 GiB 400.09 GB)
  Raid Devices : 6
 Total Devices : 7
Preferred Minor : 3
   Persistence : Superblock is persistent

   Update Time : Fri Jul  7 08:25:44 2006
 State : clean
Active Devices : 6
Working Devices : 7
Failed Devices : 0
 Spare Devices : 1

Layout : left-symmetric
Chunk Size : 512K

  UUID : e76e403c:7811eb65:73be2f3b:0c2fc2ce
Events : 0.232940

   Number   Major   Minor   RaidDevice State
  0      22        1        0      active sync   /dev/hdc1
  1      56        1        1      active sync   /dev/hdi1
  2       3        1        2      active sync   /dev/hda1
  3       8       49        3      active sync   /dev/sdd1
  4      88        1        4      active sync   /dev/hdm1
  5       8       33        5      active sync   /dev/sdc1

  6      33        1        -      spare   /dev/hde1
p34:~# mdadm --grow /dev/md3 --raid-disks=7
mdadm: Need to backup 15360K of critical section..
mdadm: Cannot set device size/shape for /dev/md3: No space left on device
p34:~# mdadm --grow /dev/md3 --bitmap=internal --raid-disks=7
mdadm: can change at most one of size, raiddisks, bitmap, and layout
p34:~# umount /dev/md3
p34:~# mdadm --grow /dev/md3 --raid-disks=7
mdadm: Need to backup 15360K of critical section..
mdadm: Cannot set device size/shape for /dev/md3: No space left on device
p34:~#

The disk only has about 350GB of 1.8TB used, any idea why I get this 
error?


I searched google but could not find anything on this issue when trying 
to grow the array?






Is it because I use a 512kb chunksize?

Jul  7 08:44:59 p34 kernel: [4295845.933000] raid5: reshape: not enough 
stripes.  Needed 512
Jul  7 08:44:59 p34 kernel: [4295845.962000] md: couldn't update array 
info. -28


So the RAID5 reshape only works if you use a 128kb or smaller chunk size?

Justin.




From the source:


/* Can only proceed if there are plenty of stripe_heads.
@@ -2599,30 +2593,48 @@ static int raid5_reshape(mddev_t *mddev,
 * If the chunk size is greater, user-space should request more
 * stripe_heads first.
 */
-	if ((mddev->chunk_size / STRIPE_SIZE) * 4 > conf->max_nr_stripes) {
+	if ((mddev->chunk_size / STRIPE_SIZE) * 4 > conf->max_nr_stripes ||
+	    (mddev->new_chunk / STRIPE_SIZE) * 4 > conf->max_nr_stripes) {
 		printk(KERN_WARNING "raid5: reshape: not enough stripes.  Needed %lu\n",
 		       (mddev->chunk_size / STRIPE_SIZE)*4);
 		return -ENOSPC;
 	}

I don't see anything that mentions one needs to use a certain chunk size?

Any idea what the problem is here?

Justin.



Neil,

Any comments?

Justin.



The --grow option worked, sort of.

p34:~# mdadm /dev/md3 --grow --size=max
p34:~# umount /dev/md3
p34:~# mdadm -S /dev/md3
p34:~# mount /dev/md3
Segmentation fault
p34:~#

[4313355.425000] BUG: unable to handle kernel NULL pointer dereference at 
virtual address 00d4

[4313355.425000]  printing eip:
[4313355.425000] c03c377b
[4313355.425000] *pde = 
[4313355.425000] Oops: 0002 [#1]
[4313355.425000] PREEMPT SMP
[4313355.425000] CPU:0
[4313355.425000] EIP:0060:[c03c377b]Not tainted VLI
[4313355.425000] EFLAGS: 00010046   (2.6.17.3 #4)
[4313355.425000] EIP is at _spin_lock_irqsave+0x14/0x61
[4313355.425000] eax:    ebx: f7e6c000   ecx: c0333d12   edx: 
0202
[4313355.425000] esi: 00d4   edi: f7fb9600   ebp: 00d4   esp: 
f7e6dc94

[4313355.425000] ds: 007b   es: 007b   ss: 0068
[4313355.425000] Process mount (pid: 22892, threadinfo=f7e6c000 
task=c1a90580)
[4313355.425000] Stack: c19947e4  c0333d32 0002 c012aaa2 
f7e6dccc f7e6dc9c f7e6dc9c
[4313355.425000]f7e6dccc c0266b8d c19947e4   
e11a61f8 f7e6dccc f7e6dccc
[4313355.425000]0005 f7fda014 f7fda000 f7fe8c00 c0259a79 
e11a61c0 0001 001f

[4313355.425000] Call Trace:
[4313355.425000]  c0333d32 raid5_unplug_device+0x20/0x65  c012aaa2 
flush_workqueue+0x67/0x87

Re: Kernel 2.6.17 and RAID5 Grow Problem (critical section backup)

2006-07-07 Thread Justin Piszcz



On Fri, 7 Jul 2006, Justin Piszcz wrote:




On Fri, 7 Jul 2006, Justin Piszcz wrote:




On Fri, 7 Jul 2006, Justin Piszcz wrote:


On Fri, 7 Jul 2006, Justin Piszcz wrote:


On Fri, 7 Jul 2006, Justin Piszcz wrote:


p34:~# mdadm /dev/md3 -a /dev/hde1
mdadm: added /dev/hde1

p34:~# mdadm -D /dev/md3
/dev/md3:
   Version : 00.90.03
 Creation Time : Fri Jun 30 09:17:12 2006
Raid Level : raid5
Array Size : 1953543680 (1863.04 GiB 2000.43 GB)
   Device Size : 390708736 (372.61 GiB 400.09 GB)
  Raid Devices : 6
 Total Devices : 7
Preferred Minor : 3
   Persistence : Superblock is persistent

   Update Time : Fri Jul  7 08:25:44 2006
 State : clean
Active Devices : 6
Working Devices : 7
Failed Devices : 0
 Spare Devices : 1

Layout : left-symmetric
Chunk Size : 512K

  UUID : e76e403c:7811eb65:73be2f3b:0c2fc2ce
Events : 0.232940

   Number   Major   Minor   RaidDevice State
  0      22        1        0      active sync   /dev/hdc1
  1      56        1        1      active sync   /dev/hdi1
  2       3        1        2      active sync   /dev/hda1
  3       8       49        3      active sync   /dev/sdd1
  4      88        1        4      active sync   /dev/hdm1
  5       8       33        5      active sync   /dev/sdc1

  6      33        1        -      spare   /dev/hde1
p34:~# mdadm --grow /dev/md3 --raid-disks=7
mdadm: Need to backup 15360K of critical section..
mdadm: Cannot set device size/shape for /dev/md3: No space left on 
device

p34:~# mdadm --grow /dev/md3 --bitmap=internal --raid-disks=7
mdadm: can change at most one of size, raiddisks, bitmap, and layout
p34:~# umount /dev/md3
p34:~# mdadm --grow /dev/md3 --raid-disks=7
mdadm: Need to backup 15360K of critical section..
mdadm: Cannot set device size/shape for /dev/md3: No space left on 
device

p34:~#

The disk only has about 350GB of 1.8TB used, any idea why I get this 
error?


I searched google but could not find anything on this issue when trying 
to grow the array?






Is it because I use a 512kb chunksize?

Jul  7 08:44:59 p34 kernel: [4295845.933000] raid5: reshape: not enough 
stripes.  Needed 512
Jul  7 08:44:59 p34 kernel: [4295845.962000] md: couldn't update array 
info. -28


So the RAID5 reshape only works if you use a 128kb or smaller chunk size?

Justin.




From the source:


/* Can only proceed if there are plenty of stripe_heads.
@@ -2599,30 +2593,48 @@ static int raid5_reshape(mddev_t *mddev,
 * If the chunk size is greater, user-space should request more
 * stripe_heads first.
 */
-	if ((mddev->chunk_size / STRIPE_SIZE) * 4 > conf->max_nr_stripes) {
+	if ((mddev->chunk_size / STRIPE_SIZE) * 4 > conf->max_nr_stripes ||
+	    (mddev->new_chunk / STRIPE_SIZE) * 4 > conf->max_nr_stripes) {
 		printk(KERN_WARNING "raid5: reshape: not enough stripes.  Needed %lu\n",
 		       (mddev->chunk_size / STRIPE_SIZE)*4);
 		return -ENOSPC;
 	}

I don't see anything that mentions one needs to use a certain chunk size?

Any idea what the problem is here?

Justin.



Neil,

Any comments?

Justin.



The --grow option worked, sort of.

p34:~# mdadm /dev/md3 --grow --size=max
p34:~# umount /dev/md3
p34:~# mdadm -S /dev/md3
p34:~# mount /dev/md3
Segmentation fault
p34:~#

[4313355.425000] BUG: unable to handle kernel NULL pointer dereference at 
virtual address 00d4

[4313355.425000]  printing eip:
[4313355.425000] c03c377b
[4313355.425000] *pde = 
[4313355.425000] Oops: 0002 [#1]
[4313355.425000] PREEMPT SMP
[4313355.425000] CPU:0
[4313355.425000] EIP:0060:[c03c377b]Not tainted VLI
[4313355.425000] EFLAGS: 00010046   (2.6.17.3 #4)
[4313355.425000] EIP is at _spin_lock_irqsave+0x14/0x61
[4313355.425000] eax:    ebx: f7e6c000   ecx: c0333d12   edx: 
0202
[4313355.425000] esi: 00d4   edi: f7fb9600   ebp: 00d4   esp: 
f7e6dc94

[4313355.425000] ds: 007b   es: 007b   ss: 0068
[4313355.425000] Process mount (pid: 22892, threadinfo=f7e6c000 
task=c1a90580)
[4313355.425000] Stack: c19947e4  c0333d32 0002 c012aaa2 f7e6dccc 
f7e6dc9c f7e6dc9c
[4313355.425000]f7e6dccc c0266b8d c19947e4   e11a61f8 
f7e6dccc f7e6dccc
[4313355.425000]0005 f7fda014 f7fda000 f7fe8c00 c0259a79 e11a61c0 
0001 001f

[4313355.425000] Call Trace:
[4313355.425000]  c0333d32 

Re: Kernel 2.6.17 and RAID5 Grow Problem (critical section backup)

2006-07-07 Thread Neil Brown
On Friday July 7, [EMAIL PROTECTED] wrote:
  
  Jul  7 08:44:59 p34 kernel: [4295845.933000] raid5: reshape: not enough 
  stripes.  Needed 512
  Jul  7 08:44:59 p34 kernel: [4295845.962000] md: couldn't update array 
  info. -28
  
  So the RAID5 reshape only works if you use a 128kb or smaller chunk size?
  
 
 Neil,
 
 Any comments?
 

Yes.   This is something I need to fix in the next mdadm.
You need to tell md/raid5 to increase the size of the stripe cache
before the grow can proceed.  You can do this with

  echo 600 > /sys/block/md3/md/stripe_cache_size

Then the --grow should work.  The next mdadm will do this for you.

NeilBrown



Re: Kernel 2.6.17 and RAID5 Grow Problem (critical section backup)

2006-07-07 Thread Justin Piszcz



On Sat, 8 Jul 2006, Neil Brown wrote:


On Friday July 7, [EMAIL PROTECTED] wrote:


Jul  7 08:44:59 p34 kernel: [4295845.933000] raid5: reshape: not enough
stripes.  Needed 512
Jul  7 08:44:59 p34 kernel: [4295845.962000] md: couldn't update array
info. -28

So the RAID5 reshape only works if you use a 128kb or smaller chunk size?



Neil,

Any comments?



Yes.   This is something I need to fix in the next mdadm.
You need to tell md/raid5 to increase the size of the stripe cache
before the grow can proceed.  You can do this with

 echo 600 > /sys/block/md3/md/stripe_cache_size

Then the --grow should work.  The next mdadm will do this for you.

NeilBrown



Hey!  You're awake :)

I am going to try it with just 64kb to prove to myself it works with that, 
but then I will re-create the raid5 again like I had it before and attempt 
it again, I did not see that documented anywhere!! Also, how do you use 
the --backup-file option? Nobody seems to know!



Re: Kernel 2.6.17 and RAID5 Grow Problem (critical section backup)

2006-07-07 Thread Neil Brown
On Friday July 7, [EMAIL PROTECTED] wrote:
 
 Hey!  You're awake :)

Yes, and thinking about breakfast (it's 8:30am here).

 
 I am going to try it with just 64kb to prove to myself it works with that, 
 but then I will re-create the raid5 again like I had it before and attempt 
 it again, I did not see that documented anywhere!! Also, how do you use 
 the --backup-file option? Nobody seems to know!

man mdadm
   --backup-file=
  This  is  needed  when  --grow is used to increase the number of
  raid-devices in a RAID5 if there  are no  spare  devices  avail-
  able.   See  the section below on RAID_DEVICE CHANGES.  The file
  should be stored on a separate device, not  on  the  raid  array
  being reshaped.


So e.g.
   mdadm --grow /dev/md3 --raid-disk=7 --backup-file=/root/md3-backup

mdadm will copy the first few stripes to /root/md3-backup and start
the reshape.  Once it gets past the critical section, mdadm will
remove the file.
If your system crashed during the critical section, then you won't be
able to assemble the array without providing the backup file:

e.g.
  mdadm --assemble /dev/md3 --backup-file=/root/md3-backup /dev/sd[a-g]

NeilBrown
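
Putting the two examples together, a hedged end-to-end sketch (array name,
member devices and backup path are taken from Neil's examples, not verified
on real hardware):

    # Grow by one raid-disk, keeping the critical-section backup on a
    # filesystem that is NOT on the array being reshaped:
    mdadm --grow /dev/md3 --raid-disks=7 --backup-file=/root/md3-backup
    cat /proc/mdstat    # the reshape should now be running

    # mdadm removes the backup file once the critical section is past.
    # Only if the machine crashed inside the critical section is it needed
    # again, at assembly time:
    mdadm --assemble /dev/md3 --backup-file=/root/md3-backup /dev/sd[a-g]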


Re: Kernel 2.6.17 and RAID5 Grow Problem (critical section backup)

2006-07-07 Thread Justin Piszcz



On Sat, 8 Jul 2006, Neil Brown wrote:


On Friday July 7, [EMAIL PROTECTED] wrote:


Hey!  You're awake :)


Yes, and thinking about breakfast (it's 8:30am here).



I am going to try it with just 64kb to prove to myself it works with that,
but then I will re-create the raid5 again like I had it before and attempt
it again, I did not see that documented anywhere!! Also, how do you use
the --backup-file option? Nobody seems to know!


man mdadm
  --backup-file=
 This  is  needed  when  --grow is used to increase the number of
 raid-devices in a RAID5 if there  are no  spare  devices  avail-
 able.   See  the section below on RAID_DEVICE CHANGES.  The file
 should be stored on a separate device, not  on  the  raid  array
 being reshaped.


So e.g.
  mdadm --grow /dev/md3 --raid-disk=7 --backup-file=/root/md3-backup

mdadm will copy the first few stripes to /root/md3-backup and start
the reshape.  Once it gets past the critical section, mdadm will
remove the file.
If your system crashed during the critical section, then you won't be
able to assemble the array without providing the backup file:

e.g.
 mdadm --assemble /dev/md3 --backup-file=/root/md3-backup /dev/sd[a-g]

NeilBrown



Gotcha, thanks.

Quick question regarding reshaping, must one wait until the re-shape is 
completed before he or she grows the file system?


With the re-shape still in progress, I tried to grow the xfs FS but it 
stayed the same.


p34:~# df -h | grep /raid5
/dev/md3  746G   80M  746G   1% /raid5

p34:~# mdadm /dev/md3 --grow --raid-disks=4
mdadm: Need to backup 384K of critical section..
mdadm: ... critical section passed.
p34:~#

p34:~# cat /proc/mdstat
md3 : active raid5 hdc1[3] sdc1[2] hde1[1] hda1[0]
      781417472 blocks super 0.91 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      [>....................]  reshape =  0.0% (85120/390708736) finish=840.5min speed=7738K/sec

p34:~#

p34:~# mount /raid5
p34:~# xfs_growfs /raid5
meta-data=/dev/md3               isize=256    agcount=32, agsize=6104816 blks
         =                       sectsz=4096  attr=0
data     =                       bsize=4096   blocks=195354112, imaxpct=25
         =                       sunit=16     swidth=48 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=4096  sunit=1 blks
realtime =none                   extsz=196608 blocks=0, rtextents=0
data blocks changed from 195354112 to 195354368
p34:~#

p34:~# umount /raid5
p34:~# mount /raid5
p34:~# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/md3  746G   80M  746G   1% /raid5
p34:~#

I guess one has to wait until the reshape is complete before growing the 
filesystem..?



Re: Kernel 2.6.17 and RAID5 Grow Problem (critical section backup)

2006-07-07 Thread Justin Piszcz



On Sat, 8 Jul 2006, Neil Brown wrote:


On Friday July 7, [EMAIL PROTECTED] wrote:


I guess one has to wait until the reshape is complete before growing the
filesystem..?


Yes.  The extra space isn't available until the reshape has completed
(if it was available earlier, the reshape wouldn't be necessary)

NeilBrown



Just wanted to confirm, thanks for all the help. I look forward to the new
revision of mdadm :)  In the meantime, after I get another drive I will
try your workaround, but so far it looks good, thanks!


Justin.



Re: Kernel 2.6.17 and RAID5 Grow Problem (critical section backup)

2006-07-07 Thread Neil Brown
On Friday July 7, [EMAIL PROTECTED] wrote:
 
 I guess one has to wait until the reshape is complete before growing the 
 filesystem..?

Yes.  The extra space isn't available until the reshape has completed
(if it was available earlier, the reshape wouldn't be necessary)

NeilBrown
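
In practice that means the filesystem grow has to be deferred until
/proc/mdstat no longer shows a reshape in progress. A small sketch of how
that could be automated, using the device and mountpoint from the
transcripts above (the polling interval is arbitrary):

    #!/bin/sh
    # Wait for the md3 reshape to finish, then grow the XFS filesystem.
    # Assumes /dev/md3 is mounted on /raid5, as in the transcripts above.
    while grep -A 3 '^md3' /proc/mdstat | grep -q reshape; do
        sleep 60        # poll once a minute
    done
    xfs_growfs /raid5   # the extra space only becomes usable now
    df -h /raid5        # confirm the new size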