RE: [linux-lvm] Q: Online resizing ext3 FS

2007-09-13 Thread Hiren Joshi
Sorry for the top quoting; I'm new to Outlook =)

We did something similar: as the LV grows (with ext2online), the snapshot
will start filling up. If the resize fails, you will not be able to
(easily) revert directly back to the logical volume, because as you start
copying stuff back, the snapshot will keep filling up. If you want to be
100% safe, I would say go for some near-line device that can store 500G
and back up to that!
HTH

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Stuart D. Gathman
Sent: 12 September 2007 21:04
To: LVM general discussion and development
Cc: linux-raid@vger.kernel.org
Subject: RE: [linux-lvm] Q: Online resizing ext3 FS

On Wed, 12 Sep 2007, Hiren Joshi wrote:

 Have any of you been using ext2online to resize (large) ext3
 filesystems?
 I have to do it going from 500GB to 1TB on a production system, and I was
 wondering if you have some horror/success stories.
 I'm using RHEL4/U4 (kernel 2.6.9) on this system.

This brings up an LVM related question I've had.  Can I do this:

  1) take snapshot of 500GB LV
  2) resize source LV to 1TB
  3) run ext2online
  4a) resize succeeds - remove snapshot
  4b) resize fails horribly - copy snapshot to LV and restart
4b.1) is there a way to revert the source LV to the snapshot?
(without allocating snapshot as big as source LV)
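
For reference, steps 1) through 4a) in LVM2/ext2online terms would look
roughly like this; the VG/LV names and sizes are made-up examples and this
is only a sketch, not a tested procedure:

  # 1) snapshot the 500GB origin; the snapshot only needs enough COW space
  #    to hold the blocks that change while the resize runs
  lvcreate -s -L 50G -n lv0_presnap /dev/vg0/lv0

  # 2) grow the origin LV to 1TB
  lvextend -L 1T /dev/vg0/lv0

  # 3) grow the mounted ext3 filesystem online (watch the snapshot usage
  #    with 'lvs' while this runs)
  ext2online /dev/vg0/lv0

  # 4a) on success, drop the snapshot
  lvremove /dev/vg0/lv0_presnap

Step 4b) is the hard part: as noted elsewhere in this thread, there is no
cheap way to roll the origin back from the snapshot short of copying the
data back, which itself fills the snapshot.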

-- 
  Stuart D. Gathman [EMAIL PROTECTED]
Business Management Systems Inc.  Phone: 703 591-0911  Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for a
Microsoft sponsored "Where do you want to go from here?" commercial.



Re: md raid acceleration and the async_tx api

2007-09-13 Thread Yuri Tikhonov

 Hi Dan,

On Friday 07 September 2007 20:02, you wrote:
 You need to fetch from the 'md-for-linus' tree.  But I have attached
 them as well.

 git fetch git://lost.foo-projects.org/~dwillia2/git/iop
 md-for-linus:md-for-linus

 Thanks.

 Unrelated question. Comparing the drivers/md/raid5.c file in Linus's
2.6.23-rc6 tree and in your md-for-linus one, I found the following
difference in the expand-related part of the handle_stripe5() function:

-   s.locked += handle_write_operations5(sh, 1, 1);
+   s.locked += handle_write_operations5(sh, 0, 1);

 That is, in your case we are passing rcw=0, whereas in Linus's case
handle_write_operations5() is called with rcw=1. Which code is correct?

 Regards, Yuri


Re: RAID6 mdadm --grow bug?

2007-09-13 Thread Neil Brown
On Wednesday September 12, [EMAIL PROTECTED] wrote:
 
 
 Problem:
 
 The mdadm --grow command fails when trying to add a disk to a RAID6.
 
..
 
 So far I have replicated this problem on RHEL5 and Ubuntu 7.04
 running the latest official updates and patches. I have even tried it
 with the latest version of mdadm, 2.6.3, under RHEL5. RHEL5 ships
 version 2.5.4.

You don't say what kernel version you are using (as I don't use RHEL5
or Ubuntu, I don't know what 'latest' means).

If it is 2.6.23-rcX, then it is a known problem that should be fixed
in the next -rc.  If it is something else... I need details.

Also, any kernel message (run 'dmesg') might be helpful.

NeilBrown


Re: [linux-lvm] Q: Online resizing ext3 FS

2007-09-13 Thread Goswin von Brederlow
Tomasz Chmielewski [EMAIL PROTECTED] writes:

 Chris Osicki wrote:
 Hi

 I apologize in advance for asking a question not really appropriate
 for this mailing list, but I couldn't find a better place with lots of
 people managing lots of disk space.

 The question:
 Have any of you been using ext2online to resize (large) ext3 filesystems?
 I have to do it going from 500GB to 1TB on a production system, and I was
 wondering if you have some horror/success stories.
 I'm using RHEL4/U4 (kernel 2.6.9) on this system.

That kernel seems to be a bit old. Better upgrade first.

I did some resizes in that size range, although I was just adding 50-200GB,
not doubling the size in one go. I see no reason it should fail, though.
Growing the fs was always quick (a minute or two) and painless. Maybe that
is because I used -T largefile4 with mke2fs, so the number of inodes is
drastically reduced.
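
For illustration only (the device name is an example): largefile4 just
raises the bytes-per-inode ratio to roughly 4MB, so far fewer inodes get
created:

  # make an ext3 fs with the largefile4 usage type, then check the inode count
  mke2fs -j -T largefile4 /dev/vg0/lv0
  tune2fs -l /dev/vg0/lv0 | grep -i 'inode count'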

 Yes, I tried to online resize a similar filesystem (600 GB to 1.2 TB)
 and it didn't work.

 At some point, resize2fs would just exit with errors.
 I tried to do it several times before I figured out what's missing;
 sometimes, I interrupted the process with ctrl+c. No data loss
 occurred.

 To do an online ext3 resize, the filesystem needs a resize_inode
 feature. You can check the features with dumpe2fs:

So was that what you were missing, or did some other error occur?

I tried to resize an fs without resize_inode and it just plainly told me
so and aborted.
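
For anyone following along, the dumpe2fs check mentioned above is roughly
the following (the device name is an example):

  # list the superblock features; online growing needs resize_inode here
  dumpe2fs -h /dev/vg0/lv0 | grep -i features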

MfG
Goswin


Re: [linux-lvm] Q: Online resizing ext3 FS

2007-09-13 Thread Goswin von Brederlow
Stuart D. Gathman [EMAIL PROTECTED] writes:

 On Wed, 12 Sep 2007, Hiren Joshi wrote:

 Have any of you been using ext2online to resize (large) ext3
 filesystems?
 I have to do it going from 500GB to 1TB on a production system, and I was
 wondering if you have some horror/success stories.
 I'm using RHEL4/U4 (kernel 2.6.9) on this system.

 This brings up an LVM related question I've had.  Can I do this:

   1) take snapshot of 500GB LV
   2) resize source LV to 1TB
   3) run ext2online
   4a) resize succeeds - remove snapshot
   4b) resize fails horribly - copy snapshot to LV and restart
 4b.1) is there a way to revert the source LV to the snapshot?
   (without allocating snapshot as big as source LV)

Why do you resize the LV if you want to test it first? Give the
snapshot the extra 500G and resize that. If it fails you just remove
the snapshot and try again.

If it succeeds then you can do it the other way around and resize for
real.

MfG
Goswin


Re: [linux-lvm] Q: Online resizing ext3 FS

2007-09-13 Thread Tomasz Chmielewski

Goswin von Brederlow wrote:

Tomasz Chmielewski [EMAIL PROTECTED] writes:


(...)


Yes, I tried to online resize a similar filesystem (600 GB to 1.2 TB)
and it didn't work.

At some point, resize2fs would just exit with errors.
I tried to do it several times before I figured out what's missing;
sometimes, I interrupted the process with ctrl+c. No data loss
occurred.

To do an online ext3 resize, the filesystem needs a resize_inode
feature. You can check the features with dumpe2fs:


So was that what you were missing or did some other error occur?

I tried to resize an fs without resize_inode and it just plainly told me
so and aborted.


It was working for some time (15 or 30 minutes?), the fs grew by a couple
of gigabytes, and then it exited with an error. At first I thought it was
because the fs might need fsck, but running fsck didn't help; subsequent
online resize attempts didn't grow the fs any further.



--
Tomasz Chmielewski
http://wpkg.org


Re: md raid acceleration and the async_tx api

2007-09-13 Thread Dan Williams
On 9/13/07, Yuri Tikhonov [EMAIL PROTECTED] wrote:

  Hi Dan,

 On Friday 07 September 2007 20:02, you wrote:
  You need to fetch from the 'md-for-linus' tree.  But I have attached
  them as well.
 
  git fetch git://lost.foo-projects.org/~dwillia2/git/iop
  md-for-linus:md-for-linus

  Thanks.

  Unrelated question. Comparing the drivers/md/raid5.c file in Linus's
 2.6.23-rc6 tree and in your md-for-linus one, I found the following
 difference in the expand-related part of the handle_stripe5() function:

 -   s.locked += handle_write_operations5(sh, 1, 1);
 +   s.locked += handle_write_operations5(sh, 0, 1);

  That is, in your case we are passing rcw=0, whereas in Linus's case
 handle_write_operations5() is called with rcw=1. Which code is correct?

There was a recent bug discovered in my changes to the expansion code.
The fix has now gone into Linus's tree through Andrew's tree.  I kept
the fix out of my 'md-for-linus' tree to prevent it from getting dropped
from -mm due to automatic git-tree merge detection.  I have now
rebased my git tree so everything is in sync.

However, after talking with Neil at LCE we came to the conclusion that
it would be best if I just sent patches, since git tree updates tend
not to get enough review, and because the patch sets will be more
manageable now that the big pieces of the acceleration infrastructure
have been merged.

  Regards, Yuri

Thanks,
Dan


Re: RAID6 mdadm --grow bug?

2007-09-13 Thread David Miller

Neil,

On RHEL5 the kernel is 2.6.18-8.1.8. On Ubuntu 7.04 the kernel is
2.6.20-16. Someone on the Ars Technica forums wrote that they see the same
thing in Debian etch running kernel 2.6.18. Below is a messages log
from the RHEL5 system. I have only included the section covering creating
the RAID6, adding a spare, and trying to grow it. There is a one-line
error when I run the mdadm --grow command: "md: couldn't
update array info. -22".


md: bind<loop1>
md: bind<loop2>
md: bind<loop3>
md: bind<loop4>
md: md0: raid array is not clean -- starting background reconstruction
raid5: device loop4 operational as raid disk 3
raid5: device loop3 operational as raid disk 2
raid5: device loop2 operational as raid disk 1
raid5: device loop1 operational as raid disk 0
raid5: allocated 4204kB for md0
raid5: raid level 6 set md0 active with 4 out of 4 devices, algorithm 2
RAID5 conf printout:
 --- rd:4 wd:4 fd:0
 disk 0, o:1, dev:loop1
 disk 1, o:1, dev:loop2
 disk 2, o:1, dev:loop3
 disk 3, o:1, dev:loop4
md: syncing RAID array md0
md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
md: using maximum available idle IO bandwidth (but not more than
200000 KB/sec) for reconstruction.

md: using 128k window, over a total of 102336 blocks.
md: md0: sync done.
RAID5 conf printout:
 --- rd:4 wd:4 fd:0
 disk 0, o:1, dev:loop1
 disk 1, o:1, dev:loop2
 disk 2, o:1, dev:loop3
 disk 3, o:1, dev:loop4
md: bind<loop5>
md: couldn't update array info. -22
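
For reference, the command sequence behind a log like this is roughly the
following; it is reconstructed as an illustration only, and the backing
files and sizes are examples, not the exact commands used:

  # build a small 4-disk RAID6 out of loop devices
  for i in 1 2 3 4 5; do
      dd if=/dev/zero of=/tmp/d$i bs=1M count=100
      losetup /dev/loop$i /tmp/d$i
  done
  mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/loop[1-4]

  # add a spare, then try to grow the array onto it
  mdadm --add /dev/md0 /dev/loop5
  mdadm --grow /dev/md0 --raid-devices=5   # fails with -22 (EINVAL) here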

David.



On Sep 13, 2007, at 3:52 AM, Neil Brown wrote:


On Wednesday September 12, [EMAIL PROTECTED] wrote:



Problem:

The mdadm --grow command fails when trying to add a disk to a RAID6.


..


So far I have replicated this problem on RHEL5 and Ubuntu 7.04
running the latest official updates and patches. I have even tried it
with the latest version of mdadm, 2.6.3, under RHEL5. RHEL5 ships
version 2.5.4.


You don't say what kernel version you are using (as I don't use RHEL5
or Ubuntu, I don't know what 'latest' means).

If it is 2.6.23-rcX, then it is a known problem that should be fixed
in the next -rc.  If it is something else... I need details.

Also, any kernel message (run 'dmesg') might be helpful.

NeilBrown




Re: RAID6 mdadm --grow bug?

2007-09-13 Thread Neil Brown
On Thursday September 13, [EMAIL PROTECTED] wrote:
 Neil,
 
 On RHEL5 the kernel is 2.6.18-8.1.8. On Ubuntu 7.04 the kernel is
 2.6.20-16. Someone on the Ars Technica forums wrote that they see the same
 thing in Debian etch running kernel 2.6.18. Below is a messages log
 from the RHEL5 system. I have only included the section covering creating
 the RAID6, adding a spare, and trying to grow it. There is a one-line
 error when I run the mdadm --grow command: "md: couldn't
 update array info. -22".

Reshaping RAID6 arrays was not supported until 2.6.21, so you'll need
a newer kernel.

NeilBrown


MD devices renaming or re-ordering question

2007-09-13 Thread Maurice Hilarius
Hi to all.

I wonder if somebody would care to help me to solve a problem?

I have some servers.
They are running CentOS5
This OS has a limitation where the maximum filesystem size is 8TB.

Each server currently has an AMCC/3ware 16-port SATA controller, for a
total of 16 ports/drives.
I am using 750GB drives.

I am exporting the drives as single disks, NOT as hardware RAID.
That is due to the filesystem and controller limitations, among other
reasons.

Each server currently has 16 disks attached to the one controller

I want to add a 2nd controller, and, for now, 4 more disks on it.

I want to have the boot disk as a plain disk, as presently configured as
sda1,2,3

The remaining 15 disks are configured as:
sdb1 through sde1 as md0 (4 devices/partitions)
sdf1 through sdp1 as md1 (10 devices/partitions)
I want to add a 2nd controller, and 4 more drives, to the md0 device.

But I do not want md0 to be split across the 2 controllers this way;
I prefer to do the split on md1.

Other than starting from scratch, the best solution would be to add the
disks to md0, then to magically turn md0 into md1, and md1 into md0.

So, the question:
How does one make md1 into md0, and vice versa?
Without losing the data on these md's?

Thanks in advance for any suggestions.



-- 
Regards, Maurice


/09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0/

/00001001 11111001 00010001 00000010 10011101 01110100 11100011 01011011
11011000 01000001 01010110 11000101 01100011 01010110 10001000 11000000/

/10 base 13,256,278,887,989,457,651,018,865,901,401,704,640/


Re: MD devices renaming or re-ordering question

2007-09-13 Thread Goswin von Brederlow
Maurice Hilarius [EMAIL PROTECTED] writes:

 Hi to all.

 I wonder if somebody would care to help me to solve a problem?

 I have some servers.
 They are running CentOS5
 This OS has a limitation where the maximum filesystem size is 8TB.

 Each server currently has an AMCC/3ware 16-port SATA controller, for a
 total of 16 ports/drives.
 I am using 750GB drives.

 I am exporting the drives as single disks, NOT as hardware RAID.
 That is due to the filesystem and controller limitations, among other
 reasons.

 Each server currently has 16 disks attached to the one controller

 I want to add a 2nd controller, and, for now, 4 more disks on it.

 I want to have the boot disk as a plain disk, as presently configured as
 sda1,2,3

 The remaining 15 disks are configured as :
 sdb1 through sde1 as md0 ( 4 devices/partitions)
 sdf1 through sdp1 as md1 (10 devices/partitions)
 I want to add a 2nd controller, and 4 more drives, to the md0 device.

 But, I do not want md0 to be split across the 2 controllers this way.
 I prefer to do the split on md1

 Other than starting from scratch, the best solution would be to add the
 disks to md0, then to magically turn md0 into md1, and md1 into md0

 So, the question:
 How does one make md1 into md0, and vice versa?
 Without losing the data on these md's ?

 Thanks in advance for any suggestions.

The simplest is to pull the md1 disks from the first controller,
put them on the 2nd controller, and then add the new disks to the
first controller.

Actually, I would pull the 4 disks from md0 and put them on the second
controller. Then there is no split at all.

Or alternatively, split both raids evenly to balance the load between the
controllers; that might be faster.

MfG
Goswin


Re: md raid acceleration and the async_tx api

2007-09-13 Thread Mr. James W. Laferriere

Hello Dan ,

On Thu, 13 Sep 2007, Dan Williams wrote:

On 9/13/07, Yuri Tikhonov [EMAIL PROTECTED] wrote:

 Hi Dan,
On Friday 07 September 2007 20:02, you wrote:

You need to fetch from the 'md-for-linus' tree.  But I have attached
them as well.

git fetch git://lost.foo-projects.org/~dwillia2/git/iop
md-for-linus:md-for-linus


 Thanks.

 Unrelated question. Comparing the drivers/md/raid5.c file in Linus's
2.6.23-rc6 tree and in your md-for-linus one, I found the following
difference in the expand-related part of the handle_stripe5() function:

-   s.locked += handle_write_operations5(sh, 1, 1);
+   s.locked += handle_write_operations5(sh, 0, 1);

 That is, in your case we are passing rcw=0, whereas in Linus's case
handle_write_operations5() is called with rcw=1. Which code is correct?


There was a recent bug discovered in my changes to the expansion code.
The fix has now gone into Linus's tree through Andrew's tree.  I kept
the fix out of my 'md-for-linus' tree to prevent it getting dropped
from -mm due to automatic git-tree merge-detection.  I have now
rebased my git tree so everything is in sync.

However, after talking with Neil at LCE we came to the conclusion that
it would be best if I just sent patches since git tree updates tend to
not get enough review, and because the patch sets will be more
manageable now that the big pieces of the acceleration infrastructure
have been merged.


 Regards, Yuri


Thanks,
Dan
	Does this discussion of patches include any changes to cure the 'BUG' 
instance I reported?


i.e.: raid5:md3: kernel BUG, followed by a silent halt.

Tia ,  JimL
--
+-+
| James   W.   Laferriere | System   Techniques | Give me VMS |
| NetworkEngineer | 663  Beaumont  Blvd |  Give me Linux  |
| [EMAIL PROTECTED] | Pacifica, CA. 94044 |   only  on  AXP |
+-+


RE: md raid acceleration and the async_tx api

2007-09-13 Thread Williams, Dan J
 From: Mr. James W. Laferriere [mailto:[EMAIL PROTECTED]
   Hello Dan ,
 
  On Thu, 13 Sep 2007, Dan Williams wrote:
   On 9/13/07, Yuri Tikhonov [EMAIL PROTECTED] wrote:
    Hi Dan,
   On Friday 07 September 2007 20:02, you wrote:
   You need to fetch from the 'md-for-linus' tree.  But I have attached
   them as well.
  
   git fetch git://lost.foo-projects.org/~dwillia2/git/iop
   md-for-linus:md-for-linus
  
    Thanks.
  
    Unrelated question. Comparing the drivers/md/raid5.c file in Linus's
   2.6.23-rc6 tree and in your md-for-linus one, I found the following
   difference in the expand-related part of the handle_stripe5() function:
  
   -   s.locked += handle_write_operations5(sh, 1, 1);
   +   s.locked += handle_write_operations5(sh, 0, 1);
  
    That is, in your case we are passing rcw=0, whereas in Linus's case
   handle_write_operations5() is called with rcw=1. Which code is correct?
  
   There was a recent bug discovered in my changes to the expansion code.
   The fix has now gone into Linus's tree through Andrew's tree.  I kept
   the fix out of my 'md-for-linus' tree to prevent it from getting dropped
   from -mm due to automatic git-tree merge detection.  I have now
   rebased my git tree so everything is in sync.
  
   However, after talking with Neil at LCE we came to the conclusion that
   it would be best if I just sent patches, since git tree updates tend
   not to get enough review, and because the patch sets will be more
   manageable now that the big pieces of the acceleration infrastructure
   have been merged.
  
    Regards, Yuri
  
   Thanks,
   Dan
    Does this discussion of patches include any changes to cure the 'BUG'
  instance I reported?
  
    i.e.: raid5:md3: kernel BUG, followed by a silent halt.
No, this is referring to:
http://marc.info/?l=linux-raid&m=118845398229443&w=2

The BUG you reported currently looks to be caused by interactions with
the bitmap support code... still investigating.

   Tia ,  JimL

Thanks,
Dan


Re: MD devices renaming or re-ordering question

2007-09-13 Thread Goswin von Brederlow
Goswin von Brederlow [EMAIL PROTECTED] writes:

 The simplest is to pull the md1 disks from the first controller,
 put them on the 2nd controller, and then add the new disks to the
 first controller.

That is, of course, with the RAID stopped. You didn't say what kind of
RAID you have, but while it is online it would need resyncing, or would
just plain break, if you hot-plug a disk.
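
If the goal really is just to swap the two names rather than move disks,
one possible sketch (assuming version-0.90 superblocks; the device globs
are examples and this is untested here) is to stop both arrays and
re-assemble them under the swapped names, letting mdadm rewrite the
preferred minor recorded in each superblock:

  mdadm --stop /dev/md0
  mdadm --stop /dev/md1

  # assemble the old md1 members as md0, and vice versa
  mdadm --assemble /dev/md0 --update=super-minor /dev/sd[f-p]1
  mdadm --assemble /dev/md1 --update=super-minor /dev/sd[b-e]1

  # remember to adjust /etc/mdadm.conf and /etc/fstab to match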

MfG
Goswin