Re: Can uuid of raid array be changed?

2005-04-18 Thread Luca Berra
On Mon, Apr 18, 2005 at 08:05:22PM -0500, John McMonagle wrote:
Luca Berra wrote:
On Sun, Apr 17, 2005 at 05:04:13PM -0500, John McMonagle wrote:
Need to duplicate some computers that are using raid 1.
I was thinking of just adding an extra drive and then moving 
it to the new system. The only problem is the clones will all have 
the same uuids.  If at some later date the drives got mixed up I 
could see the possibility of disaster.  Not exactly likely as the 
computers will be in different cities.

Is there a way to change the uuid of a raid array?
Is it really worth worrying about?
you can recreate the array; this will not damage existing data.
L.
Thanks
I'll try it.
I suspect I'll find out real quick, but do you need to do a 
--zero-superblock on all devices making up the raid arrays?
NO
Will this damage the lvm2 superblock info?
Probably a good idea to do a vgcfgbackup just to be safe.
NO
the idea is: after you have cloned the drive, create a new array with
the force flag, using as components the cloned disk and the magic word
"missing". This will create a new degraded array and won't touch any
data.
you can then hot-add a new drive to this array; it will fill the slot
held by the "missing" keyword.
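For illustration only -- assuming the cloned disk is /dev/sdb1, the new
blank disk is /dev/sdc1 and the array is a two-disk raid1 (adjust the
names to your setup):

  mdadm --create /dev/md0 --force --level=1 --raid-devices=2 \
        /dev/sdb1 missing
  mdadm /dev/md0 --add /dev/sdc1

--create writes a fresh superblock, so the array gets a new uuid, while
the data blocks of the raid1 member stay where they were.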
L.
--
Luca Berra -- [EMAIL PROTECTED]
   Communication Media & Services S.r.l.
/"\
\ / ASCII RIBBON CAMPAIGN
 X  AGAINST HTML MAIL
/ \


Re: Questions about software RAID

2005-04-18 Thread Peter T. Breuer
tmp <[EMAIL PROTECTED]> wrote:
> I've read "man mdadm" and "man mdadm.conf" but I certainly don't have
> an overview of software RAID.

Then try using it instead of/as well as reading about it, and you will
obtain a more comprehensive understanding.

> OK. The HOWTO describes mostly a raidtools context, however. Is the
> following correct then?
> mdadm.conf may be considered as the replacement for raidtab. When mdadm

No. Mdadm (generally speaking) does NOT use a configuration file, and
that is perhaps its major difference with respect to raidtools.  It's
command line.  You can see for yourself what the man page itself
summarises as the differences (the one about not using a configuration
file is #2 of 3):

mdadm is a program that can be used to create, manage, and monitor
MD devices.  As such it provides a similar set of functionality to
the raidtools packages.  The key differences between mdadm and
raidtools are:

   mdadm is a single program and not a collection of programs.

   mdadm can perform (almost) all of its functions without having
   a configuration file and does not use one by default.  Also mdadm
   helps with management of the configuration file.

   mdadm can provide information about your arrays (through Query,
   Detail, and Examine) that raidtools cannot.


> starts it consults this file and starts the raid arrays correspondingly.

No. As far as I am aware, the config file contains such details of
existing raid arrays as may conveniently be discovered during a
physical scan, and as such contains only redundant information that at
most may save the cost of a physical scan during such operations as may
require it.

Feel free to correct me!

> This leads to the following:

Then I'll ignore it :-).

> Is it correct that I can use whole disks (/dev/hdb) only if I make a
> partitionable array and thus create the partitions UPON the raid
> mechanism?

Incomprehensible, I am afraid.  You can use either partitions or whole
disks in a raid array.

> As far as I can see, partitionable arrays make disk replacements easier

Oh - you mean that the partitions can be recognized at bootup by the
kernel.

> You say I can't boot from such a partitionable raid array. Is that
> correctly understood?

Partitionable? Or partitioned? I'm not sure what you mean.

You would be able to boot via lilo from a partitioned RAID1 array, since
all lilo requires is a block map of where to read the kernel image from,
and either component of the RAID1 would do, and I'm sure that lilo has
been altered to allow the use of both/either component's blockmap during
its startup routines.

I don't know if grub can boot from a RAID1 array but it strikes me as
likely since it would be able to ignore the raid1-ness and boot
successfully just as though it were a (pre-raid-aware) lilo.
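A minimal lilo.conf sketch for that case (paths are illustrative; check
that your lilo is new enough to know the raid-extra-boot option):

  boot=/dev/md0              # install the boot loader via the raid1 device
  raid-extra-boot=mbr-only   # also write boot records to the components' MBRs
  image=/boot/vmlinuz
      label=linux
      root=/dev/md0
      read-only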

> Can I "grow" a partitionable raid array if I replace the existing disks
> with larger ones later? 

Partitionable? Or partitioned? If you grew the array you would be
extending it beyond the last partition. The partition table itself is in
sector zero, so it is not affected. You would presumably next change
the partitions to take advantage of the increased size available.
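A hedged sketch of that sequence, assuming a partitioned md device named
/dev/md_d0 and an mdadm recent enough to have grow mode:

  mdadm --grow /dev/md_d0 --size=max   # extend the array into the new space
  fdisk /dev/md_d0                     # then enlarge or add partitions by hand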

> Would you prefer manually partitioned disks, even though disk replacements
> are a bit more difficult?

I don't understand.

> I guess that mdadm automatically writes persistent superblocks to all
> disks?

By default, yes.

> I meant, the /dev/mdX has to be formatted, not the individual
> partitions. Still right?

I'm not sure what you mean. You mean "/dev/mdXy" by "individual
partitions"?

> So I could actually just pull out the disk, insert a new one and do a
> "mdadm -a /dev/mdX /dev/sdY"?

You might want to check that the old one has been removed as well as
faulted first. I would imagine it is "only" faulted. But it doesn't matter.
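For example (device names purely illustrative):

  cat /proc/mdstat                    # (F) marks a faulted member
  mdadm --detail /dev/md0             # shows faulty/removed slots
  mdadm /dev/md0 --remove /dev/sdb1   # only if the old disk is still listed
  mdadm /dev/md0 --add /dev/sdc1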

> The RAID system won't detect the newly inserted disk itself?

It obeys commands. You can program the hotplug system to add it in
automatically.
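A very rough sketch only -- the agent fragment below is hypothetical
(the file name, the matching logic and the md0 target are all made up;
$ACTION and $DEVPATH follow the 2.6-era hotplug convention):

  #!/bin/sh
  # fragment of an /etc/hotplug block agent (illustrative)
  case "$ACTION" in
  add)
      dev=/dev/${DEVPATH##*/}        # e.g. /block/sdc -> /dev/sdc
      mdadm /dev/md0 --add "$dev"    # blindly adds every new block device!
      ;;
  esac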

> Are there some HOWTOs out there that are up-to-date and based on RAID
> usage with mdadm and kernel 2.6 instead of raidtools and kernel 2.2/2.4?

What there is seems fine to me if you can use the mdadm equivalents
instead of raidhotadd, raidsetfaulty, raidhotremove and mkraid.
The config file is not needed.
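Roughly, with placeholder device names:

  raidhotadd /dev/md0 /dev/sdb1     ->  mdadm /dev/md0 --add /dev/sdb1
  raidsetfaulty /dev/md0 /dev/sdb1  ->  mdadm /dev/md0 --fail /dev/sdb1
  raidhotremove /dev/md0 /dev/sdb1  ->  mdadm /dev/md0 --remove /dev/sdb1
  mkraid /dev/md0                   ->  mdadm --create /dev/md0 --level=1
                                        --raid-devices=2 /dev/sdb1 /dev/sdc1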

Peter



optimum blockdev --setra settings

2005-04-18 Thread John McMonagle
Has anyone established optimum blockdev --setra settings for raid on a 2.6 kernel?
There have been some discussions on the lvm mailing list.
In the case of lvm on raid it sounds like it's best to use 0 on the md and 
disk devices and something between 1024 and 4096 on the lvm devices.
It makes some sense that read-ahead on the lower layers can cause a lot 
of unneeded reads, particularly if you have a lot of layers and/or a lot 
of raid5 drives.

My experiments have been inconclusive.  At least it seems that setting 
read ahead to 0 on the low level devices has no penalty.
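For anyone wanting to repeat the experiment, the settings discussed
above translate to something like (device names are examples):

  blockdev --setra 0 /dev/sda          # underlying disk
  blockdev --setra 0 /dev/md0          # md device
  blockdev --setra 2048 /dev/vg0/lv0   # lvm device: between 1024 and 4096
  blockdev --getra /dev/vg0/lv0        # verify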

John


Re: Can uuid of raid array be changed?

2005-04-18 Thread John McMonagle
Luca Berra wrote:
On Sun, Apr 17, 2005 at 05:04:13PM -0500, John McMonagle wrote:
Need to duplicate some computers that are using raid 1.
I was thinking of just adding an extra drive and then moving 
it to the new system. The only problem is the clones will all have 
the same uuids.  If at some later date the drives got mixed up I 
could see the possibility of disaster.  Not exactly likely as the 
computers will be in different cities.

Is there a way to change the uuid of a raid array?
Is it really worth worrying about?
you can recreate the array; this will not damage existing data.
L.
Thanks
I'll try it.
I suspect I'll find out real quick, but do you need to do a 
--zero-superblock on all devices making up the raid arrays?

Will this damage the lvm2 superblock info?
Probably a good idea to do a vgcfgbackup just to be safe.
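e.g. (vg0 is a placeholder volume group name):

  vgcfgbackup vg0   # writes a text backup, normally under /etc/lvm/backup/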
John


Re: Trouble assembling multipath md devices between two nodes who see the same LUNs.

2005-04-18 Thread Anu Matthew
Thanks, Lars. It is much appreciated. 

If the original metadata written by hostA is changed by "mdadm --assemble" running 
on hostB, will mdmpd be able to recover the failed links on hostA when they 
re-surface? I am asking this because, I guess, unless hostA goes down, hostA 
sees the new metadata written by hostB. Will mdmpd take the new uuid into 
account? 

Not entirely OT, but devlabel gets to see the same uuid/serial numbers on 
the shared luns across hosts. It would have been cool if mdadm could too, right?

Thanks,

Anu Matthew 


> On 2005-04-18T17:14:53, Anu Matthew <[EMAIL PROTECTED]> wrote:
> 
> md multipath has on-disk metadata and modifies it. md is NOT
> cluster-safe for concurrent activation.
> 
> 
> Sincerely,
>Lars Marowsky-Brée <[EMAIL PROTECTED]>
> 
> -- 
> High Availability & Clustering
> SUSE Labs, Research and Development
> SUSE LINUX Products GmbH - A Novell Business
 



Re: Questions about software RAID

2005-04-18 Thread tmp
Thanks for your answers! They led to a couple of new questions,
however. :-)

I've read "man mdadm" and "man mdadm.conf" but I certainly don't have
an overview of software RAID.

> yes
> raidtab is deprecated - man mdadm

OK. The HOWTO describes mostly a raidtools context, however. Is the
following correct then?
mdadm.conf may be considered as the replacement for raidtab. When mdadm
starts it consults this file and starts the raid arrays correspondingly.
This leads to the following:


a) If mdadm starts the arrays, how can I then boot from a RAID device
(mdadm isn't started upon boot)?
I don't quite get which parts of the RAID system are controlled by the
kernel and which parts are controlled by mdadm.


b) Whenever I replace disks, the runtime configuration changes. I assume
that I should manually edit mdadm.conf in order to make it correspond to
reality?
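One hedged way to bring it back in sync, assuming the usual config
location (some distributions use /etc/mdadm/mdadm.conf instead):

  mdadm --detail --scan >> /etc/mdadm.conf   # append fresh ARRAY lines,
                                             # then prune the stale ones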

> >2) The new disk has to be manually partitioned before being used in the
> >array.
> no it doesn't. You could use the whole disk (/dev/hdb).
> In general, AFAIK, partitions are better as they allow automatic 
> assembly at boot.

Is it correct that I can use whole disks (/dev/hdb) only if I make a
partitionable array and thus create the partitions UPON the raid
mechanism?

As far as I can see, partitionable arrays make disk replacements easier
as you can just replace the disk and let the RAID software take care of
syncing the new disk with existing partitioning. Is that correct?

You say I can't boot from such a partitionable raid array. Is that
correctly understood?

Can I "grow" a partitionable raid array if I replace the existing disks
with larger ones later? 

Would you prefer manually partitioned disks, even though disk replacements
are a bit more difficult?

I guess that mdadm automatically writes persistent superblocks to all
disks?

> >3) Must all partition types be 0xFD? What happens if they are not?
> no
> They won't be autodetected by the _kernel_

OK, so it is generally a good idea to always set the partition types to
0xFD, I guess.
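For example (disk and partition number hypothetical):

  fdisk /dev/sdb                    # interactively: t, pick partition, type fd
  sfdisk --change-id /dev/sdb 1 fd  # or non-interactively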

> >4) I guess the partitions themselves don't have to be formatted as the
> >filesystem is on the RAID-level. Is that correct?
> compulsory!

I meant, the /dev/mdX has to be formatted, not the individual
partitions. Still right?

> >5) Removing a disk requires that I do a "mdadm -r" on all the partitions
> >that are involved in a RAID array. I intend to buy a hot-swap capable
> >controller, so what happens if I just pull out the disk without this
> >manual removal command?
> as far as md is concerned the disk disappeared.
> I _think_ this is just like mdadm -r.

So I could actually just pull out the disk, insert a new one and do a
"mdadm -a /dev/mdX /dev/sdY"?
The RAID system won't detect the newly inserted disk itself?

> > I.e. do I have to let my swap disk be a
> >RAID-setup too if I want it to continue upon disk crash?
> yes - a mirror, not a stripe.

OK. Depending on your recommendations above, I could either make it a
swap partition on a partitionable array or create an array for the swap
in the conventional way (out of existing partitions).
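The conventional variant would be roughly (partition names hypothetical):

  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  mkswap /dev/md1
  swapon /dev/md1   # plus an /etc/fstab entry listing /dev/md1 as swap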

Thanks again for your help!

Are there some HOWTOs out there that are up-to-date and based on RAID
usage with mdadm and kernel 2.6 instead of raidtools and kernel 2.2/2.4?
I can't possibly be the only one with these newbie questions. :-)





Re: Trouble assembling multipath md devices between two nodes who see the same LUNs.

2005-04-18 Thread Lars Marowsky-Bree
On 2005-04-18T17:14:53, Anu Matthew <[EMAIL PROTECTED]> wrote:

> Hello,
> 
> I have two nodes, hostA and hostB, both of which see the same 4 multipath 
> LUNs.
> 
> md0 to md4 are thus visible to both the hosts. (Yeah, they both do not 
> write to those md devices at the same time; hostB mounts them only when 
> hostA is down or has crashed, and vice versa.)
> 
> It works for a while.

md multipath has on-disk metadata and modifies it. md is NOT
cluster-safe for concurrent activation.


Sincerely,
Lars Marowsky-Brée <[EMAIL PROTECTED]>

-- 
High Availability & Clustering
SUSE Labs, Research and Development
SUSE LINUX Products GmbH - A Novell Business



Trouble assembling multipath md devices between two nodes who see the same LUNs.

2005-04-18 Thread Anu Matthew
Hello,
I have two nodes, hostA and hostB, both of which see the same 4 multipath 
LUNs.

md0 to md4 are thus visible to both the hosts. (Yeah, they both do not 
write to those md devices at the same time; hostB mounts them only when 
hostA is down or has crashed, and vice versa.)

It works for a while.
While testing, the latest thing we have seen is that after the md devices 
are stopped on hostA, hostB cannot start them, as the uuid changes; mdadm
comes back with this message in --verbose mode: 

mdadm: /dev/sdc  has wrong uuid
mdadm: no devices found for /dev/md0
I tried copying over the mdadm.conf with the latest uuids from hostA; 
still hostB cannot assemble them. I have to re-create the md devices on 
hostB eventually; then after a while, hostA starts to act up: it simply 
refuses to assemble.
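For the record, the uuids each host actually sees can be compared with
(device names as in the error above):

  mdadm --examine /dev/sdc | grep -i uuid   # superblock uuid on the component
  mdadm --detail /dev/md0 | grep -i uuid    # uuid of the running array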

Is there a way I can circumvent this problem? I am running rhel 3.0 (AS 
and ES), and we're on  mdadm - v1.5.0 - 22 Jan 2004, from Redhat's 
mdadm-1.5.0-9 rpm.

Thank you so much in advance,
--AM


Re: Questions about software RAID

2005-04-18 Thread Frank Wittig
tmp wrote:

>2) The new disk has to be manually partitioned before being used in the
>array. What happens if the new partitions are larger than other
>partitions used in the array? What happens if they are smaller?
>  
>
there's no problem creating partitions which have exactly the same size
as the old ones.
your disks can be from a different manufacturer, have different sizes, a
different number of physical heads or anything else.

if you set up your disks to have the same geometry (heads, cylinders,
sectors) you can have partitions exactly the same size. (the extra size
of larger disks won't be lost.)

read "man fdisk" and have a look at the parameters -C -H and -S...

greetings,
frank




Re: Questions about software RAID

2005-04-18 Thread Peter T. Breuer
tmp <[EMAIL PROTECTED]> wrote:
> 1) I have a RAID-1 setup with one spare disk. A disk crashes and the
> spare disk takes over. Now, when the crashed disk is replaced with a new
> one, what is then happening with the role of the spare disk? Is it
> reverting to its old role as spare disk?

Try it and see.  Run raidsetfaulty on one disk.  That will bring the
spare in.  Run raidhotremove on the original.  Then "replace" it
with raidhotadd.
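With mdadm, that experiment is (device names purely an example):

  mdadm /dev/md0 --fail /dev/sdb1     # = raidsetfaulty; the spare kicks in
  mdadm /dev/md0 --remove /dev/sdb1   # = raidhotremove
  mdadm /dev/md0 --add /dev/sdb1      # = raidhotadd; the disk comes back as a spare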

> If it is NOT reverting to its old role, then the raidtab file will
> suddenly be out-of-sync with reality. Is that correct?

Shrug. It was "out of sync", as you call it, the moment the spare disk
started to be used not as a spare but as part of the array.

> Does the answer given here differ in e.g. RAID-5 setups?

No.

> 2) The new disk has to be manually partitioned before being used in the
> array. What happens if the new partitions are larger than other
> partitions used in the array?

Bigger is fine, obviously!

> What happens if they are smaller?

They can't be used.

> 3) Must all partition types be 0xFD? What happens if they are not?

They can be anything you like. If they aren't, then the kernel
can't set them up at boot.

> 4) I guess the partitions themselves don't have to be formatted as the
> filesystem is on the RAID-level. Is that correct?

?? Sentence does not compute, I am afraid.


> 5) Removing a disk requires that I do a "mdadm -r" on all the partitions
> that are involved in a RAID array.

Does it? Well, I see that you mean "removing a disk intentionally".

> I intend to buy a hot-swap capable
> controller, so what happens if I just pull out the disk without this
> manual removal command?

The disk will error at the next access and will be faulted out of the
array.

> Aren't there some more hotswap-friendly setups?

?? Not sure what you mean. You mean, can you program the hotplug system
to do a setfaulty and remove from the array? Yes. Look at your hotplug
scripts in /etc/hotplug. But it's always going to be late whatever it
does, given that pulling the disk is the trigger!

> 6) I know that the kernel does striping automatically if more
> partitions are given as swap partitions in /etc/fstab. But can it also
> handle it if one disk crashes? I.e. do I have to let my swap disk be a
> RAID-setup too if I want it to continue upon disk crash?

People have recently pointed out that raiding your swap makes sense
exactly in order to cope robustly with this eventuality. You'd have had
to raid everything ELSE on the dead disk, of course, so I'm not quite
as sure as everyone else that it's a truly staggeringly wonderful idea. 

Peter



Re: Questions about software RAID

2005-04-18 Thread David Greaves
tmp wrote:
> I read the software RAID-HOWTO, but the below 6 questions are still
> unclear. I have asked around on IRC channels and it seems that I am not
> the only one being confused. Maybe the HOWTO could be updated to
> clarify the below items?

> 1) I have a RAID-1 setup with one spare disk. A disk crashes and the
> spare disk takes over. Now, when the crashed disk is replaced with a new
> one, what is then happening with the role of the spare disk?

the new disk is spare, the array doesn't revert to its original state.

> Is it reverting to its old role as spare disk?

so no, it doesn't.

> If it is NOT reverting to its old role, then the raidtab file will
> suddenly be out-of-sync with reality. Is that correct?

yes
raidtab is deprecated - man mdadm

> Does the answer given here differ in e.g. RAID-5 setups?

no

> 2) The new disk has to be manually partitioned before being used in the
> array.

no it doesn't. You could use the whole disk (/dev/hdb).
In general, AFAIK, partitions are better as they allow automatic 
assembly at boot.

> What happens if the new partitions are larger than other
> partitions used in the array?

nothing special - eventually, if you replace all the partitions with 
bigger ones you can 'grow' the array

> What happens if they are smaller?

it won't work (doh!)

> 3) Must all partition types be 0xFD? What happens if they are not?

no
They won't be autodetected by the _kernel_

> 4) I guess the partitions themselves don't have to be formatted as the
> filesystem is on the RAID-level. Is that correct?

compulsory!

> 5) Removing a disk requires that I do a "mdadm -r" on all the partitions
> that are involved in a RAID array. I intend to buy a hot-swap capable
> controller, so what happens if I just pull out the disk without this
> manual removal command?

as far as md is concerned the disk disappeared.
I _think_ this is just like mdadm -r.

> Aren't there some more hotswap-friendly setups?

What's unfriendly?

> 6) I know that the kernel does striping automatically if more
> partitions are given as swap partitions in /etc/fstab. But can it also
> handle it if one disk crashes?

no - striping <> mirroring
The kernel will fail to read data on the crashed disk - game over.

> I.e. do I have to let my swap disk be a
> RAID-setup too if I want it to continue upon disk crash?

yes - a mirror, not a stripe.

David


Questions about software RAID

2005-04-18 Thread tmp
I read the software RAID-HOWTO, but the below 6 questions are still
unclear. I have asked around on IRC channels and it seems that I am not
the only one being confused. Maybe the HOWTO could be updated to
clarify the below items?


1) I have a RAID-1 setup with one spare disk. A disk crashes and the
spare disk takes over. Now, when the crashed disk is replaced with a new
one, what is then happening with the role of the spare disk? Is it
reverting to its old role as spare disk?

If it is NOT reverting to its old role, then the raidtab file will
suddenly be out-of-sync with reality. Is that correct?

Does the answer given here differ in e.g. RAID-5 setups?


2) The new disk has to be manually partitioned before being used in the
array. What happens if the new partitions are larger than other
partitions used in the array? What happens if they are smaller?


3) Must all partition types be 0xFD? What happens if they are not?


4) I guess the partitions themselves don't have to be formatted as the
filesystem is on the RAID-level. Is that correct?


5) Removing a disk requires that I do a "mdadm -r" on all the partitions
that are involved in a RAID array. I intend to buy a hot-swap capable
controller, so what happens if I just pull out the disk without this
manual removal command?
Aren't there some more hotswap-friendly setups?


6) I know that the kernel does striping automatically if more
partitions are given as swap partitions in /etc/fstab. But can it also
handle it if one disk crashes? I.e. do I have to let my swap disk be a
RAID-setup too if I want it to continue upon disk crash?


Thanks!



Re: Can uuid of raid array be changed?

2005-04-18 Thread Luca Berra
On Sun, Apr 17, 2005 at 05:04:13PM -0500, John McMonagle wrote:
Need to duplicate some computers that are using raid 1.
I was thinking of just adding an extra drive and then moving it 
to the new system. The only problem is the clones will all have the same 
uuids.  If at some later date the drives got mixed up I could see the 
possibility of disaster.  Not exactly likely as the computers will be 
in different cities.

Is there a way to change the uuid of a raid array?
Is it really worth worrying about?
you can recreate the array; this will not damage existing data.
L.
--
Luca Berra -- [EMAIL PROTECTED]
   Communication Media & Services S.r.l.
/"\
\ / ASCII RIBBON CAMPAIGN
 X  AGAINST HTML MAIL
/ \