Not as I see it. You take a backup to a large disk, or disks as the case
may be. That is your safety net. Then you try the md disks in the hardware
RAID controller. If they work, Bob's your uncle. If they do not work, then
create the proper RAID configuration on the hardware controller with the md
disks and copy the backup back in. Perform the copying from a live CD to the
extent you can. At no time do you end up with twin RAID arrays. Of course,
if you have enough disks, simply copy the md RAID as a disk to the hardware
RAID as a disk; "tar" or "dd" imaging can work. If you use different disks
in the RAIDs, then use tar or even cpio to copy the files rather than a pure
image. That lets the new partitioning align partition boundaries with the
drive's actual internal block size.
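A minimal sketch of the file-level copy described above, piping tar between the mounted source and destination filesystems. Temp directories stand in here for the mounted md array and the freshly made hardware-RAID filesystem, so the commands are safe to try; all paths are illustrative, not from the original post:

```shell
SRC=$(mktemp -d)   # stand-in for the mounted md RAID root
DST=$(mktemp -d)   # stand-in for the new hardware-RAID filesystem

# Fake a bit of system content to copy.
mkdir -p "$SRC/etc"
echo "hello" > "$SRC/etc/motd"

# -p preserves permissions and ownership where possible; on a real
# system you would likely also want --xattrs and --acls.
(cd "$SRC" && tar -cpf - .) | (cd "$DST" && tar -xpf -)

ls -l "$DST/etc/motd"
```

Because this copies files rather than a raw image, the destination can use whatever partition layout suits the new drives, which is the point of the tar/cpio route over dd.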
{^_^}
On 2011/12/19 15:43, Felip Moll wrote:
Doing it this way seems to be a high-risk operation.
Furthermore, I do not want to do this because then I would have two RAIDs: a
software RAID (md) nested inside a hardware one. My thought is to copy the
operating system directories manually and then modify the configuration. I
think that is a "more secure" process.
Thanks for the answer, jdow ;)
2011/12/20 jdow <[email protected]>
First take a complete backup of the md raid.
Then, if the Law of Innate Perversity of Inanimate Objects holds, you'll be
able to move the disks and have them just work. Your data is protected. (If
you had no backup, IPIO would, of course, lead to the transition failing
expensively.)
Even if IPIO does not cooperate, you restore from the complete backup to the
same disks they were on after the hardware RAID assembles itself. (Despite
the numerous times IPIO seems to work, I still figure it's a silly
superstition. It does lead to a correct degree of paranoia, though.)
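The "complete backup first" step above is essentially a raw image of the md device. A sketch of it with dd, using a small scratch file as a safe stand-in for /dev/md0 (device and backup paths on a real box are up to you; nothing here is from the thread):

```shell
disk=$(mktemp)   # stand-in for /dev/md0

# Fabricate a 4 MiB "device" with some recognizable data in it.
dd if=/dev/zero of="$disk" bs=1M count=4 status=none
echo "precious data" | dd of="$disk" conv=notrunc status=none

# Take the image. On the real system this would be something like
#   dd if=/dev/md0 of=/mnt/backup/md-root.img bs=1M
dd if="$disk" of="$disk.img" bs=1M status=none

# Verify the image is byte-identical to the source.
cmp "$disk" "$disk.img" && echo "image matches the source"
```

An image backup like this restores bit-for-bit onto identical geometry; if the hardware RAID's disks or layout differ, a tar-based file backup is the more flexible safety net.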
{^_^}
On 2011/12/19 09:18, Felip Moll wrote:
Well, I will rephrase my question so as not to scare off possible
"answerers": how do I move an SL6.0 system with md RAID (software RAID) to
another server without keeping the software RAID?
Thanks!
2011/12/16 Felip Moll <[email protected]>
Hello all!
Recently I installed and configured Scientific Linux to run as a
high-performance computing cluster with 15 slave nodes and one master. I
did this while an older system with RedHat 5.0 was still running, so that
users would not have to stop their computations. All went well. I migrated
node by node, and now I have a flawlessly running cluster with SL6!

Well, the fact is that while migrating I used node1 to install SL6 while
node0 was hosting the old master operating system. Node1 has less RAM and
no RAID capabilities, so I configured a software RAID 5 during
installation, using Linux md (which comes by default with a normal
installation when you select "raid"). Node0 has a hardware RAID 5
controller.

Now I want to move the new master, node1, onto node0. I thought about this:
I have to shut down node1 and node0, then with a LiveCD partition the hard
disk of node0 and copy the contents of node1's disk onto it. Then install
grub.

All right, but what do you think I should take into consideration regarding
RAID and md? I will have to modify /etc/fstab and also delete
/etc/mdadm.conf to keep md from running. Anything more?
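A hedged sketch of those two config changes as they might be run from the LiveCD against the copied tree. The temp directory stands in for the mounted copy, and /dev/sda2, /dev/sda3 are purely illustrative device names, not values from this thread (UUID= entries from blkid would be more robust on the real system):

```shell
root=$(mktemp -d)   # stand-in for the mounted copy of the system

# Fake the fstab the md-based install would have left behind.
mkdir -p "$root/etc"
cat > "$root/etc/fstab" <<'EOF'
/dev/md0  /      ext4  defaults  1 1
/dev/md1  /home  ext4  defaults  1 2
EOF
touch "$root/etc/mdadm.conf"

# Point fstab at the hardware-RAID partitions instead of md devices.
sed -i 's|^/dev/md0|/dev/sda2|; s|^/dev/md1|/dev/sda3|' "$root/etc/fstab"

# Drop mdadm.conf so md does not try to assemble arrays at boot.
rm -f "$root/etc/mdadm.conf"

cat "$root/etc/fstab"
```

On the real node0 you would presumably also chroot into the copied tree, run grub-install against the hardware-RAID disk, and regenerate the initramfs (dracut on SL6) so booting no longer depends on md.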
Thank you very much!