On Sun, Apr 17, 2005 at 05:04:13PM -0500, John McMonagle wrote:
Need to duplicate some computers that are using raid 1.
I was thinking of just adding an extra drive and then moving it
to the new system. The only problem is that the clones will all have the
same UUIDs. If at some later date
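One way to deal with the duplicate-UUID problem after cloning is to reassemble the clone's array with a fresh UUID. This is only a sketch: the array name /dev/md0 and the component partitions /dev/sda1 and /dev/sdb1 are assumptions, so adjust them to the clone's actual layout.

```shell
# Sketch: give the cloned array a new random UUID so it no longer
# collides with the original machine's array (device names assumed).
mdadm --stop /dev/md0
mdadm --assemble /dev/md0 --update=uuid /dev/sda1 /dev/sdb1
# Re-record the array, including its new UUID, for boot-time assembly.
mdadm --detail --scan >> /etc/mdadm.conf
```

With no explicit --uuid= argument, --update=uuid makes mdadm pick a random UUID, which is exactly what you want when every clone must end up distinct.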
tmp wrote:
I read the software RAID-HOWTO, but the 6 questions below are still
unclear. I have asked around on IRC channels and it seems that I am not
the only one who is confused. Maybe the HOWTO could be updated to
clarify the items below?
1) I have a RAID-1 setup with one spare disk. A disk crashes and
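For reference, the usual mdadm sequence for the truncated scenario above (a member disk failing in a RAID-1 that has a spare) looks roughly like this. The device names are hypothetical.

```shell
# When a member disk fails, the spare is pulled in automatically and
# the array resyncs onto it. The dead disk still has to be removed
# and a replacement added by hand:
mdadm /dev/md0 --fail /dev/sdb1      # mark it failed (if the kernel hasn't already)
mdadm /dev/md0 --remove /dev/sdb1    # detach it from the array
mdadm /dev/md0 --add /dev/sdc1       # the new disk becomes the new spare
cat /proc/mdstat                     # watch the resync progress
```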
On 2005-04-18T17:14:53, Anu Matthew [EMAIL PROTECTED] wrote:
Hello,
I have two nodes, hostA and hostB, both of them see the same 4 multipath
LUNs.
md0 to md4 are thus visible to both hosts (yeah, they both do not
write to those md devices at the same time; hostB mounts them only
Thanks for your answers! They led to a couple of new questions,
however. :-)
I've read man mdadm and man mdadm.conf, but I certainly don't have
an overview of software RAID.
yes
raidtab is deprecated - man mdadm
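Since raidtab is deprecated, the equivalent mdadm workflow is roughly the following sketch; the device names and the two-disks-plus-spare layout are assumptions for illustration.

```shell
# Create a RAID-1 with two active members and one spare -- what a
# raidtab stanza used to describe -- directly with mdadm:
mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 \
      /dev/sda1 /dev/sdb1 /dev/sdc1
# Persist the running configuration; this replaces /etc/raidtab entirely.
mdadm --detail --scan >> /etc/mdadm.conf
```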
OK. The HOWTO describes mostly a raidtools context, however. Is the
following
Thanks Larks. It is much appreciated.
If the original metadata written by hostA is changed by mdadm --assemble
running on hostB, will mdmpd be able to recover the failed links on hostA
when they resurface? I am asking because, I guess, unless hostA goes down,
hostA reflects the new metadata
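One way to check whether the metadata the two hosts see has diverged is to examine the superblock on a component device from each host and compare. A sketch, with the component device name assumed:

```shell
# Run on both hostA and hostB and diff the output: a mismatch in the
# event counter or update time shows which host last wrote the metadata.
mdadm --examine /dev/sdb1 | grep -E 'Events|Update Time|UUID'
```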
Luca Berra wrote:
On Sun, Apr 17, 2005 at 05:04:13PM -0500, John McMonagle wrote:
Need to duplicate some computers that are using raid 1.
I was thinking of just adding an extra drive and then moving
it to the new system. The only problem is that the clones will all have
the same uuids. If at
Anyone establish optimum blockdev --setra settings for RAID on a 2.6 kernel?
There have been some discussions on the lvm mailing list.
In the case of LVM on RAID it sounds like it's best to use 0 on the md and
disk devices and something between 1024 and 4096 on the LVM devices.
It seems to make some
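That recommendation could be applied like this. A sketch only: the md device, disk names, and volume path are assumptions, and 2048 is just one value inside the suggested 1024-4096 range.

```shell
# Read-ahead only on the top (LVM) layer, none on md or the raw disks.
blockdev --setra 0 /dev/md0
blockdev --setra 0 /dev/sda /dev/sdb
blockdev --setra 2048 /dev/vg0/lv_data
blockdev --getra /dev/vg0/lv_data   # verify the setting took
```

Note that --setra is in 512-byte sectors and is not persistent across reboots, so it would need to go in a boot script.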