MD device renaming or re-ordering question

2007-09-13 Thread Maurice Hilarius
Hi to all.

I wonder if somebody would care to help me to solve a problem?

I have some servers.
They are running CentOS5.
This OS has a maximum filesystem size of 8 TB.

Each server currently has one AMCC/3ware 16-port SATA controller, for a
total of 16 ports/drives.
I am using 750 GB drives.

I am exporting the drives as single disks, NOT as hardware RAID.
That is due to the filesystem and controller limitations, among other
reasons.

Each server currently has all 16 disks attached to that one controller.

I want to add a 2nd controller, and, for now, 4 more disks on it.

I want to keep the boot disk as a plain disk, presently configured as
sda1, sda2, and sda3.

The remaining 15 disks are configured as:
sdb1 through sde1 as md0 (4 devices/partitions)
sdf1 through sdp1 as md1 (11 devices/partitions)
I want to add those 4 new drives on the 2nd controller to the md0 device.

But I do not want md0 to be split across the 2 controllers this way;
I would prefer to do the split on md1.

Other than starting from scratch, the best solution would be to add the
new disks to md0, then magically turn md0 into md1, and md1 into md0.

So, the question:
How does one make md1 into md0, and vice versa, without losing the data
on these md devices?
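My guess is that the procedure is something along these lines, but I have
not tested it (this assumes v0.90 superblocks, where the preferred minor
number is stored on disk, uses the device names above, and is done with the
arrays unmounted):

  mdadm --stop /dev/md0
  mdadm --stop /dev/md1
  # re-assemble the old md1 members as md0, rewriting the preferred minor:
  mdadm --assemble /dev/md0 --update=super-minor /dev/sd[f-p]1
  # and re-assemble the old md0 members as md1:
  mdadm --assemble /dev/md1 --update=super-minor /dev/sd[b-e]1

and then update /etc/mdadm.conf and /etc/fstab to match. Is that the right
approach?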

Thanks in advance for any suggestions.



-- 
Regards, Maurice




best way to create RAID10 on a CentOS5 install

2007-06-30 Thread Maurice Hilarius
Hello all again.

Extending from an earlier question:
deliberately degrading RAID1 to a single disk, then back again

I got some useful answers, which I appreciate.

Taking this the next step, I want to create a RAID10 using 4 disks on a
CentOS install.
I also want to be able to stop and remove a pair of disks periodically,
so I may exchange them as backup media.
Then add new disks and re-start the array.

The first challenge I see is the actual RAID10 creation during the install.

The second challenge is the syntax to stop the (correct) pair of disks and
remove them, then re-add them and restart the array so that it re-syncs.

Can anyone lend me some syntax and tips please?
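Here is my rough guess at the post-install syntax, pieced together from the
man page but untested; it assumes 4 fresh disks sdb-sde and the default
near=2 layout, where devices are mirrored in adjacent pairs, so pulling one
member of each pair leaves a complete copy both in the array and on the
removed disks:

  mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1

  # pull one member of each mirrored pair to use as the backup set:
  mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
  mdadm /dev/md0 --fail /dev/sde1 --remove /dev/sde1

  # later, put the disks back and let md re-sync them:
  mdadm /dev/md0 --add /dev/sdc1
  mdadm /dev/md0 --add /dev/sde1

I assume which devices actually pair up should be confirmed with
mdadm -D /dev/md0 before pulling anything. Is that about right?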


-- 

With our best regards,


Maurice W. Hilarius      Telephone: 01-780-456-9771
Hard Data Ltd.           FAX:       01-780-456-9772
11060 - 166 Avenue       email: [EMAIL PROTECTED]
Edmonton, AB, Canada     http://www.harddata.com/
   T5X 1Y3




deliberately degrading RAID1 to a single disk, then back again

2007-06-26 Thread Maurice Hilarius
Good day all.

Scenario:
Pair of identical disks.
partitions:
Disk 0:
/boot - NON-RAIDed
swap
/  - rest of disk

Disk 1:
/boot1 - placeholder taking the same space as /boot on disk 0 - NON-RAIDed
swap
/  - rest of disk

I created RAID1 over / on both disks, making /dev/md0.

From time to time I want to degrade back to a single disk and turn
off RAID, as the overhead has some cost.
Then, from time to time, I want to restore RAID1 operation and re-sync the
pair back to current.

Yes, this is a backup scenario.

Are there any recommendations (with mdadm syntax), please?
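This is the sort of thing I have in mind, though I do not know if it is
the right way (untested; it assumes / is the third partition on each disk,
i.e. sda3 and sdb3):

  # drop the second disk out of the mirror:
  mdadm /dev/md0 --fail /dev/sdb3 --remove /dev/sdb3
  # optionally shrink the array to one device so it no longer runs degraded:
  mdadm --grow /dev/md0 --raid-devices=1 --force

  # later, to go back to a mirrored pair and re-sync:
  mdadm --grow /dev/md0 --raid-devices=2
  mdadm /dev/md0 --add /dev/sdb3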





-- 

Regards,
Maurice




Thanks! Was: [Re: strange RAID5 problem]

2006-05-10 Thread Maurice Hilarius
Thanks to Neil, Luca, and CaT, who were all a big help.



-- 

With our best regards,


Maurice W. Hilarius      Telephone: 01-780-456-9771
Hard Data Ltd.           FAX:       01-780-456-9772
11060 - 166 Avenue       email: [EMAIL PROTECTED]
Edmonton, AB, Canada     http://www.harddata.com/
   T5X 1Y3



Re: strange RAID5 problem

2006-05-09 Thread Maurice Hilarius
Luca Berra wrote:
 On Mon, May 08, 2006 at 11:30:52PM -0600, Maurice Hilarius wrote:
 [EMAIL PROTECTED] ~]# mdadm /dev/md3 -a /dev/sdw1

 But, I get this error message:
 mdadm: hot add failed for /dev/sdw1: No such device

 What? We just made the partition on sdw a moment ago in fdisk. It IS
 there!

 I don't believe you, prove it (/proc/partitions)


I understand. Here we go then. Devices in question bracketed with **:

[EMAIL PROTECTED] ~]# cat /proc/partitions
major minor  #blocks  name

   3     0  117220824 hda
   3     1     104391 hda1
   3     2    2008125 hda2
   3     3  115105725 hda3
   3    64  117220824 hdb
   3    65     104391 hdb1
   3    66    2008125 hdb2
   3    67  115105725 hdb3
   8     0  390711384 sda
   8     1  390708801 sda1
   8    16  390711384 sdb
   8    17  390708801 sdb1
   8    32  390711384 sdc
   8    33  390708801 sdc1
   8    48  390711384 sdd
   8    49  390708801 sdd1
   8    64  390711384 sde
   8    65  390708801 sde1
   8    80  390711384 sdf
   8    81  390708801 sdf1
   8    96  390711384 sdg
   8    97  390708801 sdg1
   8   112  390711384 sdh
   8   113  390708801 sdh1
   8   128  390711384 sdi
   8   129  390708801 sdi1
   8   144  390711384 sdj
   8   145  390708801 sdj1
   8   160  390711384 sdk
   8   161  390708801 sdk1
   8   176  390711384 sdl
   8   177  390708801 sdl1
   8   192  390711384 sdm
   8   193  390708801 sdm1
   8   208  390711384 sdn
   8   209  390708801 sdn1
   8   224  390711384 sdo
   8   225  390708801 sdo1
   8   240  390711384 sdp
   8   241  390708801 sdp1
  65     0  390711384 sdq
  65     1  390708801 sdq1
  65    16  390711384 sdr
  65    17  390708801 sdr1
  65    32  390711384 sds
  65    33  390708801 sds1
  65    48  390711384 sdt
  65    49  390708801 sdt1
  65    64  390711384 sdu
  65    65  390708801 sdu1
  65    80  390711384 sdv
  65    81  390708801 sdv1
**
  65    96  390711384 sdw
  65    97  390708801 sdw1
**
  65   112  390711384 sdx
  65   113  390708801 sdx1
  65   128  390711384 sdy
  65   129  390708801 sdy1
  65   144  390711384 sdz
  65   145  390708801 sdz1
  65   160  390711384 sdaa
  65   161  390708801 sdaa1
  65   176  390711384 sdab
  65   177  390708801 sdab1
  65   192  390711384 sdac
  65   193  390708801 sdac1
  65   208  390711384 sdad
  65   209  390708801 sdad1
  65   224  390711384 sdae
  65   225  390708801 sdae1
  65   240  390711384 sdaf
  65   241  390708801 sdaf1
**
   9     0     104320 md0
**
   9     2 5860631040 md2
   9     1  115105600 md1



-- 

Regards,
Maurice



Re: strange RAID5 problem

2006-05-09 Thread Maurice Hilarius
Luca Berra wrote:
 ..
 I don't believe you, prove it (/proc/partitions)

 I understand. Here we go then. Devices in question bracketed with **:

 ok, now i do.
 is the /dev/sdw1 device file correctly created?
 you could try stracing mdadm to see what happens

 what about the other suggestion? trying to stop the array and restart
 it, since it is marked as inactive.
 L.

Here is what we ended up doing that fixed it.
Thanks to Neil for the --force; however, even with that, ALL parameters
were needed on the mdadm -C or it still refused.
We used EVMS to rebuild, as that is what originally created the RAID.

mdadm -C /dev/md3 --chunk=256 --level=5 --parity=ls --raid-devices=16
--force /dev/evms/.nodes/sdq1 /dev/evms/.nodes/sdr1
/dev/evms/.nodes/sds1 /dev/evms/.nodes/sdt1 /dev/evms/.nodes/sdu1
/dev/evms/.nodes/sdv1 missing /dev/evms/.nodes/sdx1
/dev/evms/.nodes/sdy1 /dev/evms/.nodes/sdz1 /dev/evms/.nodes/sdaa1
/dev/evms/.nodes/sdab1 /dev/evms/.nodes/sdac1 /dev/evms/.nodes/sdad1
/dev/evms/.nodes/sdae1 /dev/evms/.nodes/sdaf1

Notice we are assembling a device with a missing member, and the
devices are in order per: mdadm -D /dev/md3

This was the *only* way it would come up. It was mountable, and the data
seems intact.
We started the rebuild with no errors by simply adding the device
as I mentioned before with -a.

Then we sped it up via:

echo 10 > /proc/sys/dev/raid/speed_limit_min

because, frankly, we have the resources to do so and need it going as fast
as possible.

-- 

Regards,
Maurice



Re: RAID5 recovery trouble, bd_claim failed?

2006-04-19 Thread Maurice Hilarius
Nathanial Byrnes wrote:
 Yes, I did not have the funding nor approval to purchase more hardware
 when I set it up (read wife). Once it was working... the rest is
 history.

   

OK, so if you have a pair of IDE disks, jumpered as Master and Slave,
and one of them fails:

If the Master failed, re-jumper the remaining disk of the pair, on the same
cable, as Master, with no Slave present.

If the Slave failed, re-jumper the remaining disk of the pair, on the same
cable, as Master, with no Slave present.

Then you will have the remaining disk working normally, at least.

When you can afford it I suggest buying a controller with enough ports
to support the number of drives you have, with no Master/Slave pairing.

Good luck !

And to the software guys trying to help: we need to start with the
(obvious) hardware problem before we advise on how to recover data from
a borked system.
Once he has the jumpering on the drives sorted out, the drive that went
missing will be back again.


-- 

Regards,
Maurice



Re: RAID5 recovery trouble, bd_claim failed?

2006-04-19 Thread Maurice Hilarius
Nate Byrnes wrote:
 Hi All,
I'm not sure that is entirely the case. From a hardware
 perspective, I can access all the disks from the OS, via fdisk and dd.
 It is really just mdadm that is failing.  Would I still need to work
 the jumper issue?
Thanks,
Nate

IF the disks are as we suspect (in master and slave relationships), and IF
you now have either a failed or a removed drive, then you MUST correct
the jumpering.
Sure, you can often still see a disk that is misconfigured.
It is almost certain, however, that when you write to it you will simply
cause corruption on it.

Of course, so far this is all speculation, as you have not actually said
what the disks, controller interfaces, jumpering, and so forth are.
I was merely speculating, based on what you have said.

No amount of software magic will cure a hardware problem.


-- 

With our best regards,


Maurice W. Hilarius      Telephone: 01-780-456-9771
Hard Data Ltd.           FAX:       01-780-456-9772
11060 - 166 Avenue       email: [EMAIL PROTECTED]
Edmonton, AB, Canada     http://www.harddata.com/
   T5X 1Y3



Re: RAID5 recovery trouble, bd_claim failed?

2006-04-18 Thread Maurice Hilarius
Nathanial Byrnes wrote:
 Hi All,
   Recently I lost a disk in my raid5 SW array. It seems that it took a
 second disk with it. The other disk appears to still be functional (from
 an fdisk perspective...). I am trying to get the array to work in
 degraded mode via failed-disk in raidtab, but am always getting the
 following error:

   
Let me guess:
IDE disks, in pairs.
Jumpered as Master and Slave.

Right?





-- 

With our best regards,


Maurice W. Hilarius      Telephone: 01-780-456-9771
Hard Data Ltd.           FAX:       01-780-456-9772
11060 - 166 Avenue       email: [EMAIL PROTECTED]
Edmonton, AB, Canada     http://www.harddata.com/
   T5X 1Y3



Questions about: Where to find algorithms for RAID5 / RAID6

2006-04-11 Thread Maurice Hilarius
Good day.

I am looking for some information, and hope the readers of this list
might be able to point me in the right direction:

Here is the scenario:
In RAID5 (or RAID6), when a file is written, some parity data is
created (by some form of XOR process, I assume), and then that parity data
is written to disk.

I am looking for the algorithm that is used to create that parity
data and that decides where to place it on the disks.

Any help on this is deeply appreciated.
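To make the question concrete, here is my rough, simplified understanding so
far (illustrative only, certainly not the kernel code, which I gather lives
under drivers/md/ in the kernel source):

/* RAID5 parity is the byte-wise XOR of the data chunks in a stripe.
 * The md default layout, "left-symmetric", rotates the parity chunk
 * one disk to the left on each successive stripe.
 */
#include <stddef.h>

/* XOR n_data chunks of chunk_len bytes each into parity[] */
void raid5_xor_parity(const unsigned char **data, size_t n_data,
                      size_t chunk_len, unsigned char *parity)
{
    for (size_t i = 0; i < chunk_len; i++) {
        unsigned char p = 0;
        for (size_t d = 0; d < n_data; d++)
            p ^= data[d][i];
        parity[i] = p;
    }
}

/* Which disk holds the parity chunk for a given stripe (left-symmetric) */
size_t raid5_parity_disk(size_t stripe, size_t n_disks)
{
    return (n_disks - 1) - (stripe % n_disks);
}

Is that roughly the right picture?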

-- 

With our best regards,


Maurice W. Hilarius      Telephone: 01-780-456-9771
Hard Data Ltd.           FAX:       01-780-456-9772
11060 - 166 Avenue       email: [EMAIL PROTECTED]
Edmonton, AB, Canada     http://www.harddata.com/
   T5X 1Y3




Re: Real Time Mirroring of a NAS

2006-04-10 Thread Maurice Hilarius
andy liebman wrote:
 ..
 Thanks for your reply, and the suggestions of others. I'm going to
 look into both NBD and DRBD.

 Actually, I see that my idea to export an iSCSI target from Server B,
 mount it on A, and just create a RAID1 array with the two block
 devices must be very similar to what DRBD is doing, but my guess is
 that DRBD, with its heartbeat signal, is probably more robust at
 error handling. I'd love to hear from somebody who has experience with
 DRBD.

 By the way, I use 3ware 9550SX cards. On a 16 drive RAID-5 SATA array,
 I can get sequential reads that top 600 MB/sec. That's megabytes, not
 megabits. And write speeds are close to 400 MB/sec with the new faster
 on-board XOR processing. And random reads are at least 200 MB/sec. So,
 10 GbE is a must, really.

 Andy

Hi Andy.

A couple of other suggestions that may prove helpful:

1) EVMS
http://evms.sourceforge.net/


2) Lustre
http://www.clusterfs.com/
http://www.lustre.org/



-- 

With our best regards,


Maurice W. Hilarius      Telephone: 01-780-456-9771
Hard Data Ltd.           FAX:       01-780-456-9772
11060 - 166 Avenue       email: [EMAIL PROTECTED]
Edmonton, AB, Canada     http://www.harddata.com/
   T5X 1Y3



Re: ANNOUNCE: mdadm 2.4 - A tool for managing Soft RAID under Linux

2006-03-30 Thread Maurice Hilarius
Neil Brown wrote:
 I am pleased to announce the availability of
mdadm version 2.4
 ..

 Release 2.4 primarily adds support for increasing the number of
 devices in a RAID5 array, which requires 2.6.17 (or some -rc or -mm
 prerelease).
 ..
Is there a corresponding means to increase the size of a file system to
use this?
 -   Allow --monitor to work with arrays with >28 devices

So, how DO we get past the old 26-device alphabet limit?

Thanks, as always, for the great work, Neil.
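On the filesystem question above, if I understand it right the whole
sequence would be something like this (untested; device names and counts
invented here, assuming an ext3 filesystem directly on /dev/md3, a 2.6.17
kernel, and mdadm 2.4):

  mdadm /dev/md3 --add /dev/sdq1            # add the new disk as a spare
  mdadm --grow /dev/md3 --raid-devices=17   # reshape the RAID5 onto it
  resize2fs /dev/md3                        # then grow ext3 to fill the array
                                            # (or ext2online, depending on the
                                            #  e2fsprogs version)

Is that the intended usage?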



-- 

With our best regards,


Maurice W. Hilarius      Telephone: 01-780-456-9771
Hard Data Ltd.           FAX:       01-780-456-9772
11060 - 166 Avenue       email: [EMAIL PROTECTED]
Edmonton, AB, Canada     http://www.harddata.com/
   T5X 1Y3
