On 2007-04-20T15:57:35, Rob Bray [EMAIL PROTECTED] wrote:
I'm attempting to do host-based mirroring with one LUN on each of two EMC
CX storage units, each with two service processors. Connection is via
Emulex LP9802, using lpfc driver, and sg.
Use dm-multipath (multipath-tools package), not
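Following that advice, a minimal /etc/multipath.conf for an active/passive EMC CX box might look like the sketch below. The values are illustrative assumptions, not settings confirmed in this thread; recent multipath-tools releases ship built-in EMC CLARiiON defaults, so verify against your version before copying anything.

```
# Hypothetical sketch -- check your multipath-tools defaults first.
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor               "DGC"
                product              "*"
                path_grouping_policy group_by_prio
                prio                 emc
                path_checker         emc_clariion
                failback             immediate
        }
}
```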
G'day all,
I've got 3 arrays here. A 3 drive raid-5, a 10 drive raid-5 and a 15 drive raid-6. They are all
currently 250GB SATA drives.
I'm contemplating an upgrade to 500GB drives on one or more of the arrays and wondering the best way
to do the physical swap.
The slow and steady way
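The "slow and steady way" presumably means replacing one member at a time and letting md rebuild between swaps. A hedged sketch of one cycle of that (array and device names are placeholders, repeat per drive and wait out each resync):

```
# Illustrative only -- /dev/md0 and /dev/sdX1 are placeholders.
mdadm /dev/md0 --fail /dev/sdX1 --remove /dev/sdX1
# ...physically swap the 250GB drive for a 500GB one, partition it...
mdadm /dev/md0 --add /dev/sdX1
cat /proc/mdstat                    # wait for the rebuild to finish
# once every member has been replaced:
mdadm --grow /dev/md0 --size=max
```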
Brad Campbell wrote:
[...]
It occurs though that the superblocks would be in the wrong place for
the new drives and I'm wondering if the kernel or mdadm might not find
them.
I once had a similar issue, and wrote a tiny program (a hack, of sorts)
to read or write the md superblock from/to a component device
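The "wrong place" worry is concrete with v0.90 metadata: the superblock sits in the last 64 KiB of the component, at the device size rounded down to a 64 KiB boundary minus 64 KiB, so after a dd onto a larger drive it is nowhere near the new end and won't be found there. A quick sketch of the offset arithmetic (round decimal drive sizes are an assumption):

```shell
# v0.90 md superblock byte offset: size rounded down to 64 KiB, minus 64 KiB.
sb_offset() {
    echo $(( ($1 & ~65535) - 65536 ))
}

old=$(sb_offset $((250 * 1000 * 1000 * 1000)))   # 250 GB member
new=$(sb_offset $((500 * 1000 * 1000 * 1000)))   # 500 GB member
echo "superblock expected at byte $old on the old drive"
echo "superblock expected at byte $new on the new drive"
```

A dd image of the old drive leaves the superblock at the first offset, while the kernel looks for it at the second on the bigger drive; `mdadm -E /dev/sdX1` will tell you whether one is actually being recognised.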
Brad Campbell wrote:
G'day all,
I've got 3 arrays here. A 3 drive raid-5, a 10 drive raid-5 and a 15
drive raid-6. They are all currently 250GB SATA drives.
I'm contemplating an upgrade to 500GB drives on one or more of the
arrays and wondering the best way to do the physical swap.
The
Kernel: 2.6.21.1
Here is the bug:
md2: RAID1 (works fine)
md3: RAID5 (only syncs at the sync_speed_min set by the kernel)
If I do not run this command:
echo 55000 > /sys/block/md3/md/sync_speed_min
I will get 2 megabytes per second check speed for RAID 5.
However, the odd part is I can leave
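For scale, raising that floor matters a lot: at 2 MB/s a check of a 250 GB member takes well over a day, at 55 MB/s it's just over an hour. Back-of-the-envelope (250 GB is an assumed member size, decimal megabytes):

```shell
size_mb=$((250 * 1000))        # assumed 250 GB member, in MB
slow=$(( size_mb / 2 ))        # seconds at 2 MB/s
fast=$(( size_mb / 55 ))       # seconds at 55 MB/s
echo "at 2 MB/s:  ${slow} s (~$(( slow / 3600 )) h)"
echo "at 55 MB/s: ${fast} s (~$(( fast / 3600 )) h)"
```

The same floor can also be set globally through `/proc/sys/dev/raid/speed_limit_min` (sysctl `dev.raid.speed_limit_min`) rather than the per-array sysfs file.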
David Greaves wrote:
I was more wondering about the feasibility of using dd to copy the drive
contents to the larger drives (then I could do 5 at a time) and working
it from there.
Err, if you can dd the drives, why can't you create a new array and use xfsdump
or equivalent? Is downtime due
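Sketched out, that suggestion would be: build the new array on the 500GB drives, make a filesystem, and pipe a dump across. Device names and mount points below are made up, and XFS is only assumed because xfsdump was mentioned:

```
# Illustrative only -- device and mount point names are placeholders.
mdadm --create /dev/md4 --level=5 --raid-devices=10 /dev/sd[b-k]1
mkfs.xfs /dev/md4
mount /dev/md4 /mnt/new
xfsdump -J - /mnt/old | xfsrestore -J - /mnt/new
```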
On Tuesday May 8, [EMAIL PROTECTED] wrote:
Kernel: 2.6.21.1
Here is the bug:
md2: RAID1 (works fine)
md3: RAID5 (only syncs at the sync_speed_min set by the kernel)
If I do not run this command:
echo 55000 > /sys/block/md3/md/sync_speed_min
I will get 2 megabytes per second check
On Tue, 8 May 2007, Neil Brown wrote:
On Tuesday May 8, [EMAIL PROTECTED] wrote:
Kernel: 2.6.21.1
Here is the bug:
md2: RAID1 (works fine)
md3: RAID5 (only syncs at the sync_speed_min set by the kernel)
If I do not run this command:
echo 55000 > /sys/block/md3/md/sync_speed_min
I will get
Richard Scobie wrote:
Peter Rabbitson wrote:
design of modern drives? I have an array of 4 Maxtor sata drives, and
raw read performance at the end of the disk is 38 MB/s compared to 62 MB/s
at the beginning.
At least one supplier of terabyte arrays mitigates this effect and
improves seek
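The outer/inner-zone difference is easy to measure directly with dd; /dev/sdX and the skip figure are placeholders (skip counts bs-sized blocks, so the second read is aimed near the end of an assumed ~250 GB disk):

```
# Illustrative only -- pick skip so the read lands near the end of the disk.
dd if=/dev/sdX of=/dev/null bs=1M count=512             # outer tracks (start)
dd if=/dev/sdX of=/dev/null bs=1M count=512 skip=237000 # inner tracks (end)
```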
Hi list.
I recently had a crash on my RAID machine and now two out of five RAIDs
don't start anymore. I don't even understand the error:
[EMAIL PROTECTED]:~$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid5] [raid4]
md5 : active raid5 hdh8[0] hde8[3] hdf8[2] hdg8[1]
Hello,
I hope this is the appropriate forum for this request; if not, please
direct me to the correct one.
I have a system running FC6, 2.6.20-1.2925, software RAID5 and a
power outage seems to have borked the file structure on the RAID.
Boot shows the following disks:
sda #first
Benjamin Schieder wrote:
Hi list.
md2 : inactive hdh5[4](S) hdg5[1] hde5[3] hdf5[2]
11983872 blocks
[EMAIL PROTECTED]:~# mdadm -R /dev/md/2
mdadm: failed to run array /dev/md/2: Input/output error
[EMAIL PROTECTED]:~# mdadm /dev/md/
0 1 2 3 4 5
[EMAIL PROTECTED]:~# mdadm
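When an array sits inactive with one member flagged as a spare like md2 above, the usual next step, after recording `mdadm -E` output for every member, is a forced assembly. This is a hedged sketch rather than advice specific to this box; only reach for --force after comparing the event counts:

```
# Illustrative only -- examine every member before forcing anything.
mdadm -E /dev/hd[efgh]5            # compare event counts and device states
mdadm --stop /dev/md/2
mdadm --assemble --force /dev/md/2 /dev/hd[efgh]5
```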
Mark A. O'Neil wrote:
Hello,
I hope this is the appropriate forum for this request; if not, please
direct me to the correct one.
I have a system running FC6, 2.6.20-1.2925, software RAID5 and a power
outage seems to have borked the file structure on the RAID.
Boot shows the following
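For a post-outage "borked" filesystem on md, the conventional sequence (device name and filesystem type below are assumptions) is to let any resync finish, then check read-only before attempting repair:

```
# Illustrative only -- /dev/md0 is a placeholder.
cat /proc/mdstat          # wait for any resync to complete first
fsck -n /dev/md0          # read-only check, changes nothing
fsck -y /dev/md0          # repair only once you have a backup or image
```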