I'm attempting to do host-based mirroring with one LUN on each of two EMC
CX storage units, each with two service processors. The connection is via an
Emulex LP9802, using the lpfc driver and sg.
The two LUNs (with two possible paths each) present fine as /dev/sd[a-d].
I have tried using both md-multipath
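(A hedged sketch of what an md-multipath array over the two paths to one of
the LUNs might look like; pairing sda/sdc as the two paths to the same LUN is
only a guess, not the poster's actual layout:)
  mdadm --create /dev/md0 --level=multipath --raid-devices=2 /dev/sda /dev/sdc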
Hi,
I just tried to set up a one-device RAID on a USB flash drive.
Creating it, setting up ext3, and filling it with data was no problem.
But when I tried to work with it afterwards, the metadevice was
unresponsive. I tried both linear and raid0 levels, but that
made no difference.
For my
I am trying to grow a raid5 volume in-place. I would like to expand the
partition boundaries, then grow raid5 into the newly-expanded partitions.
I was wondering if there is a way to move the superblock from the end of
the old partition to the end of the new partition. I've tried dd
if=/dev/sdX1
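(For a 0.90 superblock, the default at the time, the superblock sits in the
last 64 KiB-aligned 64 KiB block of the device. A minimal sketch of the move,
assuming OLD_SIZE in bytes was recorded before the partition was grown and
/dev/sdX1 stands in for the real partition:)
  NEW_SIZE=$(blockdev --getsize64 /dev/sdX1)
  OLD_SB=$(( (OLD_SIZE / 65536 - 1) * 65536 ))   # old superblock offset, bytes
  NEW_SB=$(( (NEW_SIZE / 65536 - 1) * 65536 ))   # new superblock offset, bytes
  dd if=/dev/sdX1 of=/dev/sdX1 bs=65536 count=1 \
     skip=$(( OLD_SB / 65536 )) seek=$(( NEW_SB / 65536 ))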
I want to say up front that I have several 3ware 7504 and 7508 cards
which I am completely satisfied with. I use them as JBOD, and they make
stellar PATA controllers (not RAID controllers). They're not perfect
(they're slow), but they've been rock solid for years.
Not so the 9550sx.
I've
I have been using an older 64-bit system (socket 754) for a while now. It
has the old 33 MHz PCI bus. I have two low-cost (no HW RAID) PCI SATA I
cards, each with 4 ports, to give me an eight-disk RAID 6. I also have a
Gig NIC on the PCI bus. I have Gig switches with clients connecting to it
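(A rough back-of-the-envelope figure, mine rather than the poster's: a shared
32-bit/33 MHz PCI bus peaks at
  4 bytes x 33,000,000/s ≈ 133 MB/s theoretical,
and that ceiling is split between the two SATA cards and the Gigabit NIC,
which by itself can demand ~125 MB/s of wire-speed traffic.)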
On Mon, 2006-10-09 at 15:49 +0200, Erik Mouw wrote:
There is no way to figure out what exactly is correct data and what is
not. It might work right after creation and during the initial install,
but after the next reboot there is no way to figure out what blocks to
believe.
You don't
I'm looking for new hard drives.
This is my experience so far.

SATA cables:
============
I have zero good experiences with any SATA cables.
They've all been crap so far.

3.5" ATA hard drives buyable where I live:
==========================================
(All drives are 7200rpm, for
Maybe valid, but not helping with my problem, since the problem is/was
that /dev/md0 didn't exist at all. mdadm -C won't create device nodes.
But I figured out the workaround in the meantime, so it doesn't matter anymore.
(In case someone wants to know: mknod in /lib/udev/devices does it on a hard
disk
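(Presumably the workaround amounts to something like the following, since udev
copies static nodes from /lib/udev/devices into /dev at boot; major 9 is md,
the minor is the array number:)
  mknod /lib/udev/devices/md0 b 9 0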
On Sunday, 17 September 2006 13:36, you wrote:
On 9/17/06, Ask Bjørn Hansen [EMAIL PROTECTED] wrote:
It's recommended to use a script to scrub the raid device regularly,
to detect sleeping bad blocks early.
What's the best way to do that? dd the full md device to /dev/null?
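(For what it's worth, on kernels that support it, roughly 2.6.16 and later, md
can scrub itself rather than being dd'd; a typical cron-driven approach looks
something like:)
  # kick off a background consistency check of md0
  echo check > /sys/block/md0/md/sync_action
  # progress shows up in /proc/mdstat; the mismatch count afterwards in
  cat /sys/block/md0/md/mismatch_cnt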
Just to follow up on my speed observations from last month on a 6x SATA /
3x PCIe / AMD64 system: as of 2.6.18 final, RAID-10 checking is running
at a reasonable ~156 MB/s (which I presume means 312 MB/s of reads),
and raid5 is better than the 23 MB/s I complained about earlier, but
still a bit
On Tuesday, 12 September 2006 16:08, Justin Piszcz wrote:
/dev/MAKEDEV /dev/md0
also make sure the SW raid modules etc are loaded if necessary.
Won't work, MAKEDEV doesn't know how to create [/dev/]md0.
mknod /dev/md0 b 9 0
perhaps?
On Monday August 28, [EMAIL PROTECTED] wrote:
This might be a dumb question, but what causes md to use a large amount of
CPU resources when reading a large amount of data from a raid1 array?
I assume you meant raid5 there.
md/raid5 shouldn't use that much CPU when reading.
It does use
Rob Bray wrote:
This might be a dumb question, but what causes md to use a large amount of
CPU resources when reading a large amount of data from a raid1 array?
Examples are on a 2.4GHz AMD64, 2GB, 2.6.15.1 (I realize there are md
enhancements to later versions; I had some other unrelated
This might be a dumb question, but what causes md to use a large amount of
cpu resources when reading a large amount of data from a raid1 array?
Examples are on a 2.4GHz AMD64, 2GB, 2.6.15.1 (I realize there are md
enhancements to later versions; I had some other unrelated issues and
rolled back
Hello,
after reboot, md only binds to one mirror (/dev/hdb1).
raid1: raid set md0 active with 1 out of 2 mirrors
After adding /dev/hda1 manually ('mdadm --add /dev/md0 /dev/hda1'), the
RAID seems to work well:
isp:/var/log# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1
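(A hedged suggestion for this situation, using the poster's device names:
check whether both members still carry consistent superblocks, and whether
assembly is happening from a config file rather than autodetection, e.g.:)
  mdadm --examine /dev/hda1 /dev/hdb1
  # if assembling from mdadm.conf, regenerate the ARRAY line:
  mdadm --detail --scan >> /etc/mdadm.conf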