On Sun, 16 Jul 2006, A. Liemen wrote:
Hardware Raid.
http://www.areca.com.tw/products/html/pcix-sata.htm
If you want to see even worse performance with bonnie, try running several
in parallel, in sync; somewhere around 6-15 simultaneous read sessions[*]
should give you rather horrible
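For anyone wanting to reproduce that kind of load, a rough, untested sketch
(the mount point, run count and file size are placeholders) is simply to start
several bonnie++ runs against the array at once:

for i in 1 2 3 4 5 6; do
    mkdir -p /mnt/array/bonnie.$i
    bonnie++ -d /mnt/array/bonnie.$i -s 2048 -u nobody &
done
wait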
Based on various googled comments I have selected 'deadline' as the
elevator for the disks comprising my md arrays, with no further tuning
yet ... not so stellar :(
Basically concurrent reads (even just 2, even worse with 1 read + 1
write) don't work too well.
Example:
RAID1: I bulk-move some
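For reference, the elevator can be checked and switched per member disk at
runtime through sysfs; a minimal sketch, with sda standing in for one of the
array's disks:

cat /sys/block/sda/queue/scheduler          # current choice is shown in [brackets]
echo deadline > /sys/block/sda/queue/scheduler

Booting with elevator=deadline on the kernel command line makes it the
default for all disks instead.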
Sevrin Robstad wrote:
I got a friend of mine to make a list of all the 6^6 combinations of dev
1 2 3 4 5 missing,
shouldn't this work ???
Only if you get the layout and chunk size right.
And make sure that you know whether you were using partitions (eg.
sda1) or whole drives (eg. sda - bad
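To make that concrete: every re-create attempt has to repeat the original
parameters exactly, along the lines of the following sketch (the device names,
chunk size and layout here are placeholders, not Sevrin's real values):

mdadm --create /dev/md0 --assume-clean --level=5 --raid-devices=6 \
      --chunk=64 --layout=left-symmetric \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 missing

--assume-clean prevents a resync, and keeping one slot as "missing" leaves the
array degraded so nothing is rewritten while you test-mount the filesystem
read-only to see whether that ordering was the right one.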
Hi list.
Just recently I set up a few arrays across three 250 GB hard disks, like this:
Personalities : [linear] [raid0] [raid1] [raid5] [raid4]
md5 : active raid5 hdb8[0] hda8[1] hdc8[2]
451426304 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md4 : active raid5 hdb7[0] hda7[1]
A. Liemen wrote:
Hardware Raid.
http://www.areca.com.tw/products/html/pcix-sata.htm
You should ask the vendor; this isn't a software RAID issue, and the
usual path to improving bad hardware is upgrading it. You may be able to
get better firmware if you're lucky.
Alex
Jeff Breidenbach
Burn Alting wrote:
Last year, there were discussions on this list about the possible
use of a 'co-processor' (Intel's IOP333) to compute raid 5/6's
parity data.
We are about to see low-cost, multi-core CPU chips with very high
memory bandwidth. In light of this, is there any
effective
Richard Scobie wrote:
Neil Brown wrote:
Add
DEVICE /dev/sd?
or similar on a separate line.
Remove
devices=/dev/sdc,/dev/sdd
Thanks.
My mistake, I thought after having assembled the arrays initially,
that the output of:
mdadm --detail --scan > mdadm.conf
could be used directly.
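For what it's worth, a hypothetical /etc/mdadm.conf built that way might look
like this (the array names, levels and UUID placeholders are illustrative only):

DEVICE /dev/sd?
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=<uuid-of-md0>
ARRAY /dev/md1 level=raid5 num-devices=3 UUID=<uuid-of-md1>

i.e. the ARRAY lines from --detail --scan, minus the devices= tags, plus a
DEVICE line telling mdadm where to look.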
Dexter Filmore wrote:
Currently I have 4 discs on a 4-channel SATA controller which does its job
quite well for 20 bucks.
Now, if I wanted to grow the array I'd probably go for another one of these.
How can I tell if the discs on the new controller will become sd[e-h] or if
they'll be the
All has been quiet on this topic for a while--any more takers? Please
help if you can! Thanks in advance. Here is the current state of affairs:
[EMAIL PROTECTED] ~]# mdadm --add /dev/md1 /dev/hdd2
mdadm: add new device failed for /dev/hdd2 as 2: Invalid argument
[EMAIL PROTECTED] ~]# mdadm
Neil Brown wrote:
On Thursday July 6, [EMAIL PROTECTED] wrote:
Hello, I just realized that internal bitmaps do not seem to work
anymore.
I cannot imagine why. Nothing you have listed show anything wrong
with md...
Maybe you were expecting
mdadm -X /dev/md100
to do something
Michael Tokarev wrote:
Neil Brown wrote:
On Monday July 3, [EMAIL PROTECTED] wrote:
Hello,
the following patch aims at solving an issue that is confusing a lot of
users.
When using udev, device files are created only when devices are
registered with the kernel, and md devices are
I am setting up a MythTV backend server on Fedora 5 using 5x Seagate 300GB
drives and a PNY S-cure Raid 3. The bulk of the 1.2 TB is a JFS partition,
while there is also a 100 MB /boot partition in ext2. The OS runs from the /
and swap partitions on a WD Raptor on a different controller.
My
Christian Pernegger wrote:
I finally got around to testing 2.6.17.4 with libata-tj-stable-20060710.
Hardware: ICH7R in ahci mode + WD5000YS's.
EH: much, much better. Before the patch it seemed like errors were
only printed to dmesg but never handed up to any layer above. Now md
actually fails
[This is a bit of a repost, because I'm slightly desperate :)]
I'm still having problems with some md arrays not shutting down
cleanly on halt / reboot.
The problem seems to affect only arrays that are started via an
initrd, even if they do not have the root filesystem on them.
That's all
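A rough sketch of the manual equivalent, assuming the filesystems on those
arrays are already unmounted by that point in the shutdown sequence:

mdadm --stop --scan        # same as mdadm -Ss; stops every idle array in /proc/mdstat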
Molle Bestefich wrote:
Sevrin Robstad wrote:
I got a friend of mine to make a list of all the 6^6 combinations of
dev
1 2 3 4 5 missing,
shouldn't this work ???
Only if you get the layout and chunk size right.
And make sure that you know whether you were using partitions (eg.
sda1) or
Am Montag, 17. Juli 2006 20:28 schrieb Bill Davidsen:
Next question: assembling by UUID, does that matter at all?
No. There's the beauty of it.
That's what I needed to hear.
(And while talking UUID - can I safely migrate to a udev-kernel? Someone
on this list recently ran into trouble
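As an illustration of what UUID-based assembly looks like in practice (the
UUID value below is a placeholder; the real one comes from mdadm -E on any
member device):

mdadm -E /dev/sda1 | grep UUID
mdadm --assemble /dev/md0 --uuid=<uuid-printed-above>

The same UUID can go on an ARRAY line in mdadm.conf, so that
mdadm --assemble --scan finds the array no matter which sd* names the disks
come up as.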
On 17 Jul 2006, Christian Pernegger suggested tentatively:
I'm still having problems with some md arrays not shutting down
cleanly on halt / reboot.
The problem seems to affect only arrays that are started via an
initrd, even if they do not have the root filesystem on them.
That's all
Hi, I was/am trying to install Debian Sarge r2 with 2 SATA HDs working
in RAID 1 via software, and on this new MD device I put LVM. All works
fine and Debian installs well, but when LILO tries to install, it tells
me that I don't have an active partition, and no matter what I do, it
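If the complaint really is about the missing bootable flag (a guess at the
cause; sda1/sdb1 as the RAID-1 members holding /boot is an assumption), the
usual fix is to mark the partition active on both disks and re-run lilo:

fdisk /dev/sda      # then: a, 1, w  -- toggle the bootable flag on partition 1 and write
fdisk /dev/sdb      # repeat on the mirror so either disk remains bootable
lilo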
On Thursday July 6, [EMAIL PROTECTED] wrote:
I created a raid1 array using /dev/disk/by-id with (2) 250GB USB 2.0
Drives. It was working for about 2 minutes until I tried to copy a
directory tree from one drive to the array and then cancelled it
midstream. After cancelling the copy, when I
On Friday July 7, [EMAIL PROTECTED] wrote:
Neil But if you wanted to (and were running a fairly recent kernel) you
Neil could
Neil mdadm --grow --bitmap=internal /dev/md0
Did this. And now I can do mdadm -X /dev/hde1 to examine the bitmap,
but I think this totally blows. To create a
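In other words, -X reads the write-intent bitmap out of a member's superblock
rather than out of the assembled array; the whole round trip, compressed
(device names as in the thread):

mdadm --grow --bitmap=internal /dev/md0   # add an internal bitmap to a running array
mdadm -X /dev/hde1                        # examine it via a component, not /dev/md0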
On Tuesday July 11, [EMAIL PROTECTED] wrote:
Hm, what's superblock 0.91? It is not mentioned in mdadm.8.
Not sure, the block version perhaps?
Well yes of course, but what characteristics? The manual only lists
0, 0.90, default
1, 1.0, 1.1, 1.2
No 0.91 :(
AFAICR superblock
On Tuesday July 11, [EMAIL PROTECTED] wrote:
Neil,
It worked, echo'ing the 600 to the stripe width in /sys, however, how
come /dev/md3 says it is 0 MB when I type fdisk -l?
Is this normal?
Yes. The 'cylinders' number is limited to 16 bits. For your 2.2TB
array, the number of 'cylinders'
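If the tunable being echoed is the RAID5/6 stripe cache (an assumption on my
part), the sysfs path would be something like:

echo 600 > /sys/block/md3/md/stripe_cache_size
cat /sys/block/md3/md/stripe_cache_size

And for the size question, blockdev --getsize64 /dev/md3 reports the real byte
count even where fdisk's cylinder field overflows.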
On Tuesday July 11, [EMAIL PROTECTED] wrote:
Hi,
I created 3 arrays, /dev/md1 to /dev/md3, which consist of six identical
200GB HDDs
my mdadm --detail --scan looks like
Proteus:/home/vladoportos# mdadm --detail --scan
ARRAY /dev/md1 level=raid1 num-devices=2
On Monday July 17, [EMAIL PROTECTED] wrote:
I have written some posts about this before... My 6 disk RAID 5 broke
down because of hardware failure. When I tried to get it up'n'running again
I did a --create without any missing disk, which made it rebuild. I have
also lost all information
On Tuesday July 11, [EMAIL PROTECTED] wrote:
Christian Pernegger wrote:
The fact that the disk had changed minor numbers after it was plugged
back in bugs me a bit. (was sdc before, sde after). Additionally udev
removed the sdc device file, so I had to manually recreate it to be
able to
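For the record, recreating the missing node by hand is a one-liner; 8,32 are
the conventional major/minor for the third SCSI/SATA disk (check a surviving
node with ls -l /dev/sda if in doubt):

mknod /dev/sdc b 8 32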
On Monday July 10, [EMAIL PROTECTED] wrote:
Karl Voit wrote:
[snip]
Well this is because of the false(?) superblocks of sda-sdd in comparison to
sda1 to sdd1.
I don't understand this. Do you have more than a single partition on sda?
Is sda1 occupying the entire disk? since the superblock
On Friday July 14, [EMAIL PROTECTED] wrote:
Hi, I have a small problem: when I boot, I have md1 as /boot, md2 as swap
and md3 as / (root), and when it comes to md3 it says something like "md3
has no identity information". I can't read it, it goes by too fast... it
actually doesn't affect the system as far as I can
On Tuesday July 11, [EMAIL PROTECTED] wrote:
Checksum : 4aa9094a - expected 4aa908c4
This is a bit scary. You have a single-bit error, either in the
checksum or elsewhere in the superblock.
I would recommend at least a memtest86 run.
NeilBrown
On Thursday July 13, [EMAIL PROTECTED] wrote:
Hi all,
I'm new to MD RAID. When I read the book Understanding the Linux Kernel, I
know that there are several layers between Filesystems (e.g. ext2) and block
device files (e.g. /dev/sda1). These layers are:
Filesystem ==> Generic Block Layer ==>
On Mon, 17 Jul 2006, Christian Pernegger wrote:
The problem seems to affect only arrays that are started via an
initrd, even if they do not have the root filesystem on them.
That's all arrays if they're either managed by EVMS or the
ramdisk-creator is initramfs-tools. For yaird-generated
On Tuesday July 4, [EMAIL PROTECTED] wrote:
Michael Tokarev [EMAIL PROTECTED] wrote:
Why test for udev at all? If the device does not exist, regardless
if udev is running or not, it might be a good idea to try to create it.
Because IT IS NEEDED, period. Whenever the operation fails or
On Sunday July 16, [EMAIL PROTECTED] wrote:
Thanks for the reply, Neil. Here is my version:
[EMAIL PROTECTED] log]# mdadm --version
mdadm - v2.3.1 - 6 February 2006
Positively ancient :-) Nothing obvious in the change log since then.
Can you show me the output of
mdadm -E /dev/hdd2
On Monday July 17, [EMAIL PROTECTED] wrote:
/dev/md/0 on /boot type ext2 (rw,nogrpid)
/dev/md/1 on / type reiserfs (rw)
/dev/md/2 on /var type reiserfs (rw)
/dev/md/3 on /opt type reiserfs (rw)
/dev/md/4 on /usr type reiserfs (rw)
/dev/md/5 on /data type reiserfs (rw)
I'm running the
Thanks a lot :-)