Re: Random Seek on Array as slow as on single disk

2006-07-17 Thread Mattias Wadenstein
On Sun, 16 Jul 2006, A. Liemen wrote: Hardware Raid. http://www.areca.com.tw/products/html/pcix-sata.htm If you want to see even worse performance with bonnie, try running several instances in parallel; somewhere around 6-15 simultaneous read sessions[*] should give you rather horrible

Tuning the I/O scheduler for md?

2006-07-17 Thread Christian Pernegger
Based on various googled comments I have selected 'deadline' as the elevator for the disks comprising my md arrays, with no further tuning yet ... not so stellar :( Basically concurrent reads (even just 2, even worse with 1 read + 1 write) don't work too well. Example: RAID1: I bulk-move some
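The elevator selection mentioned above is per-disk and visible in sysfs; a minimal sketch of reading and switching it (the sample sysfs line and the `sdX` name are illustrative):

```shell
# The kernel lists the available elevators in /sys/block/<dev>/queue/scheduler,
# with the active one in brackets, e.g. "noop anticipatory [deadline] cfq".
# Parse the active scheduler out of such a line (sample line, not a live read):
line="noop anticipatory [deadline] cfq"
active=$(printf '%s\n' "$line" | sed 's/.*\[\([^]]*\)\].*/\1/')
echo "$active"   # deadline

# To switch an md member disk at runtime (as root), write the name back:
#   echo deadline > /sys/block/sdX/queue/scheduler
```

The same parse works for any of the standard schedulers, since the active one is always the bracketed token.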

Re: trying to brute-force my RAID 5...

2006-07-17 Thread Molle Bestefich
Sevrin Robstad wrote: I got a friend of mine to make a list of all the 6^6 combinations of dev 1 2 3 4 5 missing, shouldn't this work ??? Only if you get the layout and chunk size right. And make sure that you know whether you were using partitions (eg. sda1) or whole drives (eg. sda - bad
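As a sanity check on the search-space size quoted above: trying every ordering of the five devices plus the "missing" slot is a permutation of six distinct items, i.e. 6! = 720 (6^6 would count arrangements with repeated devices). A quick shell sketch:

```shell
# 6! = number of distinct orderings of 6 items (5 real devices + "missing")
fact=1
for i in 1 2 3 4 5 6; do
    fact=$((fact * i))
done
echo "$fact"   # 720
```

Note this only covers device order; as the reply says, layout and chunk size must also match, which multiplies the combinations to try.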

Problem with --manage

2006-07-17 Thread Benjamin Schieder
Hi list. Just recently I set up a few arrays over three 250 GB hard disks, like this: Personalities : [linear] [raid0] [raid1] [raid5] [raid4] md5 : active raid5 hdb8[0] hda8[1] hdc8[2] 451426304 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU] md4 : active raid5 hdb7[0] hda7[1]

Re: Random Seek on Array as slow as on single disk

2006-07-17 Thread Bill Davidsen
A. Liemen wrote: Hardware Raid. http://www.areca.com.tw/products/html/pcix-sata.htm You should ask the vendor; this isn't a software RAID issue, and the usual path to improving bad hardware is upgrading. You may be able to get better firmware if you're lucky. Alex Jeff Breidenbach

Re: Hardware assisted parity computation - is it now worth it?

2006-07-17 Thread Bill Davidsen
Burn Alting wrote: Last year, there were discussions on this list about the possible use of a 'co-processor' (Intel's IOP333) to compute raid 5/6's parity data. We are about to see low cost, multi core cpu chips with very high speed memory bandwidth. In light of this, is there any effective

Re: Array will not assemble

2006-07-17 Thread Bill Davidsen
Richard Scobie wrote: Neil Brown wrote: Add DEVICE /dev/sd? or similar on a separate line. Remove devices=/dev/sdc,/dev/sdd Thanks. My mistake, I thought after having assembled the arrays initially, that the output of: mdadm --detail --scan > mdadm.conf could be used directly.
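The fix Neil describes above (add a DEVICE line, strip the `devices=` fields) is a small text transformation. A sketch on a captured sample line, since `--detail --scan` output for this array isn't shown in full (the real workflow would pipe the live scan output through the same `sed`):

```shell
# mdadm --detail --scan output cannot be used verbatim as mdadm.conf:
# a DEVICE line must be added, and the devices=... fields removed
# (hard-coded device names break assembly when names change).
scan='ARRAY /dev/md0 level=raid1 num-devices=2 devices=/dev/sdc,/dev/sdd'
clean=$(printf '%s\n' "$scan" | sed 's/ devices=[^ ]*//')
printf 'DEVICE /dev/sd?*\n%s\n' "$clean"
```

The `DEVICE /dev/sd?*` pattern is an example; it should match whatever disks actually hold array members on your system.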

Re: second controller: what will my discs be called, and does it matter?

2006-07-17 Thread Bill Davidsen
Dexter Filmore wrote: Currently I have 4 discs on a 4 channel sata controller which does its job quite well for 20 bucks. Now, if I wanted to grow the array I'd probably go for another one of these. How can I tell if the discs on the new controller will become sd[e-h] or if they'll be the

Re: Can't add disk to failed raid array

2006-07-17 Thread Paul Waldo
All has been quiet on this topic for a while--any more takers? Please help if you can! Thanks in advance. Here is the current state of affairs: [EMAIL PROTECTED] ~]# mdadm --add /dev/md1 /dev/hdd2 mdadm: add new device failed for /dev/hdd2 as 2: Invalid argument [EMAIL PROTECTED] ~]# mdadm

Re: issue with internal bitmaps

2006-07-17 Thread Bill Davidsen
Neil Brown wrote: On Thursday July 6, [EMAIL PROTECTED] wrote: hello, i just realized that internal bitmaps do not seem to work anymore. I cannot imagine why. Nothing you have listed show anything wrong with md... Maybe you were expecting mdadm -X /dev/md100 to do something

Re: [PATCH] enable auto=yes by default when using udev

2006-07-17 Thread Bill Davidsen
Michael Tokarev wrote: Neil Brown wrote: On Monday July 3, [EMAIL PROTECTED] wrote: Hello, the following patch aims at solving an issue that is confusing a lot of users. when using udev, device files are created only when devices are registered with the kernel, and md devices are

troubleshooting PNY S-cure

2006-07-17 Thread whc86 (sent by Nabble.com)
I am setting up a MythTV backend server on Fedora 5 using 5x Seagate 300 GB drives and PNY S-cure RAID 3. The bulk of the 1.2 TB is a JFS partition, while there is also a 100 MB /boot partition in ext2. The OS runs from / and swap partitions on a WD Raptor on a different controller. My

Re: Test feedback 2.6.17.4+libata-tj-stable (EH, hotplug)

2006-07-17 Thread Bill Davidsen
Christian Pernegger wrote: I finally got around to testing 2.6.17.4 with libata-tj-stable-20060710. Hardware: ICH7R in ahci mode + WD5000YS's. EH: much, much better. Before the patch it seemed like errors were only printed to dmesg but never handed up to any layer above. Now md actually fails

Still can't get md arrays that were started from an initrd to shutdown

2006-07-17 Thread Christian Pernegger
[This is a bit of a repost, because I'm slightly desperate :)] I'm still having problems with some md arrays not shutting down cleanly on halt / reboot. The problem seems to affect only arrays that are started via an initrd, even if they do not have the root filesystem on them. That's all

Re: trying to brute-force my RAID 5...

2006-07-17 Thread Sevrin Robstad
Molle Bestefich wrote: Sevrin Robstad wrote: I got a friend of mine to make a list of all the 6^6 combinations of dev 1 2 3 4 5 missing, shouldn't this work ??? Only if you get the layout and chunk size right. And make sure that you know whether you were using partitions (eg. sda1) or

Re: second controller: what will my discs be called, and does it matter?

2006-07-17 Thread Dexter Filmore
On Monday, 17 July 2006 20:28, Bill Davidsen wrote: Next question: assembling by UUID, does that matter at all? No. There's the beauty of it. That's what I needed to hear. (And while talking UUID - can I safely migrate to a udev-kernel? Someone on this list recently ran into trouble

Re: Still can't get md arrays that were started from an initrd to shutdown

2006-07-17 Thread Nix
On 17 Jul 2006, Christian Pernegger suggested tentatively: I'm still having problems with some md arrays not shutting down cleanly on halt / reboot. The problem seems to affect only arrays that are started via an initrd, even if they do not have the root filesystem on them. That's all

Raid and LVM and LILO

2006-07-17 Thread Du
Hi, I was/am trying to install Debian Sarge r2 with 2 SATA HDs in software RAID 1, and in this new MD device I put LVM. All works fine and Debian installs well, but when LILO tries to install, it tells me that I don't have an active partition, and no matter what I do, it

Re: Mounting array was read write for about 3 minutes, then Read-only file system error

2006-07-17 Thread Neil Brown
On Thursday July 6, [EMAIL PROTECTED] wrote: I created a raid1 array using /dev/disk/by-id with (2) 250GB USB 2.0 Drives. It was working for about 2 minutes until I tried to copy a directory tree from one drive to the array and then cancelled it midstream. After cancelling the copy, when I

Re: zeroing old superblocks upgrading...

2006-07-17 Thread Neil Brown
On Friday July 7, [EMAIL PROTECTED] wrote: Neil> But if you wanted to (and were running a fairly recent kernel) you Neil> could Neil> mdadm --grow --bitmap=internal /dev/md0 Did this. And now I can do mdadm -X /dev/hde1 to examine the bitmap, but I think this totally blows. To create a

Re: Kernel 2.6.17 and RAID5 Grow Problem (critical section backup)

2006-07-17 Thread Neil Brown
On Tuesday July 11, [EMAIL PROTECTED] wrote: Hm, what's superblock 0.91? It is not mentioned in mdadm.8. Not sure, the block version perhaps? Well yes of course, but what characteristics? The manual only lists 0, 0.90, default 1, 1.0, 1.1, 1.2 No 0.91 :( AFAICR superblock

Re: Raid5 Reshape Status + xfs_growfs = Success! (2.6.17.3)

2006-07-17 Thread Neil Brown
On Tuesday July 11, [EMAIL PROTECTED] wrote: Neil, It worked, echoing the 600 to the stripe width in /sys; however, how come /dev/md3 says it is 0 MB when I type fdisk -l? Is this normal? Yes. The 'cylinders' number is limited to 16 bits. For your 2.2TB array, the number of 'cylinders'
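Neil's 16-bit-cylinders point above is easy to check with shell arithmetic, assuming the classic fdisk geometry of 255 heads and 63 sectors per track (the 2.2 TB size is the one from the thread, taken as an illustrative round number):

```shell
# Classic CHS geometry: 255 heads * 63 sectors * 512 bytes per cylinder.
# fdisk's cylinder count is a 16-bit quantity, so anything past 65535 wraps
# and produces nonsense sizes for large arrays.
bytes_per_cyl=$((255 * 63 * 512))
array_bytes=$((2200 * 1024 * 1024 * 1024))   # ~2.2 TB, illustrative
cylinders=$((array_bytes / bytes_per_cyl))
echo "$cylinders"
test "$cylinders" -gt 65535 && echo "overflows the 16-bit cylinder field"
```

So the 0 MB reading is cosmetic: the array itself is fine, fdisk's geometry model simply cannot represent it.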

Re: Problem with 3xRAID1 to RAID 0

2006-07-17 Thread Neil Brown
On Tuesday July 11, [EMAIL PROTECTED] wrote: Hi, I created 3 arrays, /dev/md1 to /dev/md3, which consist of six identical 200GB hdds. My mdadm --detail --scan looks like: Proteus:/home/vladoportos# mdadm --detail --scan ARRAY /dev/md1 level=raid1 num-devices=2

Re: trying to brute-force my RAID 5...

2006-07-17 Thread Neil Brown
On Monday July 17, [EMAIL PROTECTED] wrote: I have written some posts about this before... My 6 disk RAID 5 broke down because of hardware failure. When I tried to get it up'n'running again I did a --create without any missing disk, which made it rebuild. I have also lost all information

Re: Test feedback 2.6.17.4+libata-tj-stable (EH, hotplug)

2006-07-17 Thread Neil Brown
On Tuesday July 11, [EMAIL PROTECTED] wrote: Christian Pernegger wrote: The fact that the disk had changed minor numbers after it was plugged back in bugs me a bit. (was sdc before, sde after). Additionally udev removed the sdc device file, so I had to manually recreate it to be able to

Re: only 4 spares and no access to my data

2006-07-17 Thread Neil Brown
On Monday July 10, [EMAIL PROTECTED] wrote: Karl Voit wrote: [snip] Well this is because of the false(?) superblocks of sda-sdd in comparison to sda1 to sdd1. I don't understand this. Do you have more than a single partition on sda? Is sda1 occupying the entire disk? since the superblock
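The whole-disk vs. partition confusion above comes down to where the v0.90 superblock lives: 64 KiB before the end of the device, rounded down to a 64 KiB boundary. If sda1 spans almost the entire disk, the offsets computed for sda and sda1 can land on the same sectors, so both appear to carry a superblock. A sketch of the offset arithmetic (the device size is a made-up example):

```shell
# v0.90 superblock offset: round the device size down to a 64 KiB
# multiple, then step back one more 64 KiB block. Sizes in KiB.
dev_kib=245117380                    # e.g. a ~250 GB disk (hypothetical)
sb_offset_kib=$(( (dev_kib & ~63) - 64 ))
echo "$sb_offset_kib"
```

Running the same formula for the partition's size shows whether its superblock slot coincides with the whole disk's, which is the usual cause of "false" superblocks like these.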

Re: Booting issue - rebooting also

2006-07-17 Thread Neil Brown
On Friday July 14, [EMAIL PROTECTED] wrote: Hi, I have a small problem when booting: I have md1 as /boot, md2 as swap, and md3 as / (root). When it comes to md3 it says something like "md3 has no identity information". I can't read it, it goes by too fast... It doesn't actually affect the system as far as I can

Re: PLEASE HELP ... raid5 array degraded ...

2006-07-17 Thread Neil Brown
On Tuesday July 11, [EMAIL PROTECTED] wrote: Checksum : 4aa9094a - expected 4aa908c4 This is a bit scary. You have a single-bit error, either in the checksum or elsewhere in the superblock. I would recommend at least a memtest86 run. NeilBrown

Re: Where is MD Device Layer

2006-07-17 Thread Neil Brown
On Thursday July 13, [EMAIL PROTECTED] wrote: Hi all, I'm new to MD RAID. When I read the book Understanding the Linux Kernel, I know that there are several layers between Filesystems (e.g. ext2) and block device files (e.g. /dev/sda1). These layers are: Filesystem ==> Generic Block Layer ==>

Re: Still can't get md arrays that were started from an initrd to shutdown

2006-07-17 Thread dean gaudet
On Mon, 17 Jul 2006, Christian Pernegger wrote: The problem seems to affect only arrays that are started via an initrd, even if they do not have the root filesystem on them. That's all arrays if they're either managed by EVMS or the ramdisk-creator is initramfs-tools. For yaird-generated

Re: [PATCH] enable auto=yes by default when using udev

2006-07-17 Thread Neil Brown
On Tuesday July 4, [EMAIL PROTECTED] wrote: Michael Tokarev [EMAIL PROTECTED] wrote: Why to test for udev at all? If the device does not exist, regardless if udev is running or not, it might be a good idea to try to create it. Because IT IS NEEDED, period. Whenever the operation fails or

Re: Can't add disk to failed raid array

2006-07-17 Thread Neil Brown
On Sunday July 16, [EMAIL PROTECTED] wrote: Thanks for the reply, Neil. Here is my version: [EMAIL PROTECTED] log]# mdadm --version mdadm - v2.3.1 - 6 February 2006 Positively ancient :-) Nothing obvious in the change log since then. Can you show me the output of mdadm -E /dev/hdd2

Re: Problem with --manage

2006-07-17 Thread Neil Brown
On Monday July 17, [EMAIL PROTECTED] wrote: /dev/md/0 on /boot type ext2 (rw,nogrpid) /dev/md/1 on / type reiserfs (rw) /dev/md/2 on /var type reiserfs (rw) /dev/md/3 on /opt type reiserfs (rw) /dev/md/4 on /usr type reiserfs (rw) /dev/md/5 on /data type reiserfs (rw) I'm running the

Re: Where is MD Device Layer

2006-07-17 Thread linuxmania . lizhi
Thanks a lot :-)