Re: Question regarding mdadm.conf

2005-02-17 Thread Torsten E.
Hi Michael, Hi Lajber, Thanks for your hints! Michael Tokarev scribbled on 17.02.2005 08:14: Lajber Zoltan wrote: Hi! On Thu, 17 Feb 2005, Torsten E. wrote: How do I get the UUID information to add to the new /etc/mdadm.conf? Try this one: mdadm --detail /dev/md1 | grep UUID I'd say
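
A minimal sketch of the approach suggested above (the device name /dev/md1 is from the thread; the UUID shown is a made-up example value):

    # print just the UUID line for one array
    mdadm --detail /dev/md1 | grep UUID
    #   UUID : 8c1b9e07:5a2f113d:c3d0e2aa:91b44f28   (example value only)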

Bad blocks

2005-02-17 Thread Guy
About 1 month ago the topic was bad blocks. I have been monitoring the bad blocks on my disks and I find I have had 3 new bad blocks since Jan 18. Each on a different disk. I have 17 disks, SEAGATE ST118202LC. These bad blocks did not cause any problems with md. I believe they were readable or
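
A hedged way to keep an eye on this, assuming smartmontools is installed (these ST118202LC disks are SCSI, so the grown defect list is the counter to watch; the device name is illustrative):

    # report the grown defect list on one SCSI member disk
    smartctl -a /dev/sdb | grep -i 'grown defect'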

RE: Question regarding mdadm.conf

2005-02-17 Thread Guy
In the past, Neil has recommended using a device line like this: DEVICE partitions From man mdadm.conf: Alternatively, a device line can contain the word partitions. This will cause mdadm to read /proc/partitions and include all devices and partitions found there-in. mdadm does not
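
A minimal /etc/mdadm.conf sketch in that style (the ARRAY line and UUID are placeholders, not taken from the thread):

    # scan everything listed in /proc/partitions
    DEVICE partitions
    # identify the array by UUID instead of by member device names
    ARRAY /dev/md1 UUID=00000000:00000000:00000000:00000000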

Re: Question regarding mdadm.conf

2005-02-17 Thread GrantC
On Thu, 17 Feb 2005 03:44:40 -0500, you wrote: In the past, Neil has recommended using a device line like this: DEVICE partitions . . . mdadm --detail --brief /dev/md1 | grep -v devices=

peetoo:~$ mdadm --detail --brief /dev/md1
ARRAY /dev/md1 level=raid0 num-devices=2
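
Putting the two hints together, one way to regenerate the ARRAY lines is sketched below (paths are illustrative; grep -v devices= drops the volatile device list so the config stays valid when disks move):

    mdadm --detail --brief /dev/md1 | grep -v devices= >> /etc/mdadm.conf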

Re: Huge mdadm resync problem.

2005-02-17 Thread Phantazm
I forgot to mention that speed is generally no problem on the raid set. It's connected to a gigabit interface, and copying to and from that interface gives about 35 MB/s over the link. So the disks are fast and I/O is working pretty well. Now I just sit here with my last hope on you fellas ;-) Regards
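
A quick sanity check of raw per-disk read speed, assuming hdparm is available (the device name is illustrative):

    # timed buffered sequential reads on one member disk
    hdparm -t /dev/hda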

Re: Huge mdadm resync problem.

2005-02-17 Thread Lord Hess,Raum 301Kw,54-8994
Hi, check the sync rate while connecting the disks only as IDE masters. That means you can try a RAID with 5 disks, as far as I can see from your configuration. I take it every disk is correctly jumpered as master or slave? Or do you use the cable-select option? Lord Phantazm [EMAIL PROTECTED]
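
A sketch of how to confirm the master/slave layout from software, assuming the in-kernel IDE driver (no extra tools needed):

    # hda/hdc/hde/hdg are channel masters; hdb/hdd/hdf/hdh are slaves
    ls /proc/ide/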

Re: Huge mdadm resync problem.

2005-02-17 Thread Phantazm
I use master/slave. The problem is that I can't break the RAID set, because if I do I will lose over 1TB of data :/ Going to see if I can get more controller cards though. Lord Hess,Raum 301Kw,54-8994 [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] Hi, check the sync rate while

Re: Huge mdadm resync problem.

2005-02-17 Thread Gordon Henderson
On Thu, 17 Feb 2005, Phantazm wrote: I use master/slave. The problem is that I can't break the RAID set, because if I do I will lose over 1TB of data :/ Going to see if I can get more controller cards though. Do it. Use 4 2-port cards for your 8 drives and only one drive per cable. It is possible, and

Re: Huge mdadm resync problem.

2005-02-17 Thread Phantazm
It might be worth a try. I have 6 PCI slots. As far as I know today, my mobo shares an IRQ between PCI slots 5 and 2; looks like I can't get around that. I have disabled APIC on the mobo too. Going to see if I can get 2 more ATA133 cards then, to see if I get more lucky :) Thanks for your answers and suggestions.
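
If the bottleneck turns out to be the kernel's resync throttle rather than the shared cables, these knobs are worth checking (the value written is only an example):

    # watch resync progress and current speed
    cat /proc/mdstat
    # per-array resync speed limits, in KB/sec
    cat /proc/sys/dev/raid/speed_limit_min
    echo 100000 > /proc/sys/dev/raid/speed_limit_max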

[PATCH md 0 of 9] Introduction

2005-02-17 Thread NeilBrown
9 patches for md in 2.6.11-rc3-mm2 follow. 1 - tightens up some locking to close a race that is extremely unlikely to happen (famous last words). It can only happen when failed devices are being removed. 2 - Fixes a resync-related problem for raid5 and raid6 arrays that have lost

[PATCH md 1 of 9] Remove possible oops in md/raid1

2005-02-17 Thread NeilBrown
When we get a pointer and check it is non-null, we should not get the pointer again but should instead use the pointer we got in the first place... Signed-off-by: Neil Brown [EMAIL PROTECTED]

### Diffstat output
./drivers/md/raid1.c | 40
1 files

[PATCH md 6 of 9] Improve the interface to sync_request

2005-02-17 Thread NeilBrown
1/ change the return value (which is the number of sectors synced) from 'int' to 'sector_t'. The number of sectors is usually small enough to fit in an int, but if resync needs to abort, it may want to return the total number of remaining sectors, which could be large. Also errors cannot

[PATCH md 5 of 9] Improve locking on 'safemode' and move superblock writes

2005-02-17 Thread NeilBrown
When md marks the superblock dirty before a write, it calls generic_make_request (to write the superblock) from within generic_make_request (to write the first dirty block), which could cause problems later. With this patch, the superblock write is always done by the helper thread, and write

[PATCH md 3 of 9] Remove kludgy level check from md.c

2005-02-17 Thread NeilBrown
This test is overly specific, and misses raid10. Assume all levels >= 1 might need reconstruction instead. Signed-off-by: Neil Brown [EMAIL PROTECTED]

### Diffstat output
./drivers/md/md.c | 5 ++---
1 files changed, 2 insertions(+), 3 deletions(-)

diff ./drivers/md/md.c~current~

[PATCH md 2 of 9] Make raid5 and raid6 robust against failure during recovery.

2005-02-17 Thread NeilBrown
Two problems are fixed here. 1/ if the array is known to require a resync (parity update), but there are too many failed devices, the resync cannot complete but will be retried indefinitely. 2/ if the array has too many failed drives to be usable and a spare is available, reconstruction

[PATCH md 4 of 9] Merge md_enter_safemode into md_check_recovery

2005-02-17 Thread NeilBrown
md_enter_safemode checks if it is time to mark the md superblock as 'clean', i.e. if all writes have completed and a suitable delay has passed. This is currently called from md_handle_safemode, which in turn is called (almost) every time md_check_recovery is called, and from the end of md_do_sync

Re: [PATCH md 9 of 9] Optimise reconstruction when re-adding a recently failed drive.

2005-02-17 Thread Mike Hardy
NeilBrown wrote: When an array is degraded, bits in the intent-bitmap are never cleared. So if a recently failed drive is re-added, we only need to reconstruct the blocks that are still reflected in the bitmap. This patch adds support for this re-adding. Hi there - If I understand this correctly,

Re: [PATCH md 9 of 9] Optimise reconstruction when re-adding a recently failed drive.

2005-02-17 Thread Neil Brown
On Thursday February 17, [EMAIL PROTECTED] wrote: NeilBrown wrote: When an array is degraded, bits in the intent-bitmap are never cleared. So if a recently failed drive is re-added, we only need to reconstruct the blocks that are still reflected in the bitmap. This patch adds support
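
A hedged usage sketch of the feature under discussion (array and member names are placeholders; it assumes a kernel and mdadm built with write-intent bitmap support):

    # re-add a recently failed member; with a bitmap present, only the
    # blocks dirtied while it was missing need to be resynced
    mdadm /dev/md0 --re-add /dev/sdc1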