Re: Linux 2.4.0-test8 and swap/journaling fs on raid

2000-09-27 Thread Neil Brown
On Wednesday September 27, [EMAIL PROTECTED] wrote: I was just wondering if the issues with swap on a raid device and with using a journaling fs on a raid device had been fixed in the latest 2.4.0-test kernels? Yes. md in 2.4 doesn't do interesting things with the buffer cache, so swap and

PATCH: raid1 - assorted bug fixes

2000-11-06 Thread Neil Brown
Linus, The following patch addresses a small number of bugs in raid1.c in 2.4.0-test10. 1/ A number of routines that are called from interrupt context used spin_lock_irq / spin_unlock_irq instead of the more appropriate spin_lock_irqsave( ,flags) / spin_unlock_irqrestore(

Re: compatibility between patched 2.2 and 2.4?

2000-11-07 Thread Neil Brown
On Tuesday November 7, [EMAIL PROTECTED] wrote: I have a question regarding the differences between the 2.2+RAID-patch kernels and the 2.4-test kernels - I was wondering if there are any differences between them. For example, if I build systems with a 2.2.17+RAID and later install 2.4

Any experience with LSI SYM 53c1010 scsi controller??

2000-11-27 Thread Neil Brown
Hi, I am considering using an ASUS CUR-DLS motherboard in a new NFS/RAID server, and wonder if anyone has any experience to report either with it, or with the Ultra-160 dual-bus SCSI controller that it has - the LSI SYM 53c1010. From what I can find in the kernel source, and from lsi

Re: [BUG] reconstruction doesn't start

2000-11-27 Thread Neil Brown
On Monday November 27, [EMAIL PROTECTED] wrote: When md2 is finished then md1 is resynced. Shouldn't they do resync at the same time? I never saw "md: serializing resync,..." which is what I suspected to get because md0 and md1 share the same physical disks. My findings: The md driver in

Re: raid1 resync problem ? (fwd)

2000-11-28 Thread Neil Brown
On Tuesday November 28, [EMAIL PROTECTED] wrote: Hi, I'm forwarding the message to you guys because I got no answer from Ingo Thanks I would suggest always CCing to [EMAIL PROTECTED] I have taken the liberty of Ccing this reply there. -- Forwarded message -- Date:

Re: raid1 resync problem ? (fwd)

2000-11-28 Thread Neil Brown
On Tuesday November 28, [EMAIL PROTECTED] wrote: snip If I understood correctly, bh->b_rsector is used to know if the sector number of the request being processed is not inside the resync range. In case it is, it sleeps waiting for the resync daemon. Otherwise, it can send

Re: we are finding that parity writes are half of all writes when writing 50mb files

2000-11-28 Thread Neil Brown
On Tuesday November 28, [EMAIL PROTECTED] wrote: On Tue, Nov 28, 2000 at 10:50:06AM +1100, Neil Brown wrote: However, there is only one "unplug-all-devices"(*) call in the API that a reader or writer can make. It is not possible to unplug a particular device, or be

PATCH - md/Makefile - link order

2000-11-28 Thread Neil Brown
Linus, A couple of versions of this patch went into Alan's tree, but weren't quite right. This one is minimal, but works. The problem is that with the tidy-up of xor.o, it now auto-initialises itself instead of being called by raid.o, and so needs to be linked *before* md.o, as the

PATCH - raid5.c - bad calculation

2000-11-28 Thread Neil Brown
Linus, I sent this patch to Alan a little while ago, but after ac4, so I don't know if it went into his tree. There is a bit of code at the front of raid5_sync_request which calculates which block is the parity block for a given stripe. However, to convert from a block number (1K units)

PATCH - md_boot - ifdef fix

2000-11-28 Thread Neil Brown
Linus, There are currently two ways to get md/raid devices configured at boot time. AUTODETECT_RAID finds bits of raid arrays from partition types and automagically connects them together; MD_BOOT allows bits of raid arrays to be explicitly described on the boot line. Currently,

PATCH - md - MAX_REAL yields to MD_SB_DISKS

2000-11-28 Thread Neil Brown
Linus, md currently has two #defines which give a limit to the number of devices that can be in a given raid array: MAX_REAL (==12) dates back to the time before we had persistent superblocks, and mostly affects raid0; MD_SB_DISKS (==27) is a characteristic of the newer persistent

PATCH - md.c - confusing message corrected

2000-11-28 Thread Neil Brown
Linus, This is a resend of a patch that probably got lost a week or so ago. (It is also more grammatically correct). If md.c has two raid arrays that need to be resynced, and they share a physical device, then the two resyncs are serialised. However the message printed says something

PATCH - md - initialisation cleanup

2000-11-29 Thread Neil Brown
Linus, here is a patch for test12 which cleans up the initialisation of raid personalities. I didn't include it in the previous raid init cleanup because I hadn't figured out the inner mysteries of link order completely. The linux-kbuild list helped there. This patch arranges that each

Re: autodetect question

2000-12-01 Thread Neil Brown
On Friday December 1, [EMAIL PROTECTED] wrote: If I have all of MD as a module and autodetect raid enabled, do the MD drives that the machine has get detected and setup 1) at boot 2) at module load or 3) it doesn't 3. It doesn't. Rationale: by the time you are loading a module, you have

Re: FW: how to upgrade raid disks from redhat 4.1 to redhat 6.2

2000-12-01 Thread Neil Brown
On Friday December 1, [EMAIL PROTECTED] wrote: -Original Message- From: Carly Yang [mailto:[EMAIL PROTECTED]] Sent: Friday, December 01, 2000 2:42 PM To: [EMAIL PROTECTED] Subject: how to upgrade raid disks from redhat 4.1 to redhat 6.2 Dear Gregory I have a system

Re: Ex2FS unable to read superblock

2000-12-03 Thread Neil Brown
On Sunday December 3, [EMAIL PROTECTED] wrote: I'm new to the raid under linux world, and had a question. Sorry if several posts have been made by me previously, I had some trouble subscribing to the list... I successfully installed redhat 6.2 with raid 0 for two drives on a sun ultra 1.

Re: [OOPS] raidsetfaulty - raidhotremove - raidhotadd

2000-12-06 Thread Neil Brown
On Wednesday December 6, [EMAIL PROTECTED] wrote: Neil Brown wrote: Could you try this patch and see how it goes? Same result! Ok... must be something else... I tried again to reproduce it, and this time I succeeded. The problem happens when you try to access the last 128k of a raid1

PATCH - raid1 next drive selection.

2000-12-10 Thread Neil Brown
Linus (et al) The raid1 code has a concept of finding a "next available drive". It uses this for read balancing. Currently, this is implemented via a simple linked list that links the working drives together. However, there is no locking to make sure that the list does not get modified

PATCH - md device reference counting

2000-12-10 Thread Neil Brown
Linus (et al), An md device needs to know if it is in use so that it doesn't allow raidstop while still mounted. Previously it did this by looking for a superblock on the device. This is a bit inelegant and doesn't generalise. With this patch, it tracks opens and closes (get and

linus

2000-12-10 Thread Neil Brown
Linus (et al), The raid code wants to be the sole accessor of any devices that are combined into the array, i.e. it wants to lock those devices against other use. It currently tries to do this by creating an inode that appears to be associated with that device. This no longer has any

PATCH - raid5 in 2.4.0-test13 - substantial rewrite with substantial performance increase

2000-12-20 Thread Neil Brown
Linus here is a rather large patch for raid5 in 2.4.0-test13-pre3. It is a substantial re-write of the stripe-cache handling code, which is the heart of the raid5 module. I have been sitting on this for a while so that others can test it (a few have) and so that I would have had quite

Re: mkraid problems

2001-01-04 Thread Neil Brown
--- Neil Brown [EMAIL PROTECTED] wrote: On Wednesday January 3, [EMAIL PROTECTED] wrote: mkraid aborts with no useful error message on screen or in the syslog. My /etc/raidtab is set up correctly and I am using raidtools2 with kernel 2.2.18 with raid

Re: Status of raid.

2001-02-09 Thread Neil Brown
On Friday February 9, [EMAIL PROTECTED] wrote: Greetings, I'm getting ready to put kernel 2.4.1 on my server at home. I have some questions about the status of RAID in 2.4.1. Sorry to be dense but I couldn't glean the answers to these questions from my search of the mailing list. 1.

Re: Newbie questions

2001-02-21 Thread Neil Brown
On Wednesday February 21, [EMAIL PROTECTED] wrote: Hello, This is my first time playing with software raid so sorry if I sound dumb. What I have is a remote device that only has one hard drive. There is no ability for a second. Can I use the raidtools package to setup a raid-1 mirror on

Re: Urgent Problem: moving a raid

2001-02-25 Thread Neil Brown
On Sunday February 25, [EMAIL PROTECTED] wrote: Neil Brown wrote: OK, this time I really want to know how this should be handled. Well. it "should" be handled by re-writing various bits of raid code to make it all work more easily, but without doing that it "coul

Re: MD reverting to old Raid type

2001-02-25 Thread Neil Brown
On Sunday February 25, [EMAIL PROTECTED] wrote: Linux 2.4.1/RAIDtools2 0.90 I have 4 ide disks which have identical partition layouts. RAID is working successfully, its even booting RAID1. I created a RAID5 set on a set of 4 partitions, which works OK. I then destroyed that set and

Re: Going from 'old' (kernel v2.2.x) to 'new' (kernel v2.4.x) raidsystem

2001-02-26 Thread Neil Brown
On February 26, [EMAIL PROTECTED] wrote: I'm currently running a standard v2.2.17 kernel w/ the 'accompanying' raid system (linear). Having the following /etc/mdtab file: /dev/md0 linear,4k,0,75f3bcd8/dev/sdb1 /dev/sdc1 /dev/sdd10 /dev/sde1 /dev/sdf1 /dev/sdg1 And converted

Re: [lvm-devel] Re: partitions for RAID volumes?

2001-02-27 Thread Neil Brown
On Monday February 26, [EMAIL PROTECTED] wrote: Actually, the LVM metadata is somewhat poorly laid out in this respect. The metadata is at the start of the device, and on occasion is not even sector aligned, AFAICS. Also, the PE structures, while large powers of 2 in size, are not

Re: RaidHotAdd and reconstruction

2001-03-04 Thread Neil Brown
On Sunday March 4, [EMAIL PROTECTED] wrote: Hi folks, I have a two-disk RAID 1 test array that I was playing with. I then decided to hot add a third disk using ``raidhotadd''. The disk was added to the array, but as far as I could see, the RAID software did not start a reconstruction of
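
For context, a minimal raidtools sketch of the scenario above (device names are illustrative; the assumption is that the two-disk RAID1 already has its full complement of active mirrors):

    # add a third disk to an already-complete two-disk RAID1 (illustrative names)
    raidhotadd /dev/md0 /dev/hdc1
    cat /proc/mdstat    # the new disk is expected to appear as a spare rather than trigger a rebuild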

Re: Proposed RAID5 design changes.

2001-03-16 Thread Neil Brown
(I've taken Alan and Linus off the Cc list. I'm sure they have plenty to read, and may even be on linux-raid anyway). On Thursday March 15, [EMAIL PROTECTED] wrote: I'm not too happy with the linux RAID5 implementation. In my opinion, a number of changes need to be made, but I'm not sure how

Re: disk fails in raid5 but not in raid0

2001-03-19 Thread Neil Brown
On Monday March 19, [EMAIL PROTECTED] wrote: Hi, I have a RAID setup, 3 Compaq 4Gb drives running off an Adaptec 2940UW. Kernel 2.2.18 with RAID-patches etc. I have been trying out various options, doing some stress-testing etc., and I have now arrived at the following situation that I

Re: Problem migrating RAID1 from 2.2.x to 2.4.2

2001-03-19 Thread Neil Brown
On Monday March 19, [EMAIL PROTECTED] wrote: I'm having trouble running a RAID1 root/boot mirror under 2.4.2. Works fine on 2.2.14 though. I'm running RH 6.2 with stock 2.2.14 kernel. Running RAID1 on a pair of 9.1 UW SCSI Barracudas as root/boot/lilo. md0 is / and md1 is 256M swap,

Re: Error Injector?

2001-03-21 Thread Neil Brown
On Wednesday March 21, [EMAIL PROTECTED] wrote: My question is based upon prior experience working for Stratus Computer. At Stratus it was impractical to go beat the disk drives with a hammer to cause them to fail - rather we would simply use a utility to cause the disk driver to begin to

Re: Strange performance results in RAID5

2001-03-28 Thread Neil Brown
On Thursday March 29, [EMAIL PROTECTED] wrote: Hi, I have been doing some performance checks on my RAID 5 system. Good. The system is 5 Seagate Cheetahs X15 Linux 2.4.2 I am using IOtest 3.0 on /dev/md0 My chunk size is 1M... When I do random reads of 64K blobs using one

Re: spare-disk in a RAID1 set? Conflicting answers...

2001-04-29 Thread Neil Brown
On Saturday April 28, [EMAIL PROTECTED] wrote: Question: can you have one or more spare-disk entries in /etc/raidtab when running a RAID1 set? First answer: the Linux Software-RAID HOWTO says yes, and gives an example of this in the section on RAID1 config in raidtab.

[PATCH] RAID5 NULL Checking Bug Fix

2001-05-15 Thread Neil Brown
On Wednesday May 16, [EMAIL PROTECTED] wrote: (more patches to come. They will go to Linus, Alan, and linux-raid only). This is the next one, which actually addresses the NULL Checking Bug. There are two places in the raid code which allocate memory without (properly) checking for

[PATCH] - md_error gets simpler

2001-05-15 Thread Neil Brown
Linus, This isn't a bug fix, just a tidy up. Currently, md_error - which is called when an underlying device detects an error - takes a kdev_t to identify which md array is affected. It converts this into a mddev_t structure pointer, and in every case, the caller already has the desired

[PATCH] raid resync by sectors to allow for 512byte block filesystems

2001-05-17 Thread Neil Brown
Linus, The current raid1/raid5 resync code requests resync in units of 1k (though the raid personality can round up requests if it likes). This interacts badly with filesystems that do IO in 512 byte blocks, such as XFS (because raid5 needs to use the same blocksize for IO and resync). The

[PATCH] raid1 to use sector numbers in b_blocknr

2001-05-23 Thread Neil Brown
Linus, raid1 allocates a new buffer_head when passing a request down to an underlying device. It currently sets b_blocknr to b_rsector/(b_size>>9) from the original buffer_head to parallel other uses of b_blocknr (i.e. it being the number of the block). However, if raid1 gets a

Re: md= problems with devfs names

2001-06-02 Thread Neil Brown
On Saturday June 2, [EMAIL PROTECTED] wrote: I've moved from: md=4,/dev/sdf5,/dev/sdg5 to: md=4,/dev/scsi/host0/bus0/target30/lun0/part5,\ /dev/scsi/host0/bus0/target32/lun0/part5 And now get: md: Unknown device name,\

Re: mdrecoveryd invalid operand error

2001-06-06 Thread Neil Brown
On Wednesday June 6, [EMAIL PROTECTED] wrote: In the XFS kernel tree v2.4.3 w/ several patches, we were unable to raidhotremove and subsequently raidhotadd a spare without a reboot. It did not matter if you had a new or the same hard disk. We then tried the patch Ingo Molnar sent regarding

Re: failure of raid 5 when first disk is unavailable

2001-06-07 Thread Neil Brown
On Thursday June 7, [EMAIL PROTECTED] wrote: Hi Neil; I am hoping you are going to tell me this is already solved, but here goes... Almost :-) scenario: hda4, hdb4, and hdc4 in a raid 5 with no hotspare. With 2.4.3 XFS kernels, it seems that a raid 5 does not come up

Re: mdctl

2001-06-11 Thread Neil Brown
On Friday June 8, [EMAIL PROTECTED] wrote: On Fri, 8 Jun 2001, Neil Brown wrote: If you don't like the name mdctl (I don't), please suggest another. How about raidctrl? Possible... though I don't think it is much better. Longer to type too:-) I kind of like having the md

Re: mdctl - names and code

2001-06-13 Thread Neil Brown
Thank you for all the suggestions for names for mdctl. We have: raidctl, raidctrl, swraidctl, mdtools, mdutils, mdmanage, mdmgr, mdmd :-), mdcfg, mdconfig, mdadmin. Mike Black suggested that it is valuable for tools that are related to start with a common prefix so that command completion can be used

Re: du discrepancies?

2001-06-14 Thread Neil Brown
On Friday June 15, [EMAIL PROTECTED] wrote: There appears to be a discrepancy between the true state of affairs on my RAID partitions and what df reports; [root /]# sfdisk -l /dev/hda Disk /dev/hda: 38792 cylinders, 16 heads, 63 sectors/track Units = cylinders of 516096 bytes, blocks

PATCH - md initialisation to accept devfs names

2001-06-20 Thread Neil Brown
Linus, it is possible to start an md array from the boot command line with, e.g. md=0,/dev/something,/dev/somethingelse However only names recognised by name_to_kdev_t work here. devfs-based names do not work. To fix this, the following patch moves the name lookup from __setup
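
A hedged sketch of the boot-line syntax in question (device names are illustrative):

    # classic form, using names that name_to_kdev_t can resolve
    md=0,/dev/sda1,/dev/sdb1
    # devfs-style names, which this patch makes resolvable the same way
    md=0,/dev/scsi/host0/bus0/target0/lun0/part1,/dev/scsi/host0/bus0/target1/lun0/part1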

PATCH - tag all printk's in md.c

2001-06-20 Thread Neil Brown
Linus, This patch makes sure that all the printks in md.c print a message starting with md: or md%d:. The next step (not today) will be to reduce a lot of them to KERN_INFO or similar as md is really quite noisy. Also, two printks in raid1.c get prefixed with raid1: This patch is

PATCH - raid5 performance improvement - 3 of 3

2001-06-20 Thread Neil Brown
Linus, and fellow RAIDers, This is the third in my three-patch series for improving RAID5 throughput. This one substantially lifts write throughput by leveraging the opportunities for write gathering provided by the first patch. With RAID5, it is much more efficient to write a whole

PATCH

2001-06-20 Thread Neil Brown
Linus, There is a buggy BUG in the raid5 code. If a request on an underlying device reports an error, raid5 finds out which device that was and marks it as failed. This is fine. If another request on the same device reports an error, raid5 fails to find that device in its table (because

Re: PATCH - raid5 performance improvement - 3 of 3

2001-06-24 Thread Neil Brown
On Sunday June 24, [EMAIL PROTECTED] wrote: Hi, We used to (long ago, 2.2.x), whenever we got a write request for some buffer, search the buffer cache to see if additional buffers which belong to that particular stripe are dirty, and then schedule them for writing as well, in an attempt

Re: Failed disk triggers raid5.c bug?

2001-06-24 Thread Neil Brown
On Sunday June 24, [EMAIL PROTECTED] wrote: Hi! Neil Brown wrote: On Thursday June 14, [EMAIL PROTECTED] wrote: Dear All I've just had a disk (sdc) fail in my raid5 array (sdb sdc sdd), Great! A real live hardware failure! It is always more satisfying to watch one

Re: Failed disk triggers raid5.c bug?

2001-06-26 Thread Neil Brown
On Monday June 25, [EMAIL PROTECTED] wrote: Is there any way for the RAID code to be smarter when deciding about those event counters? Does it have any chance (theoretically) to _know_ that it shouldn't use the drive with event count 28? My current thinking is that once a raid array becomes

Re: Mounting very old style raid on a recent machine?

2001-06-26 Thread Neil Brown
On Tuesday June 26, [EMAIL PROTECTED] wrote: Hi, I currently have to salvage data from an ancient box that looks like to have run kernel 2.0.35. However, the system on that disk is corrupted and won't boot any more (at least not on today's hardware). It looks like main data is on a RAID.

PATCH/RFC - partitioning of md devices

2001-07-01 Thread Neil Brown
Linus, I wonder if you would consider applying, or commenting on this patch. It adds support for partitioning md devices. In particular, a new major device is created (name==mdp, number assigned dynamically) which provides for 15 partitions on each of the first 16 md devices. I

Re: raid 01 vs 10

2001-07-09 Thread Neil Brown
On Monday July 9, [EMAIL PROTECTED] wrote: I was wondering what people thought of using raid 0+1 (a mirrored array of raid0 stripes) vs. raid 1+0 (a raid0 array of mirrored disks). It seems that either can sustain at least one drive failure and the performance should be similar. Are there

Re: migrating raid-1 to different drive geometry ?

2005-01-24 Thread Neil Brown
On Monday January 24, [EMAIL PROTECTED] wrote: how can the existing raid setup be moved to the new disks without data loss ? I guess it must be something like this: 1) physically remove first old drive 2) physically add first new drive 3) re-create partitions on new drive 4) run

Re: migrating raid-1 to different drive geometry ?

2005-01-24 Thread Neil Brown
On Tuesday January 25, [EMAIL PROTECTED] wrote: Neil Brown wrote: If you are using a recent 2.6 kernel and mdadm 1.8.0, you can grow the array with mdadm --grow /dev/mdX --size=max Neil, Is this just for RAID1? OR will it work for RAID5 too? --grow --size=max should work
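
As a usage sketch around the quoted command (array and filesystem names are illustrative), growing the array would normally be followed by growing the filesystem on top of it:

    # expand the array to fill the (now larger) component devices
    mdadm --grow /dev/md0 --size=max
    # then grow the filesystem, e.g. for ext2/ext3
    resize2fs /dev/md0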

Re: /dev/md* Device Files

2005-01-26 Thread Neil Brown
On Wednesday January 26, [EMAIL PROTECTED] wrote: A useful trick I discovered yesterday: Add --auto to your mdadm commandline and it will create the device for you if it is missing :) Well, it seems that this machine is using the udev scheme for managing device files. I didn't realize
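
A minimal sketch of the --auto trick mentioned above, assuming illustrative device names:

    # create the /dev/md3 node on the fly if it is missing, then assemble
    mdadm --assemble --auto=yes /dev/md3 /dev/sda3 /dev/sdb3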

RE: Software RAID 0+1 with mdadm.

2005-01-26 Thread Neil Brown
On Wednesday January 26, [EMAIL PROTECTED] wrote: This bug that's fixed in 1.9.0 - is it a bug when you create the array? i.e. do we need to use 1.9.0 to create the array. I'm looking to do the same but my bootdisk currently only has 1.7.something on it. Do I need to make a custom bootcd with

Re: Software RAID 0+1 with mdadm.

2005-01-26 Thread Neil Brown
On Tuesday January 25, [EMAIL PROTECTED] wrote: Been trying for days to get a software RAID 0+1 setup. This is on SuSe 9.2 with kernel 2.6.8-24.11-smp x86_64. I am trying to setup a RAID 0+1 with 4 250gb SATA drives. I do the following: mdadm --create /dev/md1 --level=0 --chunk=4
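
The original commands are truncated above; a hedged sketch of one way to lay out RAID 0+1 (a mirror of two stripes) with mdadm, using illustrative device names:

    # two striped pairs
    mdadm --create /dev/md1 --level=0 --chunk=4 --raid-devices=2 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md2 --level=0 --chunk=4 --raid-devices=2 /dev/sdc1 /dev/sdd1
    # mirror the two stripes to get 0+1
    mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/md1 /dev/md2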

Re: Change preferred minor number of an md device?

2005-01-31 Thread Neil Brown
On Monday January 31, [EMAIL PROTECTED] wrote: Hi to all, md gurus! Is there a way to edit the preferred minor of a stopped device? mdadm --assemble /dev/md0 --update=super-minor /dev/ will assemble the array and update the preferred minor to 0 (from /dev/md0). However this won't work
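
The component list is elided above; a complete hedged example might look like this (devices are illustrative):

    # assemble as /dev/md0 and rewrite the preferred minor stored in each superblock
    mdadm --assemble /dev/md0 --update=super-minor /dev/sdb1 /dev/sdc1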

ANNOUNCE: mdadm 1.9.0 - A tool for managing Soft RAID under Linux

2005-02-03 Thread Neil Brown
I am pleased to announce the availability of mdadm version 1.9.0 It is available at http://www.cse.unsw.edu.au/~neilb/source/mdadm/ and http://www.{countrycode}.kernel.org/pub/linux/utils/raid/mdadm/ as a source tar-ball and (at the first site) as an SRPM, and as an RPM for i386.

Re: ANNOUNCE: mdadm 1.9.0 - A tool for managing Soft RAID under Linux

2005-02-03 Thread Neil Brown
On Friday February 4, [EMAIL PROTECTED] wrote: Neil Brown wrote: Release 1.9.0 adds: ... - --assemble --auto recognises 'standard' name and insists on using the appropriate major/minor number for them. Is this the problem I encountered when I added auto=md to my

Re: Problem with Openmosix

2005-02-14 Thread Neil Brown
On Monday February 14, [EMAIL PROTECTED] wrote: Hi, Neil... Hi. I have been using the MD driver for two years with Debian, and it ran perfectly. Great! The machine boots the new kernel and runs OK... but... if I (or another process) make a change/write to the raid md system, the computer crashes with the

RE: [Bugme-new] [Bug 4211] New: md configuration destroys disk GPT label

2005-02-14 Thread Neil Brown
On Monday February 14, [EMAIL PROTECTED] wrote: Maybe I am confused, but if you use the whole disk, I would expect the whole disk could be over-written! What am I missing? I second that. Once you do anything to a whole disk, whether you make an md array out of it, mkfs it, or anything else, you

Re: 2.6.11-rc4 md loops on missing drives

2005-02-15 Thread Neil Brown
), but there are too many failed devices, the resync cannot complete but will be retried indefinitely. 2/ if the array has too many failed drives to be usable and a spare is available, reconstruction will be attempted, but cannot work. This also is retried indefinitely. Signed-off-by: Neil

Re: [PATCH md 9 of 9] Optimise reconstruction when re-adding a recently failed drive.

2005-02-17 Thread Neil Brown
On Thursday February 17, [EMAIL PROTECTED] wrote: NeilBrown wrote: When an array is degraded, bits in the intent-bitmap are never cleared. So if a recently failed drive is re-added, we only need to reconstruct the blocks that are still reflected in the bitmap. This patch adds support

Re: [PATCH md 0 of 9] Introduction

2005-02-20 Thread Neil Brown
On Friday February 18, [EMAIL PROTECTED] wrote: Would you recommend applying this package http://neilb.web.cse.unsw.edu.au/~neilb/patches/linux-devel/2.6/2005-02-18-00/patch-all-2005-02-18-00 to a 2.6.10 kernel? No. I don't think it would apply. That patch is mostly experimental stuff. Only

Re: Raid-6 hang on write.

2005-02-27 Thread Neil Brown
On Friday February 25, [EMAIL PROTECTED] wrote: Turning on debugging in raid6main.c and md.c make it much harder to hit. So I'm assuming something timing related. raid6d -- md_check_recovery -- generic_make_request -- make_request -- get_active_stripe Yes, there is a real problem

Re: Raid-6 hang on write.

2005-03-01 Thread Neil Brown
On Tuesday March 1, [EMAIL PROTECTED] wrote: Neil Brown wrote: Could you please confirm if there is a problem with 2.6.11-rc4-bk4-bk10 as reported, and whether it seems to be the same problem. Ok.. are we all ready? I had applied your development patches to all my vanilla

Re: Joys of spare disks!

2005-03-01 Thread Neil Brown
On Wednesday March 2, [EMAIL PROTECTED] wrote: Is there any sound reason why this is not feasible? Is it just that someone needs to write the code to implement it? Exactly (just needs to be implemented). NeilBrown - To unsubscribe from this list: send the line unsubscribe linux-raid in the

Re: Creating RAID1 with missing - mdadm 1.90

2005-03-05 Thread Neil Brown
On Saturday March 5, [EMAIL PROTECTED] wrote: What might the proper [or functional] syntax be to do this? I'm running 2.6.10-1.766-FC3, and mdadm 1.90. It would help if you told us what you tried as then we could possibly give a more focused answer, however: mdadm --create /dev/md1
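
The quoted command is cut off above; a commonly used form for creating a RAID1 with one member absent uses the literal word missing as a placeholder (device names are illustrative):

    # one real device plus "missing" as the placeholder mirror
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 missing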

RE: Spare disk could not sleep / standby

2005-03-07 Thread Neil Brown
On Monday March 7, [EMAIL PROTECTED] wrote: I have no idea, but... Is the disk IO reads or writes? If writes, scary. Maybe data destined for the array goes to the spare sometimes. I hope not. I feel safe with my 2.4 kernel. :) It is writes, but don't be scared. It is just

Re: BUG (Deadlock) in 2.6.10

2005-03-07 Thread Neil Brown
On Sunday February 27, [EMAIL PROTECTED] wrote: Hello. Just for your information: There is a deadlock in the following situation: MD2 is Raid 0 with 3 disks. sda1 sdb1 sdc1 MD3 is Raid 0 with 3 disks. sdd1 sde1 sdf1 MD4 is Raid 1 with 2 disks. MD2 and MD3!! If a disk in MD2 fails, MD2

Re: Spare disk could not sleep / standby

2005-03-07 Thread Neil Brown
On Tuesday March 8, [EMAIL PROTECTED] wrote: Neil Brown wrote: It is writes, but don't be scared. It is just super-block updates. In 2.6, the superblock is marked 'clean' whenever there is a period of about 20ms of no write activity. This increases the chance that a resync won't

Re: Spare disk could not sleep / standby

2005-03-07 Thread Neil Brown
On Tuesday March 8, [EMAIL PROTECTED] wrote: Neil Brown wrote: Then after 20ms with no write, they are all marked 'clean'. Then before the next write they are all marked 'active'. As the event count needs to be updated every time the superblock is modified, the event count

Re: md Grow for Raid 5

2005-03-08 Thread Neil Brown
On Tuesday March 8, [EMAIL PROTECTED] wrote: berk walker wrote: Have you guys seen/tried mdadm 1.90? I am delightfully experiencing the I believe the mdadm based grow does not work for raid5, but only for raid0 or raid1. raidreconf is actually capable of adding disks to raid5 and

Re: [PATCH md 0 of 4] Introduction

2005-03-08 Thread Neil Brown
On Monday March 7, [EMAIL PROTECTED] wrote: NeilBrown [EMAIL PROTECTED] wrote: The first two are trivial and should apply equally to 2.6.11 The second two fix bugs that were introduced by the recent bitmap-based-intent-logging patches and so are not relevant to 2.6.11 yet.

Re: [PATCH md 0 of 4] Introduction

2005-03-08 Thread Neil Brown
On Tuesday March 8, [EMAIL PROTECTED] wrote: But I digress. My immediate problem is that writes must be queued first. I thought md traditionally did not queue requests, but instead used its own make_request substitute to dispatch incoming requests as they arrived. Have you remodelled the

Re: mdadm --dangerous-no-resync equivalent

2005-03-09 Thread Neil Brown
On Thursday March 10, [EMAIL PROTECTED] wrote: Hi, I have an installer (http://sourceforge.net/projects/terraformix/) that creates Raid 1 arrays, previously the arrays were created with mkraid using the --dangerous-no-resync option. I am now required to build the arrays with mdadm and have

Re: Problems with Linux RAID in kernel 2.6

2005-03-10 Thread Neil Brown
On Thursday March 10, [EMAIL PROTECTED] wrote: Hi, I have many problems with RAID in kernel 2.6.10. .. And dmesg says: md: raidstart(pid 2944) used deprecated START_ARRAY ioctl. This will not -- !!! be supported beyond 2.6

Re: [PATCH 1/2] md bitmap bug fixes

2005-03-14 Thread Neil Brown
On Monday March 14, [EMAIL PROTECTED] wrote: On 2005-03-14T21:22:57, Neil Brown [EMAIL PROTECTED] wrote: Hi there, just a question about how the bitmap stuff works with 1++-redundancy, say RAID1 with 2 mirrors, or RAID6. I assume you mean RAID1 with 3 drives (there isn't really one main

Re: [PATCH 0/3] md bitmap-based asynchronous writes

2005-03-20 Thread Neil Brown
On Thursday March 17, [EMAIL PROTECTED] wrote: These three patches provide the ability to perform asynchronous writes with raid1. The asynchronous write capability is primarily useful when raid1 is employed in network replication (i.e., with one or more of the disks located on a remote

Re: Strangeness when booting raid1: md5 already running?

2005-03-21 Thread Neil Brown
On Monday March 21, [EMAIL PROTECTED] wrote: Folks, I had to pull the plug on my box today, and when it rebooted got this rather strange raid issue. The box has 5 raid1 arrays, consisting of 5 partitions on each of 2 drives. When it rebooted, the md5 array came up like so: raid1: raid

Re: RAID1: no resync after crash?

2005-03-21 Thread Neil Brown
for reporting this. NeilBrown Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./drivers/md/raid1.c |3 ++- 1 files changed, 2 insertions(+), 1 deletion(-) diff ./drivers/md/raid1.c~current~ ./drivers/md/raid1.c --- ./drivers/md/raid1.c~current~ 2004-08-16 10:17:11.0 +1000

Re: RAID1 and data safety?

2005-03-21 Thread Neil Brown
On Wednesday March 16, [EMAIL PROTECTED] wrote: Just wondering; Is there any way to tell MD to do verify-on-write and read-from-all-disks on a RAID1 array? No. I would have thought that modern disk drives did some sort of verify-on-write, else how would they detect write errors, and they are

Re: mdadm command to trigger Raid Recovery?

2005-03-21 Thread Neil Brown
On Saturday March 19, [EMAIL PROTECTED] wrote: Hi, What exactly is the command to recover a raid array after a system crash? I created a raid5 array with 3 disks. After a system crash, one disk is out of sync, so I tried the command mdadm --assemble --run --force --update=resync /dev/md2
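
As a usage sketch around the quoted command (component devices are illustrative):

    # force assembly of the out-of-sync array and request a resync
    mdadm --assemble --run --force --update=resync /dev/md2 /dev/sda3 /dev/sdb3 /dev/sdc3
    cat /proc/mdstat    # watch the resync progress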

Re: Questions regarding readonly/readwrite semantics

2005-03-23 Thread Neil Brown
On Tuesday March 22, [EMAIL PROTECTED] wrote: Hello, in the beginning I had just one simple question :) ... Is there any way to start RAIDs in readonly mode while autodetection on system boot? No. The read-only mode has not been well thought out in md, and I have not yet put any effort into

Re: Software RAID0 behaviour/performance on ATA drives

2005-03-24 Thread Neil Brown
On Friday March 25, [EMAIL PROTECTED] wrote: The recommended setup for doing RAID0 with ATA drives is that each hard drive should be on its own IDE channel. The reasoning is so they can operate concurrently... i.e. if you put them in a master-slave configuration on the same channel, you

Re: Software RAID0 behaviour/performance on ATA drives

2005-03-25 Thread Neil Brown
On Friday March 25, [EMAIL PROTECTED] wrote: The raid0 driver is 'clever' at all. Hmm.. that should have been The raid0 drivers isn't 'clever' at all. ^ It is given requests by the filesystem or mm subsystem, maps them to the correct

Re: Raid1 problem can't add remove or mark faulty -- it did work

2005-03-26 Thread Neil Brown
On Saturday March 26, [EMAIL PROTECTED] wrote: i have a strange problem -- can't get a fully functional 2 drive raid back up and running -- it may or may not be a drive/bios interaction, don't know. none of the mdadm manage functions will work: add, remove or mark faulty. i have purged and reinstalled

Re: AW: AW: RAID1 and data safety?

2005-03-29 Thread Neil Brown
On Tuesday March 29, [EMAIL PROTECTED] wrote: But: If you have a raid1 and a journaling fs, see the following: If the system chrashes at the end of a write transaction, then the end-of-transaction information may got written to hda already, but not to hdb. On the next boot, the journaling

Re: once again raid5

2005-03-31 Thread Neil Brown
On Thursday March 31, [EMAIL PROTECTED] wrote: Hi, we still have troubles with our raid5 array. You can find the history of the fault in detail in my other postings (11.3.2005). I will show you my attempts. There are 4 discs (Maxtor 250GB) in a raid5-array. One disc failed and we

Re: raid1-diseaster on reboot: old version overwrites new version

2005-04-02 Thread Neil Brown
On Saturday April 2, [EMAIL PROTECTED] wrote: * What did I do wrong? The only explanation to me is that I had the wrong entry in my lilo.conf. I had root=/dev/hda6 there instead of root=/dev/md2 So maybe root was always mounted as /dev/hda6 and never as /dev/md2, which was started, but
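
A hedged lilo.conf fragment showing the distinction being made (paths and labels are illustrative):

    # /etc/lilo.conf
    image=/boot/vmlinuz
        label=linux
        root=/dev/md2    # not root=/dev/hda6, so the kernel mounts the array, not a single component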

ANNOUNCE: mdadm 1.10.0 - A tool for managing Soft RAID under Linux

2005-04-04 Thread Neil Brown
I am pleased to announce the availability of mdadm version 1.10.0 It is available at http://www.cse.unsw.edu.au/~neilb/source/mdadm/ and http://www.{countrycode}.kernel.org/pub/linux/utils/raid/mdadm/ as a source tar-ball and (at the first site) as an SRPM, and as an RPM for i386.

Re: How to move a singleton raid1 drive from raid1 /dev/md2 to /dev/md1

2005-04-06 Thread Neil Brown
On Thursday April 7, [EMAIL PROTECTED] wrote: Hi Software Raid Gurus!: I have A1:~# cat /proc/mdstat Personalities : [raid1] md0 : active raid1 hdb1[0] hdg1[1] 244195904 blocks [2/2] [UU] md1 : active raid1 hdc1[0] 244195904 blocks [1/1] [U] md2 : active raid1 hde1[0]

Re: [PATCH md 001 of 2] Close a small race in md thread deregistration

2005-04-07 Thread Neil Brown
On Thursday April 7, [EMAIL PROTECTED] wrote: That code all seems a bit crufty to me. Sometime it would be good to stop using signals in-kernel and to use the kthread API for thread startup and shutdown. I've just added that to my TODO list... thanks for the suggestion. NeilBrown - To

ANNOUNCE: mdadm 1.11.0 - A tool for managing Soft RAID under Linux

2005-04-10 Thread Neil Brown
I am pleased (embarrassed?) to announce the availability of mdadm version 1.11.0 It is available at http://www.cse.unsw.edu.au/~neilb/source/mdadm/ and http://www.{countrycode}.kernel.org/pub/linux/utils/raid/mdadm/ as a source tar-ball and (at the first site) as an SRPM, and as an RPM

Re: out of sync raid 5 + xfs = kernel startup problem

2005-04-12 Thread Neil Brown
On Tuesday April 12, [EMAIL PROTECTED] wrote: My raid5 system recently went through a sequence of power outages. When everything came back on the drives were out of sync. No big issue... just sync them back up again. But something is going wrong. Any help is appreciated. dmesg provides
