Re: Array will not assemble

2006-07-06 Thread Neil Brown
On Friday July 7, [EMAIL PROTECTED] wrote: Perhaps I am misunderstanding how assemble works, but I have created a new RAID 1 array on a pair of SCSI drives and am having difficulty re-assembling it after a reboot. The relevant mdadm.conf entry looks like this: ARRAY /dev/md3
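For reference, a complete entry of that shape is usually paired with a DEVICE line; the sketch below is illustrative only (the partition names and UUID are placeholders, not values from this report):

    DEVICE /dev/sd[ab]1
    ARRAY /dev/md3 UUID=3aaa0122:29827cfa:5331ad66:ca767371   # placeholder UUID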

Re: How does md determine which partitions to use in RAID1 when DEVICE partitions is specified

2006-07-03 Thread Neil Brown
On Monday July 3, [EMAIL PROTECTED] wrote: I have Fedora Core 5 installed with mirroring on the Boot partition and root partition. I created a Logical Volume Group on the mirrored root partition. How does md figure out which partitions are actually specified. It says it stores the uuid in

Re: raid5 write performance

2006-07-02 Thread Neil Brown
On Sunday July 2, [EMAIL PROTECTED] wrote: Neil hello. I have been looking at the raid5 code trying to understand why write performance is so poor. raid5 write performance is expected to be poor, as you often need to pre-read data or parity before the write can be issued. If I am not
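A rough worked figure, for orientation only (not from the thread): a write that touches a single data chunk on raid5 must read the old data and old parity, compute new parity = old parity XOR old data XOR new data, then write the new data and new parity - two reads plus two writes for one logical write, which is why small random writes cost several times what a bare disk would.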

Re: [PATCH] enable auto=yes by default when using udev

2006-07-02 Thread Neil Brown
On Monday July 3, [EMAIL PROTECTED] wrote: Hello, the following patch aims at solving an issue that is confusing a lot of users. When using udev, device files are created only when devices are registered with the kernel, and md devices are registered only when started. mdadm needs the

Re: changing MD device names

2006-07-01 Thread Neil Brown
On Saturday July 1, [EMAIL PROTECTED] wrote: I have a system which was running several raid1 devices (md0 - md2) using 2 physical drives (hde, and hdg). I wanted to swap out these drives for two different ones, so I did the following: 1) swap out hdg for a new drive 2) create degraded

Re: raid issues after power failure

2006-06-30 Thread Neil Brown
On Friday June 30, [EMAIL PROTECTED] wrote: On Fri, 30 Jun 2006, Francois Barre wrote: Did you try upgrading mdadm yet ? yes, I have version 2.5 now, and it produces the same results. Try adding '--force' to the -A line. That tells mdadm to try really hard to assemble the array. You
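A minimal sketch of the forced assembly being suggested (the array and member names are placeholders):

    # ask mdadm to assemble even if the event counts on the members disagree
    mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1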

Re: Strange intermittant errors + RAID doesn't fail the disk.

2006-06-30 Thread Neil Brown
On Friday June 30, [EMAIL PROTECTED] wrote: More problems ... As reported I have 4x WD5000YS (Caviar RE2 500 GB) in a md RAID5 array. I've been benchmarking and otherwise testing the new array these last few days, and apart from the fact that the md doesn't shut down properly I've had no

Re: Cutting power without breaking RAID

2006-06-29 Thread Neil Brown
On Thursday June 29, [EMAIL PROTECTED] wrote: Why should this trickery be needed? When an array is mounted r/o it should be clean. How can it be dirty. I assume readonly implies noatime, I mount physically readonly devices without explicitly saying noatime and nothing whines. The

Re: Drive issues in RAID vs. not-RAID ..

2006-06-28 Thread Neil Brown
On Wednesday June 28, [EMAIL PROTECTED] wrote: I've seen a few comments to the effect that some disks have problems when used in a RAID setup and I'm a bit perplexed as to why this might be.. What's the difference between a drive in a RAID set (either s/w or h/w) and a drive on its own,

Re: Cutting power without breaking RAID

2006-06-28 Thread Neil Brown
On Wednesday June 28, [EMAIL PROTECTED] wrote: Hello, I'm facing this problem: when my Linux box detects a POWER FAIL event from the UPS, it starts a normal shutdown. Just before the normal kernel poweroff, it sends to the UPS a signal on the serial line which says cut-off the power

Re: mdadm 2.5.2 - Static built , Interesting warnings when

2006-06-28 Thread Neil Brown
On Tuesday June 27, [EMAIL PROTECTED] wrote: Hello All, What change in Glibc makes this necessary? Is there a method available to include the getpwnam and getgrnam structures so that a full static build will work? Tia, JimL gcc -Wall -Werror -Wstrict-prototypes -ggdb

ANNOUNCE: mdadm 2.5.2 - A tool for managing Soft RAID under Linux

2006-06-27 Thread Neil Brown
I am pleased to announce the availability of mdadm version 2.5.2 It is available at the usual places: http://www.cse.unsw.edu.au/~neilb/source/mdadm/ and http://www.${countrycode}kernel.org/pub/linux/utils/raid/mdadm/ (with countrycode=xx), and via git at git://neil.brown.name/mdadm

Re: Is shrinking raid5 possible?

2006-06-26 Thread Neil Brown
On Friday June 23, [EMAIL PROTECTED] wrote: Why would you ever want to reduce the size of a raid5 in this way? A feature that would have been useful to me a few times is the ability to shrink an array by whole disks. Example: 8x 300 GB disks - 2100 GB raw capacity shrink file

Re: Bug in 2.6.17 / mdadm 2.5.1

2006-06-26 Thread Neil Brown
On Monday June 26, [EMAIL PROTECTED] wrote: Neil Brown wrote: snip Alternately you can apply the following patch to the kernel and version-1 superblocks should work better. -stable material? Maybe. I'm not sure it exactly qualifies, but I might try sending it to them and see what

Re: recover data from linear raid

2006-06-26 Thread Neil Brown
On Monday June 26, [EMAIL PROTECTED] wrote: This is what I get now, after creating with fdisk /dev/hdb1 and /dev/hdc1 as linux raid autodetect partitions So I'm totally confused now. You said it was 'linear', but the boot log showed 'raid0'. The drives didn't have a partition table

Re: Bug in 2.6.17 / mdadm 2.5.1

2006-06-25 Thread Neil Brown
On Sunday June 25, [EMAIL PROTECTED] wrote: Hi! There's a bug in Kernel 2.6.17 and / or mdadm which prevents (re)adding a disk to a degraded RAID5-Array. Thank you for the detailed report. The bug is in the md driver in the kernel (not in mdadm), and only affects version-1 superblocks.

Re: Large single raid and XFS or two small ones and EXT3?

2006-06-23 Thread Neil Brown
On Friday June 23, [EMAIL PROTECTED] wrote: The problem is that there is no cost effective backup available. One-liner questions : - How does Google make backups ? No, Google ARE the backups :-) - Aren't tapes dead yet ? LTO-3 does 300Gig, and LTO-4 is planned. They may not cope with

Re: read perfomance patchset

2006-06-22 Thread Neil Brown
On Monday June 19, [EMAIL PROTECTED] wrote: Neil, hello. If I am not mistaken, in the first instance of: if (bi) ... you return without setting it to NULL. Yes, you are right. Thanks. And fixing that bug removes the crash. However I've been doing a few tests and

Re: Is shrinking raid5 possible?

2006-06-22 Thread Neil Brown
On Thursday June 22, [EMAIL PROTECTED] wrote: Neil Brown wrote: On Monday June 19, [EMAIL PROTECTED] wrote: Hi, I'd like to shrink the size of a RAID5 array - is this possible? My first attempt shrinking 1.4Tb to 600Gb, mdadm --grow /dev/md5 --size=629145600 gives mdadm
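For anyone following along, the usual order of operations when shrinking is roughly the sketch below (mount point and filesystem size are illustrative; note that --size is the space used per member device, in KiB, so a raid5 array shrinks to (n-1) times that figure):

    umount /mnt/data                          # ext3 must be offline to shrink
    resize2fs /dev/md5 550G                   # shrink the filesystem below the new array size first
    mdadm --grow /dev/md5 --size=629145600    # then reduce the per-device size (KiB)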

Re: Raid5 reshape

2006-06-21 Thread Neil Brown
On Tuesday June 20, [EMAIL PROTECTED] wrote: Nigel J. Terry wrote: Well good news and bad news I'm afraid... Well I would like to be able to tell you that the time calculation now works, but I can't. Here's why: When I rebooted with the newly built kernel, it decided to hit the magic 21

Re: Can't get drives containing spare devices to spindown

2006-06-21 Thread Neil Brown
On Thursday June 22, [EMAIL PROTECTED] wrote: Marc L. de Bruin wrote: Situation: /dev/md0, type raid1, containing 2 active devices (/dev/hda1 and /dev/hdc1) and 2 spare devices (/dev/hde1 and /dev/hdg1). Those two spare 'partitions' are the only partitions on those disks and

Re: Can't get drives containing spare devices to spindown

2006-06-21 Thread Neil Brown
On Thursday June 22, [EMAIL PROTECTED] wrote: Thanks Neil for your quick reply. Would it be possible to elaborate a bit on the problem and the solution? I guess I won't be on 2.6.18 for some time... When an array has been idle (no writes) for a short time (20 or 200 ms, depending on

Re: the question about raid0_make_request

2006-06-19 Thread Neil Brown
On Monday June 19, [EMAIL PROTECTED] wrote: We can imagine that there is a raid0 array whose layout is drawn in the attachment. Take this for example. There are 3 zones in total, and their zone->nb_dev is 5, 4, 3 respectively. In the raid0_make_request function, the var block is the offset of

Re: Raid5 reshape

2006-06-19 Thread Neil Brown
On Monday June 19, [EMAIL PROTECTED] wrote: That seems to have fixed it. The reshape is now progressing and there are no apparent errors in dmesg. Details below. Great! I'll send another confirmation tomorrow when hopefully it has finished :-) Many thanks for a great product and great

Re: [PATCH] ANNOUNCE: mdadm 2.5.1 - A tool for managing Soft RAID under Linux

2006-06-19 Thread Neil Brown
On Monday June 19, [EMAIL PROTECTED] wrote: Neil Brown wrote: I am pleased to announce the availability of mdadm version 2.5.1 What the heck, here's another one. :) This one is slightly more serious. We're getting a device of 0:0 in Fail events from the mdadm monitor sometimes now

Re: the question about raid0_make_request

2006-06-18 Thread Neil Brown
On Monday June 19, [EMAIL PROTECTED] wrote: When I read the code of raid0_make_request, I have some questions. 1) block = bio->bi_sector >> 1; it's the device offset in kilobytes. So why do we subtract zone->zone_offset from block? The zone->zone_offset is the zone offset relative to the mddev in

Re: Raid5 reshape

2006-06-17 Thread Neil Brown
On Saturday June 17, [EMAIL PROTECTED] wrote: Any ideas what I should do next? Thanks Looks like you've probably hit a bug. I'll need a bit more info though. First: [EMAIL PROTECTED] ~]# cat /proc/mdstat Personalities : [raid5] [raid4] md0 : active raid5 sdb1[1] sda1[0] hdc1[4](S)

Re: Raid5 reshape

2006-06-17 Thread Neil Brown
OK, thanks for the extra details. I'll have a look and see what I can find, but it'll probably be a couple of days before I have anything useful for you. NeilBrown - To unsubscribe from this list: send the line unsubscribe linux-raid in the body of a message to [EMAIL PROTECTED] More majordomo

Re: Raid5 reshape

2006-06-16 Thread Neil Brown
On Friday June 16, [EMAIL PROTECTED] wrote: You have to grow the ext3 fs separately. ext2resize /dev/mdX. Keep in mind this can only be done off-line. ext3 can be resized online. I think ext2resize in the latest release will do the right thing whether it is online or not. There is a limit
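As a sketch of the filesystem side (device name illustrative): once a reshape has finished, resize2fs with no size argument grows ext3 to fill the enlarged array, and sufficiently recent e2fsprogs can do that while the filesystem is mounted:

    resize2fs /dev/md0    # grow ext3 to the new array size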

Re: IBM xSeries stop responding during RAID1 reconstruction

2006-06-15 Thread Neil Brown
On Thursday June 15, [EMAIL PROTECTED] wrote: On Wed, Jun 14, 2006 at 10:46:09AM -0500, Bill Cizek wrote: Niccolo Rigacci wrote: When the sync is complete, the machine starts to respond again perfectly. I was able to work around this by lowering /proc/sys/dev/raid/speed_limit_max

Re: Raid5 software problems after losing 4 disks for 48 hours

2006-06-15 Thread Neil Brown
On Friday June 16, [EMAIL PROTECTED] wrote: And is there a way, if more than 1 disk goes offline, for the whole array to be taken offline? My understanding of raid5 is that if you lose 1+ disks, nothing on the raid would be readable. This is not the case here. Nothing will be writable, but some

Re: to understand the logic of raid0_make_request

2006-06-15 Thread Neil Brown
On Friday June 16, [EMAIL PROTECTED] wrote: Thanks a lot. I went through the code again following your guide. But I still can't understand how the bio->bi_sector and bio->bi_dev are computed. I don't know what the var 'block' stands for. Could you explain them to me? 'block' is simply

Re: raid6

2006-06-14 Thread Neil Brown
On Thursday June 15, [EMAIL PROTECTED] wrote: I am confronted with a big problem with the raid6 algorithm, having recently been studying the raid6 code of Linux 2.6 that you contributed. Unfortunately I cannot understand the algorithm of the P+Q parity in this program. Is this some formula
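For what it's worth, the scheme the Linux raid6 code implements (following H. Peter Anvin's 'The mathematics of RAID-6' paper) computes the two syndromes over the data blocks D0..D(n-1) of a stripe as:

    P = D0 xor D1 xor ... xor D(n-1)
    Q = g^0.D0 xor g^1.D1 xor ... xor g^(n-1).D(n-1)

where the multiplications and the powers of the generator g = {02} are taken in the Galois field GF(2^8).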

Re: to understand the logic of raid0_make_request

2006-06-12 Thread Neil Brown
On Tuesday June 13, [EMAIL PROTECTED] wrote: Hello, everyone. I am studying the code of raid0, but I find that the logic of raid0_make_request is a little difficult to understand. Who can tell me what the function raid0_make_request will do eventually? One of two possibilities. Most often

Re: raid 5 read performance

2006-06-09 Thread Neil Brown
On Friday June 9, [EMAIL PROTECTED] wrote: Neil, hello. Sorry for the delay; too many things to do. You aren't alone there! I have implemented all said in: http://www.spinics.net/lists/raid/msg11838.html As always I have some questions: 1. mergeable_bvec I did not understand

Re: Raid5 read error correction log

2006-06-04 Thread Neil Brown
On Saturday June 3, [EMAIL PROTECTED] wrote: Hey Neil, It would sure be nice if the log contained any info about the error correction that's been done rather than simply saying read error corrected, like which array chunk, device and sector was corrected. I'm having a persistent pending

Re: problems with raid6, mdadm: RUN_ARRAY failed

2006-06-04 Thread Neil Brown
On Friday June 2, [EMAIL PROTECTED] wrote: I have an old Mylex Acceleraid 170LP controller with 6 SCSI 36GB disks on it. Running hardware raid5 resulted in very poor performance (7Mb/sec in sequential writing, with horrid iowait). So I configured it to export 6 logical disks and tried

Re: raid5 hang on get_active_stripe

2006-06-02 Thread Neil Brown
On Friday June 2, [EMAIL PROTECTED] wrote: On Thu, 1 Jun 2006, Neil Brown wrote: I've got one more long-shot I would like to try first. If you could backout that change to ll_rw_block, and apply this patch instead. Then when it hangs, just cat the stripe_cache_active file and see

Re: Clarifications about check/repair, i.e. RAID SCRUBBING

2006-06-02 Thread Neil Brown
On Friday June 2, [EMAIL PROTECTED] wrote: In any regard: I'm talking about triggering the following functionality: echo check > /sys/block/mdX/md/sync_action echo repair > /sys/block/mdX/md/sync_action On a RAID5, and soon a RAID6, I'm looking to set up a cron job, and am trying to
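A minimal cron sketch of the kind of job being described (array name and schedule are placeholders):

    # /etc/cron.d/md-scrub: check md0 at 01:00 on the first of each month
    0 1 1 * * root echo check > /sys/block/md0/md/sync_action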

Re: RAID5E

2006-05-31 Thread Neil Brown
On Wednesday May 31, [EMAIL PROTECTED] wrote: Where I was working most recently some systems were using RAID5E (RAID5 with both the parity and hot spare distributed). This seems to be highly desirable for small arrays, where spreading head motion over one more drive will improve

Re: [PATCH 006 of 10] md: Set/get state of array via sysfs

2006-05-31 Thread Neil Brown
On Wednesday May 31, [EMAIL PROTECTED] wrote: * NeilBrown ([EMAIL PROTECTED]) wrote: This allows the state of an md/array to be directly controlled via sysfs and adds the ability to stop an array without tearing it down. Array states/settings: clear No devices, no

Re: [PATCH 008 of 10] md: Allow raid 'layout' to be read and set via sysfs.

2006-05-31 Thread Neil Brown
On Wednesday May 31, [EMAIL PROTECTED] wrote: * NeilBrown ([EMAIL PROTECTED]) wrote: +static struct md_sysfs_entry md_layout = +__ATTR(layout, 0655, layout_show, layout_store); 0644? I think the correct response is Doh! :-) Yes, thanks, NeilBrown - To unsubscribe from this list: send the

Re: RAID 5 Whole Devices - Partition

2006-05-30 Thread Neil Brown
On Tuesday May 30, [EMAIL PROTECTED] wrote: Hello, I am trying to create a RAID5 array out of 3 160GB SATA drives. After I create the array I want to partition the device into 2 partitions. The system lies on a SCSI disk and the 2 partitions will be used for data storage. The SATA host

Re: raid5 hang on get_active_stripe

2006-05-30 Thread Neil Brown
On Tuesday May 30, [EMAIL PROTECTED] wrote: On Tue, 30 May 2006, Neil Brown wrote: Could you try this patch please? On top of the rest. And if it doesn't fail in a couple of days, tell me how regularly the message kblockd_schedule_work failed gets printed. i'm running

Re: raid5 hang on get_active_stripe

2006-05-30 Thread Neil Brown
On Tuesday May 30, [EMAIL PROTECTED] wrote: actually i think the rate is higher... i'm not sure why, but klogd doesn't seem to keep up with it: [EMAIL PROTECTED]:~# grep -c kblockd_schedule_work /var/log/messages 31 [EMAIL PROTECTED]:~# dmesg | grep -c kblockd_schedule_work 8192 # grep

Re: [PATCH] mdadm 2.5 (Was: ANNOUNCE: mdadm 2.5 - A tool for managing Soft RAID under Linux)

2006-05-29 Thread Neil Brown
On Monday May 29, [EMAIL PROTECTED] wrote: On Mon, May 29, 2006 at 12:08:25PM +1000, Neil Brown wrote: On Sunday May 28, [EMAIL PROTECTED] wrote: Thanks for the patches. They are greatly appreciated. You're welcome - mdadm-2.3.1-kernel-byteswap-include-fix.patch reverts a change

Re: raid5 hang on get_active_stripe

2006-05-28 Thread Neil Brown
On Saturday May 27, [EMAIL PROTECTED] wrote: On Sat, 27 May 2006, Neil Brown wrote: Thanks. This narrows it down quite a bit... too much in fact: I can now say for sure that this cannot possibly happen :-) 2/ The message.gz you sent earlier with the echo t > /proc/sysrq

Re: [PATCH] mdadm 2.5 (Was: ANNOUNCE: mdadm 2.5 - A tool for managing Soft RAID under Linux)

2006-05-28 Thread Neil Brown
On Sunday May 28, [EMAIL PROTECTED] wrote: On Fri, May 26, 2006 at 04:33:08PM +1000, Neil Brown wrote: I am pleased to announce the availability of mdadm version 2.5 hello, I tried rebuilding mdadm 2.5 on current Mandriva Cooker, which uses gcc-4.1.1, glibc-2.4 and dietlibc 0.29

Re: [patch] install a static build

2006-05-28 Thread Neil Brown
On Sunday May 28, [EMAIL PROTECTED] wrote: Hello Luca, maybe you'd better add an install-static target. You're right, that would be a cleaner approach. I've done so, and while doing so added install-tcc, install-ulibc, install-klibc too. And while I'm busy in the Makefile anyway I've made

Re: problems with raid=noautodetect

2006-05-28 Thread Neil Brown
On Friday May 26, [EMAIL PROTECTED] wrote: On Tue, May 23, 2006 at 08:39:26AM +1000, Neil Brown wrote: Presumably you have a 'DEVICE' line in mdadm.conf too? What is it? My first guess is that it isn't listing /dev/sdd? somehow. Neil, I am seeing a lot of people who fall into this same
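For reference, a minimal mdadm.conf that avoids the missing-device trap looks roughly like this (the UUID is a placeholder):

    DEVICE partitions      # consider every partition the kernel knows about
    ARRAY /dev/md0 UUID=c23f1a9e:22dc8b01:9f8a7d61:5a3c0f44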

Re: RAID5 kicks non-fresh drives

2006-05-28 Thread Neil Brown
On Friday May 26, [EMAIL PROTECTED] wrote: I had no idea about this particular configuration requirement. None of just to be clear: it's not a requirement. if you want the very nice auto-assembling behavior, you need to designate the auto-assemblable partitions. but you can assemble

Re: RAID5 kicks non-fresh drives

2006-05-28 Thread Neil Brown
On Friday May 26, [EMAIL PROTECTED] wrote: On Thu, 25 May 2006, Craig Hollabaugh wrote: That did it! I set the partition FS Types from 'Linux' to 'Linux raid autodetect' after my last re-sync completed. Manually stopped and started the array. Things looked good, so I crossed my fingers
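That partition-type change can also be scripted; a hedged example with the old sfdisk (disk and partition number are placeholders):

    sfdisk --change-id /dev/sdl 1 fd    # set partition 1's type to fd, 'Linux raid autodetect'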

Re: mdadm and 2.4 kernel?

2006-05-26 Thread Neil Brown
On Thursday May 25, [EMAIL PROTECTED] wrote: Hi, for various reasons i'll need to run mdadm on a 2.4 kernel. Now I have 2.4.32 kernel. Take a look: [EMAIL PROTECTED]:~# mdadm --create --verbose /dev/md0 --level=1 --bitmap=/root/md0bitmap -n 2 /dev/nda /dev/ndb --force --assume-clean

Re: raid5 hang on get_active_stripe

2006-05-26 Thread Neil Brown
On Friday May 26, [EMAIL PROTECTED] wrote: On Tue, 23 May 2006, Neil Brown wrote: i applied them against 2.6.16.18 and two days later i got my first hang... below is the stripe_cache foo. thanks -dean neemlark:~# cd /sys/block/md4/md/ neemlark:/sys/block/md4/md# cat

Re: RAID5 kicks non-fresh drives

2006-05-25 Thread Neil Brown
On Thursday May 25, [EMAIL PROTECTED] wrote: From dmesg md: Autodetecting RAID arrays. md: autorun ... md: considering sdl1 ... md: adding sdl1 ... md: adding sdi1 ... md: adding sdh1 ... md: adding sdg1 ... md: adding sdf1 ... md: adding sde1 ... md: adding sdd1 ... md:

Re: Max. md array size under 32-bit i368 ...

2006-05-24 Thread Neil Brown
On Wednesday May 24, [EMAIL PROTECTED] wrote: I know this has come up before, but a few quick googles hasn't answered my questions - I'm after the max. array size that can be created under bog-standard 32-bit intel Linux, and any issues re. partitioning. I'm aiming to create a raid-6 over

Re: 4 disks in raid 5: 33MB/s read performance?

2006-05-24 Thread Neil Brown
On Wednesday May 24, [EMAIL PROTECTED] wrote: Mark Hahn wrote: I just dd'ed a 700MB iso to /dev/null, dd returned 33MB/s. Isn't that a little slow? what bs parameter did you give to dd? it should be at least 3*chunk (probably 3*64k if you used defaults.) I would expect
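A hedged example of the suggested measurement (device, block size and count are illustrative; 192k is 3 x the default 64k chunk):

    dd if=/dev/md0 of=/dev/null bs=192k count=5000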

Re: iostat messed up with md on 2.6.16.x

2006-05-24 Thread Neil Brown
On Wednesday May 24, [EMAIL PROTECTED] wrote: Hi, I upgraded my kernel from 2.6.15.6 to 2.6.16.16 and now 'iostat -x 1' permanently shows 100% utilisation on each disk that is a member of an md array. I asked my friend who is using 3 boxes with 2.6.16.2 2.6.16.9 2.6.16.11 and raid1; he's

Re: Does software RAID take advantage of SMP, or 64 bit CPU(s)?

2006-05-23 Thread Neil Brown
On Monday May 22, [EMAIL PROTECTED] wrote: A few simple questions about the 2.6.16+ kernel and software RAID. Does software RAID in the 2.6.16 kernel take advantage of SMP? Not exactly. RAID5/6 tends to use just one cpu for parity calculations, but that frees up other cpus for doing other

Re: improving raid 5 performance

2006-05-23 Thread Neil Brown
On Tuesday May 23, [EMAIL PROTECTED] wrote: Neil hello. 1. I have applied the common path according to http://www.spinics.net/lists/raid/msg11838.html as much as I can. Great. I look forward to seeing the results. It looks ok in terms of throughput. Before I continue to a non-common

Re: raid5 resize in 2.6.17 - how will it be different from raidreconf?

2006-05-22 Thread Neil Brown
On Monday May 22, [EMAIL PROTECTED] wrote: Will it be less risky to grow an array that way? It should be. In particular it will survive an unexpected reboot (as long as you don't lose any drives at the same time), which I don't think raidreconf would. Testing results so far are quite

Re: problems with raid=noautodetect

2006-05-22 Thread Neil Brown
On Monday May 22, [EMAIL PROTECTED] wrote: hi list, I read somewhere that it would be better not to rely on the autodetect-mechanism in the kernel at boot time, but rather to set up /etc/mdadm.conf accordingly and boot with raid=noautodetect. Well, I tried that :) I set up

Re: spin_lock_irq() in handle_stripe()

2006-05-22 Thread Neil Brown
On Monday May 22, [EMAIL PROTECTED] wrote: Good day Neil, all. If I understand right, we disable irqs in handle_stripe() just because of using device_lock, which can be grabbed from interrupt context (_end_io functions). Can we replace it with a new separate spinlock and not block

Re: 4 disks in raid 5: 33MB/s read performance?

2006-05-22 Thread Neil Brown
On Monday May 22, [EMAIL PROTECTED] wrote: I just dd'ed a 700MB iso to /dev/null, dd returned 33MB/s. Isn't that a little slow? System is a sil3114 4 port sata 1 controller with 4 samsung spinpoint 250GB, 8MB cache in raid 5 on a Athlon XP 2000+/512MB. Yes, read on raid5 isn't as fast as

Re: raid5 hang on get_active_stripe

2006-05-22 Thread Neil Brown
On Wednesday May 17, [EMAIL PROTECTED] wrote: On Thu, 11 May 2006, dean gaudet wrote: On Tue, 14 Mar 2006, Neil Brown wrote: On Monday March 13, [EMAIL PROTECTED] wrote: I just experienced some kind of lockup accessing my 8-drive raid5 (2.6.16-rc4-mm2). The system has been up

Re: raid 5 read performance

2006-05-21 Thread Neil Brown
On Sunday May 21, [EMAIL PROTECTED] wrote: Question: What is the cost of not walking through the raid5 code in the case of READ? If I add error handling code, will it suffice? Please read http://www.spinics.net/lists/raid/msg11838.html and ask if you have further

Re: raid5 resize in 2.6.17 - how will it be different from raidreconf?

2006-05-21 Thread Neil Brown
On Monday May 22, [EMAIL PROTECTED] wrote: How will the raid5 resize in 2.6.17 be different from raidreconf? It is done (mostly) in the kernel while the array is active, rather than completely in user-space while the array is off-line. Will it be less risky to grow an array that way? It
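The in-kernel reshape is driven from mdadm; a minimal sketch (device names are placeholders):

    mdadm /dev/md0 --add /dev/sde1           # add the new disk as a spare
    mdadm --grow /dev/md0 --raid-devices=5   # restripe onto it while the array stays online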

Re: mdadm: bitmap size

2006-05-21 Thread Neil Brown
(Please don't reply off-list. If the conversation starts on the list, please leave it there unless there is a VERY GOOD reason). On Monday May 22, [EMAIL PROTECTED] wrote: On 5/19/06, Neil Brown [EMAIL PROTECTED] wrote: On Friday May 19, [EMAIL PROTECTED] wrote: As i can see the bitmap

Re: recovery speed on many-disk RAID 1

2006-05-20 Thread Neil Brown
On Saturday May 20, jeff@jab.org wrote: interrupted by seeks from read requests on the RAID. But that's not really necessary; imagine if it instead went something like: sdb1 -> sdg1 # High bandwidth copy operation limited by drive speed sd[cde]1 # These guys handle read requests

Re: Raid5 resize testing opportunity

2006-05-18 Thread Neil Brown
On Thursday May 18, [EMAIL PROTECTED] wrote: Hi Neil, The raid5 reshape seems to have gone smoothly (nice job!), though it took 11 hours! Are there any pieces of info you would like about the array? Excellent! No, no other information would be useful. This is the first real-life example

Re: [RFC][PATCH] MD RAID Acceleration: Move stripe operations outside the spin lock

2006-05-18 Thread Neil Brown
On Tuesday May 16, [EMAIL PROTECTED] wrote: This is the second revision of the effort to enable offload of MD's xor and copy operations to dedicated hardware resources. Please comment on the approach of this patch and whether it will be suitable to expand this to the other areas in

Re: raid5 hang on get_active_stripe

2006-05-18 Thread Neil Brown
On Wednesday May 17, [EMAIL PROTECTED] wrote: let me know if you want the task dump output from this one too. No thanks - I doubt it will contain anything helpful. I'll try to put some serious time into this next week - as soon as I get mdadm 2.5 out. NeilBrown - To unsubscribe from

Re: Raid5 resize testing opportunity

2006-05-17 Thread Neil Brown
On Wednesday May 17, [EMAIL PROTECTED] wrote: Hi all, For Neil's benefit (:-) I'm about to test the raid5 resize code by trying to grow our 2TB raid5 from 8 to 10 devices. Currently, I'm running a 2.6.16-rc4-mm2 kernel. Is this current enough to support the resize? (I suspect not.) If I

Re: softraid and multiple distros

2006-05-15 Thread Neil Brown
On Monday May 15, [EMAIL PROTECTED] wrote: I always use entire disks if I want the entire disks raided (sounds obvious, doesn't it...) I only use partitions when I want to vary the raid layout for different parts of the disk (e.g. mirrored root, mirrored swap, raid6 for the rest). But

Re: [PATCH 008 of 8] md/bitmap: Change md/bitmap file handling to use bmap to file blocks.

2006-05-15 Thread Neil Brown
On Monday May 15, [EMAIL PROTECTED] wrote: Ho hum, I give up. Thank you :-) I found our debate very valuable - it helped me clarify my understanding of some areas of linux filesystem semantics (and as I am trying to write a filesystem in my 'spare time', that will turn out to be very useful).

Re: raid0 over 2 h/w raid5's OOPSing at mkfs

2006-05-15 Thread Neil Brown
On Monday May 15, [EMAIL PROTECTED] wrote: I've got a x86_64 system with 2 3ware 9550SX-12s, each set up as a raid5 w/ a hot spare. Over that, I do a software raid0 stripe via: mdadm -C /dev/md0 -c 512 -l 0 -n 2 /dev/sd[bc]1 Whenever I try to format md0 (I've tried both mke2fs and

Re: [PATCH 001 of 3] md: Change md/bitmap file handling to use bmap to file blocks-fix

2006-05-15 Thread Neil Brown
On Monday May 15, [EMAIL PROTECTED] wrote: NeilBrown [EMAIL PROTECTED] wrote: + do_sync_file_range(file, 0, LLONG_MAX, + SYNC_FILE_RANGE_WRITE | + SYNC_FILE_RANGE_WAIT_AFTER); That needs a SYNC_FILE_RANGE_WAIT_BEFORE

Re: recovery from mkswap on mounted raid1 ext3 filesystem?

2006-05-15 Thread Neil Brown
On Monday May 15, [EMAIL PROTECTED] wrote: I accidentally ran mkswap on an md raid1 device which had a mounted ext3 filesystem on it. I also did a swapon, but I don't think anything was written to swap before I noticed the mistake. How much of the partition is toast, and is it something

Re: [PATCH 008 of 8] md/bitmap: Change md/bitmap file handling to use bmap to file blocks.

2006-05-14 Thread Neil Brown
On Saturday May 13, [EMAIL PROTECTED] wrote: Paul Clements [EMAIL PROTECTED] wrote: Andrew Morton wrote: The loss of pagecache coherency seems sad. I assume there's never a requirement for userspace to read this file. Actually, there is. mdadm reads the bitmap file, so that

Re: softraid and multiple distros

2006-05-14 Thread Neil Brown
On Sunday May 14, [EMAIL PROTECTED] wrote: On Sunday, 14 May 2006 at 16:50, you wrote: What do I need to do when I want to install a different distro on the machine with a raid5 array? Which files do I need? /etc/mdadm.conf? /etc/raidtab? Both? MD doesn't need any files to
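A hedged sketch of how a freshly installed distro can pick the arrays up again from the on-disk superblocks alone:

    mdadm --examine --scan >> /etc/mdadm.conf    # rebuild the config from the superblocks
    mdadm --assemble --scan                      # assemble everything that config now lists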

Re: [PATCH 008 of 8] md/bitmap: Change md/bitmap file handling to use bmap to file blocks.

2006-05-14 Thread Neil Brown
(replying to bits of several emails) On Friday May 12, [EMAIL PROTECTED] wrote: Neil Brown [EMAIL PROTECTED] wrote: However some IO requests cannot complete until the filesystem I/O completes, so we need to be sure that the filesystem I/O won't block waiting for memory, or fail

Re: [PATCH 002 of 8] md/bitmap: Remove bitmap writeback daemon.

2006-05-12 Thread Neil Brown
On Friday May 12, [EMAIL PROTECTED] wrote: NeilBrown [EMAIL PROTECTED] wrote: ./drivers/md/bitmap.c | 115 ++ hmm. I hope we're not doing any of that filesystem I/O within the context of submit_bio() or kblockd or anything like that.

Re: [PATCH 008 of 8] md/bitmap: Change md/bitmap file handling to use bmap to file blocks.

2006-05-12 Thread Neil Brown
On Friday May 12, [EMAIL PROTECTED] wrote: NeilBrown [EMAIL PROTECTED] wrote: If md is asked to store a bitmap in a file, it tries to hold onto the page cache pages for that file, manipulate them directly, and call a cocktail of operations to write the file out. I don't believe this is

Re: RAID5 - 4 disk reboot trouble.

2006-05-11 Thread Neil Brown
On Thursday May 11, [EMAIL PROTECTED] wrote: Hi, I'm running a raid5 system, and when I reboot my raid seems to be failing. (One disk is set to spare and the other disk seems to be okay in the details page, but we get an INPUT/OUTPUT error when trying to mount it.) We cannot seem to find the

Re: hardware raid 5 and software raid 0 stripe broke.

2006-05-11 Thread Neil Brown
On Thursday May 11, [EMAIL PROTECTED] wrote: We have a Linux box running Red Hat 7.2. We have two hardware controllers in it with about 500 gigs each. They're raid 5. We were using a software raid to combine them all together. 1 hard drive went down so we replaced it and now the

Re: [PATCH 009 of 11] md: Support stripe/offset mode in raid10

2006-05-08 Thread Neil Brown
On Wednesday May 3, [EMAIL PROTECTED] wrote: Neil Brown wrote: On Tuesday May 2, [EMAIL PROTECTED] wrote: NeilBrown wrote: The industry standard DDF format allows for a stripe/offset layout where data is duplicated on different stripes. e.g. A B C D D A B C

Re: strange RAID5 problem

2006-05-08 Thread Neil Brown
On Monday May 8, [EMAIL PROTECTED] wrote: Good evening. I am having a bit of a problem with a largish RAID5 set. Now it is looking more and more like I am about to lose all the data on it, so I am asking (begging?) to see if anyone can help me sort this out. Very thorough description, but

Re: Two-disk RAID5?

2006-05-05 Thread Neil Brown
On Friday May 5, [EMAIL PROTECTED] wrote: Sorry, I couldn't find a diplomatic way to say you're completely wrong. We don't necessarily expect a diplomatic way, but a clear and intelligent one would be helpful. In two-disk RAID5 which is it? 1) The 'parity bit' is the same as the

Re: [PATCH 009 of 11] md: Support stripe/offset mode in raid10

2006-05-02 Thread Neil Brown
On Tuesday May 2, [EMAIL PROTECTED] wrote: NeilBrown wrote: The industry standard DDF format allows for a stripe/offset layout where data is duplicated on different stripes. e.g. A B C D D A B C E F G H H E F G (columns are

Re: [PATCH 004 of 11] md: Increase the delay before marking metadata clean, and make it configurable.

2006-05-01 Thread Neil Brown
->safemode_delay == 0) + mddev->safemode_delay = 1; + } + return len; And most of that goes away. Maybe it could go in a library :-? NeilBrown From: Linus Torvalds [EMAIL PROTECTED] To: Neil Brown [EMAIL PROTECTED] cc

Re: try to write back redundant data before failing disk in raid5 setup

2006-04-30 Thread Neil Brown
On Monday May 1, [EMAIL PROTECTED] wrote: Hello, Suppose a read action on a disk which is a member of a raid5 (or raid1 or any other raid where there's data redundancy) fails. What happens next is that the entire disk is marked as failed and a raid5 rebuild is initiated. However, that

Re: raid5 resizing

2006-04-30 Thread Neil Brown
On Monday May 1, [EMAIL PROTECTED] wrote: Hey folks. There's no point in using LVM on a raid5 setup if all you intend to do in the future is resize the filesystem on it, is there? The new raid5 resizing code takes care of providing the extra space, and then, as long as the (say) ext3 filesystem

Re: [PATCH 003 of 5] md: Change ENOTSUPP to EOPNOTSUPP

2006-04-29 Thread Neil Brown
On Friday April 28, [EMAIL PROTECTED] wrote: NeilBrown wrote: Change ENOTSUPP to EOPNOTSUPP Because that is what you get if a BIO_RW_BARRIER isn't supported ! Dumb question, hope someone can answer it :). Does this mean that any version of MD up till now won't know that SATA disks does

RE: Two-disk RAID5?

2006-04-26 Thread Neil Brown
On Wednesday April 26, [EMAIL PROTECTED] wrote: I suspect I should have just kept out of this, and waited for someone like Neil to answer authoritatively. So...Neil, what's the right answer to Tuomas's 2-disk RAID5 question? :) .. and a deep resounding voice from on-high spoke and in
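The arithmetic behind the usual answer, for reference: raid5 parity is P = D0 xor D1 xor ... xor D(n-1), and with two disks there is only one data block per stripe, so P = D0. Each stripe therefore holds a block and an exact copy of it, which is block-for-block the same redundancy as RAID1; only the alternating placement of the 'parity' copy differs.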

Re: Trying to start dirty, degraded RAID6 array

2006-04-26 Thread Neil Brown
start dirty degraded array for md0 The '-f' is meant to make this work. However it seems there is a bug. Could you please test this patch? It isn't exactly the right fix, but it definitely won't hurt. Thanks, NeilBrown Signed-off-by: Neil Brown [EMAIL PROTECTED] ### Diffstat output ./super0

Re: linear writes to raid5

2006-04-26 Thread Neil Brown
On Thursday April 20, [EMAIL PROTECTED] wrote: Neil Brown wrote: What is the rationale for your position? My rationale was that if the md layer receives *write* requests not smaller than a full stripe size, it is able to omit reading data to update, and can just calculate new parity from
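A quick worked figure (illustrative, not from the thread): with 64k chunks and a 5-disk raid5 there are 4 data chunks per stripe, so writes issued in aligned 256k units let md compute the new parity purely from the incoming data with no pre-reads, while anything smaller falls back to the read-modify-write path.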

Re: Trying to start dirty, degraded RAID6 array

2006-04-26 Thread Neil Brown
On Thursday April 27, [EMAIL PROTECTED] wrote: Neil Brown wrote: The '-f' is meant to make this work. However it seems there is a bug. Could you please test this patch? It isn't exactly the right fix, but it definitely won't hurt. Thanks, Neil, I'll give this a go when I get home

Re: [patch 1/2] raid6_end_write_request() spinlock fix

2006-04-24 Thread Neil Brown
On Tuesday April 25, [EMAIL PROTECTED] wrote: Hello, Reduce the raid6_end_write_request() spinlock window. Andrew: please don't include these in -mm. This one and the corresponding raid5 one are wrong, and I'm not sure yet about the unplug_device changes. In this case, the call to md_error, which in

Re: to be or not to be...

2006-04-23 Thread Neil Brown
On Sunday April 23, [EMAIL PROTECTED] wrote: Hi all, to make a long story very very short: a) I create /dev/md1, kernel latest rc-2-git4 and mdadm-2.4.1.tgz, with this command: /root/mdadm -Cv /dev/.static/dev/.static/dev/.static/dev/md1 \
