Re: very degraded RAID5, or increasing capacity by adding discs

2007-10-09 Thread Mr. James W. Laferriere
Hello Neil , On Tue, 9 Oct 2007, Neil Brown wrote: On Tuesday October 9, [EMAIL PROTECTED] wrote: Problems at step 4.: 'man mdadm' doesn't tell if it's possible to grow an array to a degraded array (non-existent disc). Is it possible? Why not experiment with loop devices on files
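Neil's loop-device suggestion can be rehearsed without touching real discs. A minimal sketch (file and device names are hypothetical; the commands are only printed, so nothing is modified until you pipe them to a root shell):

```shell
# Rehearse a risky mdadm operation on loop devices backed by small files
# instead of real discs.  Commands are printed, not executed -- review
# them, then pipe to "sudo sh" to actually run.
emit_loop_rehearsal() {
    local n=$1                      # number of member devices
    local i
    for i in $(seq 0 $((n - 1))); do
        echo "dd if=/dev/zero of=/tmp/md-test-$i.img bs=1M count=64"
        echo "losetup /dev/loop$i /tmp/md-test-$i.img"
    done
    # Create a degraded RAID5: the last member is the literal word "missing".
    echo "mdadm --create /dev/md9 --level=5 --raid-devices=$n" \
         "$(for i in $(seq 0 $((n - 2))); do printf '/dev/loop%s ' $i; done)missing"
}
emit_loop_rehearsal 3
```

From there one can try the grow-while-degraded question from the post on the throwaway array and simply detach the loop devices afterwards.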

Optimization report for Justin .

2007-10-01 Thread Mr. James W. Laferriere
Hello Justin , Three separate single runs of bonnie(*) . Please note , the linux-2.6.23-rc6 , Concerns your email of this weekend about Subject: Bonnie++ with 1024k stripe SW/RAID5 causes kernel to goto D-state . No lockups or hangs were noticed .

Re: Optimization report for Justin .

2007-10-01 Thread Mr. James W. Laferriere
hoping that Mr. Dan Williams' (et al.?) patches will bring this up even more . Thank you for posting the optimizations . Twyl , JimL On Mon, 1 Oct 2007, Mr. James W. Laferriere wrote: Hello Justin , Three separate single runs of bonnie(*) . Please

Without tweaking , (was:Re: mkfs options for a 16x hw raid5 and xfs ...)

2007-09-26 Thread Mr. James W. Laferriere
Hello Justin & all , --Justin Piszcz Wrote: -- Date: Wed, 26 Sep 2007 12:24:20 -0400 (EDT) From: Justin Piszcz [EMAIL PROTECTED] Subject: Re: mkfs options for a 16x hw raid5 and xfs (mostly large files) I have a question, when I use multiple writer threads (2 or 3) I see

Speaking of network disks (was: Re: syncing remote homes.)

2007-09-22 Thread Mr. James W. Laferriere
Hello Bill & all , Bill Davidsen [EMAIL PROTECTED] Sat, 22 Sep 2007 09:41:40 -0400 , wrote: My only advice is to try and quantify the data volume and look at nbd vs. iSCSI to provide the mirror if you go that way. You mentioned nbd as a transport for disk to remote disk .

Re: md raid acceleration and the async_tx api

2007-09-13 Thread Mr. James W. Laferriere
Hello Dan , On Thu, 13 Sep 2007, Dan Williams wrote: On 9/13/07, Yuri Tikhonov [EMAIL PROTECTED] wrote: Hi Dan, On Friday 07 September 2007 20:02, you wrote: You need to fetch from the 'md-for-linus' tree. But I have attached them as well. git fetch

Re: raid5:md3: kernel BUG , followed by , Silent halt .

2007-09-05 Thread Mr. James W. Laferriere
Hello Dan , On Mon, 27 Aug 2007, Dan Williams wrote: On 8/25/07, Mr. James W. Laferriere [EMAIL PROTECTED] wrote: On Mon, 20 Aug 2007, Dan Williams wrote: On 8/18/07, Mr. James W. Laferriere [EMAIL PROTECTED] wrote: Hello All , Here we go again . Again attempting to do

Re: Patch for boot-time assembly of v1.x-metadata-based soft (MD) arrays

2007-08-26 Thread Mr. James W. Laferriere
On Sun, 26 Aug 2007, Justin Piszcz wrote: On Sun, 26 Aug 2007, Abe Skolnik wrote: Dear Mr./Dr./Prof. Brown et al, I recently had the unpleasant experience of creating an MD array for the purpose of booting off it and then not being able to do so. Since I had already made changes to the

Re: raid5:md3: kernel BUG , followed by , Silent halt .

2007-08-25 Thread Mr. James W. Laferriere
Hello Dan , On Mon, 20 Aug 2007, Dan Williams wrote: On 8/18/07, Mr. James W. Laferriere [EMAIL PROTECTED] wrote: Hello All , Here we go again . Again attempting to do bonnie++ testing on a small array . Kernel 2.6.22.1 Patches involved , IOP1

raid5:md3: kernel BUG , followed by , Silent halt .

2007-08-18 Thread Mr. James W. Laferriere
Hello All , Here we go again . Again attempting to do bonnie++ testing on a small array . Kernel 2.6.22.1 Patches involved , IOP1 , 2.6.22.1-iop1 for improved sequential write performance (stripe-queue) , Dan Williams [EMAIL PROTECTED] [SCSI] Addition to pci_ids.h for

Re: [RFT] 2.6.22.1-iop1 for improved sequential write performance (stripe-queue)

2007-08-04 Thread Mr. James W. Laferriere
Hello Dan , On Thu, 19 Jul 2007, Dan Williams wrote: Per Bill Davidsen's request I have made available a 2.6.22.1 based kernel with the current raid5 performance changes I have been working on: 1/ Offload engine acceleration (recently merged for the 2.6.23 development cycle) 2/

Re: raid5:md3: read error corrected , followed by , Machine Check

2007-07-23 Thread Mr. James W. Laferriere
Hello Bill , On Mon, 23 Jul 2007, Bill Davidsen wrote: Mr. James W. Laferriere wrote: Hello Andrew , On Tue, 17 Jul 2007, Andrew Burgess wrote: The 'MCE's have been ongoing for some time . I have replaced every item in the system except the chassis scsi backplane power

Re: raid5:md3: read error corrected , followed by , Machine Check

2007-07-21 Thread Mr. James W. Laferriere
Hello Andrew , On Tue, 17 Jul 2007, Andrew Burgess wrote: The 'MCE's have been ongoing for some time . I have replaced every item in the system except the chassis scsi backplane power supply(750Watts) . Everything . MB,cpu,memory,scsi controllers, ... These

Re: raid5:md3: read error corrected , followed by , Machine Check Exception: .

2007-07-14 Thread Mr. James W. Laferriere
Hello Alan (& Justin) , On Sun, 15 Jul 2007, Alan Cox wrote: On Sat, 14 Jul 2007 17:08:27 -0700 (PDT) Mr. James W. Laferriere [EMAIL PROTECTED] wrote: Hello All , I was under the impression that a 'machine check' would be caused by some near to the CPU hardware failure

Re: Fastest Chunk Size w/XFS For MD Software RAID = 1024k

2007-07-02 Thread Mr. James W. Laferriere
Hello Justin (& all) , On Thu, 28 Jun 2007, Justin Piszcz wrote: On Thu, 28 Jun 2007, Peter Rabbitson wrote: Justin Piszcz wrote: On Thu, 28 Jun 2007, Peter Rabbitson wrote: Interesting, I came up with the same results (1M chunk being superior) with a completely different raid set

Re: [md-accel PATCH 00/19] md raid acceleration and the async_tx api

2007-06-26 Thread Mr. James W. Laferriere
Hello Dan , On Tue, 26 Jun 2007, Dan Williams wrote: Greetings, Per Andrew's suggestion this is the md raid5 acceleration patch set updated with more thorough changelogs to lower the barrier to entry for reviewers. To get started with the code I would suggest the following order:

Re: major performance drop on raid5 due to context switches caused by small max_hw_sectors [partially resolved]

2007-04-22 Thread Mr. James W. Laferriere
Hello Justin , On Sun, 22 Apr 2007, Justin Piszcz wrote: On Sun, 22 Apr 2007, Pallai Roland wrote: On Sunday 22 April 2007 16:48:11 Justin Piszcz wrote: Have you also optimized your stripe cache for writes? Not yet. Is it worth it? -- d Yes, it is-- well, if write speed is important

mdadm: RUN_ARRAY failed: Cannot allocate memory

2007-03-24 Thread Mr. James W. Laferriere
Hello Neil , I found the problem that caused the 'cannot allocate memory' , DON'T use '--bitmap=' . But that said , H , Shouldn't mdadm just stop & say ... 'md: bitmaps not supported for this level.' Like it puts out into dmesg . Also think this message

Another report of a raid6 array being maintained by _raid5 in ps .

2007-03-21 Thread Mr. James W. Laferriere
Hello Neil , Someone else reported this before . But I'd thought it was under a older kernel than 2.6.21-rc4 . Hth , JimL root 2936 0.0 0.0 2948 1760 tts/0Ss 04:30 0:00 -bash root 2965 0.3 0.0 0 0 ?S 04:34 0:00 [md3_raid5] root 2977 0.0

Re: raid6 array , part id 'fd' not assembling at boot .

2007-03-18 Thread Mr. James W. Laferriere
Hello Neil & Bill , On Sun, 18 Mar 2007, Bill Davidsen wrote: Neil Brown wrote: On Saturday March 17, [EMAIL PROTECTED] wrote: Neil Brown wrote: In-kernel auto-assembly using partition type 0xFD only works for metadata=0.90. This is deliberate. Don't use 0xFD partitions. Use mdadm

raid6 array , part id 'fd' not assembling at boot .

2007-03-16 Thread Mr. James W. Laferriere
Hello All , I am having a dickens of a time with preparing this system to replace my present one . I created a raid6 array over six 147GB SCSI drives . steps I followed were . fdisk /dev/sd[c-h] ( one at a time of course ) created a partition starting at cyl 2
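The steps described above can be sketched as follows (device names as in the post; the commands are printed rather than executed, so they can be reviewed before being run as root):

```shell
# Sketch of the raid6 creation steps from the post: one type-fd partition
# per drive, then mdadm --create over the six partitions.
drives="/dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh"
parts=""
for d in $drives; do
    # one partition per drive, starting at cylinder 2, type fd (Linux raid autodetect)
    echo "fdisk $d    # n, start at cyl 2, t -> fd, w"
    parts="$parts ${d}1"
done
echo "mdadm --create /dev/md0 --level=6 --raid-devices=6$parts"
```

As the follow-up in this thread explains, type-0xFD in-kernel autodetect only works with 0.90 metadata, so with newer metadata the array has to be assembled by mdadm (initramfs or mdadm.conf) rather than by the kernel at boot.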

Re: raid5 software vs hardware: parity calculations?

2007-01-15 Thread Mr. James W. Laferriere
Hello Dean , On Mon, 15 Jan 2007, dean gaudet wrote: ...snip... it should just be: echo check /sys/block/mdX/md/sync_action if you don't have a /sys/block/mdX/md/sync_action file then your kernel is too old... or you don't have /sys mounted... (or you didn't replace X with the raid
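Dean's one-liner, wrapped with the failure cases he mentions (kernel too old, /sys not mounted, wrong X). A small sketch; the array name is an example, and writing the file needs root:

```shell
# Trigger a background parity check on an md array via sysfs, as
# described above.  Falls back to a diagnostic when the file is missing
# or not writable.
check_array() {
    local f="/sys/block/$1/md/sync_action"
    if [ -w "$f" ]; then
        echo check > "$f" && echo "check started on $1"
    else
        echo "cannot write $f: kernel too old, /sys not mounted, or not root"
    fi
}
check_array md0
```

Progress of the check can then be watched in /proc/mdstat.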

Re: RAID5 fill up?

2006-09-08 Thread Mr. James W. Laferriere
Hello Neil & Luca , On Fri, 8 Sep 2006, Luca Berra wrote: On Fri, Sep 08, 2006 at 02:26:31PM +0200, Lars Schimmer wrote: -BEGIN PGP SIGNED MESSAGE- Hash: SHA1 Michael Tokarev wrote: Lars Schimmer wrote: Hi! I've got a software RAID5 with 6 250GB HDs. Now I changed one disk

Re: [PATCH] md: new bitmap sysfs interface

2006-08-03 Thread Mr. James W. Laferriere
Hello All , On Thu, 3 Aug 2006, David Greaves wrote: Neil Brown wrote: write-bits-here-to-dirty-them-in-the-bitmap is probably (no, definitely) too verbose. Any better suggestions? It's not actually a bitmap is it? It takes a number or range and *operates* on a bitmap. so:

mdadm 2.5.2 - Static built , Interesting warnings when

2006-06-27 Thread Mr. James W. Laferriere
Hello All , What change in Glibc makes this necessary ? Is there a method available to include the getpwnam & getgrnam structures so that a full static build will work . Tia , JimL gcc -Wall -Werror -Wstrict-prototypes -ggdb -DSendmail=\/usr/sbin/sendmail -t\

Re: IBM xSeries stop responding during RAID1 reconstruction

2006-06-20 Thread Mr. James W. Laferriere
Hello Gabor , On Tue, 20 Jun 2006, Gabor Gombas wrote: On Tue, Jun 20, 2006 at 03:08:59PM +0200, Niccolo Rigacci wrote: Do you know if it is possible to switch the scheduler at runtime? echo cfq /sys/block/disk/queue/scheduler At least one can do a ls of the /sys/block area
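Gabor's runtime switch, sketched with the error cases guarded (device name is an example; writing the file needs root, and reading it back shows the active scheduler in brackets):

```shell
# Switch a block device's I/O scheduler at runtime via sysfs, as
# described above, reporting why it failed otherwise.
set_scheduler() {
    local f="/sys/block/$1/queue/scheduler"
    if [ -w "$f" ]; then
        if echo "$2" > "$f" 2>/dev/null; then
            cat "$f"    # active scheduler shown in brackets
        else
            echo "scheduler $2 not accepted by $f"
        fi
    else
        echo "cannot write $f (not root, or no such device)"
    fi
}
set_scheduler sda cfq
```

An `ls` of /sys/block, as suggested, shows which devices expose a queue/scheduler file at all.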

Re: raid1 with 1.2 superblock never marked healthy?

2006-02-20 Thread Mr. James W. Laferriere
Hello Neil & All , On Mon, 20 Feb 2006, Janos Farkas wrote: On 2006-02-20 at 09:30:22, Neil Brown wrote: If you use an 'internal' bitmap (which is mirrored across all drives much like the superblock) then you don't need to specify a file name. However if you want the bitmap on a

Lilo append= , A suggestion .

2006-02-13 Thread Mr. James W. Laferriere
Hello Neil & All , I'll bet I am going to get harassed over this , but ... The present form (iirc) of the lilo append statement is append=md=d0,/dev/sda,/dev/sdb I am wondering how difficult the below would be to code ? This allows a (relatively)

Re: Kernels and MD versions (was: md: Introduction - raid5 reshape mark-2)

2006-02-09 Thread Mr. James W. Laferriere
Hello Patrik , On Wed, 8 Feb 2006, Patrik Jonsson wrote: Neil Brown wrote: I always make them against the latest -mm kernel, so that would be a good place to start. However things change quickly and I can't promise it will apply against whatever is the 'latest' today. If you would

Re: RAID 16?

2006-02-02 Thread Mr. James W. Laferriere
Hello David , On Wed, 1 Feb 2006, David Liontooth wrote: We're wondering if it's possible to run the following -- * define 4 pairs of RAID 1 with an 8-port 3ware 9500S card * the OS will see these are four normal drives * use md to configure them into a RAID 6 array Would this
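The layered setup David asks about can be sketched entirely in md (commands printed, not run; device names are hypothetical, and the 3ware card would supply the mirrors in his variant). Each RAID1 pair becomes one member of the RAID6:

```shell
# Build four RAID1 pairs, then a RAID6 across the resulting md devices --
# the "RAID 16" layering asked about above.  Commands are only printed.
pairs=""
i=0
set -- sda sdb sdc sdd sde sdf sdg sdh
while [ $# -ge 2 ]; do
    echo "mdadm --create /dev/md$i --level=1 --raid-devices=2 /dev/$1 /dev/$2"
    pairs="$pairs /dev/md$i"
    i=$((i + 1))
    shift 2
done
echo "mdadm --create /dev/md$i --level=6 --raid-devices=$i$pairs"
```

RAID6 requires at least four members, so four pairs is the minimum for this layout; the usable capacity is that of two pairs.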

Re: [PATCH 000 of 3] md: Introduction

2006-02-02 Thread Mr. James W. Laferriere
Hello Neil , On Thu, 2 Feb 2006, NeilBrown wrote: Three patches for 2.6.latest. All should go in 2.6.16. One won't apply against -rc1-git5 as it fixes a bug in a patch in -mm that hasn't quite got to -linus yet. They are mostly little fixes. I've been doing some more testing,

Re: [PATCH 000 of 5] md: Introduction

2006-01-22 Thread Mr. James W. Laferriere
Hello Neil , On Mon, 23 Jan 2006, Neil Brown wrote: On Monday January 23, [EMAIL PROTECTED] wrote: NeilBrown wrote: In line with the principle of release early, following are 5 patches against md in 2.6.latest which implement reshaping of a raid5 array. By this I mean adding 1 or more

Re: Drive fails raid6 array is not self rebuild .

2005-09-09 Thread Mr. James W. Laferriere
Hello David , Thank you for the idea . But ... [EMAIL PROTECTED]:~ # mdadm --readonly /dev/md_d0 mdadm: failed to set readonly for /dev/md_d0: Device or resource busy I think I'll try Neil's upgrade to 2.6.13 & his patch to mdadm . I'll report back if that cures my

Re: Drive fails raid6 array is not self rebuild .

2005-09-09 Thread Mr. James W. Laferriere
Hello Neil , I patched & all were successful . But after a make clean ; make I get ... Tia , JimL ..snip... gcc -Wall -Werror -Wstrict-prototypes -DCONFFILE=\/etc/mdadm.conf\ -ggdb -DSendmail=\/usr/sbin/sendmail -t\ -c -o Assemble.o Assemble.c Assemble.c: In

OT: lilo overwriting partition info ?

2005-09-09 Thread Mr. James W. Laferriere
Hello All , Off topic I know ... I have a question not related to MD . Have you heard of complaints about lilo overwriting partition info on disks after the first 2 if those are in a raid1 ? Or any mentions of lilo writing to all 16 disks causing

Re: Drive fails raid6 array is not self rebuild .

2005-09-09 Thread Mr. James W. Laferriere
Hello Neil , On Sat, 10 Sep 2005, Neil Brown wrote: On Friday September 9, [EMAIL PROTECTED] wrote: Hello Neil , I patched & all were successful . But after a make clean ; make I get ... Tia , JimL ..snip... gcc -Wall -Werror -Wstrict-prototypes