Re: Mustn't be RAID 1 and 0 read-performance be similar?

2007-08-20 Thread Rustedt, Florian
Ok,

So reading TWO streams serially from each disk would give the speed-up,
wouldn't it?
The driver would just have to cache the four (6, 8, ...) streams until they
are merged?
So the driver would only have to cache two streams per disk and everything
would be perfect...?

Just a proposal for future md-raid driver releases ;))

Florian

 -----Original Message-----
 From: Mario 'BitKoenig' Holbe [mailto:[EMAIL PROTECTED]]
 Sent: Monday, 13 August 2007 16:50
 To: linux-raid@vger.kernel.org
 Subject: Re: Mustn't be RAID 1 and 0 read-performance be similar?
 
 Rustedt, Florian [EMAIL PROTECTED] wrote:
  If the speed on RAID 0 is based on reading out in parallel, then it 
  must be the same on RAID 1, mustn't it?
  On RAID 1, it is possible to read two blocks in parallel to speed up, too.
 
 It's not that simple.
 On RAID0 you can read one single stream of data from all of the disks in
 parallel: you read one stream from each disk, with each stream containing
 completely different data, and merge them together to get the original
 stream. On RAID1 you can only read exactly the same stream from all of
 the disks. Thus, RAID1 cannot provide RAID0's speed-up for a single
 stream.
 However, if you read multiple streams in parallel, RAID1 can do better
 than RAID0 because you can read stream1 from disk1, stream2 from disk2,
 etc.
 Using RAID0, this speed-up can only be achieved for streams <= chunk size.
 
 
 regards
Mario
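
For illustration, a minimal user-space sketch (not the md driver's code) of the
block-to-member mapping behind the argument above: under RAID0 a long sequential
read is spread over every member, while under RAID1 every member holds the full
data, so a single reader can only be pointed at one mirror; only independent
readers can be spread across mirrors. The chunk size, member count and function
names below are made up for the example.

/* raid_map_sketch.c - illustration only, not md's implementation */
#include <stdio.h>

struct mapping { int disk; unsigned long long offset; };

/* RAID0: logical sector -> (member disk, offset on that member) */
static struct mapping raid0_map(unsigned long long lba,
                                unsigned long long chunk_sectors,
                                int ndisks)
{
	unsigned long long chunk = lba / chunk_sectors;
	struct mapping m;

	m.disk   = (int)(chunk % ndisks);
	m.offset = (chunk / ndisks) * chunk_sectors + lba % chunk_sectors;
	return m;
}

/* RAID1: all members are identical, so the best we can do is send
 * different readers to different mirrors */
static int raid1_pick_mirror(int reader_id, int ndisks)
{
	return reader_id % ndisks;
}

int main(void)
{
	unsigned long long lba;

	/* one sequential stream over a 2-disk RAID0 with 128-sector chunks:
	 * consecutive chunks alternate between members, hence the speed-up */
	for (lba = 0; lba < 512; lba += 128) {
		struct mapping m = raid0_map(lba, 128, 2);
		printf("RAID0: lba %4llu -> disk %d, offset %llu\n",
		       lba, m.disk, m.offset);
	}
	/* two independent readers on a 2-disk RAID1 can use both mirrors */
	printf("RAID1: reader 0 -> mirror %d, reader 1 -> mirror %d\n",
	       raid1_pick_mirror(0, 2), raid1_pick_mirror(1, 2));
	return 0;
}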



bug in --manage mode

2007-08-20 Thread exe

Hello, I hit a bug in mdadm.

According to --manage --help this should work, but it doesn't:

-bash-3.2# mdadm --manage -a /dev/md2 /dev/sda3
mdadm: /dev/sda3 does not appear to be an md device

But this works fine:
-bash-3.2# mdadm --manage --add   /dev/md2 /dev/sda3
mdadm: re-added /dev/sda3


The version is:
LFS:[EMAIL PROTECTED]:mdadm --version
mdadm - v2.6.2 - 21st May 2007


--
Kandalincev Alexandre


Re: raid5:md3: kernel BUG , followed by , Silent halt .

2007-08-20 Thread Dan Williams
On 8/18/07, Mr. James W. Laferriere [EMAIL PROTECTED] wrote:
 Hello All ,  Here we go again .  Again attempting to do bonnie++ testing
 on a small array .
 Kernel 2.6.22.1
 Patches involved ,
 IOP1 ,  2.6.22.1-iop1 for improved sequential write performance
 (stripe-queue) ,  Dan Williams [EMAIL PROTECTED]

Hello James,

Thanks for the report.

I tried to reproduce this on my system, no luck.  However, it looks
like there is a potential race between 'handle_queue' and
'add_queue_bio'.  The attached patch moves these critical sections
under spin_lock(&sq->lock), and adds some debugging output if this BUG
triggers.  It also includes a fix for retry_aligned_read which is
unrelated to this debug.
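
For readers unfamiliar with the pattern: the point of taking sq->lock in both
paths is that the bit updates done in add_queue_bio and the checks done in
handle_queue become mutually exclusive. A toy user-space analogue follows
(a pthread mutex standing in for the kernel spinlock; names and structure are
invented for the example, this is not the md code):

/* race_sketch.c - toy analogue only, not kernel code.  A producer marks
 * per-device work bits and a consumer inspects them; taking the same
 * lock in both paths keeps the updates and the check consistent. */
#include <pthread.h>
#include <stdio.h>

#define NDEV 8

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long to_write;		/* stand-in for a to_write bitmap */

/* analogue of add_queue_bio(): record that device dd has a pending write */
static void add_work(int dd)
{
	pthread_mutex_lock(&queue_lock);
	to_write |= 1UL << dd;
	pthread_mutex_unlock(&queue_lock);
}

/* analogue of handle_queue(): read the bits under the same lock, so a
 * half-finished update can never be observed */
static int count_pending(void)
{
	int dd, n = 0;

	pthread_mutex_lock(&queue_lock);
	for (dd = 0; dd < NDEV; dd++)
		if (to_write & (1UL << dd))
			n++;
	pthread_mutex_unlock(&queue_lock);
	return n;
}

int main(void)
{
	add_work(2);
	add_work(5);
	printf("pending writes: %d\n", count_pending());
	return 0;
}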

--
Dan
---
raid5-fix-sq-locking.patch
---
raid5: address potential sq->to_write race

From: Dan Williams [EMAIL PROTECTED]

synchronize reads and writes to the sq->to_write bit

Signed-off-by: Dan Williams [EMAIL PROTECTED]
---

 drivers/md/raid5.c |   12 
 1 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 02e313b..688b8d3 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -2289,10 +2289,14 @@ static int add_queue_bio(struct stripe_queue *sq, struct bio *bi, int dd_idx,
 	sh = sq->sh;
 	if (forwrite) {
 		bip = &sq->dev[dd_idx].towrite;
+		set_bit(dd_idx, sq->to_write);
 		if (*bip == NULL && (!sh || (sh && !sh->dev[dd_idx].written)))
 			firstwrite = 1;
-	} else
+	} else {
 		bip = &sq->dev[dd_idx].toread;
+		set_bit(dd_idx, sq->to_read);
+	}
+
 	while (*bip && (*bip)->bi_sector < bi->bi_sector) {
 		if ((*bip)->bi_sector + ((*bip)->bi_size >> 9) > bi->bi_sector)
 			goto overlap;
@@ -2324,7 +2328,6 @@ static int add_queue_bio(struct stripe_queue *sq, struct bio *bi, int dd_idx,
 		/* check if page is covered */
 		sector_t sector = sq->dev[dd_idx].sector;
 
-		set_bit(dd_idx, sq->to_write);
 		for (bi = sq->dev[dd_idx].towrite;
 		     sector < sq->dev[dd_idx].sector + STRIPE_SECTORS &&
			 bi && bi->bi_sector <= sector;
@@ -2334,8 +2337,7 @@ static int add_queue_bio(struct stripe_queue *sq, struct bio *bi, int dd_idx,
 		}
 		if (sector >= sq->dev[dd_idx].sector + STRIPE_SECTORS)
 			set_bit(dd_idx, sq->overwrite);
-	} else
-		set_bit(dd_idx, sq->to_read);
+	}
 
 	return 1;

@@ -3656,6 +3658,7 @@ static void handle_queue(struct stripe_queue *sq, int disks, int data_disks)
 	struct stripe_head *sh = NULL;
 
 	/* continue to process i/o while the stripe is cached */
+	spin_lock(&sq->lock);
 	if (test_bit(STRIPE_QUEUE_HANDLE, &sq->state)) {
 		if (io_weight(sq->overwrite, disks) == data_disks) {
 			set_bit(STRIPE_QUEUE_IO_HI, &sq->state);
@@ -3678,6 +3681,7 @@ static void handle_queue(struct stripe_queue *sq, int disks, int data_disks)
 		 */
 		BUG_ON(!(sq->sh && sq->sh == sh));
 	}
+	spin_unlock(&sq->lock);
 
 	release_queue(sq);
 	if (sh) {
---
raid5-debug-init_queue-bugs.patch
---
raid5: printk instead of BUG in init_queue

From: Dan Williams [EMAIL PROTECTED]

Signed-off-by: Dan Williams [EMAIL PROTECTED]
---

 drivers/md/raid5.c |   19 +--
 1 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 688b8d3..7164011 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -557,12 +557,19 @@ static void init_queue(struct stripe_queue *sq, sector_t sector,
 		__FUNCTION__, (unsigned long long) sq->sector,
 		(unsigned long long) sector, sq);
 
-	BUG_ON(atomic_read(&sq->count) != 0);
-	BUG_ON(io_weight(sq->to_read, disks));
-	BUG_ON(io_weight(sq->to_write, disks));
-	BUG_ON(io_weight(sq->overwrite, disks));
-	BUG_ON(test_bit(STRIPE_QUEUE_HANDLE, &sq->state));
-	BUG_ON(sq->sh);
+	if ((atomic_read(&sq->count) != 0) || io_weight(sq->to_read, disks) ||
+	    io_weight(sq->to_write, disks) || io_weight(sq->overwrite, disks) ||
+	    test_bit(STRIPE_QUEUE_HANDLE, &sq->state) || sq->sh) {
+		printk(KERN_ERR "%s: sector=%llx count: %d to_read: %lu "
+			"to_write: %lu overwrite: %lu state: %lx "
+			"sq->sh: %p\n", __FUNCTION__,
+			(unsigned long long) sq->sector,
+			atomic_read(&sq->count),
+			io_weight(sq->to_read, disks),
+			io_weight(sq->to_write, disks),
+			io_weight(sq->overwrite, disks),
+			sq->state, sq->sh);
+	}
 
 	sq->state = (1 << STRIPE_QUEUE_HANDLE);
 	sq->sector = sector;
---
raid5-fix-get_active_queue-bug.patch
---
raid5: fix get_active_queue bug in retry_aligned_read

From: Dan Williams [EMAIL PROTECTED]

Check for a potential null return from get_active_queue

Signed-off-by: Dan Williams [EMAIL PROTECTED]
---

 drivers/md/raid5.c |3 ++-
 1 

Linear RAID hot grow

2007-08-20 Thread Dat Chu
I am trying to find the mdadm version that supports Linear RAID hot
grow. Does anyone have a link to point me to? I am currently running
2.6.2.

With warm regards,
Dat Chu


Re: Linear RAID hot grow

2007-08-20 Thread Justin Piszcz



On Mon, 20 Aug 2007, Dat Chu wrote:


I am trying to find the mdadm version that supports Linear RAID hot
grow. Does anyone have a link to point me to? I am currently running
2.6.2.

With warm regards,
Dat Chu



Not sure, it seems like it would be supported by now?

But according to the docs for 2.6.2:

   Grow   Grow  (or shrink) an array, or otherwise reshape it in some way.
  Currently supported growth options including changing the active
  size of component devices in RAID level 1/4/5/6 and changing the
  number of active devices in RAID1/5/6.

--

http://neil.brown.name/blog/SoftRaid

--grow option for linear md arrays

I've just been working on an enhancement for Linux MD linear arrays 
which allows them to be enlarged.


More specifically, a new device (typically a disk drive) can be added to the
end of an active linear array now. The size increases accordingly.


You can find patches for mdadm and 2.6.7-rc3-mm1.

It is still a work in progress, as it isn't documented, some of the code 
doesn't report errors very nicely, and I realised that there is some work 
needed in md.c with respect to handling new superblocks.


I really should do the new-superblock code in mdadm and get it all tidied 
up.



Re: Linear RAID hot grow

2007-08-20 Thread Dat Chu
The links on Neil's website to those patches are no longer there. When
I try to hot grow a linear raid, it says that hot grow for linear raid
is not supported.

Where would I find those patches?

With warm regards,
Dat Chu






 On 8/20/07, Justin Piszcz [EMAIL PROTECTED] wrote:
 
 
  On Mon, 20 Aug 2007, Dat Chu wrote:
 
   I am trying to find the mdadm version that supports Linear RAID hot
   grow. Does anyone have a link to point me to? I am currently running
   2.6.2.
  
   With warm regards,
   Dat Chu
  
 
  Not sure, it seems like it would be supported by now?
 
  But according to the docs for 2.6.2:
 
  Grow   Grow  (or shrink) an array, or otherwise reshape it in some 
  way.
 Currently supported growth options including changing the 
  active
 size of component devices in RAID level 1/4/5/6 and changing 
  the
 number of active devices in RAID1/5/6.
 
  --
 
  http://neil.brown.name/blog/SoftRaid
 
  --grow option for linear md arrays
 
   I've just been working on an enhancement for Linux MD linear arrays
  which allows them to be enlarged.
 
  More specifically, a new device (typically a disk drive) can be added to the
  end of an active linear array now. The size increases accordingly.
 
  You can find patches for mdadm and 2.6.7-rc3-mm1.
 
  It is still a work in progress, as it isn't documented, some of the code
  doesn't report errors very nicely, and I realised that there is some work
  needed in md.c with respect to handling new superblocks.
 
  I really should do the new-superblock code in mdadm and get it all tidied
  up.
 




Re: Linear RAID hot grow

2007-08-20 Thread Neil Brown
On Monday August 20, [EMAIL PROTECTED] wrote:
 The links on Neil's website to those patches are no longer there. When
 I try to hot grow a linear raid, it says that hot grow for linear raid
 is not supported.

What is the exact error message?
What kernel are you using?

hot-grow for linear does work with recent kernels and mdadm - it is
part of my test suite.
  mdadm --grow /dev/md-linear --add /dev/new-device

NeilBrown


Re: Raid5 Reshape gone wrong, please help

2007-08-20 Thread Greg Nicholson
On 8/19/07, Greg Nicholson [EMAIL PROTECTED] wrote:
 On 8/19/07, Greg Nicholson [EMAIL PROTECTED] wrote:
  On 8/19/07, Neil Brown [EMAIL PROTECTED] wrote:
   On Saturday August 18, [EMAIL PROTECTED] wrote:
   
That looks to me like the first 2 gig is completely empty on the
drive.  I really don't think it actually started to do anything.
  
   The backup data is near the end of the device.  If you look at the
   last 2 gig you should see something.
  
 
  I figured something like that after I started thinking about it...
  That device is currently offline while I do some DD's to new devices.
 
   
Do you have further suggestions on where to go now?
  
   Maybe an 'strace' of mdadm -A  might show me something.
  
   If you feel like following the code, Assemble (in Assemble.c) should
   call Grow_restart.
   This should look in /dev/sdb1 (which is already open in 'fdlist') by
   calling 'load_super'.  It should then seek to 8 sectors before the
   superblock (or close to there) and read a secondary superblock which
   describes the backup data.
   If this looks good, it seeks to where the backup data is (which is
   towards the end of the device) and reads that.  It uses this to
   restore the 'critical section', and then updates the superblock on all
   devices.
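   
   For orientation only, a rough user-space sketch of the recovery flow
   described above; the struct layout, field names and magic value are
   invented stand-ins, not mdadm's real on-disk format or API:
   
/* grow_restart_sketch.c - rough illustration of the recovery flow
 * described above; header layout and magic are hypothetical, NOT
 * mdadm's real on-disk format. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>

#define SECTOR 512

/* hypothetical record describing where the backed-up stripes live */
struct backup_hdr {
	uint64_t magic;		/* identifies a valid backup record */
	uint64_t data_offset;	/* sector offset of the backup data */
	uint64_t length;	/* length of the critical section, in sectors */
};

static int restore_critical_section(const char *dev,
				    off_t super_offset_sectors)
{
	struct backup_hdr hdr;
	int fd = open(dev, O_RDONLY);

	if (fd < 0)
		return -1;

	/* the backup metadata sits a few sectors before the superblock */
	if (pread(fd, &hdr, sizeof(hdr),
		  (super_offset_sectors - 8) * SECTOR) != sizeof(hdr))
		goto fail;

	if (hdr.magic != 0x6d645f6261636b55ULL)	/* hypothetical magic */
		goto fail;

	printf("backup of %llu sectors found at sector %llu of %s\n",
	       (unsigned long long)hdr.length,
	       (unsigned long long)hdr.data_offset, dev);
	/* a real implementation would now read hdr.length sectors from
	 * hdr.data_offset (near the end of the device), write them back
	 * over the start of the reshaped region, and update the
	 * superblocks on all members */
	close(fd);
	return 0;
fail:
	close(fd);
	return -1;
}

int main(int argc, char **argv)
{
	if (argc != 3) {
		fprintf(stderr, "usage: %s <device> <super-offset-sectors>\n",
			argv[0]);
		return 1;
	}
	return restore_critical_section(argv[1], atoll(argv[2])) ? 1 : 0;
}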
  
   As you aren't getting the messages 'restoring critical section',
   something is going wrong before there.  It should fail:
 /dev/md0: Failed to restore critical section for reshape, sorry.
   but I can see that there is a problem with the error return from
   'Grow_restart'.  I'll get that fixed.
  
  
   
Oh, and thank you very much for your help.  Most of the data on this
array I can stand to lose... It's not critical, but there are some of
my photographs on this that my backup is out of date on.  I can
destroy it all and start over, but really want to try to recover this
if it's possible.  For that matter, if it didn't actually start
rewriting the stripes, is there any way to push it back down to 4 disks
to recover?
  
   You could always just recreate the array:
  
mdadm -C /dev/md0 -l5 -n4 -c256 --assume-clean /dev/sdf1 /dev/sde1  \
   /dev/sdd1 /dev/sdc1
  
   and make sure the data looks good (which it should).
  
   I'd still like to know what the problem is, though...
  
   Thanks,
   NeilBrown
  
 
  My current plan of attack, which I've been proceeding upon for the
  last 24 hours... I'm DDing the original drives to new devices.  Once I
  have copies of the drives, I'm going to try to recreate the array as a
  4 device array.  Hopefully, at that point, the raid will come up, LVM
  will initialize, and it's time to saturate the GigE offloading
  EVERYTHING.
 
  Assuming the above goes well, which will definitely take some time,
  Then I'll take the original drives, run the strace and try to get some
  additional data for you.  I'd love to know what's up with this as
  well.  If there is additional information I can get you to help, let
  me know.  I've grown several arrays before without any issue, which
  frankly is why I didn't think this would have been an issue; thus,
  my offload of the stuff I actually cared about wasn't up to date.
 
  At the end of the day (or more likely, the week) I'll completely destroy the
  existing raid, and rebuild the entire thing to make sure I'm starting
  from a good base.  At least at that point, I'll have additional
  drives.  Given that I have dual File-servers that will have drives
  added, it seems likely that I'll be testing the code again soon.  Big
  difference being that this time, I won't make the assumption that
  everything will be perfect. :)
 
  Thanks again for your help, I'll post on my results as well as try to
  get you that strace.  It's been quite a while since I dove into kernel
  internals, or C for that matter, so it's unlikely I'm going to find
  anything myself.  But I'll definitely send results back if I can.
 


 Ok, as an update.  ORDER MATTERS.  :)

 The above command didn't work.  It built, but LVM didn't recognize it.
 So, after despair, I thought, that's not the way I built it.  So, I
 redid it in alphabetical order... and it worked.

 I'm in the process of tarring and pulling everything off.

 Once that is done, I'll put the original drives back in, and try to
 understand what went wrong with the original grow/build.


And as a final update... I pulled all the data from the 4-disk array I
built from the copied disks.  Everything looks to be intact.  That is
definitely a better feeling for me.

I then put the original disks back in, and compiled mdadm 2.6.3 to see if
it did any better on the assemble.  It appears that your update about the
missing critical section was successful, as mdadm cheerfully informed
me I was out of luck. :)

I'm attaching the strace, even though I don't think it will be of much
help... It appears that you solved the critical section failure; at
least it's verbose about telling you.

I 

Re: Linear RAID hot grow

2007-08-20 Thread Dat Chu
I am currently running 2.6.22.1 kernel with mdadm version 2.6.2.

When I try to run
mdadm --grow /dev/md2 --add /dev/sdd1
mdadm: Cannot add new disk to this array

mdadm -D /dev/md2
/dev/md2:
        Version : 00.90.03
  Creation Time : Thu Apr 12 15:10:30 2007
     Raid Level : linear
     Array Size : 4394524224 (4190.94 GiB 4499.99 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Mon Aug 20 21:59:02 2007
          State : active
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

       Rounding : 64K

           UUID : 2d55a746:46480234:2bd67da3:078a1f37
         Events : 0.17

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1

Am I missing something?

Thanks for the quick reply Neil.

With warm regards,
Dat Chu

On 8/20/07, Neil Brown [EMAIL PROTECTED] wrote:
 On Monday August 20, [EMAIL PROTECTED] wrote:
  The links on Neil's website to those patches are no longer there. When
  I try to hot grow a linear raid, it says that hot grow for linear raid
  is not supported.

 What is the exact error message?
 What kernel are you using?

 hot-grow for linear does work with recent kernels and mdadm - it is
 part of my test suite.
   mdadm --grow /dev/md-linear --add /dev/new-device

 NeilBrown



4 Port eSATA RAID5/JBOD PCI-E 8x Controller

2007-08-20 Thread Richard Scobie

This looks like a potentially good, cheap candidate for md use.

Although Linux support is not explicitly mentioned, SiI 3124 is used.

http://www.addonics.com/products/host_controller/ADSA3GPX8-4e.asp

Regards,

Richard



Re: Linear RAID hot grow

2007-08-20 Thread Neil Brown
On Monday August 20, [EMAIL PROTECTED] wrote:
 I am currently running 2.6.22.1 kernel with mdadm version 2.6.2.

That combination should work.

 
 When I try to run
 mdadm --grow /dev/md2 --add /dev/sdd1
 mdadm: Cannot add new disk to this array

Odd.  I tried adding a drive to a three-drive linear array and it
worked, though my drives are somewhat smaller than yours - that
shouldn't matter though.

Do you get any kernel messages at the time when you try to add?
  dmesg | tail -20

NeilBrown