Re: mismatch_cnt questions

2007-03-13 Thread Andre Noll
On 00:21, H. Peter Anvin wrote:
 I have just updated the paper at:
 
 http://www.kernel.org/pub/linux/kernel/people/hpa/raid6.pdf
 
 ... with this information (in slightly different notation and with a bit 
 more detail.)

There's a typo in the new section:

s/By assumption, X_z != D_n/By assumption, X_z != D_z/

Regards
Andre
-- 
The only person who always got his work done by Friday was Robinson Crusoe




Issue with Adaptec AIC-7899P using MD on Dell PE 2650

2007-03-13 Thread Laurent CARON
Hi,

I'm happily using MD on one of our fileservers.

This fileserver is basically a Dell PowerEdge 2650 loaded with 5x300GB HDDs.

A 1TB RAID5 array is defined across partitions on the 5 disks:

md1 : active raid5 sdb3[5] sde3[4] sdd3[3] sdc3[2] sda3[0]
  1170238464 blocks level 5, 64k chunk, algorithm 2 [5/4] [U_UUU]

Today, one of the HDDs failed:
I got a lot of errors like this one in the logs:
raid5:md1: read error not correctable (sector 153334912 on sdb3).

I failed and removed the disk from the array:

mdadm /dev/mdX -f /dev/sdbX
mdadm /dev/mdX -r /dev/sdbX

I then removed it from the SCSI bus:

echo "scsi remove-single-device <host> <channel> <id> <lun>" > /proc/scsi/scsi

Removed the disk from the server (physically).

Inserted a new one

I then rescanned the SCSI bus to detect the new HDD, and the server froze.
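
(For reference, a minimal sketch of the rescan and re-add steps, assuming
the replacement is /dev/sdb at SCSI host 0, channel 0, id 1, lun 0; the
addresses are illustrative, not taken from the logs:)

# make the kernel see the newly inserted drive at that SCSI address
echo "scsi add-single-device 0 0 1 0" > /proc/scsi/scsi

# once the new disk is partitioned, add the member back so md rebuilds
mdadm /dev/md1 -a /dev/sdb3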

This is the second time it happens while changing a hard disk on that
particular brand/model.

Is this a known issue with the SCSI HBA?

Did I do something wrong?

Thanks

Laurent


sw raid0 read bottleneck

2007-03-13 Thread Tomka Gergely
Hi!

I am currently testing 3ware RAID cards. I have 15 disks, with a software
RAID0 on top of them. The write speed seems good (700 MBps), but the read
performance is only 350 MBps. Another problem: when I try to read with two
processes, the _sum_ of the read speeds falls back to 200 MBps. So there is
a bottleneck, or something I need to know about, but I am out of ideas.
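
(To make the test concrete, a minimal sketch of the kind of parallel
sequential-read measurement meant above, using plain dd in place of sdd;
device name, block size and counts are only examples:)

# one sequential reader
dd if=/dev/md0 of=/dev/null bs=1M count=16384

# two concurrent readers at different offsets; add their reported rates
dd if=/dev/md0 of=/dev/null bs=1M count=16384 &
dd if=/dev/md0 of=/dev/null bs=1M count=16384 skip=16384 &
wait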

The details:

/dev/md0:
        Version : 00.90.03
  Creation Time : Tue Mar 13 16:57:32 2007
     Raid Level : raid0
     Array Size : 7325797440 (6986.43 GiB 7501.62 GB)
   Raid Devices : 15
  Total Devices : 15
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Mar 13 16:57:32 2007
          State : clean
 Active Devices : 15
Working Devices : 15
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 64K

# uname -a
Linux ursula 2.6.18-4-686-bigmem #1 SMP Wed Feb 21 17:30:22 UTC 2007 i686 
GNU/Linux

# xfs_info /mnt/
meta-data=/dev/md0               isize=256    agcount=32, agsize=57232784 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=1831449088, imaxpct=25
         =                       sunit=16     swidth=240 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=983040 blocks=0, rtextents=0

(all software: Debian Etch)

four Intel(R) Xeon(TM) 3.00GHz CPUs

two 3ware 9590SE-8ML controllers on PCIe

Intel Corporation 5000P chipset


-- 
Tomka Gergely, [EMAIL PROTECTED]


Re: sw raid0 read bottleneck

2007-03-13 Thread Justin Piszcz



On Tue, 13 Mar 2007, Tomka Gergely wrote:


 I am currently testing 3ware RAID cards. I have 15 disks, with a software
 RAID0 on top of them. The write speed seems good (700 MBps), but the read
 performance is only 350 MBps. Another problem: when I try to read with two
 processes, the _sum_ of the read speeds falls back to 200 MBps. So there is
 a bottleneck, or something I need to know about, but I am out of ideas.

 [...]


Have you tried increasing your readahead values for the md device?



Re: sw raid0 read bottleneck

2007-03-13 Thread Tomka Gergely
On Tue, 13 Mar 2007, Justin Piszcz wrote:

 Have you tried increasing your readahead values for the md device?

Yes. No real change. In my humble mental model, readahead is not a very
useful thing when we read with 1-4 sdd threads: the I/O subsystem is
already reading at the maximum possible speed, so there is no time left
to read ahead. Correct me if I am wrong.
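
(For concreteness, a sketch of how the md device's read-ahead can be
inspected and raised; blockdev counts in 512-byte sectors, and the value
below is only an example:)

# current read-ahead of the array, in 512-byte sectors
blockdev --getra /dev/md0

# raise it, e.g. to 16384 sectors (8 MiB), then repeat the read test
blockdev --setra 16384 /dev/md0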

-- 
Tomka Gergely, [EMAIL PROTECTED]


Re: sw raid0 read bottleneck

2007-03-13 Thread Tomka Gergely
On Tue, 13 Mar 2007, Tomka Gergely wrote:

 On Tue, 13 Mar 2007, Justin Piszcz wrote:
 
  Have you tried increasing your readahead values for the md device?
 
 Yes. No real change. In my humble mental model, readahead is not a very
 useful thing when we read with 1-4 sdd threads: the I/O subsystem is
 already reading at the maximum possible speed, so there is no time left
 to read ahead. Correct me if I am wrong.

I was wrong, readahead can speed things up, to 450 MBps.

-- 
Tomka Gergely, [EMAIL PROTECTED]


Re: sw raid0 read bottleneck

2007-03-13 Thread Justin Piszcz

Nice.

On Tue, 13 Mar 2007, Tomka Gergely wrote:


 [...]

 I was wrong, readahead can speed things up, to 450 MBps.




Re: sw raid0 read bottleneck

2007-03-13 Thread Neil Brown
On Tuesday March 13, [EMAIL PROTECTED] wrote:
 On Tue, 13 Mar 2007, Tomka Gergely wrote:
 
  On Tue, 13 Mar 2007, Justin Piszcz wrote:
  
   Have you tried increasing your readahead values for the md device?
  
  [...]
 
 I was wrong, readahead can speed things up, to 450 MBps.

Can you tell us what read-ahead size you needed?

15 drives and 64K chunks gives 960K per stripe.
The raid0 code should set the read-ahead to twice that: 1920K
which I would have thought would be enough, but apparently not.

Thanks,
NeilBrown


Re: sw raid0 read bottleneck

2007-03-13 Thread Tomka Gergely
On Wed, 14 Mar 2007, Neil Brown wrote:

 On Tuesday March 13, [EMAIL PROTECTED] wrote:
  [...]
 
 Can you tell us what read-ahead size you needed?
 
 15 drives and 64K chunks gives 960K per stripe.
 The raid0 code should set the read-ahead to twice that: 1920K
 which I would have thought would be enough, but apparently not.

blockdev --setra 262144 /dev/md0 gives me 650+ MB/s with 4 threads
(parallel running sdd). Lower values give lower speeds; greater values do
not give higher speeds.
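
(A quick check of the numbers, since blockdev counts read-ahead in
512-byte sectors:

262144 sectors x 512 B = 128 MiB of read-ahead
raid0 default, per Neil: 2 x 960 KiB = 1920 KiB = 3840 sectors

so the value that finally helped is roughly 68 times the raid0 default.)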

-- 
Tomka Gergely, [EMAIL PROTECTED]


Re: mismatch_cnt questions

2007-03-13 Thread H. Peter Anvin

Andre Noll wrote:

On 00:21, H. Peter Anvin wrote:

I have just updated the paper at:

http://www.kernel.org/pub/linux/kernel/people/hpa/raid6.pdf

... with this information (in slightly different notation and with a bit 
more detail.)


There's a typo in the new section:

s/By assumption, X_z != D_n/By assumption, X_z != D_z/



Thanks, fixed.

-hpa