FailSpare event?

2007-01-11 Thread Mike
Can someone tell me what this means please? I just received this in an email from one of my servers: From: mdadm monitoring [EMAIL PROTECTED] To: [EMAIL PROTECTED] Subject: FailSpare event on /dev/md2:$HOST.$DOMAIN.com This is an automatically generated mail message from mdadm running on

Re: FailSpare event?

2007-01-11 Thread Neil Brown
On Thursday January 11, [EMAIL PROTECTED] wrote: Can someone tell me what this means please? I just received this in an email from one of my servers: A FailSpare event had been detected on md device /dev/md2. It could be related to component device /dev/sde2. It means that mdadm
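Per Neil's explanation, a FailSpare event means mdadm's monitor noticed a spare component marked faulty. A hedged sketch of spotting the faulty component programmatically from `mdadm --detail` output — the sample text below is illustrative only (not from this thread), and real output varies by mdadm version:

```python
# Hypothetical sketch: pick faulty component devices out of
# `mdadm --detail /dev/md2` output. SAMPLE_DETAIL is made up for
# illustration; it is not the actual output from the original report.
import re

SAMPLE_DETAIL = """\
    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       34        1      active sync   /dev/sdc2
       2       8       50        2      active sync   /dev/sdd2
       3       8       66        -      faulty spare   /dev/sde2
"""

def faulty_devices(detail_text):
    """Return device paths whose state line contains 'faulty'."""
    found = []
    for line in detail_text.splitlines():
        if "faulty" in line:
            m = re.search(r"(/dev/\S+)", line)
            if m:
                found.append(m.group(1))
    return found

print(faulty_devices(SAMPLE_DETAIL))  # ['/dev/sde2']
```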

Re: FailSpare event?

2007-01-11 Thread Mike
On Fri, 12 Jan 2007, Neil Brown might have said: On Thursday January 11, [EMAIL PROTECTED] wrote: Can someone tell me what this means please? I just received this in an email from one of my servers: A FailSpare event had been detected on md device /dev/md2. It could be

raid5 software vs hardware: parity calculations?

2007-01-11 Thread James Ralston
I'm having a discussion with a coworker concerning the cost of md's raid5 implementation versus hardware raid5 implementations. Specifically, he states: The performance [of raid5 in hardware] is so much better with the write-back caching on the card and the offload of the parity, it seems to
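For reference, the parity both sides of this debate are computing is just a bytewise XOR across the stripe's data chunks; the cost argument is about where that XOR runs (host CPU vs. card), not what it is. A minimal sketch:

```python
# Minimal illustration of RAID5 parity arithmetic: parity is the
# bytewise XOR of the data chunks in a stripe, and any single lost
# chunk can be rebuilt by XOR-ing the survivors with the parity.

def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data chunks in one stripe
parity = xor_blocks(data)            # what md (or the card) computes

# Simulate losing the second chunk and rebuilding it from the rest:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt chunk matches")
```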

Re: FailSpare event?

2007-01-11 Thread Neil Brown
On Thursday January 11, [EMAIL PROTECTED] wrote: So I'm ok for the moment? Yes, I need to find the error and fix everything back to the (S) state. Yes, OK for the moment. The messages in $HOST:/var/log/messages for the time of the email are: Jan 11 16:04:25 elo kernel: sd 2:0:4:0: SCSI

Tweaking/Optimizing MD RAID: 195MB/s write, 181MB/s read (so far)

2007-01-11 Thread Justin Piszcz
With 4 Raptor 150s XFS (default XFS options): # Stripe tests: echo 8192 > /sys/block/md3/md/stripe_cache_size # DD TESTS [WRITE] DEFAULT: $ dd if=/dev/zero of=10gb.no.optimizations.out bs=1M count=10240 10240+0 records in 10240+0 records out 10737418240 bytes (11 GB) copied, 96.6988 seconds,
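The MB/s figure dd reports is just bytes copied over elapsed seconds, in decimal megabytes. Checking the untuned run above against the numbers in the dd output:

```python
# Rough check of dd's reported throughput: bytes copied divided by
# elapsed seconds, in decimal MB/s (the units dd itself prints).
bytes_copied = 10737418240   # 10240 MiB, from the dd output above
elapsed = 96.6988            # seconds, from the same dd line
mb_per_s = bytes_copied / elapsed / 1e6
print(round(mb_per_s), "MB/s")  # -> 111 MB/s before tuning
```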

Re: Tweaking/Optimizing MD RAID: 195MB/s write, 181MB/s read (so far)

2007-01-11 Thread Justin Piszcz
RAID5 with 128kb chunk size. RAID0 was 317MB/s write and 279MB/s read. # xfs_growfs -n /dev/md3 meta-data=/dev/md3 isize=256 agcount=16, agsize=6868160 blks = sectsz=4096 attr=0 data = bsize=4096 blocks=109890528,

Re: FailSpare event?

2007-01-11 Thread Mike Hardy
Google "BadBlockHowto". Any "just google it" response sounds glib, but this is actually how to do it :-) If you're new to md and mdadm, don't forget to actually remove the drive from the array before you start working on it with 'dd'. -Mike Hardy Mike wrote: On Fri, 12 Jan 2007, Neil Brown might have

Re: FailSpare event?

2007-01-11 Thread Martin Schröder
2007/1/12, Mike [EMAIL PROTECTED]: # 1 Background long Completed, segment failed -3943 This should still be in warranty. Try to get a replacement. Best Martin

Linux Software RAID 5 Performance Optimizations: 2.6.19.1: (211MB/s read 195MB/s write)

2007-01-11 Thread Justin Piszcz
Using 4 Raptor 150s: Without the tweaks, I get 111MB/s write and 87MB/s read. With the tweaks, 195MB/s write and 211MB/s read. Using kernel 2.6.19.1. Without the tweaks and with the tweaks: # Stripe tests: echo 8192 > /sys/block/md3/md/stripe_cache_size # DD TESTS [WRITE] DEFAULT: (512K) $ dd
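The enlarged stripe cache above isn't free: the usual rule of thumb (an assumption here, not stated in the thread) is that it consumes roughly stripe_cache_size x page size x number of member devices of kernel memory. For this 4-drive array:

```python
# Estimated memory cost of the stripe cache setting echoed into sysfs
# above, using the common rule of thumb:
#   stripe_cache_size x page size x member devices
# (page size of 4096 is a typical x86 assumption).
stripe_cache_size = 8192     # value written to stripe_cache_size above
page_size = 4096             # assumed page size in bytes
members = 4                  # four Raptor drives in the array
bytes_used = stripe_cache_size * page_size * members
print(bytes_used // (1024 * 1024), "MiB")  # -> 128 MiB
```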