Re: ANNOUNCE: mdadm 2.4 - A tool for managing Soft RAID under Linux
Neil Brown wrote: I am pleased to announce the availability of mdadm version 2.4 It is available at the usual places: http://www.cse.unsw.edu.au/~neilb/source/mdadm/ and http://www.{countrycode}.kernel.org/pub/linux/utils/raid/mdadm/ mdadm is a tool for creating, managing and monitoring device arrays using the md driver in Linux, also known as Software RAID arrays. Release 2.4 primarily adds support for increasing the number of devices in a RAID5 array, which requires 2.6.17 (or some -rc or -mm prerelease).

that's really a long-awaited feature. but at the same time, wouldn't it finally be possible to convert a non-RAID partition to a RAID1? it's a very common thing, and they say it even works on Windows :-( just my 2c. -- Levente Si vis pacem para bellum! - To unsubscribe from this list: send the line unsubscribe linux-raid in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html
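For anyone wanting to try the new reshape support, the workflow looks roughly like this. This is a hedged sketch, not from the announcement itself: /dev/md0 and /dev/sde1 are placeholder device names, and you need a 2.6.17-class kernel.

```shell
# Sketch: growing a 4-device RAID5 to 5 devices with mdadm 2.4.
# Device names (/dev/md0, /dev/sde1) are hypothetical examples.

# 1. Add the new disk to the array as a spare.
mdadm --add /dev/md0 /dev/sde1

# 2. Ask md to reshape the array onto 5 devices (kernel >= 2.6.17).
mdadm --grow /dev/md0 --raid-devices=5

# 3. Watch reshape progress.
cat /proc/mdstat

# 4. After the reshape finishes, grow the filesystem to use the new
#    space, e.g. for ext2/ext3:
# resize2fs /dev/md0
```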
Re[2]: ANNOUNCE: mdadm 2.4 - A tool for managing Soft RAID under Linux
Hello Farkas,

FL that's really a long-awaited feature. but at the same time wouldn't it
FL finally be possible to convert a non-RAID partition to a RAID1? it's
FL a very common thing, and they say it even works on Windows :-(

That would be cooler than making a metadevice and copying tons of files :) However, AFAIK this would require some support on the FS side as well? FS-addressable space in a RAID metadevice (i.e. submirror) ends before the last 64K block, where the md superblock lives. If the partition was used up completely, the last [64..128]KB may be occupied by its data and would need to be remapped to a free location. And this is quite FS-dependent (like grow/shrink) and should be addressed by those FS toolkits. tune2fs and e2fsck do a similar remapping job when we enable/disable spare superblocks on a used filesystem...

-- Best regards, Jim Klimov mailto:[EMAIL PROTECTED]
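In the meantime, the "metadevice and copy" route can at least be done without a third disk, by building the mirror degraded and re-adding the original partition afterwards. A rough sketch with made-up names (/dev/sda1 holds the existing data, /dev/sdb1 is a new empty partition of at least the same size):

```shell
# Sketch: convert an existing non-RAID partition to RAID1 via a
# degraded mirror. /dev/sda1 (existing data) and /dev/sdb1 (new
# partition) are hypothetical. Note the md superblock sits at the
# end of each member, so the array is slightly smaller than the
# partition - the remapping issue discussed above.

# 1. Create a degraded RAID1 with the new partition and a missing slot.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing

# 2. Put a filesystem on the array and copy the data across.
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/new
cp -a /olddata/. /mnt/new/

# 3. When satisfied, add the original partition; md resyncs onto it.
mdadm --add /dev/md0 /dev/sda1
```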
Recommendations for supported 4-port SATA PCI card ?
Dear All I have 4x500GB Maxtor SATA drives and I want to attach these to a 4-port SATA PCI card and RAID5 them using md. Could anybody recommend a card that will have out-of-box support on a Fedora system? Many thanks Ian -- Ian Thurlbeck http://www.stams.strath.ac.uk/ Statistics and Modelling Science, University of Strathclyde Livingstone Tower, 26 Richmond Street, Glasgow, UK, G1 1XH Tel: +44 (0)141 548 3667 Fax: +44 (0)141 552 2079
sata controllers status=0x51 { DriveReady SeekComplete Error } error=0x84 { DriveStatusError BadCRC }
Hi, I have a production server in place at a remote site. I have a single system drive that is an IDE drive, and two data drives on a VIA SATA controller in a RAID1 configuration. I am monitoring /var/log/messages and I get messages every few days:

Mar 22 23:31:36 A1 kernel: ata6: status=0x51 { DriveReady SeekComplete Error }
Mar 22 23:31:36 A1 kernel: ata6: error=0x84 { DriveStatusError BadCRC }
Mar 23 23:20:12 A1 kernel: ata5: status=0x51 { DriveReady SeekComplete Error }
Mar 23 23:20:12 A1 kernel: ata5: error=0x84 { DriveStatusError BadCRC }
Mar 23 23:32:03 A1 kernel: ata6: status=0x51 { DriveReady SeekComplete Error }
Mar 23 23:32:04 A1 kernel: ata6: error=0x84 { DriveStatusError BadCRC }
Mar 24 23:22:45 A1 kernel: ata5: status=0x51 { DriveReady SeekComplete Error }
Mar 24 23:22:45 A1 kernel: ata5: error=0x84 { DriveStatusError BadCRC }
Mar 27 23:16:57 A1 kernel: ata5: status=0x51 { DriveReady SeekComplete Error }
Mar 27 23:16:57 A1 kernel: ata5: error=0x84 { DriveStatusError BadCRC }
Mar 28 23:10:16 A1 kernel: ata5: status=0x51 { DriveReady SeekComplete Error }
Mar 28 23:10:17 A1 kernel: ata5: error=0x84 { DriveStatusError BadCRC }
Mar 28 23:23:32 A1 kernel: ata6: status=0x51 { DriveReady SeekComplete Error }
Mar 28 23:23:32 A1 kernel: ata6: error=0x84 { DriveStatusError BadCRC }
Mar 29 23:33:26 A1 kernel: ata6: status=0x51 { DriveReady SeekComplete Error }
Mar 29 23:33:26 A1 kernel: ata6: error=0x84 { DriveStatusError BadCRC }

Interestingly, from the logs I see that they have occurred on March 1, 2, 3, 8, 14, 17x3, 20x4, 21, 22, 23x2, 24, 27, 28x2, 29 (x2 means two errors, as in the example above). They also occur during the 11pm cron job that rsyncs a backup of the SATA RAID1 to another server.
here is the output of dmesg:

ata5: dev 0 cfg 49:2f00 82:746b 83:7f01 84:4023 85:7469 86:3c01 87:4023 88:407f
ata5: dev 0 ATA, max UDMA/133, 781422768 sectors: lba48
ata5: dev 0 configured for UDMA/133
scsi4 : sata_via
ata6: dev 0 cfg 49:2f00 82:746b 83:7f01 84:4023 85:7469 86:3c01 87:4023 88:407f
ata6: dev 0 ATA, max UDMA/133, 781422768 sectors: lba48
ata6: dev 0 configured for UDMA/133
scsi5 : sata_via
Vendor: ATA  Model: WDC WD4000YR-01P  Rev: 01.0
Type: Direct-Access  ANSI SCSI revision: 05
SCSI device sda: 781422768 512-byte hdwr sectors (400088 MB)
SCSI device sda: drive cache: write back
/dev/scsi/host4/bus0/target0/lun0: p1
Attached scsi disk sda at scsi4, channel 0, id 0, lun 0
Vendor: ATA  Model: WDC WD4000YR-01P  Rev: 01.0
Type: Direct-Access  ANSI SCSI revision: 05
SCSI device sdb: 781422768 512-byte hdwr sectors (400088 MB)
SCSI device sdb: drive cache: write back
/dev/scsi/host5/bus0/target0/lun0: p1
Attached scsi disk sdb at scsi5, channel 0, id 0, lun 0

Am I correct in assuming that the SATA drives are giving me these errors, and what should I do? Could it possibly be a problem with the SATA controller rather than the drives?

[EMAIL PROTECTED]:~$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      390708736 blocks [2/2] [UU]
unused devices: none

I have done some testing with different SATA controllers, and recently switched another server from the built-in SATA controller on the A8V motherboard (VIA 8237) to an add-in PCI Promise SATA II150 card. I think I have seen conflicts between sata_via and sata_promise, and I already have a sata_promise card in the system for future expandability.
I am running the stock Debian 2.6.12-1-386 kernel and Debian sarge with mdadm:

ii  mdadm  1.9.0-4sarge1  Manage MD devices aka Linux Software Raid

1:/var/log# lsmod | grep sata
sata_via                8452  2
sata_promise            9988  0
libata                 44164  2 sata_via,sata_promise
scsi_mod              129096  4 sr_mod,sata_promise,libata,sd_mod

Thank you very much. Mitchell
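One quick way to separate cable/transport trouble from media trouble is the drives' own SMART counters: interface CRC errors like the BadCRC above normally bump a UDMA CRC error attribute, while genuine surface problems show up as reallocated or pending sectors. A hedged sketch, assuming smartmontools is installed (attribute names vary by vendor and SATA SMART passthrough may need a recent smartctl):

```shell
# Compare interface-CRC counters vs. media-error counters on both
# halves of the mirror. Requires smartmontools; run as root.
for d in /dev/sda /dev/sdb; do
    echo "== $d =="
    smartctl -A "$d" | grep -Ei 'CRC|Realloc|Pending'
done
```

A rising CRC count with zero reallocated sectors points at the cable or controller rather than the disks.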
Re: Recommendations for supported 4-port SATA PCI card ?
Addonics ADST114 was the cheapest one I've found that works. I found it for $41 at thenerds.net, but you may be better at price searching than me. It's a Silicon Image 3114 chip, driven by the sata_sil driver. I honestly don't recall if it was working out of the box on FC4, but the updated kernels drive it fine, and FC5 (with 2.6.16+) should be fine with it. -Mike

Ian Thurlbeck wrote: Dear All I have 4x500GB Maxtor SATA drives and I want to attach these to a 4-port SATA PCI card and RAID5 them using md Could anybody recommend a card that will have out of box support on a Fedora system ? Many thanks Ian
Re: sata controllers status=0x51 { DriveReady SeekComplete Error } error=0x84 { DriveStatusError BadCRC }
Party line: It's a faulty cable. (On both drives? Triggered by rsync? Doesn't show up under 'badblocks'? Hah!) Check out the linux-ide archive for my (and others') reports. I've had lots of issues like this - spurious and IMHO incorrect error messages. Only certain types of disk access cause them - xfs_repair and rsync seem to tickle it. With 2.6.15 I had lots of *very* scary moments with multiple disk failures on a raid5 during xfs_repair. I think it's down to the 'basic' error handling in the libata code and certain disks/controllers being loose with the protocol. They then identified problems in 'fua' (IIRC) handling, which was pulled for 2.6.16. 2.6.16 seems to be much better (fewer 'odd' errors reported, and md doesn't mind). David

PS Mitchell - you're still using Verizon and I still live off the edge of their known world (in the UK), so I don't expect you'll get this reply - hard luck my friend - get a better ISP!

Mitchell Laks wrote: Hi, I have a production server in place at a remote site. I have a single system drive that is an ide drive and two data drives that are on a via SATA controller in a raid1 configuration.
[snip: quoted log and dmesg output]
Re: sata controllers status=0x51 { DriveReady SeekComplete Error } error=0x84 { DriveStatusError BadCRC }
THANK YOU! :) It appears this might be related to some of the errors a friend of mine got (ref: Re: recovering data on a failed raid-0 installation). After a bit more research, it does appear that a kernel bug, in combination with some fast-and-loose protocol usage on a laptop IDE interface, may have been at fault. More research on this is forthcoming when a drive imager device arrives tomorrow.

*** error output

The error reported in his case was:

ata3 status = status 0x51 { DriveReady SeekComplete Error } 0x40 { Unrecoverable Error } (repeated 5 times)
scsi error 2010 0x802 return code = sdb current sense key medium error
additional sense: unrecovered read error - auto reallocate failed
end request: i/o error /dev/sdb sector 22629482
I/O error in filesystem md0 metadata device md0 block 0x29a1578
xfs log mount recovery error failed error 5
xfs log mount failed
mount i/o error
kernel panic!

On Thursday 30 March 2006 10:26, you wrote: Party line: It's a faulty cable (on both drives? triggered by rsync? Doesn't show up under 'badblocks'? hah!) Check out the linux-ide archive for my (and others) reports. I've had lots of issues like this - spurious and IMHO incorrect error messages. Only certain types of disk access cause them - xfs_repair and rsync seem to tickle it. With 2.6.15 I had lots of *very* scary moments with multiple disk failures on a raid5 during xfs_repair. I think it's down to the 'basic' error handling in the libata code and certain disks/controllers being loose with the protocol. They then identified problems in 'fua' (IIRC) handling which was pulled for 2.6.16. 2.6.16 seems to be much better (fewer 'odd' errors reported and md doesn't mind) David PS Mitchell - you're still using Verizon and I still live off the edge of their known world (in the UK) so I don't expect you'll get this reply - hard luck my friend - get a better ISP! Mitchell Laks wrote: Hi, I have a production server in place at a remote site.
[snip: quoted log and dmesg output]
Re: ANNOUNCE: mdadm 2.4 - A tool for managing Soft RAID under Linux
Neil Brown wrote: I am pleased to announce the availability of mdadm version 2.4 .. Release 2.4 primarily adds support for increasing the number of devices in a RAID5 array, which requires 2.6.17 (or some -rc or -mm prerelease). ..

Is there a corresponding means to increase the size of a file system to use this?

- Allow --monitor to work with arrays with 28 devices

So, how DO we get past the old 26-device alphabet limit? Thanks, as always, for the great work, Neil. -- With our best regards, Maurice W. Hilarius  Telephone: 01-780-456-9771 Hard Data Ltd.  FAX: 01-780-456-9772 11060 - 166 Avenue  email: [EMAIL PROTECTED] Edmonton, AB, Canada  http://www.harddata.com/ T5X 1Y3
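On the filesystem side, the usual companion step is the FS's own grow tool (resize2fs for ext2/ext3, xfs_growfs for XFS) once md finishes reshaping. On the naming question: if it is the sdX letters that run out, the kernel simply extends them like spreadsheet columns, so sdz is followed by sdaa, sdab, and so on. A small illustration of that lettering scheme - my own sketch, not kernel code:

```python
def sd_name(index):
    """Map a 0-based disk index to its sd device name.

    0 -> 'sda', 25 -> 'sdz', 26 -> 'sdaa', ... (bijective base-26,
    the same lettering scheme as spreadsheet columns).
    """
    letters = ""
    index += 1
    while index > 0:
        index -= 1
        letters = chr(ord("a") + index % 26) + letters
        index //= 26
    return "sd" + letters

print(sd_name(0), sd_name(25), sd_name(26), sd_name(27))
# prints: sda sdz sdaa sdab
```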
Re: sata controllers status=0x51 { DriveReady SeekComplete Error } error=0x84 { DriveStatusError BadCRC }
David Greaves david at dgreaves.com writes: Check out the linux-ide archive for my (and others) reports. I just read them and wept. I've had lots of issues like this - spurious and IMHO incorrect error messages. Only certain types of disk access cause them - xfs_repair and rsync seem to tickle it. Interesting. rsync for me. With 2.6.15 I had lots of *very* scary moments with multiple disk failures on a raid5 during xfs_repair. I think it's down to the 'basic' error handling in the libata code and certain disks/controllers being loose with the protocol. They then identified problems in 'fua' (IIRC) handling which was pulled for 2.6.16. 2.6.16 seems to be much better (fewer 'odd' errors reported and md doesn't mind) Fewer - oh oh. Can you quantify this for me? I seem to be getting around 8-9 errors per 10 days, or averaging about 1 per rsync. I should check to see if the distribution is Poisson-like :). I have not lost my raid1 yet - lucky me. I am now very worried. I need to do some testing offline with the new kernel before putting out the fix. Is your experience also with an Asus board with a VIA VT8237-powered SATA connector? Would you suggest I try the Promise SATA connector instead, before I try to move over to the newer kernel? Or both? Could you give me more of your experience? Thanks, Mitchell
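For what it's worth, the Poisson joke is easy to check: tabulate errors per day and compare the mean with the variance (for a Poisson process they should be roughly equal, so a variance/mean ratio well above 1 suggests clustering). A quick sketch using the dates quoted earlier in the thread (March 1, 2, 3, 8, 14, 17x3, 20x4, 21, 22, 23x2, 24, 27, 28x2, 29, over 29 days):

```python
# Errors-per-day histogram from the dates in the log summary.
events = {1: 1, 2: 1, 3: 1, 8: 1, 14: 1, 17: 3, 20: 4,
          21: 1, 22: 1, 23: 2, 24: 1, 27: 1, 28: 2, 29: 1}
counts = [events.get(day, 0) for day in range(1, 30)]  # 29 days

mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)

print(f"total={sum(counts)} mean={mean:.2f} var={var:.2f} ratio={var/mean:.2f}")
# prints: total=21 mean=0.72 var=0.96 ratio=1.32
# A ratio near 1 would be consistent with Poisson; the mild excess here
# hints at clustering, e.g. errors bunching during the nightly rsync.
```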
Re: Recommendations for supported 4-port SATA PCI card ?
I have 4x500GB Maxtor SATA drives and I want to attach these to a 4-port SATA PCI card and RAID5 them using md Could anybody recommend a card that will have out of box support on a Fedora system ?

Which FC release? I believe FC4 would have decent support for Promise TX4's (there are at least two - the most recent might not work OOB). SiI 3114's ought to work as well. 3ware. Period. If you're going to use md, get the 8506-4 series rather than either of the 9xxx series cards. Before the 9550, I never found them attractive in price/performance: expensive as hell and a lot slower than MD. But the 9550 is really quite impressive... regards, mark hahn