On Fri, Jun 18, 2010 at 9:53 AM, Jeff Bacon <ba...@twinight.org> wrote:
> I know that this has been well-discussed already, but it's been a few months
> - WD Caviars with mpt/mpt_sas generating lots of retryable read errors,
> spitting out lots of the beloved "Log info 31080000 received for target"
> messages, and just generally not working right.
>
> (SM 836EL1 and 836TQ chassis - though I have several variations on the theme
> depending on date of purchase: 836EL2s, 846s and 847s - sol10u8,
> 1.26/1.29/1.30 LSI firmware on LSI retail 3801 and 3081E controllers. Not
> that it works any better on the brace of 9211-8is I also tried these drives
> on.)
>
> Before signing up for the list, I "accidentally" bought a wad of Caviar Black
> 2TBs. No, they are new enough to not respond to WDTLER.EXE, and yes, they are
> generally unhappy with my boxen. I have them "working" now, running
> direct-attach off 3 3081E-Rs with breakout cables in the SC836TQ (passthrough
> backplane) chassis, set up as one pool of 2 6+2 raidz2 vdevs (16 drives
> total), but they still toss the occasional error, and performance is, well,
> abysmal - zpool scrub runs at about a third the speed of the 1TB 'cudas that
> they share the machine with, in terms of iostat-reported ops/sec or
> bytes/sec. They don't want to work in an expander chassis at all - spin up
> the drives and connect them and they'll run great for a while, then after
> about 12 hours they start throwing errors. (Cycling power on the enclosure
> does seem to reset them to run for another 12 hours, but...)
>
> I've caved in and bought a brace of replacement 'cuda XTs, and I'm going to
> resign these drives to other lesser purposes (attached to si3132s and ICH10
> in a box to be used to store backups, running Windoze).
> It's kind of a shame, because their single-drive performance is quite good -
> I've been doing single-drive tests in another chassis against 'cudas and
> Constellations, and they seem quite a bit faster except on random seek.
>
> Have I missed any changes/updates in the situation?
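For reference, the pool layout you describe (one pool of two 6+2 raidz2 vdevs on 2 TB drives) would be built roughly like the sketch below; the device names are made up, and the capacity figure ignores metadata and padding overhead:

```shell
# Hypothetical device names; the layout from the post is two raidz2 vdevs
# of 8 drives each (6 data + 2 parity), 16 drives total:
# zpool create tank \
#   raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
#   raidz2 c0t8d0 c0t9d0 c0t10d0 c0t11d0 c0t12d0 c0t13d0 c0t14d0 c0t15d0

# Usable capacity: each raidz2 vdev contributes (N - 2) data drives.
vdevs=2; drives_per_vdev=8; parity=2; drive_tb=2
usable_tb=$(( vdevs * (drives_per_vdev - parity) * drive_tb ))
echo "${usable_tb} TB usable"   # 24 TB usable, before overhead
```

A scrub has to touch all 16 spindles, so a single slow or error-retrying drive drags down the whole pool's scrub rate, which is consistent with the one-third-speed numbers you're seeing.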
I've been getting very bad performance out of an LSI 9211-4i card (mpt_sas) with Seagate Constellation 2TB SAS disks, an SM SC846E1, and Intel X25-E/M SSDs.

Long story short: I/O will hang for over 1 minute at random under heavy load. Swapping the 9211-4i for a MegaRAID 8888ELP (mega_sas) improves performance by 30-40% instantly and there are no hangs anymore, so I'm guessing it's something related to the mpt_sas driver.

I submitted bug #6963321 a few minutes ago (not available yet).

-- 
Giovanni Tirloni
gtirl...@sysdroid.com

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
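To catch which device is stalling when one of those hangs hits, one approach (a sketch, not a definitive procedure) is to leave `iostat -xn` running and flag any device whose active service time (the `asvc_t` column on Solaris) spikes. The awk filter below is demonstrated against a canned sample line with made-up values, since the real command needs a live system:

```shell
# On a live box one would run something like:
#   iostat -xn 5 | awk 'NR > 1 && $8 > 1000 {print $NF, $8 "ms"}'
# ($8 = asvc_t, $NF = device name in Solaris iostat -xn output.)
# Demonstrated against a canned sample line (values are made up):
printf 'r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %%w %%b device\n0.0 1.0 0.0 8.0 0.0 1.0 0.0 1500.2 0 100 c4t2d0\n' |
  awk 'NR > 1 && $8 > 1000 {print $NF, $8 "ms"}'   # prints: c4t2d0 1500.2ms
```

If the spikes always land on drives behind the 9211-4i and never on the ones behind the 8888ELP, that would strengthen the mpt_sas theory in the bug report.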