C M Reinehr wrote:
> > I don't know if the i2o_block issues were specifically with 64-bit,
> > 32-bit, or whatever. I was somewhat surprised to find that when I
> > searched for dpt_i2o, i2o_block, and AMD64 last week, some of the top
> > results were from my own thread here on installing on AMD64 two years
> > ago! Kind of depressing.
> > So: has anybody had recent problems with i2o_block in production on AMD64?
> I can't say for sure whether it's i2o_block that's causing the problem, but I
> have a dual-Opteron server running up-to-date Etch with md RAID-1. About once
> a month, when I check my logs, I see that one of the md devices has become
> corrupted and was rebuilt. For example:
>
> Jan 7 01:06:03 eljudnir kernel: block-osm: transfer error
> Jan 7 01:06:14 eljudnir kernel: disk 0, wo:0, o:1, dev:i2o/hdc1
> Jan 7 01:06:14 eljudnir kernel: disk 1, wo:0, o:1, dev:i2o/hdd1
> Jan 7 01:06:18 eljudnir kernel: block-osm: transfer error
> Jan 7 01:13:14 eljudnir mdadm: Rebuild20 event detected on md device /dev/md1
> ...
>
> But I suppose this could also be the result of a failing disk drive or any of
> several other causes.
> I'm using i2o_core & i2o_block. It's a Tyan S2882UG3NR Thunder K8S M/B with a
> built-in Adaptec AIC7902W dual-channel U320 SCSI controller.
Thanks, that's useful. I will probably end up installing with i2o_block (if
that turns out to be possible) and then build my own kernel using dpt_i2o.
Somehow using the "official" Adaptec driver feels better, though I have no
idea why it wasn't included as the official driver; it's been working
flawlessly for me since mid-2005, without a single disk error that I can see.
By the way, how do other people detect RAID errors remotely on AMD64 with
these i2o-type cards? I seem to remember raidutils didn't work for some
reason when I tried it two years ago; if it had worked, I would have been
using it all this time. How do you know when a drive has failed, short of
constantly grepping the syslog for errors (and I don't even really know
what to look for there)?
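For the software-RAID (md) half of the stack at least, you don't have to grep
syslog by hand: /proc/mdstat encodes member health in the [UU] field, and a
tiny cron script can flag a degraded array. A rough sketch (the function name
and sample file path are my own, not anything from this thread):

```shell
# Sketch: detect a degraded md array by parsing an mdstat-format file.
# A failed member shows up as "_" in the status field, e.g. [U_] instead
# of [UU] for a two-disk RAID-1.
check_md() {
    # $1: path to an mdstat-format file (normally /proc/mdstat)
    if grep -q '\[U*_U*\]' "$1"; then
        echo "DEGRADED"
    else
        echo "OK"
    fi
}

# Demonstrate on a sample resembling the array from the log above:
printf 'md1 : active raid1 i2o/hdd1[1] i2o/hdc1[0]\n      [2/1] [U_]\n' > /tmp/mdstat.sample
check_md /tmp/mdstat.sample   # prints DEGRADED
```

Run something like this from cron and mail yourself the output, or skip the
script and let mdadm do it: "mdadm --monitor --scan --daemonise
--mail=you@example.com" (address is a placeholder) emails on Fail and
DegradedArray events. Neither approach sees errors a hardware RAID card hides
below the md layer, though.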
I have often thought that disk performance with the 2015S zero-channel
controller is not all that great, but I have no good handle on whether the
cause is the hard drives (73 GB Fujitsu 10k), the RAID controller itself,
or something else (perhaps I've done something dumb with the kernel config;
I'm really no expert). I'm thinking of replacing the current drives with
some 15k drives I can get through a friend of mine (he can get 146 GB 15k
drives from Dell for about US$260 each). Does that sound like a good deal?
Will I see a worthwhile improvement in performance with the 15k drives?
This is a LAMP server with dual Opteron 265s. I figure I can replace the
drives at the same time I upgrade the rest of the system, and that should
see me through the next year or two at least...
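Before spending on 15k drives it might be worth measuring whether the current
setup is actually throughput-bound at all. A crude sequential-read baseline
(the device path is an assumption; substitute your md array or a raw disk,
and run on an otherwise idle box):

```shell
# Sequential-read check: read 1 GiB through the block layer and let dd
# report the transfer rate on its summary line.
# DEVICE is a placeholder -- point it at /dev/md1, an i2o/hd* device, etc.
DEVICE=${DEVICE:-/dev/md1}
dd if="$DEVICE" of=/dev/null bs=1M count=1024 2>&1 | tail -n 1
```

If the sequential numbers look fine but the LAMP workload still feels slow,
the bottleneck is more likely random seeks, which is where 15k spindles
(roughly 2 ms average rotational latency versus 3 ms at 10k rpm) would
actually help; watching "iostat -x 5" under real load (sysstat package)
shows per-device wait times and utilisation.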
Thanks again (sorry for all the questions!)...
/Neil
--
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]