On Wed, 08 Mar 2000, Brian Pomerantz wrote:
> On Wed, Mar 08, 2000 at 09:14:02PM +0100, Holger Kiehl wrote:
> >
> > Why don't you try SW raid?
> >
>
> The Mylex controllers I have don't do SCSI, it presents a block
> device. I think I'm going to try these drives on my NCR controller
> just to get a base-line on what kind of write performance they are
> capable of. Maybe I'll try software RAID when I do that. What sort
A benchmark of the SW vs. HW results would be _very_ interesting to a
lot of us (hint, hint :)
> of CPU usage does software RAID use? Also, does it work on the Alpha
There's next to no CPU overhead in SW RAID levels linear and 0. Their
overhead is barely measurable, as no extra data is copied and no extra requests
are issued. (It's simply a re-mapping of an existing request.)
Level 1 has some overhead in the write case, as the write request must
be duplicated and sent to all participating devices. This is mainly a
RAM bandwidth eater, but it will show up as extra CPU usage.
Levels 4 and 5 do parity calculation. A 350 MHz PII can calculate parity
at 922 MB/s (number taken from the boot output). This means that on
virtually any disk configuration one could think of, the XOR overhead
imposed on the CPU would be less than 10% of all available cycles.
In most cases it's more like 1-2%, and that's during continuous writing.
Another overhead in -4 and -5 (and probably by far the most significant) is
that in order to execute a write request to the array, the request must
be re-mapped into a number of read requests, a parity calculation, and then
two write requests (one of them for the parity). Even though this sounds
expensive in terms of CPU and latency, I would be rather surprised if you
could build a setup where the RAID-5 layer ate more than 10% of your cycles
on any decent PII. Now compare 10% of a PII to the price of the Mylex ;)
> platform? Last time I tried to get it working (granted I didn't spend
> a lot of time on it), I was unable to get very far.
Sorry, I don't know.
> In the end, I don't think software RAID is an option for HPC. It is
> likely in a production system we will want to have a great deal of
> space on each I/O server with many RAID chains on each of them. I
> don't think I would see the best performance using software RAID. To
You don't _think_ you would see better performance?
I'm pretty sure you will see better performance. But on the other hand, with a
large number of disks, sometimes the hot-swap capability comes in handy, and
sometimes it's just nice to have a red light flashing next to the disk that
died. Hardware RAID certainly still has its niche :) - it's just usually
not the performance one.
> add to the complexity, I'll be doing striping across nodes for our
> cluster file system. Probably what will happen is I'll use Fibre
> Channel with an external RAID "smart enclosure". This will allow for
> more storage and more performance per I/O node than what I could
> achieve by using host RAID adapters.
Ok, please try both the SW and HW setups when you get the chance. This is a
situation that calls for real numbers.
However, when we're speculating wildly anyway, here's my guess: Software RAID
will wipe the floor with any hardware RAID solution for the striping (RAID-0)
setup, given the same or comparable PCI<->something<->disk busses. (And for
good reasons, hotswap on RAID-0 is not often an issue ;)
--
................................................................
: [EMAIL PROTECTED] : And I see the elder races, :
:.........................: putrid forms of man :
: Jakob Østergaard        : See him rise and claim the earth, :
: OZ9ABN : his downfall is at hand. :
:.........................:............{Konkhra}...............: