begin quoting Bob La Quey as of Thu, Dec 20, 2007 at 11:53:40AM -0800:
> On Dec 20, 2007 11:10 AM, Tracy R Reed <[EMAIL PROTECTED]> wrote:
> > Dexter Filmore wrote:
> > > ZFS is software. Don't know how ZFS performs, but being software I
> > > *bet* it won't write 400MB/s like a hardware controller does (OK,
> > > 400 with a sufficiently high number of disks, 6 or 8). I've got
> > > software RAID 5 on my server, and besides being troublesome in terms
> > > of stability (at least 6 occurrences where a disk flew out of the
> > > array for no apparent reason; disk checks showed the disk was fine,
> > > so I re-added it each time), it does little more than 10MB/s on
> > > writes, at the cost of many CPU cycles.
> >
> > Your first statement makes no sense. That hardware controller runs
> > software too. There is no such thing as "hardware is better than
> > software" or vice versa. They are apples and oranges. You cannot have
> > one without the other. And RAID 5 sucks in general for performance. On a
> > P4 system, XOR computation can be performed by the MMX unit
> > (independently of the main CPU) at 2 Gbytes/sec, well above the needs of
> > I/O systems. So the RAID calculations themselves aren't likely to be
> > causing noticeable CPU performance differences.
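
For reference, the parity arithmetic in question is just byte-wise XOR
across the stripe: the parity block is the XOR of the data blocks, and
any single lost block is the XOR of the parity with the survivors.  A
minimal C sketch, with toy block and stripe sizes chosen purely for
illustration (not what any real md driver or controller uses):

/* Minimal RAID 5 parity sketch: parity = XOR of the data blocks in a
 * stripe; any one lost block = XOR of the parity with the survivors.
 * BLOCK_SIZE and DATA_DISKS are toy values for illustration only. */
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 16   /* bytes per block */
#define DATA_DISKS 4    /* data blocks per stripe (plus one parity) */

static void compute_parity(unsigned char data[DATA_DISKS][BLOCK_SIZE],
                           unsigned char parity[BLOCK_SIZE])
{
    memset(parity, 0, BLOCK_SIZE);
    for (int d = 0; d < DATA_DISKS; d++)
        for (int i = 0; i < BLOCK_SIZE; i++)
            parity[i] ^= data[d][i];      /* the whole "RAID calculation" */
}

static void reconstruct(unsigned char data[DATA_DISKS][BLOCK_SIZE],
                        unsigned char parity[BLOCK_SIZE],
                        int lost, unsigned char out[BLOCK_SIZE])
{
    memcpy(out, parity, BLOCK_SIZE);
    for (int d = 0; d < DATA_DISKS; d++)
        if (d != lost)
            for (int i = 0; i < BLOCK_SIZE; i++)
                out[i] ^= data[d][i];     /* parity ^ survivors = lost block */
}

int main(void)
{
    unsigned char data[DATA_DISKS][BLOCK_SIZE];
    unsigned char parity[BLOCK_SIZE], recovered[BLOCK_SIZE];

    for (int d = 0; d < DATA_DISKS; d++)      /* arbitrary stripe contents */
        for (int i = 0; i < BLOCK_SIZE; i++)
            data[d][i] = (unsigned char)(d * 37 + i);

    compute_parity(data, parity);
    reconstruct(data, parity, 2, recovered);  /* pretend disk 2 died */

    printf("block 2 recovered %s\n",
           memcmp(recovered, data[2], BLOCK_SIZE) == 0 ? "correctly" : "WRONG");
    return 0;
}

That inner XOR loop is the entire per-byte cost of RAID 5 parity, which
is why it stays cheap next to the I/O itself.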
> 
> Yeah, I think the point is that the "hardware" used by a software
> RAID, i.e. the commodity "main" CPU, tracks Moore's Law more closely
> than the "hardware" in a disk controller does, so the "hardware"
> behind a "software" RAID is faster.
> 
> This is kind of hard to say, so let me try again.
> 
> There is only "firmware" = "hardware" + "software" to do some task.
> 
> So we consider two cases:
>       1) "main" => f0 = h0 + s0
>       2) "disk ctrl" => f1 = h1 + s1
> 
> Assertion: because of manufacturing economics, the "main" CPUs will
> always track Moore's Law more closely than disk controllers do.
> 
> So unless there is some serious architectural issue, there is
> _no_ advantage to using the "disk ctrl" to run the software.
> Moreover, there is a cost and performance disadvantage in doing so.

I don't follow that reasoning.

A specialized controller will always outperform a general-purpose
processor, all other things being equal.

When general-purpose processors get to the point of outperforming
the specialized controllers, the specialized controllers are replaced
with dedicated general-purpose processors.

(The first 1GB disk I saw in the 80s had a 14MHz 680x0 processor and
a megabyte of RAM, which was more raw processing power than the
computer it was hooked up to had.)

So the only performance-related distinctions that have any meaning are
the architectural ones. Everything else will average out.

As for economics, well, if you are requiring your main processor to do
more work, you need to invest in a faster processor, which gets
expensive quite fast... so it depends on how much excess capacity you
have. (And if you're running with plenty of excess capacity, why are
you worried about performance?)

ZFS brings an interesting architectural viewpoint to the table -- that
the checksum for some data _ought_ to be computed as close to the origin
as possible, as every subsystem between the origin and the disk could,
and will eventually, fail.
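
Here's a minimal sketch of that end-to-end idea, with a simple
Fletcher-style running sum standing in for whatever the filesystem
really uses (ZFS offers fletcher2/fletcher4 or SHA-256), and a flipped
bit standing in for whatever the stack in between might do to the data:

/* End-to-end checksum sketch: checksum computed where the data
 * originates, verified again when it comes back.  The checksum is a
 * simple Fletcher-style running sum, not ZFS's actual fletcher4; the
 * bit flip stands in for a misbehaving controller, cable, or drive. */
#include <stdint.h>
#include <stdio.h>

#define NWORDS 1024   /* a 4 KB "block" */

static uint64_t fletcher_sum(const uint32_t *words, size_t nwords)
{
    uint64_t a = 0, b = 0;               /* running sums over 32-bit words */
    for (size_t i = 0; i < nwords; i++) {
        a += words[i];
        b += a;
    }
    return ((b & 0xffffffffu) << 32) | (a & 0xffffffffu);
}

int main(void)
{
    uint32_t block[NWORDS];
    for (size_t i = 0; i < NWORDS; i++)
        block[i] = (uint32_t)(i * 2654435761u);   /* arbitrary contents */

    /* 1. Checksum computed at the origin, before the data leaves us.
     *    (ZFS keeps it in the parent block pointer, away from the data.) */
    uint64_t stored_cksum = fletcher_sum(block, NWORDS);

    /* ... the block travels through the page cache, HBA, drive firmware,
     *     platter, and back ... simulate one bit flipped on the way.    */
    block[100] ^= 0x10;

    /* 2. Verify on read: a mismatch means *some* layer between origin and
     *    media corrupted the data, even if every layer reported success. */
    if (fletcher_sum(block, NWORDS) != stored_cksum)
        fprintf(stderr, "checksum mismatch: block is corrupt\n");
    else
        printf("block verified\n");
    return 0;
}

The particular checksum doesn't matter much; what matters is that the
verification happens back where the data is consumed, so a silent
failure at any layer in between gets caught.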

Corrupt data is often worthless, or worse.  (Not always. Some data can
have acceptable levels of corruption and still be fine.)

I believe it was Tracy who mentioned using the MMX capabilities to
compute checksums.  That's where I think we'll see more happen -- with
filesystems like ZFS pushing integrity issues closer to the
general-purpose processor, and then general-purpose processors incorporating
custom hardware to pull that computation out of software and into the
hardware, where it can be fast.
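
As it happens, x86 did eventually grow exactly that kind of custom
hardware: a CRC32c instruction in SSE4.2.  A sketch of using it,
assuming a compiler that provides <nmmintrin.h> and a CPU with SSE4.2
(build with -msse4.2); CRC32c here is just a stand-in checksum, not
what ZFS itself uses:

/* Hardware-assisted checksum sketch using the SSE4.2 CRC32c
 * instruction via the _mm_crc32_* intrinsics. */
#include <nmmintrin.h>   /* _mm_crc32_u8, _mm_crc32_u64 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c_hw(const unsigned char *buf, size_t len)
{
    uint32_t crc = 0xffffffffu;
    while (len >= 8) {                  /* 8 bytes per instruction */
        uint64_t chunk;
        memcpy(&chunk, buf, 8);         /* avoid unaligned reads   */
        crc = (uint32_t)_mm_crc32_u64(crc, chunk);
        buf += 8;
        len -= 8;
    }
    while (len--)                       /* trailing bytes */
        crc = _mm_crc32_u8(crc, *buf++);
    return crc ^ 0xffffffffu;
}

int main(void)
{
    const char msg[] = "end-to-end integrity, computed near the origin";
    printf("crc32c = %08x\n",
           (unsigned)crc32c_hw((const unsigned char *)msg, strlen(msg)));
    return 0;
}

Something like "cc -O2 -msse4.2 crc32c.c" builds it (the file name is
arbitrary).  One instruction per 8-byte chunk is about as far into the
hardware as a checksum can get while staying on the main processor.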

-- 
Do modern processors no longer come with FPUs or allowances for one?
Stewart Stremler

