Sam <[EMAIL PROTECTED]> wrote:
> On Thu, 24 Feb 2000, Len Budney wrote:
> > I've assumed that most filesystems are block-buffered anyway,
> > and safecat
>
> They are; however, switching to/from kernel mode is extremely expensive.
Yes, but 5-6 orders of magnitude less expensive than physical writes. If
I had incurred, but discounted, physical latencies, I would be a
jackass. Instead I simply made a decision you don't care for, which I
would reverse in the face of actual performance data.
In my own benchmarks, safecat with buffering is more CPU-intensive
than without. However, the two have about the same wall-clock runtime
as each other and as /bin/cat. The best speedup one could expect is a
factor of about 2, not 1,000,000.
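If you want to see the shape of the comparison, here is a sketch (my
own illustration for this thread, not safecat's actual source). It
copies stdin to stdout either one byte per read()/write() pair or
through an 8K buffer; time "./a.out raw < big.file > /dev/null"
against "./a.out < big.file > /dev/null" and the gap is nowhere near
six orders of magnitude:

    #include <errno.h>
    #include <string.h>
    #include <unistd.h>

    /* Copy fd 0 to fd 1, 'chunk' bytes per read()/write() pair. */
    static int copy(size_t chunk)
    {
        char buf[8192];
        ssize_t r, w, off;

        if (chunk > sizeof buf)
            chunk = sizeof buf;
        for (;;) {
            r = read(0, buf, chunk);
            if (r == 0)
                return 0;                      /* EOF */
            if (r == -1) {
                if (errno == EINTR)
                    continue;                  /* interrupted: retry the read */
                return -1;
            }
            for (off = 0; off < r; off += w) { /* handle short writes */
                w = write(1, buf + off, r - off);
                if (w == -1) {
                    if (errno != EINTR)
                        return -1;
                    w = 0;                     /* interrupted: retry the write */
                }
            }
        }
    }

    int main(int argc, char **argv)
    {
        /* "raw" = one byte per syscall pair; anything else = 8192 bytes. */
        size_t chunk = (argc > 1 && strcmp(argv[1], "raw") == 0) ? 1 : 8192;
        return copy(chunk) == -1 ? 1 : 0;
    }

Note that every failure comes back as -1 with errno attached; hold
that thought for below.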
In particular, it is extremely unlikely that safecat is the source
of Mr. Generous's performance problems. If actual performance metrics
indicate otherwise, I will happily rewrite the program to accommodate
his needs.
> The reason getc/putc buffer single characters into big chunks, before
> calling read/write, is not because some programmer had some free time
> one afternoon, and decided to write all those convoluted macros and
> functions.
The reason for my idiom, and the reason I don't use those convoluted
macros and functions, is not because I got religion at an anti-libc
tent revival.
The getc/putc macros are, in general, a portability nightmare. Under
HP/UX, for example, they are (were) written so poorly that a return
from a signal handler dropped back inside the macro. Result: your
friends the ``convoluted macros'' degenerated into infinite loops upon
receipt of non-terminal signals.
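For contrast, here is how a sane stdio handles the trigger in
question: a getc() blocked on a pipe, interrupted by a signal whose
handler simply returns. This is my own illustration, not HP/UX's
code; on the HP/UX of that era, the macro looped forever instead of
returning:

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void wakeup(int sig) { (void)sig; } /* just return into the read */

    int main(void)
    {
        struct sigaction sa;
        int fds[2];
        FILE *f;

        memset(&sa, 0, sizeof sa);
        sa.sa_handler = wakeup;             /* deliberately no SA_RESTART */
        sigaction(SIGALRM, &sa, NULL);

        pipe(fds);                          /* read end never gets data */
        f = fdopen(fds[0], "r");

        alarm(1);                           /* interrupt the blocked getc() */
        if (getc(f) == EOF && ferror(f) && errno == EINTR)
            fputs("getc returned EOF with errno = EINTR, as it should\n",
                  stderr);
        return 0;
    }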
Question: why didn't the fellow you admire so, who wrote these convoluted
macros, see fit to give proper error information? Why didn't he use errno?
How do I know why putc() fails when it fails?
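To make the complaint concrete, here is a sketch, assuming a
Linux-style /dev/full where flushed writes fail with ENOSPC. The C
standard only promises that putc() returns EOF on failure; it does
not promise that errno tells you why. POSIX systems generally do set
errno, but code relying on that is leaning on an extension:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/dev/full", "w");  /* flushed writes fail: ENOSPC */
        if (!f)
            return 1;

        errno = 0;
        /* putc() itself usually "succeeds" into the stdio buffer; the
           failure only surfaces at fflush(). Either way, all the
           standard guarantees is EOF and a set error flag. */
        if (putc('x', f) == EOF || fflush(f) == EOF) {
            if (ferror(f))
                fprintf(stderr, "stream error; errno says: %s\n",
                        errno ? strerror(errno)
                              : "(unset -- ISO C doesn't require it)");
        }
        fclose(f);
        return 0;
    }

With a bare write(), by contrast, you get -1 and an errno every time,
with no guessing.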
Len.
--
It's not an opinion. It's a statement of fact. And it's wrong.
-- Dan Bernstein