On Fri, 14 Jun 2013 15:17:00 -0400, Walter Bright <[email protected]> wrote:

On 6/14/2013 11:43 AM, Steven Schveighoffer wrote:
The 80's are a long time ago.

But old code can live on in surprising ways.

The code living on is not what I'm talking about; the *assumptions* living on are the problem :) Old code may have been written very carefully to avoid situations that no longer exist.

Plus, your posting of the source code pretty much refutes the idea that your buffering scheme takes into account how important this is: it ignores alignment of writes if you add an fflush between writes.

One aspect of its buffering scheme being inferior doesn't mean the rest of it is. There are a rather large number of issues with doing good I/O.

I'm not saying it's inferior, just that it's not as big a deal as you say it is. At least not any more. I can see that it might have been very important with an OS like DOS.

It's not the only anecdote I have about that, either.

That's good, because the floppy DOS days are pretty much over :)

You're overlooking that there are a LOT of C runtimes in use out there, and testing one of them on one system doesn't say anything about other systems, and many of them (such as for embedded systems) are fairly primitive.

The same can be said for your runtime. That is, your choices of buffering may not do well on other systems for which other buffering schemes may be tuned.

I think in the end, we are optimizing here in the wrong place. If a specific hardware/software combination requires specific buffering, the place to handle it is in the runtime, not code on top of it. If the C runtime that D uses isn't up to snuff, let's use a different scheme, or abandon it altogether *for that specific device*.

Not that this is the situation we currently have, where D only runs on full-blown PCs...

-Steve