Micah Cowan <[EMAIL PROTECTED]> writes:

> I don't see what you see wrt making the code harder to follow and reason
> about (true abstraction rarely does, AFAICT,
I was referring to the fact that adding an abstraction layer requires
learning about the abstraction layer: its concepts and its
implementation, including its quirks and limitations.  Overly general
abstractions added to application software typically end up
underspecified (for the domain they attempt to cover) and incomplete.
Programmers tend to ignore the hidden cost of adding an abstraction
layer until the cost becomes apparent, by which time it is too late.

Application-specific abstractions are usually worth it because they are
well-justified: they directly benefit the application by making the code
base simpler and removing duplication.  Some general abstractions are
worth it because the alternative is worse; you wouldn't want to maintain
two copies of the network code, one for regular sockets and one for SSL,
since the whole point of SSL is that you use it "as if" it were plain
sockets.  But adding a whole new abstraction layer over something as
general as Berkeley sockets just to facilitate an automated test suite
definitely sounds like ignoring the costs of such an abstraction layer.

> I _am_ thinking that it'd probably be best to forgo the idea of
> one-to-one correspondence of Berkeley sockets, and pass around a "struct
> net_connector *" (and "struct net_listener *"), so we're not forced to
> deal with file descriptor silliness (where obviously we'd have wanted to
> avoid the values 0 through 2, and I was even thinking it might
> _possibly_ be worthwhile to allocate real file descriptors to get the
> numbers, just to avoid clashes).

I have no idea what "file descriptor silliness" with values 0-2 you're
referring to. :-)  I do agree that an application-specific struct is
better than a more general abstraction, because it is easier to design
and more useful to Wget in the long run.
>>> This would mean we'd need to separate uses of read() and write() on
>>> normal files (which should continue to use the real calls, until we
>>> replace them with the file I/O abstractions), from uses of read(),
>>> write(), etc. on sockets, which would be using our emulated versions.
>>
>> Unless you're willing to spend a lot of time in careful design of
>> these abstractions, I think this is a mistake.
>
> Why?

Because implementing a file I/O abstraction is much harder and more
time-consuming than it sounds.  To paraphrase Greenspun, it would appear
that every sufficiently large code base contains an ad-hoc,
informally-specified, bug-ridden implementation of a streaming layer.
There are streaming libraries out there; maybe we should consider using
one of them.
