> From: "William Herrin" <[EMAIL PROTECTED]>
> History does, however, suggest that the more control you place directly
> in the hands of the end-user's single PC, the greater adaptability the
> system exhibits. This holds true whether the end user exercises that
> control by writing code himself or merely selecting which software he
> runs.
Yes, but... for really good adaptability it is also necessary that the
changes stay within the scope of what will be 'comprehensible' (for lack of a
better word) to unmodified hosts on the far end. I suspect what I'm
trying to get at here is unclear, so an example might help...
The classic example of where this kind of local change worked really well is
TCP re-transmission: changes in the re-transmission algorithm were easily
tested and deployed because i) it was implemented in the hosts, and ii) a
modified host worked fine when dealing with a non-modified host.
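(Purely to illustrate how host-local that change is: here's a sketch of the
classic SRTT/RTTVAR timeout estimator, in the shape RFC 6298 later codified.
Nothing in it touches the wire format, which is exactly why a host running a
modified estimator still interoperates with an unmodified one.)

```python
class RtoEstimator:
    """Sketch of the SRTT/RTTVAR retransmission-timeout estimator
    (Jacobson/Karels, as codified in RFC 6298). All state is local to
    the sending host; swapping in a different estimator changes nothing
    that the far end ever sees."""

    ALPHA = 1 / 8   # smoothing gain for SRTT (RFC 6298 value)
    BETA = 1 / 4    # smoothing gain for RTTVAR (RFC 6298 value)
    K = 4           # variance multiplier
    MIN_RTO = 1.0   # RFC 6298 lower bound on the timeout, in seconds

    def __init__(self):
        self.srtt = None    # smoothed round-trip time
        self.rttvar = None  # round-trip-time variation

    def sample(self, rtt):
        """Feed one round-trip-time measurement, in seconds."""
        if self.srtt is None:
            # First measurement: initialize per RFC 6298 section 2.2.
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            # Subsequent measurements: exponentially weighted updates.
            self.rttvar = ((1 - self.BETA) * self.rttvar
                           + self.BETA * abs(self.srtt - rtt))
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt

    def rto(self):
        """Current retransmission timeout, in seconds."""
        return max(self.MIN_RTO, self.srtt + self.K * self.rttvar)
```

A host could replace this whole class with any other estimator it likes; the
only observable effect on the far end is *when* retransmissions arrive, which
every conformant TCP already has to cope with.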
I don't offhand have an example ready of where local change didn't work well
- although I guess places where we couldn't change things (e.g. IP address
size) are the extreme example of 'non-comprehensibility'.
How to ensure that a design has this kind of flexibility... that's something I
don't have a good feel for. I guess a good step is for the designer to always
keep in mind the desirability of this kind of forward flexibility, designing
things so that algorithms _can_ be replaced. (I don't recall if we had this
in mind when doing the original TCP retransmission algorithm - that case may
have been just serendipity.) I explicitly did a routing architecture in which
_none_ of the algorithms were part of the 'core spec' - precisely to enable
this sort of long-term 'local' adaptability.
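(To make the 'algorithms outside the core spec' point concrete, here's a toy
sketch - all the names are hypothetical, not drawn from any real architecture.
The 'core' fixes only the interface a path-selection algorithm must satisfy;
the algorithm itself is an interchangeable module, so it can be replaced
locally without touching the parts both ends must agree on.)

```python
import heapq
from typing import Callable, Dict, List, Tuple

# A topology maps each node to a list of (neighbor, link_cost) pairs.
Topology = Dict[str, List[Tuple[str, float]]]
# The 'core spec' part: any path-selection algorithm is just a function
# from (topology, source node) to a next-hop table.
PathSelector = Callable[[Topology, str], Dict[str, str]]

def shortest_path_selector(topo: Topology, src: str) -> Dict[str, str]:
    """One interchangeable algorithm: Dijkstra-style shortest paths,
    returning the first hop to use toward each reachable destination."""
    dist = {src: 0.0}
    first_hop: Dict[str, str] = {}
    # Priority queue of (cost, node, first hop used to reach node).
    pq: List[Tuple[float, str, str]] = [(0.0, src, "")]
    while pq:
        cost, node, hop = heapq.heappop(pq)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry
        if hop:
            first_hop[node] = hop
        for nbr, w in topo.get(node, []):
            nc = cost + w
            if nc < dist.get(nbr, float("inf")):
                dist[nbr] = nc
                # Neighbors of the source get themselves as first hop.
                heapq.heappush(pq, (nc, nbr, hop if hop else nbr))
    return first_hop

def build_forwarding_table(topo: Topology, src: str,
                           select: PathSelector) -> Dict[str, str]:
    """The 'core' calls through the interface only; the algorithm behind
    `select` can be swapped without changing this code or anything a
    neighboring node observes."""
    return select(topo, src)
```

A node could substitute a delay-sensitive or policy-based selector for
`shortest_path_selector` and the rest of the system wouldn't know or care -
which is the long-term 'local' adaptability being argued for above.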
Noel
_______________________________________________
rrg mailing list
[email protected]
https://www.irtf.org/mailman/listinfo/rrg