> A benefit of HTTP is that the MIME headers, particularly Content-type:
> guide the client in the disposition of the content.

Which is largely ignored by the dominant browser in favour of file
extension and content sniffing, probably because it is often misconfigured
in servers.

> A possible disadvantage of HTTP is that (it appears to me) the client
> receives no explicit notification of a truncated file or interrupted
> transfer. Is this true? Does FTP do better in this respect because

It ought to get a connection reset, unless a proxy hides this. HTTP/1.1
requires that all content have an explicit length, although this is more
to do with request pipelining than data integrity.

> of its two-socket protocol?

There are various transfer options for FTP; the one that tends to be
implemented in practice doesn't convey length information.

> RFC 1738 specifies URL handling by the client that avoids such
> mapping.

Unfortunately, the Big Two flout RFC 1738 here, and I consider the awkward,
step-by-step directory changing to be such a mapping. FTP makes no pretence
that it understands the structure at all.

> Lynx goes with the flow unless the server identifies itself as VAX.

The flow includes some clients that seem to try to back up from the current
directory on a persistent connection, which doesn't work when there are
symbolic links or the initial directory isn't the effective root (depending
on whether they try ../ or / prefixes). I hope lynx doesn't do this.

Another advantage that ftp used to have for large files is that it was de
facto restartable on Unix servers (even though the protocol didn't allow
restarts in the context in which they were used!). HTTP/1.1 now allows
ranges to be fetched, although some caches can't cope well with this.

HTTP has sophisticated caching, but many commercial sites don't want
anything, even huge service packs, to be cached, and don't understand
caching anyway.
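
For what it's worth, here is a rough sketch, in Python rather than lynx's
C, of the two HTTP/1.1 points above: a client can compare the bytes it
actually received against the declared Content-Length to spot a truncated
transfer, and then use a Range request to fetch only the missing tail.
The host, path, and fetch() helper are made up for illustration.

    # Sketch only (not lynx code): detect a short transfer via Content-Length
    # and restart it with an HTTP/1.1 Range request.
    import http.client

    HOST = "example.com"             # hypothetical server
    PATH = "/pub/service-pack.bin"   # hypothetical large file

    def fetch(offset=0):
        """Return (bytes received, declared length) for PATH, starting at offset."""
        conn = http.client.HTTPConnection(HOST)
        headers = {}
        if offset:
            # HTTP/1.1 byte ranges: ask only for the part we are missing.
            headers["Range"] = "bytes=%d-" % offset
        conn.request("GET", PATH, headers=headers)
        resp = conn.getresponse()
        declared = resp.getheader("Content-Length")
        try:
            body = resp.read()
        except http.client.IncompleteRead as err:
            # The connection stopped short of the declared length
            # (e.g. a reset, unless a proxy hid it); keep what arrived.
            body = err.partial
        conn.close()
        return body, (int(declared) if declared is not None else None)

    data, declared = fetch()
    if declared is not None and len(data) < declared:
        # Truncated: resume from where the first transfer stopped.  This
        # assumes the server honours Range and answers 206 Partial Content.
        rest, _ = fetch(offset=len(data))
        data += rest

(A server that ignores Range will just send the whole file again with a
200, so a real client should check for 206 before appending.)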
