> Isn't the TCP checksumming enough? Anyhow, encryption would also have
> this effect.
This is part of the security stuff, not just about data integrity. Right
now Rx data is protected by a weak keyed checksum. RxTCP does not
currently implement that; I am thinking about doing it, though it might
complicate the code (two code paths). I can't see any way of getting
reasonable transfer rates when doing security on the bulk data, but
checksumming would be less intensive than encryption.

> In any case, I was just curious about it being possible at all. Modern
> servers shouldn't have any problems delivering gige-speed without
> sendfile given sane code, it will be very interesting to see what
> happens when 10gige gets common though. A wild guess is that we'll be
> limited by disk speed.

I can only say that a number of years ago, I was able to saturate an
OC-12 between two machines (real data, being transferred to disk on each
end) with a slightly modified ftp client and server. It had two
features: an increased TCP window size, and an Irix-specific mechanism
to bypass the buffer cache. If we have the disk speed available, I can't
see a reason why we can't do the same thing on modern hardware and disks.

--Ken

_______________________________________________
OpenAFS-devel mailing list
[email protected]
https://lists.openafs.org/mailman/listinfo/openafs-devel
