Hi friends of VIFF,

I've now played a little with artificial bandwidth limits and artificial delays in order to analyze protocols more easily. (Actually, my motivation was to get the running time for AES to match my analysis. It worked.)

I found that delaying every packet by 1 second yields a running time that matches the number of communication rounds quite well. Limiting the bandwidth, on the other hand, gives at least a qualitative impression of the number of elementary operations. One could also look at the TCP sequence numbers to estimate the amount of data sent, and hence the number of elementary operations.
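To illustrate the sequence-number idea: with absolute sequence numbers (e.g. from `tcpdump -S`), the difference between the last and first sequence number observed on a connection approximates the bytes sent. A minimal sketch, using made-up example values (the port and the numbers below are not from an actual VIFF run):

```shell
# Hypothetical sketch: estimate the bytes sent on a connection from two
# absolute TCP sequence numbers, as observed with e.g.
#   tcpdump -n -S 'tcp and port 10000'
# The sequence numbers below are made-up example values.
SEQ_START=1000000   # first observed sequence number
SEQ_END=1380000     # last observed sequence number
echo "approximately $((SEQ_END - SEQ_START)) bytes sent"
```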

I've included two shell scripts for Linux: one delays the network traffic to a given address space by 1 second, the other limits the traffic to 15 kbit/s. Both must be run as root.
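In case the attachments don't come through, here is a hedged sketch of what such scripts could look like using tc with netem (for the delay) and tbf (for the rate limit). The interface, subnet, and qdisc handles are example assumptions, not the contents of the actual delay.sh/limit.sh:

```shell
#!/bin/sh
# Hypothetical sketches of delay.sh and limit.sh (not the actual
# attachments). Both require root when actually applied; with DRYRUN=1
# (the default here) the tc commands are only printed, not executed.
IFACE="${IFACE:-eth0}"            # assumed network interface
SUBNET="${SUBNET:-10.0.0.0/24}"   # assumed target address space

run() { if [ "${DRYRUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

# delay.sh sketch: delay all traffic to SUBNET by 1 second via netem.
run tc qdisc add dev "$IFACE" root handle 1: prio
run tc qdisc add dev "$IFACE" parent 1:3 handle 30: netem delay 1000ms
run tc filter add dev "$IFACE" parent 1: protocol ip prio 3 \
    u32 match ip dst "$SUBNET" flowid 1:3

# limit.sh sketch (a separate script in practice -- it would replace the
# root qdisc above): limit outgoing traffic on IFACE to 15 kbit/s.
run tc qdisc add dev "$IFACE" root tbf rate 15kbit burst 10kb latency 70ms
```

To undo either setup, `tc qdisc del dev $IFACE root` restores the default qdisc.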

When adding a delay, one should also apply the patch below, which disables Nagle's algorithm; otherwise one can see strange results (longer running times for less traffic).

Best regards,

Disclaimer: This is not meant to encourage anyone to replace serious protocol analysis with heuristics.

diff -r 5feebdfcc759 viff/runtime.py
--- a/viff/runtime.py	Fri Apr 24 14:04:45 2009 +0200
+++ b/viff/runtime.py	Fri Apr 24 14:11:54 2009 +0200
@@ -272,6 +272,7 @@
         self.incoming_data = {}
     def connectionMade(self):
+        self.transport.setTcpNoDelay(True)
     def connectionLost(self, reason):

Attachment: delay.sh
Description: application/shellscript

Attachment: limit.sh
Description: application/shellscript

viff-devel mailing list (http://viff.dk/)
