Hello everyone,

I've finally been able to do some measurements on Gigabit Ethernet. Initial tests showed that TCP (~90 MB/s) was significantly faster than TIPC (SOCK_STREAM, ~60 MB/s) for typical write sizes (i.e. writes in 4k or 8k chunks). But then I noticed that the receiving box was under 100% CPU load. Investigating this, I found that every read() call on the socket returned only 1476 bytes (1476 bytes of payload + 24-byte header = 1500, the MTU in use). Looking at the code, I saw that TIPC only asks for more data if the caller passes the MSG_WAITALL flag to recv(). When I changed the benchmark tool on the receiving end accordingly, CPU load dropped by about 50% and throughput increased to 90-95 MB/s, a little faster than TCP in some cases.
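For reference, the change on the receiving end boils down to passing MSG_WAITALL to recv() so the kernel fills the whole buffer before waking the reader, instead of returning once per 1476-byte message. A minimal sketch of such a read loop, assuming a blocking SOCK_STREAM socket (the names below are placeholders, not taken from the TIPC sources or my actual tool):

#include <sys/socket.h>
#include <sys/types.h>

/* Receive loop sketch: with MSG_WAITALL, recv() only returns once
 * buf_len bytes have been reassembled (or on EOF/error), instead of
 * waking the reader for every 1476-byte TIPC message. */
static long long drain_socket(int sock, char *buf, size_t buf_len)
{
	long long total = 0;
	ssize_t n;

	while ((n = recv(sock, buf, buf_len, MSG_WAITALL)) > 0)
		total += n;

	return n < 0 ? -1 : total;	/* -1 on error, byte count on EOF */
}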
Now, I think there is little reason not to check for more data if it is already there, and I suggest the following modification to TIPC:

 if ((sz_copied < buf_len)              /* didn't get all requested data */
-    && (flags & MSG_WAITALL)           /* ... and need to wait for more */
+    && (!skb_queue_empty(&sk->sk_receive_queue)  /* ... and there is more data to read ... */
+        || (flags & MSG_WAITALL))      /* ... or need to wait for more */
     && (!(flags & MSG_PEEK))           /* ... and aren't just peeking at data */
     && (!err)                          /* ... and haven't reached a FIN */
     )

This should have a similar effect to passing MSG_WAITALL under most high-traffic scenarios (I'll re-test this, of course).

I've put some of the results obtained so far here: http://www.strlen.de/tipc/

Thoughts?

Florian