On Monday, December 17, 2007 David Barrett wrote:
> I initially misread this to mean you did your testing over
> "localhost".

That's exactly what I did. 127.0.0.1, to be perfectly clear.

> I was recently surprised to find that localhost is actually quite 
> slow: I was doing some UDP-vs-TCP transfer tests and found localhost
> would max-out well before the 100Mbps network interface.  
>
> If my memory serves, localhost topped out around 1MBps.

As you can guess, there is no good reason why the fake UDP send should
take more time and effort than the real one - this can probably be
explained only by a bug in the IP stack. It is even quite possible
that the bug is the same one that prevents the gigabit interface from
operating at full speed. With a bug-free UDP stack implementation, I
cannot think of any reason why the loopback sending rate would ever be
lower than the sending rate of the actual network card interface,
whatever the network speed of the card. A bug-free loopback sending
rate should probably be close to the rate of an infinite-speed network
interface.
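
For what it's worth, this kind of number is easy to sanity-check.
Here is a minimal Python sketch (mine, not from anyone's actual test
setup - the payload size, port, and duration are arbitrary
assumptions) that just counts how fast sendto() can push datagrams
over 127.0.0.1; on a healthy stack it should report far more than
1MBps:

  # Minimal loopback UDP send-rate sketch. Payload size, port, and
  # duration below are arbitrary illustrative values.
  import socket, time

  PAYLOAD = b"x" * 1400          # assumed payload, below a typical MTU
  ADDR = ("127.0.0.1", 9999)     # arbitrary test port
  DURATION = 5.0                 # seconds

  # A listening sink so the datagrams have somewhere to go; excess
  # packets are simply dropped once its receive buffer fills up.
  sink = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sink.bind(ADDR)

  sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sent_bytes = 0
  start = time.time()
  while time.time() - start < DURATION:
      sender.sendto(PAYLOAD, ADDR)
      sent_bytes += len(PAYLOAD)

  elapsed = time.time() - start
  print("loopback UDP send rate: %.1f MB/s" % (sent_bytes / elapsed / 1e6))

This only measures the sending side, of course, but that is exactly
the rate that should not be lower on loopback than on the real
interface.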

For sure, it wouldn't be the first time I've seen mysterious bugs in
an IP stack; for example, at some point the SGI IRIX kernel had this
wonderful feature where it could send more 400-byte UDP packets per
second than 100-byte ones. With 100-byte packets, not only would the
used bandwidth drop, but it would drop by a factor of more than
four - the actual number of packets sent per second would drop, too.

If I remember correctly, back then the problem was that some mutex
was taken inside the kernel on a per-packet basis, so if you were
sending fewer bytes per packet, the percentage of time spent holding
this mutex grew in comparison with the time when the thread was awake
but did not hold it, because less time was spent throwing the packet
bytes around.
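
A quick way to look for this kind of per-packet pathology, if anyone
wants to try, is to compare packets per second at two payload sizes.
Here is a rough Python sketch (again mine, with arbitrary sizes, port,
and duration); on a sane stack the 100-byte sender should push at
least as many packets per second as the 400-byte one:

  # Compare UDP packets-per-second at two payload sizes over loopback.
  import socket, time

  ADDR = ("127.0.0.1", 9998)     # arbitrary test port
  DURATION = 3.0                 # seconds per payload size

  sink = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sink.bind(ADDR)                # sink so the datagrams have a destination

  def packets_per_second(payload_size):
      sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      payload = b"x" * payload_size
      count = 0
      start = time.time()
      while time.time() - start < DURATION:
          sender.sendto(payload, ADDR)
          count += 1
      return count / (time.time() - start)

  for size in (100, 400):
      print("%d-byte packets: %.0f pkt/s" % (size, packets_per_second(size)))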

Now, maybe it resulted in a higher likelihood of the thread holding
the mutex while losing its timeslice on a scheduler interrupt; maybe
it was something else. But to make a long story short, what was
supposed to be a smaller sending load (100 bytes per packet vs. 400)
actually resulted in *fewer* packets being sent per second, because
of some weird multithread interaction - some thread could not wake up
on the mutex acquisition soon enough, or something like that (I do
remember the result, but not the details of the mutex wakeup code).

Maybe something similar is happening here, too. Who knows...

Best wishes -
S.Osokine.
17 Dec 2007.

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of David Barrett
Sent: Monday, December 17, 2007 12:45 PM
To: 'theory and practice of decentralized computer networks'
Subject: Re: [p2p-hackers] MTU in the real world

> -----Original Message-----
> From: Behalf Of Serguei Osokine
> Subject: Re: [p2p-hackers] MTU in the real world
> 
> All my testing without a gigabit interface was just over a loopback
> connection on my home 2001 machine, so it was probably not even all
> that relevant (though the basic patterns were more or less similar,
> if I remember correctly).

I initially misread this to mean you did your testing over "localhost".  And
had you said that (which you didn't), I was going to mention that I was
recently surprised to find that localhost is actually quite slow: I was
doing some UDP-vs-TCP transfer tests and found localhost would max-out well
before the 100Mbps network interface.  

If my memory serves, localhost topped out around 1MBps.

-david

_______________________________________________
p2p-hackers mailing list
[email protected]
http://lists.zooko.com/mailman/listinfo/p2p-hackers