With gigabit links you will want to enable jumbo frames to get
reasonable performance.
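
On Linux, for instance, the MTU can be raised with "ip link set dev
eth0 mtu 9000", or programmatically - a minimal sketch using the
SIOCSIFMTU ioctl (the interface name and the 9000-byte MTU are
illustrative; the NIC, the driver, and every switch on the path must
all support the larger frames):

    #include <net/if.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Raise the interface MTU to a jumbo-frame size. Needs root
     * (CAP_NET_ADMIN), and only helps if the whole path carries
     * the larger frames. */
    int set_jumbo_mtu(const char *ifname, int mtu)
    {
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0)
            return -1;
        memset(&ifr, 0, sizeof ifr);
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ifr.ifr_mtu = mtu;                  /* e.g. 9000 */
        int rc = ioctl(fd, SIOCSIFMTU, &ifr);
        close(fd);
        return rc;
    }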

On Dec 17, 2007, at 8:31 PM, Serguei Osokine wrote:

> On Monday, December 17, 2007 Alex Pankratov wrote:
>> The call time stays at 15-17 us for the sizes up to 1470 bytes, so
>> maxing the link capacity is still problematic. In fact, I tried this
>> and that and I was not able to go over 69% utilization.
>
> That's right; we have similar call time numbers, but fortunately we did
> not need the whole gigabit. I think we never went over 450 Mbit/sec or
> so - I mean, maybe we could raise this a bit, but we did not need to.
>
> Best wishes -
> S.Osokine.
> 17 Dec 2007.
>
>
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Alex Pankratov
> Sent: Monday, December 17, 2007 1:52 PM
> To: 'theory and practice of decentralized computer networks'
> Subject: Re: [p2p-hackers] MTU in the real world
>
>
> A follow-up. Please see below.
>
> Alex
>
>> -----Original Message-----
>> From: [EMAIL PROTECTED]
>> [mailto:[EMAIL PROTECTED] On Behalf Of
>> Serguei Osokine
>> Sent: Monday, December 17, 2007 9:52 AM
>> To: theory and practice of decentralized computer networks
>> Subject: Re: [p2p-hackers] MTU in the real world
>>
>> On Sunday, December 16, 2007 Alex Pankratov wrote:
>>> Admittedly, I didn't run the test over a one-gig link, but
>>> still the discrepancy between your findings and my results
>>> is quite a bit odd.
>>
>> I'm not sure what's happening at 100 Mbit/s. The real fun did
>> not even start until I was well over that number - the original
>> problem was to somehow send a gigabit per second of non-fragmented,
>> non-jumbo UDP packets.
>
> I suspected just that. That's why I added the 100 Mbps disclaimer :)
>
>> [snip]
>>
>>> * the execution time of sendto() on my machine clearly depends on
>>>   the size of the packet, and it is virtually the same for blocking
>>>   and non-blocking sockets:
>>>
>>>     bytes           microseconds
>>>     256             25
>>>     1024            87
>>>     4096            345
>>>     16384           1370
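>>>
>>> For reference, per-call numbers like these come from a loop along
>>> these lines - a minimal sketch; it assumes s is an already-connected
>>> UDP socket, so send() here is equivalent to sendto():
>>>
>>>     #include <sys/socket.h>
>>>     #include <time.h>
>>>
>>>     /* Average cost of one send() call over `iters` calls, in
>>>      * microseconds. */
>>>     double avg_send_usec(int s, const char *buf, size_t len, int iters)
>>>     {
>>>         struct timespec t0, t1;
>>>         clock_gettime(CLOCK_MONOTONIC, &t0);
>>>         for (int i = 0; i < iters; i++)
>>>             send(s, buf, len, 0);
>>>         clock_gettime(CLOCK_MONOTONIC, &t1);
>>>         return ((t1.tv_sec - t0.tv_sec) * 1e6
>>>               + (t1.tv_nsec - t0.tv_nsec) / 1e3) / iters;
>>>     }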
>>
>> In a gigabit scenario you have to be under 10 microseconds per 1-KB
>> packet in order to fill the link to capacity. With our 1000-1400 byte
>> packets this time was several times higher than 10 microseconds, and
>> it was this call time that was the performance bottleneck when a
>> single thread was doing all the sending. When multiple threads were
>> sending data, the CPU maxed out and became the bottleneck instead.
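>>
>> (Arithmetic check: 1 Gbit/s is 125,000,000 bytes/s, so a 1024-byte
>> datagram must go out roughly every 1024 / 125,000,000 s, about 8.2 us,
>> just to keep the wire full - hence the sub-10-microsecond budget.)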
>
> I ran the same test over 1 Gbps, and I now see constant execution
> time for sendto(). It's really bizarre. What's even more interesting,
> its behavior changes abruptly once the datagram size goes over
> 1024 bytes.
>
>       sendto() for 1024 bytes or less runs at about 15 us/call
>       sendto() for 1025 bytes and up - 244 us/call, constant
>
> And the problem goes away once the socket is made non-blocking. Just
> as you described.
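>
> For the record, the non-blocking switch itself is just this - a
> minimal sketch, assuming a POSIX system and a UDP socket s:
>
>     #include <fcntl.h>
>
>     /* Switch the socket to non-blocking mode: sendto() then fails
>      * with EWOULDBLOCK instead of stalling when the tx queue is
>      * full. */
>     int set_nonblocking(int s)
>     {
>         int flags = fcntl(s, F_GETFL, 0);
>         return flags < 0 ? -1 : fcntl(s, F_SETFL, flags | O_NONBLOCK);
>     }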
>
> The call time stays at 15-17 us for sizes up to 1470 bytes, so
> maxing out the link capacity is still problematic. In fact, I tried
> this and that, and I was not able to go over 69% utilization. The
> test app was still getting WOULDBLOCKs (in ~5% of calls), and
> spawning multiple threads or processes did not make any difference,
> so I am inclined to think that this is a driver-level issue. It looks
> suspiciously like a driver that keeps using interrupts instead of
> polling under high load.
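>
> (One way to waste less time on those WOULDBLOCKs is to wait for the
> socket to become writable instead of retrying blindly - a minimal
> sketch, assuming the non-blocking socket s from above:)
>
>     #include <errno.h>
>     #include <poll.h>
>     #include <sys/socket.h>
>
>     /* Resend after EWOULDBLOCK only once the socket is writable,
>      * instead of spinning on send(). */
>     ssize_t send_when_ready(int s, const void *buf, size_t len)
>     {
>         for (;;) {
>             ssize_t n = send(s, buf, len, 0);
>             if (n >= 0 || (errno != EWOULDBLOCK && errno != EAGAIN))
>                 return n;
>             struct pollfd pfd = { .fd = s, .events = POLLOUT };
>             poll(&pfd, 1, -1);      /* wait for queue space */
>         }
>     }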
>

_______________________________________________
p2p-hackers mailing list
[email protected]
http://lists.zooko.com/mailman/listinfo/p2p-hackers
