I was seeing this with the default -t u1 flag, so doesn't that mean
everything is UDP?  Therefore no TCP keep-alives, probes, etc.? With UDP,
the sender has no way of knowing whether the receiver is full (and doesn't
care); it just keeps sending.  That means the only way you could get
EAGAIN is if the send buffer filled of its own accord -- that is, the
application is stuffing packets in faster than the OS can send them.
Absolutely nothing to do with the receiver.

True?
Ed
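
To make the point concrete, here is a minimal sketch (a hypothetical
helper, not SIPp's actual code) of sending on a non-blocking UDP socket:
since UDP has no flow control, EAGAIN/EWOULDBLOCK can only mean the
local send buffer is full -- the receiver's state never factors in.

```python
import errno
import socket

def send_datagrams(sock, dest, payloads):
    """Send datagrams on a non-blocking UDP socket.

    With UDP there is no window and no flow control, so EAGAIN here
    reflects only a full *local* send buffer: the application queued
    datagrams faster than the OS could put them on the wire.
    """
    sent = 0
    for payload in payloads:
        try:
            sock.sendto(payload, dest)
            sent += 1
        except OSError as e:
            if e.errno in (errno.EAGAIN, errno.EWOULDBLOCK):
                break  # local buffer full; back off -- nothing remote about it
            raise
    return sent
```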

-----Original Message-----
From: Rick Jones [mailto:[EMAIL PROTECTED] 
Sent: Friday, August 04, 2006 1:31 PM
To: Klaus Darilion
Cc: Edward Russell; [email protected]
Subject: Re: [Sipp-users] TCP problems under high load

>> The sending TCP is stuck waiting for the window to open.  The 
>> receiving TCP may indeed open the window when the application recv's 
>> enough data out of it, but sending TCPs also have something called
>> (IIRC) a window probe triggered by a persist timer.  That timer is
>> (initially) on the order of the normal TCP retransmission timer.  
>> That is on the order of several hundred milliseconds.
> 
> 
> I saw some empty packets which ethereal decoded as "TCP keep-alive". 
> They were sent every 0.5s to 3s (I can't remember exactly).

Those could indeed be window probes.  IIRC they would look rather like
keepalives, and ethereal would not know the difference unless it was
applying a heuristic like "if a 'keepalive' is sent to a zero-window
receiver, then it is a window probe."

>> So, if there is code expecting timely response on the TCP connection,
>> when the TCP connection's window fills it could easily not get that
>> result in a timely fashion and it could be all down the tubes from
>> there.
> 
> 
> The thing is, once the receiver buffer at the SIP proxy is full, of
> course the sending buffer at the SIPp side also gets full as SIPp
> still writes SIP packets into the socket. IMO SIPp should stop sending
> (and stop reporting) SIP messages into the socket.

Could be a toe-may-toe vs toe-mah-toe sort of thing - if one stops then
one will be paced by the remote and will not drive it into
oversaturation.

> Currently, if the socket does not accept any data (sending buffer is 
> full), SIPp buffers the messages and reports them as sent, which is 
> IMO wrong.

Indeed, if the send() was not successful it seems odd to assert the
message was sent.

>> Having multiple TCP connections would mean multiple windows which 
>> means it would take longer for the sum of the windows to fill - 
>> however if those connections are all to the same process/thread that 
>> process/thread could become just as loaded as before and the windows 
>> could still fill.
> 
> 
> Yes, that's exactly what happens. Using multiple SIPp instances does 
> not help.

Time to start profiling the server (eg http://www.hp.com/go/caliper :)
and tuning it :)

rick jones
