Is this right? This is how I do it in my real application.

IMO it isn't. If you are sending at full speed, forget the timer and use only the event. When using the timer, check a flag you set in OnDataSent. If the flag is not set, do not send anything; just exit the timer event handler, and the data will be sent on the next tick.
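The flag mechanism can be illustrated with a minimal sketch (in Python for brevity, not ICS code; the class and method names are made up):

```python
class Sender:
    """Timer-driven sender gated by a flag that the data-sent event sets."""
    def __init__(self):
        self.ready = True   # set by the OnDataSent-style callback

    def on_timer(self):
        if not self.ready:
            return          # previous send still in flight; wait for next tick
        self.ready = False
        self.send_packet()

    def send_packet(self):
        # hand data to the socket; the stack later fires on_data_sent
        pass

    def on_data_sent(self):
        self.ready = True   # allow the next timer tick to send
```

The point is that the timer never queues a second packet while the first is still pending, so the application cannot outrun the socket.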

Yesterday I did some tests. I tried different computers.
I noticed that:
On localhost I can reach very different max bitrates: from about 90Mbps
under WinXP (CPU = T2300 @ 1.66GHz) up to 250Mbps under Win7
(CPU = i7-2630QM @ 2.00GHz), but some packets are always lost.

In my opinion, sending back-to-back UDP packets will almost always result in packet loss. This is because the thread receiving data can be suspended for at least 20 ms, or even much, much longer (Windows is not a real-time OS). The Winsock buffer must be large enough to hold all data arriving while the thread is suspended, and even larger since the thread also has to empty the buffer somehow (remember, UDP has no flow control). You can set Winsock to use a larger buffer (the default is 8KB, if memory serves me well).
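Outside ICS, asking the OS for a larger receive buffer looks like this with plain sockets (a Python sketch; the 1 MB request is an arbitrary choice, and the OS may grant a different size than requested):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the OS for a larger receive buffer so bursts survive thread suspension.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)  # request ~1 MB
# The OS may round or cap the request; check what was actually granted.
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(granted)
sock.close()
```

Sizing the buffer only buys time, though: if the receiving thread stays suspended longer than the buffer can absorb, packets are still dropped.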

If I open Wireshark, many more packets are lost by the application
(not by Wireshark).

Wireshark does not lose packets because it has a Windows driver running in kernel mode and working with interrupts; packets are buffered in memory, and that buffer is displayed by the GUI independently (just as if you were browsing a database). Of course you can do that as well, but not with ICS, which works only in user mode.

--
francois.pie...@overbyte.be
The author of the freeware multi-tier middleware MidWare
The author of the freeware Internet Component Suite (ICS)
http://www.overbyte.be



----- Original Message ----- From: "emanuele bizzarri" <e.bizza...@e-works.it>
To: "ICS support mailing" <twsocket@elists.org>
Sent: Tuesday, March 01, 2011 10:11 AM
Subject: Re: [twsocket] udp packet loss


Hi Francois, hi all

I cannot reproduce the packet loss on localhost.

Strange. I always lose some packets on localhost (at least when setting
Interval=0 on the client).

I can reproduce it on different computers, but I found a flaw in your
design: remember TWSocket uses non-blocking sockets. SendTo will fail if
Winsock is not able to accept the data, and you don't check that condition.

Yes, you're right. I've now modified the code like this:

procedure TForm1.OnTimer(aSender: TObject);
begin
  if fWS.State = wsConnected then
  begin
    Move(fCounter, fData^, 4);
    // Count the packet as pending if SendTo succeeded or ICS buffered it
    if (fWS.SendTo(fPeerSrc, fPeerSrcLen, fData, fDataSize) > 0) or
       (WSocket_WSAGetLastError = WSAEWOULDBLOCK) then
      Inc(fSent);
  end;
end;

procedure TForm1.WSocketDataSent(Sender: TObject; ErrCode: Word);
begin
  if fSent > 0 then
  begin
    Dec(fSent);           // one pending packet has left the socket
    Inc(fCounter);
    Inc(fBR, fDataSize);  // count bytes for the bitrate measurement
    Restart;
  end;
end;

Is this right? This is how I do it in my real application.

I'm not sure you correctly checked with Wireshark that all packets were
actually sent, because they aren't when SendTo fails.

I'm pretty sure, yes.
I've set the internal packet number equal to the Wireshark packet number.
Inside Wireshark, if I select a packet lost by the server application, I can
see that the internal packet number corresponds.
So I think this client-side bug is not the cause of the packet loss.
However, I've fixed it.

Yesterday I did some tests. I tried different computers.
I noticed that:
On localhost I can reach very different max bitrates: from about 90Mbps
under WinXP (CPU = T2300 @ 1.66GHz) up to 250Mbps under Win7
(CPU = i7-2630QM @ 2.00GHz), but some packets are always lost.

On different machines, connected with a crossover cable and Gbps ethernet
cards (I've installed the UDP server on a Q9300 2.5GHz machine, using
different operating systems), the max bitrate is always about 160Mbps, but:
under WinXP a lot of packets are lost;
under Win7 fewer packets are lost than under WinXP;
under Ubuntu+Wine fewer packets are lost than under WinXP, and about the
same as under Win7.

If I open Wireshark, many more packets are lost by the application (not
by Wireshark).

I noticed that if I set:
SetPriorityClass(GetCurrentProcess, HIGH_PRIORITY_CLASS);
and
fUDPServer.Priority := tpNormal;

fewer packets are lost (there is no improvement if I also set
fUDPServer.Priority := tpTimeCritical).

I've tried to compile a third-party example of a UDP server that uses
Winsock in a different way:
http://www.tenouk.com/Winsock/Winsock2example9.html
under Windows and under Linux, but the results are about the same.

I've also tried the Indy project, but no improvement was obtained.


I don't know if my tests are completely correct, but
my conclusion is that the message notification mechanism used by
Windows can create a UDP RX bottleneck in some circumstances (system
wide, not only application wide).

In my real application (where bitrates are two orders of magnitude
lower) I've created a thread pool to manage incoming and outgoing data.
Data are transferred between threads using async queues (which use Windows
messages), with a skip-data mechanism where possible.
The result, however, is that UDP RX is very (too) sensitive to any
activity on the system.
Now I'm working to reduce this sensitivity. I can accept any other kind
of compromise, but I'd like UDP packets that physically arrive at the
machine not to be discarded.
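The skip-data queue described above can be sketched like this (a minimal Python illustration, not the actual application code; `SkipQueue` is a made-up name):

```python
import queue

class SkipQueue:
    """Bounded queue that discards the oldest item when full,
    so the consumer always sees the freshest data."""
    def __init__(self, maxsize):
        self._q = queue.Queue(maxsize)

    def put(self, item):
        while True:
            try:
                self._q.put_nowait(item)
                return
            except queue.Full:
                try:
                    self._q.get_nowait()  # skip (drop) the oldest item
                except queue.Empty:
                    pass                  # another thread emptied it; retry

    def get(self, timeout=None):
        return self._q.get(timeout=timeout)

q = SkipQueue(maxsize=3)
for i in range(5):                   # producer outruns the consumer
    q.put(i)
print([q.get() for _ in range(3)])   # → [2, 3, 4]: oldest items were skipped
```

The producer (the receive thread) never blocks, which matters here: blocking it would leave the socket buffer unread and cause exactly the UDP loss being debugged.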


var
  lBuffer: array[0..1500] of AnsiChar;
I'd advise you not to allocate a static buffer inside a method, because
it is placed on the stack every time the method is called.


Yes, you're right. In my real application the buffers are statically allocated.
I'm also going to modify the example as you say.


Thank you for the help,
Emanuele
--
To unsubscribe or change your settings for TWSocket mailing list
please goto http://lists.elists.org/cgi-bin/mailman/listinfo/twsocket
Visit our website at http://www.overbyte.be

