Some information which may be useful for those implementing protocols over UDP: https://tools.ietf.org/html/rfc5405
On Mon, Mar 16, 2015 at 10:46 AM, Dahlia Trimble <[email protected]> wrote:

> I'm not sure where the loss occurs, but I've seen similar behavior in other
> network-layer implementations based on UDP. You can only send so much
> before the receiving end sees packet loss, and this makes sense, as there is
> no (or very little) buffering at the receiving end to store unprocessed
> packets. I believe what most implementations do is throttle packets at the
> sending end to try to minimize this loss. I'm not sure why you see async
> showing more loss than other patterns, but I have also seen loss with
> select() polling as well. I believe viewers typically don't send packets at
> such a high rate that such loss becomes a problem, and any packets sent by
> the viewer at a "high" rate are assumed to be unreliable anyway, so some
> loss is acceptable. OpenSimulator also throttles the UDP packets it sends
> to viewers to try to mitigate packet loss. If you were using TCP instead,
> it would do this for you behind the scenes: if you try to send too much
> data down a TCP socket, it will just fill buffers and/or eventually block
> until it can successfully send the data.
>
> On Mon, Mar 16, 2015 at 6:54 AM, Michael Heilmann <[email protected]> wrote:
>
>> OpenSim devs,
>>
>> I have been working on an external process that I hope to link to an
>> OpenSim plugin I am authoring. As a sanity check, I ran a simple socket
>> test against the components to look for obvious problems before I get to
>> the heavy lifting. This test was not meant to reveal anything, just to
>> confirm that the pieces are in place. However, I am not seeing the
>> expected behavior.
>>
>> My simple sanity test: send 1,000,000 small UDP packets at a server and
>> have it count the packets received. This is trivial, and the sending
>> client is Python, so all packets should be received.
>>
>> While running this on localhost (~160K packets per second):
>>
>>> Go, C++, Node.js, C# synchronous: negligible to no packet loss, as
>>> expected
>>>
>>> C# async: 80% packet loss
>>
>> I have reviewed the MSDN documentation and went through the OpenSim code
>> (as it uses C# async) to clean up and run my tests. That 80% packet loss
>> is after forcing my thread pool and IOCP thread counts up, and giving C#
>> async extra time before and after the test to warm up thread pools and
>> process any queues.
>>
>> I have read that in some cases C# and .NET have a packet-loss bug for UDP
>> on localhost.
>>
>> I re-ran this test between two Linux servers with a 10GbE LAN
>> interconnect to rule out any .NET localhost packet-loss issues, with the
>> following results (~115K packets per second):
>>
>>> C# synchronous: ~998K messages received (average over 3 runs)
>>> Go: ~980K messages received (average over 3 runs)
>>>
>>> C# async: ~40K messages received (average over 3 runs)
>>
>> Disclosure: Go on my workstation is version 1.4.2, but on the servers it
>> is version 1.2.1. Mono versions are identical.
>>
>> Now, the 3 runs were very nearly identical, with differences in the
>> thousands of packets. As before, C# async was given extra treatment to
>> help it along.
>>
>> I am on 64-bit Linux, so I had a coworker write his own version of this
>> test using Visual Studio 2013 on Windows (no code sharing), and he saw
>> the same behavior: C# async suffering massive packet loss while C# sync
>> keeps up easily.
>>
>> Is there anything in the OpenSim client stack that I am missing that
>> allows for higher async performance? Or am I on a witch hunt, and this
>> performance is within OpenSim bounds? I have been increasing the thread
>> pool and IOCP threads as OpenSim does.
>> I have pasted the important pieces of the code below, as the entirety
>> could be too long for this mailing list (the UDPPacketBuffer class is
>> copied from the OpenSim code):
>>
>>> private void AsyncBeginReceive()
>>> {
>>>     UDPPacketBuffer buf = new UDPPacketBuffer();
>>>     this.u.BeginReceiveFrom(
>>>         buf.Data,
>>>         0,
>>>         UDPPacketBuffer.BUFFER_SIZE,
>>>         SocketFlags.None,
>>>         ref buf.RemoteEndPoint,
>>>         AsyncEndReceive,
>>>         buf);
>>> }
>>>
>>> private void AsyncEndReceive(IAsyncResult iar)
>>> {
>>>     // schedule another receive
>>>     AsyncBeginReceive();
>>>
>>>     lock (this)
>>>     {
>>>         this.i++;
>>>     }
>>> }
>>>
>>> public static void Main(string[] args)
>>> {
>>>     int iocpThreads;
>>>     int workerThreads;
>>>     ThreadPool.GetMinThreads(out workerThreads, out iocpThreads);
>>>     workerThreads = 32;
>>>     iocpThreads = 32;
>>>     Console.WriteLine(workerThreads);
>>>     ThreadPool.SetMinThreads(workerThreads, iocpThreads);
>>>     MainClass mc = new MainClass();
>>>     mc.AsyncBeginReceive();
>>>     // manually trigger packet report after test run completion
>>>     Console.ReadLine();
>>>     Console.WriteLine(mc.i);
>>> }
>>
>> I have run this test repeatedly, varying workerThreads and iocpThreads
>> from 8 to 16, 32, 64 ... 512, without any change in the number of packets
>> received. If anyone has any insight into why this suffers from so much
>> packet loss, I would appreciate it. Thanks.
>>
>> --
>> Michael Heilmann
>> Research Associate
>> Institute for Simulation and Training
>> University of Central Florida
>>
>> _______________________________________________
>> Opensim-dev mailing list
>> [email protected]
>> http://opensimulator.org/cgi-bin/mailman/listinfo/opensim-dev
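For anyone who wants to reproduce the sanity test described in the thread, a minimal sender/receiver pair in Python might look like the sketch below. The port number, payload size, and packet count are illustrative choices, not values taken from Michael's code, and the count is scaled down from the 1,000,000 packets used in his runs:

```python
import socket
import threading

PORT = 9999          # arbitrary port chosen for this example
COUNT = 100_000      # scaled down from the 1,000,000 used in the thread
PAYLOAD = b"x" * 32  # "small" packets

received = 0

def receiver(ready):
    """Count datagrams until the sender goes quiet."""
    global received
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", PORT))
    sock.settimeout(1.0)  # a quiet second means the run is over
    ready.set()
    while True:
        try:
            sock.recvfrom(2048)
            received += 1
        except socket.timeout:
            break
    sock.close()

# Start the receiver and wait until its socket is bound.
ready = threading.Event()
t = threading.Thread(target=receiver, args=(ready,))
t.start()
ready.wait()

# Blast the packets at the receiver as fast as Python can send them.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(COUNT):
    sock.sendto(PAYLOAD, ("127.0.0.1", PORT))
sock.close()

t.join()
print(f"received {received} of {COUNT}")
```

Even on localhost the received count can fall short of COUNT if the receiver cannot drain the socket fast enough, which is exactly the effect the thread is measuring.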
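One knob worth mentioning in connection with Dahlia's point about receive-side buffering: the kernel's per-socket receive buffer can be enlarged with SO_RCVBUF, which gives a bursty sender more headroom before datagrams are dropped. A small Python sketch (the 4 MB figure is just an illustration; on Linux the effective value is also capped by net.core.rmem_max):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Ask the kernel for a larger receive buffer before binding.
requested = 4 * 1024 * 1024  # 4 MB, an arbitrary illustrative value
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, requested)

# Read back what the kernel actually granted; Linux reports double the
# requested bookkeeping value and silently caps requests at
# net.core.rmem_max, so this may be far less than what was asked for.
effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"requested {requested}, effective {effective}")

sock.close()
```

This only buys headroom for bursts: a receiver that is persistently slower than the sender will still drop packets once the buffer fills, which is why throttling at the sending end (as OpenSimulator does) remains necessary.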
