Hi, I've run into an odd problem in a streaming TV application that I'm building. I have a client and server on the same machine, with the server sending data as fast as Socket.Send() allows. The client uses NetworkStream.Read() to retrieve data on demand.
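
For context, the client read loop is timed roughly like the sketch below. This is not the exact application code; the host, port and logging format are placeholder assumptions, but it shows how the per-read timings in the sample further down are produced.

    // Hypothetical sketch of the timed client read loop, not the exact
    // application code; the endpoint and buffer size are assumptions.
    using System;
    using System.Diagnostics;
    using System.Net.Sockets;

    class ReadTimingSketch
    {
        static void Main()
        {
            TcpClient client = new TcpClient("localhost", 9000);  // assumed endpoint
            NetworkStream stream = client.GetStream();
            byte[] buffer = new byte[2048];                       // matches the 2048-byte reads below
            Stopwatch watch = new Stopwatch();

            while (true)
            {
                watch.Reset();
                watch.Start();
                int read = stream.Read(buffer, 0, buffer.Length); // blocks until data arrives
                watch.Stop();

                if (read <= 0)
                    break;                                        // remote end closed the connection

                Console.WriteLine("Read {0} bytes took {1}", read, watch.Elapsed);
            }
        }
    }

The elapsed times logged this way are what appear in the sample below.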
Occasionally the client will underrun a quite sizable buffer set aside to smooth out gaps in network delivery. Here's a sample of the read timings from when that occurs:

Read 2048 bytes took 00:00:00.0000810
Read 2048 bytes took 00:00:00.0000570
Read 2048 bytes took 00:00:00.0000650
Read 2048 bytes took 00:00:00.0000630
Read 2048 bytes took 00:00:00.0000550
Read 2048 bytes took 00:00:00.0000650
Read 2048 bytes took 00:00:00.0000620
Read 2048 bytes took 00:00:00.0990950
Read 2048 bytes took 00:00:00.0001190
Read 2048 bytes took 00:00:00.0000670
Read 2048 bytes took 00:00:00.0996880
Read 2048 bytes took 00:00:00.0001030
Read 2048 bytes took 00:00:00.0000700

Small fluctuations in recv() latency are expected across all of the reads, but notice the two that land suspiciously close to 0.1s. I added timing probes to mono/metadata/socket-io.c to measure the Socket_Receive_internal call, and they showed no correlating spike in latency, so I have to assume the delay is being introduced elsewhere in the Mono runtime.

Might there be an obvious cause of this? The spikes are always so close to 0.1s that it can hardly be coincidental.

--
Jay L. T. Cornwall
http://www.jcornwall.me.uk/

_______________________________________________
Mono-list maillist - [email protected]
http://lists.ximian.com/mailman/listinfo/mono-list
