Thanks for your comment (and those of others). The problem I'm facing is more of a conceptual problem than a real-world problem. We don't have any servers that require very high throughput and very low latency written in .NET - we stick to C++ for those. I was hoping that next time around, we could use .NET because a managed environment offers a huge advantage in the form of much shorter development times.
All this rose because I'm writing a client that connects to one of those high-throughput low-latency servers. This server running on state-of-the-art hardware of last year can manage thousands of messages per second distributed to dozens of clients (100,000s messages per second overall). A .NET client (on a weaker desktop, but not 50% weaker) trying to listen to this server (receiving thousands of messages a second at peak time) simply can't keep up. The client is almost fully implemented (so I'm not optimizing too early...). My computer is connected to a 10Mbit network, and still the CPU is what limits the performance. When using the .NET profiler to figure out why I'm getting 70% CPU usage (on a hyperthreaded CPU - 50% in the thread that handles the communications, 20% in the UI thread which does basically nothing). I fixed a few things (mainly removing extra allocations) and performance improved. However, now a major part of the time is spent translating byte[]s to strings, and the other major chunk of time is spent inside BeginReceive and the code preparing the handles for WaitHandle.WaitAny (which is not the same as WaitForMultipleObjects, because it takes an array and not an array and a length parameters). I thought the reason for this poor performance was a framework feature I was missing. I guess there isn't, and that's the reason IIS is not written in C#, either :-) Thanks, Itay -----Original Message----- From: J. Merrill [mailto:[EMAIL PROTECTED] Sent: Tuesday, October 31, 2006 5:41 AM Subject: Re: Efficient .NET Networking (I re-read this just before sending, and I think it's necessary to say that I don't intend this to be treated as a personal attack. When I say "you" I'm not trying to single you out personally. Treat of it as "those who have these ideas" or some such. I do not intend to attack anyone personally, but I am willing to "take on" what I think are baseless concerns.) I think you're worried about (what I think are) non-issues without any evidence that they're actually issues. It's not as though memory allocation in .Net is a performance pig; it's a few assembler instructions (an add followed by a compare to see if a GC is needed). When your network hardware and software stack is so fast and efficient that it makes your "high performance communications" code a bottleneck, maybe you'll need to re-write. However, keep in mind that any "high performance computer" today is literally 100 times faster than the fastest machine available only a couple of years ago -- and that's before the server is a multiple-processor box with 4 or 8 quad-core processors, like it will be in 6-12 months. (The laptop I'm writing this on has 4 times the RAM of the desktop I've been writing software on for years.) How much faster must the hardware get before it's time to stop worrying about the performance of the software you didn't write and worry instead about finishing the software that you're being asked to write? Who said "first make it work, then make it fast"? I don't remember, but that was a good thought. Are you in the "performance tuning" part of your application life-cycle? Are there no user-visible features left to build? The whole idea of the .Net framework (and every other bit of "software layering" that we deal with) is to reduce the time, effort, and low-level knowledge it takes to produce the desired software. You do that by giving up some control over the nitty-gritty in order to work, and hopefully think, at a higher level. "Premature optimization is the root of all evil" was said by people much smarter than I. Keep it in mind, now that the machines are so unbelievably much faster than they were when TCP/IP networking, garbage collectors, dynamic memory management (and on and one) were first implemented. Back then, perhaps you had to worry about every memory allocation. Today, you're wasting the money of the people who are paying you if you don't start working on the problems they think they're paying you to solve and ignore the problems that they think someone has already solved and provided as tools for you to use. At 04:53 AM 10/30/2006, Itay Zandbank wrote (in part) >The answer may very well be "for high performance communications, use C++ and the Windows API", I'm just hoping it's not. J. Merrill / Analytical Software Corp =================================== This list is hosted by DevelopMentor(r) http://www.develop.com View archives and manage your subscription(s) at http://discuss.develop.com <html><body><center><hr><b><font face=arial size=2>Visit the Tel Aviv Stock Exchange's Website<a href=http://www.tase.co.il> www.tase.co.il</a></b></body></html> =================================== This list is hosted by DevelopMentor® http://www.develop.com View archives and manage your subscription(s) at http://discuss.develop.com