Title: Message
Hello Daniel
 
I do not know whether you can modify the length of the TIME_WAIT period,
or increase the number of client ports supported by the OS.
 
The TIME_WAIT state is a mechanism that gives late packets a chance to be handled correctly.
Its length is specific to the OS implementation you use.
 
Some operating systems limit the number of client ports available.
On Windows XP I noticed during my tests that the highest port number was 5000.
The first 1024 ports are reserved by the system, so only (5000-1024)=3976 ports
are available to all the client software.
(The maximum number I saw during my tests on XP was 3932.)
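For what it's worth, on Windows both limits can be tuned in the registry. A sketch of a .reg fragment, assuming Windows 2000/XP; the chosen values (30 s and 65534) are just examples, and a reboot is needed for them to take effect:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; Shorten TIME_WAIT from the default (240 s on older Windows) to 30 s.
; Valid range is 30-300 seconds. 0x1e = 30.
"TcpTimedWaitDelay"=dword:0000001e
; Raise the highest client (ephemeral) port from the default 5000.
; 0xfffe = 65534, the maximum.
"MaxUserPort"=dword:0000fffe
```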
 
Together with a long recovery time, this leads to a low connection frequency.
During my tests the highest frequency without failure was about 31 Hz!
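That measurement is roughly what the two numbers above predict. A back-of-the-envelope sketch, assuming about 3976 usable ephemeral ports and a 120 s TIME_WAIT (both values are assumptions; some stacks hold TIME_WAIT for 240 s):

```java
public class ConnRate {
    public static void main(String[] args) {
        int ephemeralPorts = 5000 - 1024; // ports 1025..5000 -> 3976 usable
        int timeWaitSeconds = 120;        // assumed TIME_WAIT (2*MSL on many stacks)
        // Each connection parks one client port in TIME_WAIT, so the
        // sustainable rate is bounded by ports / TIME_WAIT duration.
        double maxRate = (double) ephemeralPorts / timeWaitSeconds;
        System.out.printf("%.1f connections/s%n", maxRate); // prints 33.1 connections/s
    }
}
```

That ceiling of about 33 connections/s fits the observed ~31 Hz failure point quite well.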
 
Andreas
 
-----Original Message-----
From: Mayer, Daniel S [mailto:[EMAIL PROTECTED]
Sent: Friday, August 26, 2005 12:45 AM
To: [email protected]
Subject: RE: random connection refusal

Thanks, that is helpful.

 

I ran netstat -a to see how many sockets were in the wait state, and there seem to be tons, like a hundred. Still, almost all of my requests are handled, with just a few missing here and there. I was thinking all of these sockets in the waiting state might be kept open after I have a client execute a request and return a result.

 

Do I have to do anything on either the client side or the server side to say, after I return a result, that I am done with the socket and it can be closed? I know that these sockets eventually close from some timeout, because after I have closed all my programs for a while, they all still show up in netstat with hundreds waiting, and then pretty much all disappear at once a few minutes after I closed everything.
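On the client side, an explicit close() (or try-with-resources) is all you can do; the TIME_WAIT entry afterwards belongs to the OS, not to the Java socket, which is why closed sockets keep showing up in netstat for a few minutes. A minimal stdlib sketch (the loopback server here is just for illustration):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketCloseDemo {
    // Open a connection to the given local port and close it right away.
    // close() sends the FIN; the OS then keeps the port in TIME_WAIT for
    // up to 2*MSL on its own -- nothing more can be done from Java.
    static Socket openAndClose(int port) throws IOException {
        Socket s = new Socket("127.0.0.1", port);
        s.close();
        return s;
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) { // any free port
            Socket s = openAndClose(server.getLocalPort());
            System.out.println(s.isClosed()); // prints true
        }
    }
}
```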

 

Or does this mean I am using the client wrong by creating a new client and executing on it each time I want to send a request and receive a result? Do I need to set up a dispatcher or something and only use one static client the whole time? I am talking to multiple hosts, so would it be best to have one client per host?
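One common pattern is exactly that: keep one long-lived client per host and reuse it, instead of constructing a fresh one per call. A generic sketch with a placeholder client type (substituting the real org.apache.xmlrpc.XmlRpcClient is an assumption about the setup):

```java
import java.util.HashMap;
import java.util.Map;

public class ClientCache {
    // Placeholder for the real client class; using your actual
    // XML-RPC client here is an assumption about your setup.
    static class RpcClient {
        final String hostUrl;
        RpcClient(String hostUrl) { this.hostUrl = hostUrl; }
    }

    private final Map<String, RpcClient> clients = new HashMap<>();

    // One client per host, created lazily and then reused, so
    // connections can be kept alive instead of churning through
    // a fresh ephemeral port on every call.
    synchronized RpcClient forHost(String hostUrl) {
        RpcClient c = clients.get(hostUrl);
        if (c == null) {
            c = new RpcClient(hostUrl);
            clients.put(hostUrl, c);
        }
        return c;
    }

    public static void main(String[] args) {
        ClientCache cache = new ClientCache();
        RpcClient a = cache.forHost("http://hostA:8080/RPC2");
        RpcClient b = cache.forHost("http://hostA:8080/RPC2");
        System.out.println(a == b); // prints true: same instance reused
    }
}
```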

 

I added keepAlive(true) to both servers and clients, which seemed to help a lot: it really reduced how many sockets were waiting and all but eliminated them from the local connections. But there are still many, many sockets from remote hosts in the wait state, when there should really only be one or two connections going from each server at a time. It seems that eventually my messages never go through after I build up a ton of sockets in the wait state (which happens when I am running 4 remote clients, each communicating with the server on 2 threads all the time). This also leads to a bug that someone has placed on the list or mentioned before, where the server keeps printing java.util.NoSuchElementException. It doesn't seem to cause an error in execution, but it keeps being printed with no other trace.

 

When I was working with the asynchronous mode a while ago, I ran into another similar reported error that even had a suggested patch for the fix, but the patch had not been applied in the binary release, which seems outdated, so I had to get the code from CVS and apply the patch myself.

 

I am scared to ask, but after going through the bug tracker for this project (http://issues.apache.org/jira/browse/XMLRPC), there seem to be many unresolved bugs, some of which have solutions or patches on the web but don't seem to be resolved in the tracking system. So is the development of this project still active? Is it safe to rely on XML-RPC for software that should run 24/7 with a reboot about once every 2 weeks? Has everyone doing this sort of thing moved to SOAP and JAX-RPC? The number of open bugs has just made me a little nervous, and I haven't been following xml-rpc very long. The user list still seems to be very active, but is the development? Anyways, thanks for your time, help, and concern.

 

Peace,

Dan “I am nervous, I guess I should dance to some techno to relax” Man

 

