Thank you very much, Simone. Here is an update…

Bumping the idle timeout up from 200ms to 3000ms seems to work: I don’t get any
timeouts when I set the idle timeout to 3000ms. However, 3000ms is a long time for us.

What do you recommend I tweak or change so that we can work with a 200ms idle timeout?
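
For reference, here is roughly how the connector is configured. This is a minimal
sketch of an embedded-Jetty setup, not our exact code; the port and surrounding
wiring are placeholders:

    // org.eclipse.jetty.server.Server and ServerConnector
    Server server = new Server();
    ServerConnector connector = new ServerConnector(server);
    connector.setPort(8080);          // placeholder port
    connector.setIdleTimeout(200);    // the 200ms idle timeout in question (milliseconds)
    server.addConnector(connector);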

Thank you!

Rajiv

On Mon, Nov 3, 2014 at 10:32 AM, Simone Bordet <[email protected]>
wrote:

> Hi,
> On Mon, Nov 3, 2014 at 6:34 PM, Rajiv Bandaru <[email protected]> wrote:
>> Thanks again.
>>
>> What do you mean by “your application does not write data correctly”? I am
>> receiving a JSON request in the HTTP request body. The issue is that after
>> reading partial JSON, the read blocks and waits to receive the rest of the
>> JSON request. However, the rest of the JSON never comes through, thereby
>> timing out. So this is just at the request-receiving stage. Could you
>> please elaborate on what you mean?
> There is an application that writes JSON. I assume it's your application.
> You did not describe your system so I don't know much more.
>> "your system is
>> overloaded for the configuration you have” - Yes, this crossed my mind.
>> However, my question is why would a request that’s in the middle of
>> processing get timed out? And also, is the timeout a symptom of an overload?
> For example, the thread that is writing is preempted by the OS, 200ms
> pass, and finally when it's resumed it would write the rest of the
> data, but unfortunately the connection has already been closed.
> Timeouts may be symptoms of overload, but also of application mistakes
> (e.g. the application does not write all content).
> Just to give you an example:
> response.setContentLength(18);                    // declares 18 bytes of content
> response.getOutputStream().write(new byte[10]);   // but only 10 bytes are actually written
> Since the content length and the actual bytes written are different,
> the receiver will wait for the missing 8 bytes to arrive; not seeing
> them, it will idle timeout.
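> As a sketch, the fix is simply to write exactly the number of bytes that
> were declared, so the receiver is not left waiting:
> response.setContentLength(18);
> response.getOutputStream().write(new byte[18]);   // all 18 declared bytes are written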
> There are about a gazimillion other ways an application can do things wrong :)
>> And what other information are you looking for? I am happy to provide it.
>> As I said, with the current configuration I am sending a blast of 75
>> requests in one go to the server, and I immediately see this error.
>>
>> Also, I bumped up corePoolSize and maxPoolSize to 256, and acceptors to 3
>> (number of CPU cores - 1), and I still see this exception.
> We perform load tests in Jetty at rates of 350k or more requests/s and
> we don't see any exception.
> I don't know what to say without analyzing your whole application and
> load test client.
> I'd suggest that you carefully analyze your code to understand if
> there are resources that are exhausted during your load test runs.
> Start with Jetty at the default configuration, and monitor thread
> pools, connections, locks, JVM, GC, etc. of both the client(s)
> (especially the clients) and the server(s).
> JMC may help you in this: http://docs.oracle.com/javacomponents/jmc.htm
> It is very unlikely that the problem is in Jetty.
> -- 
> Simone Bordet
> ----
> http://cometd.org
> http://webtide.com
> http://intalio.com
> Developer advice, training, services and support
> from the Jetty & CometD experts.
> Intalio, the modern way to build business applications.
_______________________________________________
jetty-users mailing list
[email protected]
To change your delivery options, retrieve your password, or unsubscribe from 
this list, visit
https://dev.eclipse.org/mailman/listinfo/jetty-users
