Thanks Willem,

Based on your response, I'd be interested to know whether there is a better approach for handling my scenario. Obviously this is a simplified case, but I have been using a similar approach to achieve high-throughput http requests and it has been working fine so far. It is only since I added the initial http request through the same channel that this issue has appeared.
The reason I am re-using the same endpoint is that the URIs are dynamic, so I set a fixed endpoint URI but then fetch the required content using the CamelHttpUri header.

Many thanks,
Elvio

-----Original Message-----
From: Willem Jiang [mailto:willem.ji...@gmail.com]
Sent: 02 February 2015 05:37
To: users@camel.apache.org; Elvio Caruana (ecaruana)
Subject: Re: camel-ahc issue

It's not a good idea to send multiple requests through a single channel, as it could confuse the async handler when it receives the response.

--
Willem Jiang

Red Hat, Inc.
Web: http://www.redhat.com
Blog: http://willemjiang.blogspot.com (English)
      http://jnn.iteye.com (Chinese)
Twitter: willemjiang
Weibo: 姜宁willem

On January 31, 2015 at 7:12:34 AM, Elvio Caruana (ecaruana) (ecaru...@cisco.com) wrote:
> Hi,
>
> I think I may have run into a potential issue with camel-ahc. I've
> narrowed it down to a very specific scenario: a combination of re-using
> the same ahc component together with the split().parallelProcessing()
> EIP. Tested with Camel 2.14.1 and 2.14.0.
>
> I'll try to demonstrate with a simple route:
>
> List<String> list = Arrays.asList("alice", "bob", "charles", "david", "edward");
>
> from("timer://foo?repeatCount=1")
>     .setBody(constant("someMainRouteContent"))
>     .to("direct:fetch") // [1]
>     .setBody(constant(list))
>     .split(body()).parallelProcessing() // [2]
>         .log("processing ${body}")
>         .to("direct:fetch") // [3]
>     .end()
>     .log("and back to main processing")
>     .end();
>
> from("direct:fetch")
>     .log("making http request for ${body}")
>     .to("ahc:http://localhost:8765/rest/test")
>     .log("returned from http request for ${body}")
>     .end();
>
> Note:
>
> 1. The routes share the same AHC component [1, 3] (i.e. a common set of
> ahc worker threads).
>
> 2. The ahc worker thread continues processing synchronously between [1]
> and [2], until the point where the split() parallel thread pool takes
> over. On an http response, the ahc worker thread continues processing
> synchronously, as expected.
>
> 3. The above results in a timeout exception for one of the split
> exchanges (irrespective of the size of the list), and the route never
> completes:
>
> java.util.concurrent.TimeoutException: Request timeout of 60000 ms
>     at com.ning.http.client.providers.netty.timeout.TimeoutTimerTask.expire(TimeoutTimerTask.java:43)
>     at com.ning.http.client.providers.netty.timeout.RequestTimeoutTimerTask.run(RequestTimeoutTimerTask.java:41)
>     at org.jboss.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:556)
>     at org.jboss.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:632)
>     at org.jboss.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:369)
>     at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
>     at java.lang.Thread.run(Thread.java:722)
>
> A trace of the above route is attached.
>
> With two separate ahc endpoints (i.e. with separate thread pools), or
> without parallel processing in the split EIP, the above works fine.
> Any thoughts?
>
>
> Kind Regards,
> Elvio
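For reference, the dynamic-URI pattern Elvio describes (a fixed ahc endpoint with the real target supplied via the CamelHttpUri header) and the separate-endpoint workaround he mentions could look roughly like the sketch below. This is a minimal illustration, not code from the thread; the base URI, the route names, and the "ahc2" component name are assumptions.

    import org.apache.camel.Exchange;
    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.component.ahc.AhcComponent;
    import org.apache.camel.impl.DefaultCamelContext;

    public class DynamicUriSketch {
        public static void main(String[] args) throws Exception {
            DefaultCamelContext context = new DefaultCamelContext();

            // Workaround from the thread: registering a second AhcComponent
            // instance gives one call site its own client (and worker threads),
            // separate from the component used by the initial request.
            context.addComponent("ahc2", new AhcComponent());

            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    // Dynamic-URI pattern: the endpoint URI stays fixed, while
                    // the actual request URI is set per exchange via the
                    // CamelHttpUri header (Exchange.HTTP_URI).
                    from("direct:fetch")
                        .setHeader(Exchange.HTTP_URI,
                                simple("http://localhost:8765/rest/${body}"))
                        .to("ahc:http://localhost:8765/rest/test");

                    // Same pattern, routed through the second component
                    // instance so it does not share worker threads with the
                    // route above.
                    from("direct:fetchSeparate")
                        .setHeader(Exchange.HTTP_URI,
                                simple("http://localhost:8765/rest/${body}"))
                        .to("ahc2:http://localhost:8765/rest/test");
                }
            });

            context.start();
            // ... send exchanges to direct:fetch / direct:fetchSeparate, then:
            context.stop();
        }
    }

With the call sites split across two component instances, the parallel split branches no longer contend for the same ahc worker thread that is still completing the outer request, which matches Elvio's observation that two separate ahc endpoints avoid the timeout.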