Hello all!

We are starting to migrate to ForgeRock's OpenIG as a reverse proxy.  It uses 
HttpAsyncClient, and as such it relies on the BasicAsyncResponseConsumer to 
fetch responses from the backend web sites.  We have run into a problem when 
a backend web site returns a response that is too large to fit in our JVM 
heap.  We understand that the BasicAsyncResponseConsumer was not meant to 
handle arbitrarily large responses, and we are looking at other ways to handle 
these situations in the future.
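
(For what it's worth, the general direction we are considering is streaming the
entity in fixed-size chunks rather than buffering it whole.  A minimal,
self-contained sketch of that idea using plain java.io -- this is not the OpenIG
or httpcore API, just an illustration of constant-memory copying:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamingCopy {

    // Copy a body in fixed-size chunks so heap use stays constant
    // regardless of how large the entity is.
    static long copy(final InputStream in, final OutputStream out) throws IOException {
        final byte[] buffer = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(final String[] args) throws IOException {
        // Stand-in for a large backend response; in practice the source would
        // be the entity stream and the sink a temp file or the client socket.
        final byte[] body = new byte[1_000_000];
        final ByteArrayOutputStream sink = new ByteArrayOutputStream();
        System.out.println(copy(new ByteArrayInputStream(body), sink));
    }
}
```

With a consumer built along these lines, only one 8 KB buffer is live per
exchange no matter how large the backend response is.)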

However, we have observed a problem in the handling of OutOfMemoryErrors in the 
AbstractMultiworkerIOReactor.  In short, when the BasicAsyncResponseConsumer 
throws an OutOfMemoryError, the worker thread in the IOReactor does not appear 
to return to the pool of available threads.  Once the number of such requests 
matches the number of worker threads, new requests hang indefinitely:

2016/05/27 11:48:20:158 EDT [DEBUG] MainClientExec - [exchange: 3] start execution
2016/05/27 11:48:20:159 EDT [DEBUG] RequestAuthCache - Auth cache not set in the context
2016/05/27 11:48:20:159 EDT [DEBUG] InternalHttpAsyncClient - [exchange: 3] Request connection for {}->http://c3dufedsso2.premierinc.com:8080
2016/05/27 11:48:20:159 EDT [DEBUG] PoolingNHttpClientConnectionManager - Connection request: [route: {}->http://c3dufedsso2.premierinc.com:8080][total kept alive: 0; route allocated: 0 of 64; total allocated: 0 of 64]


We have observed that the request on the main thread fails with a ConnectionClosedException:

org.apache.http.ConnectionClosedException: Connection closed unexpectedly
        at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.closed(HttpAsyncRequestExecutor.java:137)
        at org.apache.http.impl.nio.client.InternalRequestExecutor.closed(InternalRequestExecutor.java:64)
        at org.apache.http.impl.nio.client.InternalIODispatch.onClosed(InternalIODispatch.java:71)
        at org.apache.http.impl.nio.client.InternalIODispatch.onClosed(InternalIODispatch.java:39)
        at org.apache.http.impl.nio.reactor.AbstractIODispatch.disconnected(AbstractIODispatch.java:102)
        at org.apache.http.impl.nio.reactor.BaseIOReactor.sessionClosed(BaseIOReactor.java:281)
        at org.apache.http.impl.nio.reactor.AbstractIOReactor.processClosedSessions(AbstractIOReactor.java:442)
        at org.apache.http.impl.nio.reactor.AbstractIOReactor.hardShutdown(AbstractIOReactor.java:578)
        at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:307)
        at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:106)
        at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:590)
        at java.lang.Thread.run(Thread.java:745)


And the dispatcher thread ends this way:

Exception in thread "I/O dispatcher 6" java.lang.OutOfMemoryError: Java heap space
        at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
        at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
        at org.apache.http.nio.util.HeapByteBufferAllocator.allocate(HeapByteBufferAllocator.java:51)
        at org.apache.http.nio.util.ExpandableBuffer.<init>(ExpandableBuffer.java:67)
        at org.apache.http.nio.util.SimpleInputBuffer.<init>(SimpleInputBuffer.java:47)
        at org.apache.http.nio.protocol.BasicAsyncResponseConsumer.onEntityEnclosed(BasicAsyncResponseConsumer.java:74)


I have also been able to reproduce this outside of OpenIG using a 
PoolingNHttpClientConnectionManager with default settings.  To fix it, I have 
been testing a patch against httpcore 4.4.1 that catches OutOfMemoryErrors in 
two methods of the BasicAsyncResponseConsumer class.  I rethrow them as new 
IOExceptions in order to keep the same method signatures, and I catch 
OutOfMemoryError specifically because catching all Throwables broke a unit test 
that expects a truncated-content exception.
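
To show the rethrow pattern in isolation, here is a self-contained sketch.  The
simulateAllocation method is hypothetical and simply simulates the failing
buffer allocation rather than actually exhausting the heap; it is not the
httpcore code itself:

```java
import java.io.IOException;

public class OomRethrow {

    // Hypothetical stand-in for the SimpleInputBuffer allocation; we throw
    // the error directly instead of really exhausting the heap.
    static void simulateAllocation(final long len) {
        throw new OutOfMemoryError("Java heap space");
    }

    // Catch only OutOfMemoryError and rethrow as IOException, so the
    // "throws IOException" signature is unchanged and the original
    // error is preserved as the cause.
    static void onEntityEnclosedLike(final long len) throws IOException {
        try {
            simulateAllocation(len);
        } catch (final OutOfMemoryError oom) {
            throw new IOException(
                    "Could not allocate memory for entity content, length was: " + len, oom);
        }
    }

    public static void main(final String[] args) {
        try {
            onEntityEnclosedLike(Long.MAX_VALUE);
        } catch (final IOException ex) {
            // The caller sees an ordinary IOException with the OOM as cause.
            System.out.println(ex.getCause() instanceof OutOfMemoryError);
        }
    }
}
```

Because the consumer now fails with an IOException instead of an Error escaping
up the reactor stack, the worker thread can complete its normal failure path.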

Using this patch, I can request HTTP responses that exceed available memory 
without causing a hang.  Does this seem like the appropriate place to 
patch this error?  Or should I create a JIRA issue and suggest this patch as 
a solution?  The diff is included below.

Thanks for your assistance and for working on such a great set of Java 
libraries.

- edan


--
Edan Idzerda
[email protected]


$ git diff tags/4.4.1
diff --git a/httpcore-nio/src/main/java/org/apache/http/nio/protocol/BasicAsyncResponseConsumer.java b/httpcore-nio/src/main/java/org/apache/http/nio/protocol/BasicAsyncResponseConsumer.java
old mode 100644
new mode 100755
index 10c3c40..1fd8341
--- a/httpcore-nio/src/main/java/org/apache/http/nio/protocol/BasicAsyncResponseConsumer.java
+++ b/httpcore-nio/src/main/java/org/apache/http/nio/protocol/BasicAsyncResponseConsumer.java
@@ -71,15 +71,23 @@ public class BasicAsyncResponseConsumer extends AbstractAsyncResponseConsumer<Ht
         if (len < 0) {
             len = 4096;
         }
-        this.buf = new SimpleInputBuffer((int) len, new HeapByteBufferAllocator());
-        this.response.setEntity(new ContentBufferEntity(entity, this.buf));
+        try {
+            this.buf = new SimpleInputBuffer((int) len, new HeapByteBufferAllocator());
+            this.response.setEntity(new ContentBufferEntity(entity, this.buf));
+        } catch (OutOfMemoryError oom) {
+            throw new IOException("Could not allocate memory for entity content, length was: " + len, oom);
+        }
     }
 
     @Override
     protected void onContentReceived(
             final ContentDecoder decoder, final IOControl ioctrl) throws IOException {
         Asserts.notNull(this.buf, "Content buffer");
-        this.buf.consumeContent(decoder);
+        try {
+            this.buf.consumeContent(decoder);
+        } catch (OutOfMemoryError oom) {
+            throw new IOException("Expanding buffer failed while receiving content", oom);
+        }
     }
 
     @Override
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
