Can you provide a reproducer? Also, did you try running with paranoid leak 
detection enabled?
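
For reference, a minimal sketch of enabling it (the exact system property name 
has shifted slightly between 4.x releases, so treat the flag spelling as 
approximate):

import io.netty.util.ResourceLeakDetector;

// Either pass -Dio.netty.leakDetectionLevel=paranoid to the JVM, or set the
// level programmatically before the first ByteBuf is allocated:
ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID);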

> Am 19.07.2016 um 20:04 schrieb Chris Conroy <[email protected]>:
> 
> Ah okay: I didn't see the calls to failFlushed since they occur above the 
> stanza I found suspicious. 
> 
> So, the above explanation is probably not correct. Still, I am seeing a leak 
> where DirectByteBufs are rooted in the Recycler, and the rate at which these 
> buffers leak appears to be correlated with slow/partial readers.
> 
>> On Monday, July 18, 2016 at 4:36:31 PM UTC-4, Norman Maurer wrote:
>> failFlushed(...) should be called to fail and release all flushed messages.
>> 
>> Are you saying this does not happen?
>> 
>>> Am 18.07.2016 um 22:02 schrieb Chris Conroy <[email protected]>:
>>> 
>>> I’ve been trying to track down a NIO memory leak that occurs in a Netty 
>>> application I am porting from Netty 3 to Netty 4. This leak does not occur 
>>> in the Netty 3 version of the application.
>>> 
>>> For now, I’m using only unpooled heap buffers in Netty 4, but NIO buffers 
>>> do come into play for socket communication.
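>>> 
>>> For context, the allocator setup is roughly the sketch below (the bootstrap 
>>> wiring here is illustrative, not lifted from the real application):
>>> 
>>> import io.netty.bootstrap.ServerBootstrap;
>>> import io.netty.buffer.UnpooledByteBufAllocator;
>>> import io.netty.channel.ChannelOption;
>>> 
>>> // Unpooled allocator that prefers heap buffers (preferDirect = false).
>>> ServerBootstrap bootstrap = new ServerBootstrap();
>>> bootstrap.option(ChannelOption.ALLOCATOR,
>>>         new UnpooledByteBufAllocator(false));
>>> bootstrap.childOption(ChannelOption.ALLOCATOR,
>>>         new UnpooledByteBufAllocator(false));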
>>> 
>>> I’ve captured a few heap dumps from affected instances, and in each it 
>>> appears that the leaked DirectByteBuf java objects are rooted in an 
>>> io.netty.util.Recycler.
>>> 
>>> These buffers remain indefinitely: I can disable the application to drain 
>>> traffic and force GCs, but the number of NIO buffers and the NIO allocated 
>>> space stay flat.
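>>> 
>>> (For reference, the direct-buffer count and bytes can be sampled via the 
>>> JDK's BufferPoolMXBean; a minimal sketch, nothing Netty-specific:)
>>> 
>>> import java.lang.management.BufferPoolMXBean;
>>> import java.lang.management.ManagementFactory;
>>> 
>>> // Print the count and bytes of the JVM's direct (NIO) buffer pool.
>>> for (BufferPoolMXBean pool :
>>>         ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
>>>     if ("direct".equals(pool.getName())) {
>>>         System.out.printf("direct buffers: count=%d, bytes=%d%n",
>>>                 pool.getCount(), pool.getMemoryUsed());
>>>     }
>>> }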
>>> 
>>> The issue is likely related to slow readers. However, the leak persists 
>>> long after all channels have been closed.
>>> 
>>> I implemented a writability listener, and the leak does appear to go away 
>>> if I stop writing to a channel once it goes unwritable. That's encouraging, 
>>> but I'm worried it only makes the problem less likely: it's still possible 
>>> to write/flush and have pending data, since writability just limits how 
>>> much data will be buffered.
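>>> 
>>> The listener is essentially the sketch below (class name and comments are 
>>> illustrative, not the exact production handler):
>>> 
>>> import io.netty.channel.ChannelHandlerContext;
>>> import io.netty.channel.ChannelInboundHandlerAdapter;
>>> 
>>> public class WritabilityGate extends ChannelInboundHandlerAdapter {
>>>     @Override
>>>     public void channelWritabilityChanged(ChannelHandlerContext ctx)
>>>             throws Exception {
>>>         if (ctx.channel().isWritable()) {
>>>             // Resume producing data for this channel.
>>>         } else {
>>>             // Stop queueing writes until the outbound buffer drains.
>>>         }
>>>         ctx.fireChannelWritabilityChanged();
>>>     }
>>> }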
>>> 
>>> Digging into ChannelOutboundBuffer, I see the following stanza in its 
>>> close() method:
>>> 
>>> 
>>> // Release all unflushed messages.
>>> try {
>>>     Entry e = unflushedEntry;
>>>     while (e != null) {
>>>         // Just decrease; do not trigger any events via decrementPendingOutboundBytes()
>>>         int size = e.pendingSize;
>>>         TOTAL_PENDING_SIZE_UPDATER.addAndGet(this, -size);
>>> 
>>>         if (!e.cancelled) {
>>>             ReferenceCountUtil.safeRelease(e.msg);
>>>             safeFail(e.promise, cause);
>>>         }
>>>         e = e.recycleAndGetNext();
>>>     }
>>> } finally {
>>>     inFail = false;
>>> }
>>> clearNioBuffers();
>>> 
>>> This seems a bit curious to me: why are the flushed messages not released 
>>> here as well? Since the leak appears to be rooted in the Recycler, this 
>>> could be the culprit… What do you think?
>>> 