[ https://issues.apache.org/jira/browse/SSHD-754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Goldstein Lyor resolved SSHD-754.
---------------------------------
    Resolution: Workaround
 Fix Version/s: 1.7.0

[Added capability|https://github.com/apache/mina-sshd/commit/e0a6b8da2b9b6af2fa6ff23fdcd76188ab1db2fe] to register a {{ChannelStreamPacketWriterResolver}} through which one can wrap the channel inside one's own code that can do throttling. See also the (experimental) {{ThrottlingPacketWriter}} example in the _sshd-contrib_ module.

> OOM in sending data for channel
> -------------------------------
>
>                 Key: SSHD-754
>                 URL: https://issues.apache.org/jira/browse/SSHD-754
>             Project: MINA SSHD
>          Issue Type: Bug
>    Affects Versions: 1.6.0
>            Reporter: Eugene Petrenko
>            Assignee: Goldstein Lyor
>              Labels: channel, stream, throttle
>             Fix For: 1.7.0
>
>
> I have an implementation of an SSHD server based on the library. It sends gigabytes
> (e.g. 5 GB) of data as command output.
> Starting with PuTTY plink 0.68 (and also 0.69), we began to see OOM
> errors. Inspecting memory dumps showed that most of the memory is consumed via
> org.apache.sshd.common.session.AbstractSession#writePacket(org.apache.sshd.common.util.buffer.Buffer)
> In the hprof I see thousands of PendingWriteFuture objects (each of which, incidentally, holds a
> reference to a logger instance), and those objects are created only from this
> function.
> It is clear that the session is going through a re-key; I can see kexState
> indicating the progress.
> Is there a way to artificially limit the sending queue, regardless of whether the
> remote window allows sending that enormous amount of data? By my
> estimation, the window was reported to be around 1.5 GB or more. Perhaps such a
> huge window size was caused by the arithmetic overflow fixed in
> SSHD-701.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
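The workaround above lets user code wrap the channel's packet writer and bound how many writes may be in flight, so producers block instead of queueing unbounded PendingWriteFuture objects during a re-key. A minimal, hypothetical sketch of that throttling idea (the class and method names here are invented for illustration and are not the actual {{ChannelStreamPacketWriterResolver}} API):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical wrapper demonstrating the throttling concept: cap the number
// of in-flight packets so the producer stalls instead of exhausting the heap.
public class ThrottledWriter {
    private final Semaphore permits;                      // available write slots
    private final AtomicInteger pending = new AtomicInteger(); // writes in flight

    public ThrottledWriter(int maxPending) {
        this.permits = new Semaphore(maxPending);
    }

    // Blocks the calling thread once maxPending writes are outstanding,
    // rather than letting pending-write objects pile up in memory.
    public void write(byte[] packet) throws InterruptedException {
        permits.acquire();
        pending.incrementAndGet();
        // ... hand the packet to the underlying channel writer here ...
    }

    // Non-blocking variant: returns false when the quota is exhausted.
    public boolean tryWrite(byte[] packet) {
        if (!permits.tryAcquire()) {
            return false;
        }
        pending.incrementAndGet();
        // ... hand the packet to the underlying channel writer here ...
        return true;
    }

    // To be invoked from the write-completion callback of the real writer.
    public void writeCompleted() {
        pending.decrementAndGet();
        permits.release();
    }

    public int pendingCount() {
        return pending.get();
    }
}
```

The {{ThrottlingPacketWriter}} in _sshd-contrib_ applies the same bounding idea within the library's own writer abstraction; the sketch above only illustrates the mechanism.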