[ https://issues.apache.org/jira/browse/SSHD-754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16075469#comment-16075469 ]
Eugene Petrenko commented on SSHD-754:
--------------------------------------

After reproducing the problem in tests, I was able to come up with the following patch (in Kotlin) for an inheritor of ServerSessionImpl that fixes the test:

{code}
private class PressureLock {
    private val semaphore = Semaphore(100)

    fun acquire(): SshFutureListener<IoWriteFuture?> {
        semaphore.acquire()
        return listener
    }

    private val listener = object : SshFutureListener<IoWriteFuture?> {
        override fun operationComplete(future: IoWriteFuture?) {
            semaphore.release()
        }
    }
}

private val CHANNEL_STDOUT_LOCK = PressureLock()
private val CHANNEL_STDERR_LOCK = PressureLock()

override fun writePacket(buffer: Buffer): IoWriteFuture {
    // The workaround for VCS-797
    // and https://issues.apache.org/jira/browse/SSHD-754.
    // The trick is to block the writer thread once there are more
    // than 100 messages in either the rekey wait queue or the NIO write queue.
    val lock = when (buffer.array()[buffer.rpos()]) {
        SshConstants.SSH_MSG_CHANNEL_DATA -> CHANNEL_STDOUT_LOCK
        SshConstants.SSH_MSG_CHANNEL_EXTENDED_DATA -> CHANNEL_STDERR_LOCK
        else -> null
    }?.acquire()
    val future = super.writePacket(buffer)
    if (lock != null) {
        future.addListener(lock)
    }
    return future
}
{code}

> OOM in sending data for channel
> -------------------------------
>
>                 Key: SSHD-754
>                 URL: https://issues.apache.org/jira/browse/SSHD-754
>             Project: MINA SSHD
>          Issue Type: Bug
>    Affects Versions: 1.1.0
>            Reporter: Eugene Petrenko
>
> I have an implementation of an SSHD server built on the library. It sends
> gigabytes (e.g. 5 GB) of data as command output.
> Starting with PuTTY plink 0.68 (this also includes plink 0.69), we started
> to get OOM errors. Examining memory dumps showed that most of the memory is
> consumed by the function
> org.apache.sshd.common.session.AbstractSession#writePacket(org.apache.sshd.common.util.buffer.Buffer)
> In the hprof I see thousands of PendingWriteFuture objects (each of which,
> incidentally, holds a reference to a logger instance).
> Those objects are only created from this function.
> It is clear the session is going through a rekey; I can see the kexState
> indicating the progress.
> Is there a way to artificially limit the sending queue, regardless of
> whether the related remote window allows sending that enormous amount of
> data? By my estimation, the window was reported to be around 1.5 GB or
> more. Such a huge window size may have been caused by an arithmetic
> overflow that was fixed in SSHD-701.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
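The heart of the patch above is a generic backpressure idiom: acquire a semaphore permit before each write and release it from the write-completion callback, so the writer thread blocks once a fixed number of writes are in flight. Below is a minimal, SSHD-free Kotlin sketch of just that idiom; all names and the simulated drain loop are illustrative, not part of the MINA SSHD API:

{code}
import java.util.concurrent.Semaphore
import java.util.concurrent.atomic.AtomicInteger

// Backpressure lock: at most `limit` writes may be pending at once.
// acquire() blocks the writer thread when the limit is reached and
// returns the completion callback that hands the permit back.
class PressureLock(limit: Int) {
    private val semaphore = Semaphore(limit)

    fun acquire(): () -> Unit {
        semaphore.acquire()                 // blocks at `limit` pending writes
        return { semaphore.release() }      // completion callback
    }
}

// Simulate `writes` sequential writes whose completions are deferred,
// draining the oldest pending write whenever `limit` are outstanding.
// Returns the maximum number of writes that were ever in flight.
fun simulate(writes: Int, limit: Int): Int {
    val lock = PressureLock(limit)
    val inFlight = AtomicInteger(0)
    var maxInFlight = 0
    val pending = ArrayDeque<() -> Unit>()
    repeat(writes) {
        val release = lock.acquire()
        maxInFlight = maxOf(maxInFlight, inFlight.incrementAndGet())
        pending.addLast { inFlight.decrementAndGet(); release() }
        if (pending.size >= limit) pending.removeFirst().invoke()
    }
    while (pending.isNotEmpty()) pending.removeFirst().invoke()
    return maxInFlight
}

fun main() {
    println("max in flight = ${simulate(1000, 100)}")  // prints "max in flight = 100"
}
{code}

In the actual patch the completion callback is the SshFutureListener registered on the IoWriteFuture returned by writePacket, so the pending-write queue (and the rekey PendingWriteFuture backlog) is capped instead of growing without bound.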