[
https://issues.apache.org/jira/browse/SSHD-754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16081972#comment-16081972
]
Goldstein Lyor commented on SSHD-754:
-------------------------------------
Looks like a good idea - I believe, though, that the better solution would be to
implement such "throttling" at the *channel* level instead of the *session* level -
after all, we might want to control each channel separately; i.e., in
{{ChannelOutputStream/ChannelAsyncOutputStream - write/flush}}. Furthermore,
the "throttle" rate should be *configurable*, where zero (the default) means "no
throttling".
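To illustrate, a channel-level throttle could be a bounded permit pool that blocks writers once too many packets are pending and releases a permit when a write future completes. This is only a sketch of the idea; the class and the {{maxPendingWrites}} property name are made up here, not an actual SSHD API:

```java
import java.util.concurrent.Semaphore;

// Illustrative sketch of a per-channel write throttle, not real SSHD code.
// "maxPendingWrites" stands in for whatever configurable property the
// implementation would expose; zero (the default) means "no throttling".
public class ChannelWriteThrottle {
    private final Semaphore permits; // null when throttling is disabled

    public ChannelWriteThrottle(int maxPendingWrites) {
        this.permits = (maxPendingWrites > 0) ? new Semaphore(maxPendingWrites) : null;
    }

    /**
     * Called before queueing a packet (e.g. from a write/flush path).
     * Blocks the writer when the pending-write queue is full.
     */
    public void beforeWrite() throws InterruptedException {
        if (permits != null) {
            permits.acquire();
        }
    }

    /** Called when a pending write completes (e.g. from a write-future listener). */
    public void afterWriteCompleted() {
        if (permits != null) {
            permits.release();
        }
    }

    public boolean isThrottling() {
        return permits != null;
    }

    public static void main(String[] args) throws Exception {
        ChannelWriteThrottle throttle = new ChannelWriteThrottle(2);
        throttle.beforeWrite();
        throttle.beforeWrite();
        // Queue is now full; a completion must free a permit before the next write.
        throttle.afterWriteCompleted();
        throttle.beforeWrite();
        System.out.println("throttling=" + throttle.isThrottling());

        ChannelWriteThrottle off = new ChannelWriteThrottle(0);
        off.beforeWrite(); // no-op when throttling is disabled
        System.out.println("throttling=" + off.isThrottling());
    }
}
```

With a bound like this in place, writers stall instead of queueing unbounded {{PendingWriteFuture}} objects during a rekey, regardless of how large the remote window is.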
> OOM in sending data for channel
> -------------------------------
>
> Key: SSHD-754
> URL: https://issues.apache.org/jira/browse/SSHD-754
> Project: MINA SSHD
> Issue Type: Bug
> Affects Versions: 1.1.0
> Reporter: Eugene Petrenko
>
> I have an implementation of SSHD server with the library. It sends gigabytes
> (e.g. 5GB) of data as command output.
> Starting with PuTTY plink 0.68 (and also plink 0.69) we started to get
> OOM errors. Checking memory dumps showed that most of the memory is consumed
> by the function
> org.apache.sshd.common.session.AbstractSession#writePacket(org.apache.sshd.common.util.buffer.Buffer)
> In the hprof I see thousands of PendingWriteFuture objects (btw, each holds a
> reference to a logger instance), and those objects are created only by this
> function.
> It is clear the session is going through a rekey; I see the kexState
> indicating the progress.
> Is there a way to artificially limit the sending queue, even if the related
> remote window allows sending that enormous amount of data? By my
> estimation, the window was reported to be around 1.5 GB or more. Maybe such a
> huge window size was caused by an arithmetic overflow that was fixed in
> SSHD-701
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)