tomaswolf commented on pull request #206:
URL: https://github.com/apache/mina-sshd/pull/206#issuecomment-955025850
> Great - it just occurred to me that perhaps we should consider making this
new behavior **configurable** where the feature is enabled by default. This
way, if any of the nor
lgoldstein edited a comment on pull request #206:
URL: https://github.com/apache/mina-sshd/pull/206#issuecomment-954001356
> I'm in the process of updating sftp.md with some hints about this,
because this also affects normal clients.
Great - it just occurred to me that perhaps we sh
lgoldstein commented on pull request #206:
URL: https://github.com/apache/mina-sshd/pull/206#issuecomment-954001356
> I'm in the process of updating sftp.md with some hints about this,
because this also affects normal clients.
Great - it just occurred to me that perhaps we should co
lgoldstein commented on pull request #206:
URL: https://github.com/apache/mina-sshd/pull/206#issuecomment-953998871
> can't that section of CHANGES.md be auto-generated from JIRA or from the
git history?
I don't think so - not every commit is "worthy" of a full-blown release
note.
tomaswolf commented on pull request #206:
URL: https://github.com/apache/mina-sshd/pull/206#issuecomment-953995845
I know. I'm in the process of updating sftp.md with some hints about this,
because this also affects normal clients.
BTW, can't that section of CHANGES.md be auto-genera
lgoldstein commented on pull request #206:
URL: https://github.com/apache/mina-sshd/pull/206#issuecomment-953973015
Great work - one comment though for future reference - we are maintaining a
`CHANGES.md` file where we log any major features, bug fixes, etc. that we merge
into the code so
tomaswolf merged pull request #206:
URL: https://github.com/apache/mina-sshd/pull/206
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: dev-unsubscr
tomaswolf commented on pull request #206:
URL: https://github.com/apache/mina-sshd/pull/206#issuecomment-953717734
Some minor clean-up, plus handling SFTP V3 "longName" if the upstream
connection is SFTP > V3.
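The v3 `longName` is an `ls -l`-style line that the server must synthesize itself when the upstream connection speaks a newer dialect that no longer carries it. A minimal sketch of producing such a line; the class name and signature here are hypothetical, not the actual sshd API:

```java
// Hypothetical sketch: building an SFTP v3 "longName" line from file
// attributes, in the style of "ls -l" output. Not the actual sshd API.
public class LongNameFormatter {
    /** Formats one directory entry as an ls -l style line. */
    public static String format(boolean dir, String perms, String owner,
                                String group, long size, String date, String name) {
        return String.format("%c%s 1 %-8s %-8s %8d %s %s",
                dir ? 'd' : '-', perms, owner, group, size, date, name);
    }

    public static void main(String[] args) {
        System.out.println(format(true, "rwxr-xr-x", "user", "group",
                4096, "Oct 28 12:00", "logs"));
    }
}
```

The exact column widths are not mandated by the protocol; clients are expected to treat `longName` as display-only text.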
tomaswolf commented on pull request #206:
URL: https://github.com/apache/mina-sshd/pull/206#issuecomment-953180025
I'll wait a bit for Roberto's feedback in JIRA.
lgoldstein commented on pull request #206:
URL: https://github.com/apache/mina-sshd/pull/206#issuecomment-953095292
Looks fine to me - feel free to merge if you are satisfied
tomaswolf commented on pull request #206:
URL: https://github.com/apache/mina-sshd/pull/206#issuecomment-953084176
Thanks for the comments; all done.
lgoldstein commented on a change in pull request #206:
URL: https://github.com/apache/mina-sshd/pull/206#discussion_r737584656
##
File path:
sshd-sftp/src/main/java/org/apache/sshd/sftp/server/RemoteDirectoryHandle.java
##
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Softwa
tomaswolf opened a new pull request #206:
URL: https://github.com/apache/mina-sshd/pull/206
An Apache MINA sshd SFTP server configured to use an SftpFileSystem
pointing to yet another SFTP server would serve directory listings
only very slowly.
This was caused by the SFTP server
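For context, the proxy setup described above can be assembled by mounting the upstream server as an SftpFileSystem and serving it back out through the local SFTP subsystem. A rough sketch, assuming a recent sshd release (package names vary across versions, and host, port, and credentials are placeholders):

```java
// Sketch only: wiring an sshd SFTP server whose backing file system is
// itself an SFTP connection to an upstream server. Package names vary by
// sshd release; host, port, and credentials below are placeholders.
import java.nio.file.FileSystem;
import java.util.Collections;

import org.apache.sshd.client.SshClient;
import org.apache.sshd.client.session.ClientSession;
import org.apache.sshd.common.file.virtualfs.VirtualFileSystemFactory;
import org.apache.sshd.server.SshServer;
import org.apache.sshd.sftp.client.SftpClientFactory;
import org.apache.sshd.sftp.server.SftpSubsystemFactory;

public class SftpProxySketch {
    public static void main(String[] args) throws Exception {
        // Connect to the upstream SFTP server and mount it as a FileSystem.
        SshClient client = SshClient.setUpDefaultClient();
        client.start();
        ClientSession session = client.connect("user", "upstream.example.com", 22)
                .verify().getSession();
        session.addPasswordIdentity("secret");
        session.auth().verify();
        FileSystem upstream = SftpClientFactory.instance().createSftpFileSystem(session);

        // Serve the mounted file system through the local SFTP subsystem.
        SshServer server = SshServer.setUpDefaultServer();
        server.setPort(2222);
        server.setSubsystemFactories(Collections.singletonList(new SftpSubsystemFactory()));
        server.setFileSystemFactory(new VirtualFileSystemFactory(upstream.getPath("/")));
        server.start();
    }
}
```

A real deployment would also need a host key provider and an authenticator on the server side; they are omitted here to keep the sketch focused on the file-system wiring.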
[
https://issues.apache.org/jira/browse/DIRMINA-663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Emmanuel Lecharny closed DIRMINA-663.
-
> CumulativeProtocolDecoder doDecode performance problem
> --
>
> Key: DIRMINA-663
> URL: https://issues.apache.org/jira/browse/DIRMINA-663
> Project: MINA
> Issue Type: Bug
> Components: Filter
>
! I was the one who removed this default
initialization ...
We may have to add an entry in the FAQ to explain this change.
Thanks!
the hardcoded receive buffer size = 1024 was causing this
performance problem. The system default taken from the socket is 8192.
I can now easily fix the problem by adding this code to the test case:
acceptor.getSessionConfig().setReceiveBufferSize(8192);
It actually works fine with values larger
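The effect of the receive buffer size can be seen with simple arithmetic: with a 1024-byte buffer a message arrives in many more fragments than with the 8192-byte system default, and doDecode runs (re-scanning the accumulated buffer) once per fragment. A rough illustration, independent of MINA itself:

```java
// Rough illustration: number of read cycles (and hence doDecode calls)
// needed to receive a message, as a function of the receive buffer size.
public class ReadCycles {
    static int cycles(int messageBytes, int bufferSize) {
        // Each read delivers at most bufferSize bytes, so this is the
        // ceiling of messageBytes / bufferSize.
        return (messageBytes + bufferSize - 1) / bufferSize;
    }

    public static void main(String[] args) {
        int message = 64 * 1024; // a 64 KiB response
        System.out.println("1024-byte buffer: " + cycles(message, 1024) + " reads"); // 64
        System.out.println("8192-byte buffer: " + cycles(message, 8192) + " reads"); // 8
    }
}
```

An 8x difference in read cycles per message is consistent with the kind of throughput drop reported here, since each extra cycle pays the full decoder-loop overhead.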
the problem is specific to both the Windows
environment/hardware and the MINA version. Really puzzled. Any ideas?
JVM 1.6.0_11, and MINA trunk (just for
information).
I want to know if it's just a problem on Vista, and if so, what could go wrong.
Serge, can you attach the complete trace you get on your env? Thanks!
t it with another JVM (JRockit). Also testing
with a 32-bit JVM could be valuable.
bug.
r completely useless.
CumulativeProtocolDecoder doDecode performance problem
--
Key: DIRMINA-663
URL: https://issues.apache.org/jira/browse/DIRMINA-663
Project: MINA
Issue Type: Bug
Components: Filter
After some tests, my conclusion is that the write() operation to each
session is the one that consumes the most CPU. My question is: why?
Is there a way I can avoid this usage on simple I/O operations?
--
View this message in context:
http://www.nabble.com/Performance-problem
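The question about write() cost has a structural answer: a broadcast performs one write per connected session, so the total number of write calls grows as sessions times messages. A toy model of this, not MINA code:

```java
// Toy model (not MINA code): a naive broadcast writes once to every
// session, so total write() calls grow as sessions * messages. This is
// why CPU climbs steeply once the session count reaches the hundreds.
import java.util.ArrayList;
import java.util.List;

public class BroadcastCost {
    static long broadcast(List<String> sessions, int messages) {
        long writeCalls = 0;
        for (int m = 0; m < messages; m++) {
            for (String session : sessions) {
                writeCalls++; // stands in for session.write(message)
            }
        }
        return writeCalls;
    }

    public static void main(String[] args) {
        List<String> sessions = new ArrayList<>();
        for (int i = 0; i < 500; i++) {
            sessions.add("session-" + i);
        }
        // 100 broadcasts to 500 sessions -> 50000 write() calls
        System.out.println(broadcast(sessions, 100) + " write() calls");
    }
}
```

The usual mitigations are reducing per-write overhead (batching, larger buffers) or moving writes off the decoding thread, since the sessions x messages product itself cannot be avoided for a true broadcast.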
vide a little bit more context?
> Text messages, size varies from 20 bytes to even 2 kBytes, im doing no
> treatment.
>
> --
> cordialement, regards,
> Emmanuel Lécharny
> www.iktek.com
> directory.apache.org
pietry wrote:
But this shouldn't be happening, right?
What should not happen? The 100% CPU usage? Certainly. But the
interesting question is: why does _your_ server use 100% of the CPU? In
other terms, what is your server doing?
So MINA supports what I want for my
server.
Did I choose the r
normal? (I think it's possible that this could happen because of passing
through the whole linked list.)
pietry wrote:
I have read the configure thread model tutorial, and I don't know exactly what
to pick for my server program. I have lots of connections (like 1000) and I
want to get even more (up to 5000, 1).
The main problem is that at about 500 connections, broadcast messages get
CPU to 100
ation ( 100 % cpu usage )?
Thanks
Hi.
We are still having problems with this.
We have noticed that, although we are creating a number of IoProcessors
on the IoConnector side equal to the number of CPUs + 1, because we only
have a single bind on this side there is only ever one
SocketConnectorIoProcessor thread created. If I
Hi Trustin
I have made the changes you suggested to the spring configuration and it
hasn't made any obvious difference. We do not call future.join() but I
will look at implementing the IoFutureListener that you suggested.
We believe that we are now able to replicate the problem in-house. We
On 6/13/07, Paddy O'Neill <[EMAIL PROTECTED]> wrote:
Hi.
We are having a strange problem with a production server. We have set
up mina to have 2 thread pools, one for incoming and 1 for outgoing
connections. Typically there will be multiple incoming connections
(typically around 300) and 1 ou
Hi.
We are having a strange problem with a production server. We have set
up mina to have 2 thread pools, one for incoming and 1 for outgoing
connections. Typically there will be multiple incoming connections
(typically around 300) and 1 outgoing connection. Traffic flows both
ways throug