From: Sebastian Kuzminsky <[EMAIL PROTECTED]>
Date: Mon, 16 Apr 2007 15:45:19 -0600
> I'm seeing some weird behavior in TCP. The issue is perfectly
> reproducible using netcat and other programs. This is what I do:
Please send your bug report again, but this time to the
[EMAIL PROTECTED]
Hello!
> In my case P-MTU discovery
Sorry, I lied. Not pmtu discovery but exactly the opposite effect
is important here: collapsing of small frames into larger ones.
Each such merge results in the loss of 1 "sack" in 2.2.
> I only wrote that it was active when it got stuck. It may be idle before -
> I
Hello.
On Wed, Apr 18, 2001 at 11:28:43PM +0400, [EMAIL PROTECTED] wrote:
> > However, this is _not_ a stale state. When I resume ssh on 194.190.166.31,
> > the buffer gets empty and the connection behaves normally. I made this experiment
> > waiting for keepalive packets from both sides, as well
Hello.
On Wed, Apr 11, 2001 at 11:09:35PM +0400, [EMAIL PROTECTED] wrote:
> BTW if that cursed socket is still alive, try to make the experiment
> of filling the window on it. It must get stuck, or my theory is completely wrong.
Filling the socket via writing to pty (controlled by sshd), I found
Hello!
>mtu 382 + keepalive yes -> loss
>mtu 382 + keepalive no -> ok
Well, I ignored this because it seemed to make full sense. Sorry. 8)
> such a picture? If the answer is "yes", I am almost satisfied. :-)
No, the answer is a strict "no". Until keepalive is triggered the first
time, it
Hello.
On Wed, Apr 11, 2001 at 11:04:04PM +0400, [EMAIL PROTECTED] wrote:
> > In my experiments linux simply sets mss=mtu-40 at the start of ethernet
> > connections. I do not know why, but believe it's ok. How can the version of
> > kernel and configuration options affect mss later?
[...]
Hello.
On Wed, Apr 11, 2001 at 11:09:35PM +0400, [EMAIL PROTECTED] wrote:
> Is the machine UP? The only other known dubious place is smp specific...
It is a HP NetServer E40 with a single PPro-180. SMP is turned off in .config.
> BTW if that cursed socket is still alive, try to make the
Hello!
> If your model does not cover such a situation, please keep it in mind. :)
Taken.
Is the machine UP? The only other known dubious place is smp specific...
BTW if that cursed socket is still alive, try to make the experiment
of filling the window on it. It must get stuck, or my theory is
Hello!
> In my experiments linux simply sets mss=mtu-40 at the start of ethernet
> connections. I do not know why, but believe it's ok. How can the version of
> kernel and configuration options affect mss later?
You can figure this out yourself. In fact you measured this.
With mss=1460 the
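The mss=mtu-40 figure quoted here is just the fixed IPv4 plus TCP header overhead. A minimal sketch of the arithmetic, assuming no IP or TCP options (the helper name is mine, not from the thread):

```python
# mss = mtu - 40: 20 bytes of IPv4 header + 20 bytes of TCP header,
# assuming no IP or TCP options on the segment.
IP_HEADER = 20   # bytes, IPv4 without options
TCP_HEADER = 20  # bytes, TCP without options

def mss_for_mtu(mtu):
    """Largest TCP payload per packet for a given link MTU."""
    return mtu - IP_HEADER - TCP_HEADER

print(mss_for_mtu(1500))  # Ethernet MTU -> 1460, the mss mentioned here
print(mss_for_mtu(382))   # the small-MTU experiment in this thread -> 342
```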
On Wed, Apr 11, 2001 at 08:35:51PM +0400, [EMAIL PROTECTED] wrote:
> To get this on a new socket you should leave the session idle for >2 hours
> until the first keepalive. After this it will never probe under
> any circumstances. The bug was that keepalive corrupts the state of the timer
> and the probe0 timer is
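For anyone reproducing the two-hour idle scenario: keepalive must be enabled per socket, and on Linux the idle threshold before the first probe can be lowered per socket. A hedged sketch (TCP_KEEPIDLE is Linux-specific and may be absent elsewhere; 7200 s is the traditional default from /proc/sys/net/ipv4/tcp_keepalive_time):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)  # enable keepalive probes

# Linux-only per-socket knob; guarded because other platforms lack it.
if hasattr(socket, "TCP_KEEPIDLE"):
    # idle seconds before the first keepalive probe (classic default: 2 hours)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 7200)

print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0)  # True
```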
Hello.
On Wed, Apr 11, 2001 at 08:56:41PM +0400, [EMAIL PROTECTED] wrote:
> ppp is also prone to the mss/mtu bug; it allocates too-large buffers
> and never breaks them up. The difference between kernels looks funny, but
> I think it is explained by differences between mss/mtu's.
In my
Hello!
> At last, I tried several MTUs on 3d computer, running "right" 2.2.17, and
> could not find conditions, under which any loss of ACKs can be detected.
8)8)8)
ppp is also prone to the mss/mtu bug; it allocates too-large buffers
and never breaks them up. The difference between kernels
Hello!
> > If my guess is right, you can easily put this socket into a funny state
> > just by catting a large file and kill -STOP'ing ssh. ssh will close the window,
> > but sshd will not send zero probes.
>
> [1] I have checked your statement on 2 different machines, running 2.2.17.
> No confirmation.
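The experiment described above (cat a large file into a SIGSTOP'ed ssh so the receiver stops draining its socket) can be mimicked locally with a socket pair: the reader never calls recv(), so a non-blocking sender eventually cannot queue more data once the send buffer and peer window fill. Buffer sizes and names below are illustrative, not from the thread:

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

snd = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
snd.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)  # keep the demo fast
snd.connect(srv.getsockname())
rcv, _ = srv.accept()  # rcv never reads: its window will close

snd.setblocking(False)
sent = 0
try:
    while True:
        sent += snd.send(b"x" * 4096)
except BlockingIOError:
    # Send buffer and peer window are full. From here the kernel's
    # zero-window (probe0) timer is what keeps the connection moving --
    # the mechanism this thread says 2.2 corrupts after a keepalive.
    pass

print(sent > 0)
for s in (snd, rcv, srv):
    s.close()
```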
Hello.
I'd like to make additional comments to my previous message.
On Wed, Apr 11, 2001 at 01:19:01AM +0400, Eugene B. Berdnikov wrote:
> The thing is that one machine (which runs the ssh client in my bug report)
> does send ACKs when ssh is SIGSTOP'ed. The other one does not send ACKs,
> but
Hello.
On Tue, Apr 10, 2001 at 09:38:43PM +0400, [EMAIL PROTECTED] wrote:
> If my guess is right, you can easily put this socket into a funny state
> just by catting a large file and kill -STOP'ing ssh. ssh will close the window,
> but sshd will not send zero probes.
[1] I have checked your statement
Hello!
> In brief: a stale state of the tcp send queue was observed for 2.2.17
> while the send-q counter and connection window sizes are not zero:
I think I have pinned this down. The patch is appended.
> diagnostic, I'll try to get it. In any case, I plan to run something through
> this
Hi all.
In brief: a stale state of the tcp send queue was observed for 2.2.17
while the send-q counter and connection window sizes are not zero:
% netstat -n -eot | grep 1018
tcp        0  13064  194.190.166.31:22     194.190.161.106:1018    ESTABLISHED  0  11964  off
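The 13064 in the Send-Q column is the same per-socket counter a program can read directly: on Linux the SIOCOUTQ ioctl (numerically identical to TIOCOUTQ) returns the bytes still queued in a TCP socket's send queue. A Linux-only sketch over a throwaway loopback connection (names are mine):

```python
import fcntl
import socket
import struct
import termios

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
out = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
out.connect(srv.getsockname())
peer, _ = srv.accept()

out.send(b"hello")
# SIOCOUTQ shares its value with TIOCOUTQ on Linux: unsent/unacked bytes
# in the send queue, i.e. what netstat reports as Send-Q.
raw = fcntl.ioctl(out.fileno(), termios.TIOCOUTQ, struct.pack("i", 0))
send_q = struct.unpack("i", raw)[0]
print(0 <= send_q <= 5)  # 5 while un-ACKed over loopback, 0 once ACKed

for s in (out, peer, srv):
    s.close()
```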
Hi Stephen
>Let me see if I understand this correctly:
>
>    Client    Router    Server
>
>    Linux     Linux     HPUX      Bad
>    WinNT     Linux     HPUX      Good
>    HPUX      Linux     HPUX      Good
>
>With all Linux
Hi Thomas,
On Fri, 3 Nov 2000, Thomas Pollinger wrote:
> Running a 'cvs get' on the Linux clients of a larger source tree
> eventually hangs the client in the middle of the get process. The
> hang is *always* reproducible (however it does not always hang at
> the same place, sometimes after 1',
Hi Stephen
I've been experiencing similar problems with what would seem at first a
completely unrelated problem. To make things short:
I have the following configuration:
- CVS server (vers. 1.10.4) running on HPUX B.11.00
- different CVS clients (running different cvs client versions: