Bug#604470: linux-image-2.6.32-5-openvz-amd64: degraded inbound network bandwidth

2010-11-23 Thread Vladimir Stavrinov
On Tue, Nov 23, 2010 at 10:44:56AM +0300, Vladimir Stavrinov wrote:

> There is some other strange effect: after some idle time the network in
> the container stops working entirely. I see this problem (dropped
> connectivity) only on the container newly created for testing, perhaps
> because it is idle most of the time, while the other containers are in
> production and see continuous activity.

Further investigation shows that the route to this container on the
Hardware Node was dropped for an unknown reason, and the situation has
repeated. Re-adding the route restores connectivity.
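One quick way to confirm this symptom on the Hardware Node is to check whether the per-container /32 route is still present in `ip route show` output before re-adding it. The snippet below is a minimal sketch, not part of the original report; the container IP addresses and the venet device name are hypothetical examples.

```python
# Sketch: detect whether a container's /32 route is missing from
# `ip route show` output on an OpenVZ Hardware Node.
# All addresses and device names below are hypothetical examples.

def container_route_present(route_output: str, ct_ip: str) -> bool:
    """Return True if some route line starts with the container's IP."""
    for line in route_output.splitlines():
        fields = line.split()
        if fields and fields[0] == ct_ip:
            return True
    return False

# Example `ip route show` output in which one container route was dropped:
routes = """\
default via 192.0.2.1 dev eth0
192.0.2.0/24 dev eth0  proto kernel  scope link  src 192.0.2.10
10.0.0.101 dev venet0  scope link
"""

print(container_route_present(routes, "10.0.0.101"))  # → True  (healthy container)
print(container_route_present(routes, "10.0.0.102"))  # → False (route was dropped)
```

If the route turns out to be missing, something like `ip route add 10.0.0.102 dev venet0` (syntax assumed; adjust to your setup) should restore connectivity, matching the fix described above.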

-- 

***
  Vladimir Stavrinov  *
 vstavri...@gmail.com  
***




-- 
To UNSUBSCRIBE, email to debian-kernel-requ...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20101123083635.ga5...@magus.playfast.oz



Bug#604470: linux-image-2.6.32-5-openvz-amd64: degraded inbound network bandwidth

2010-11-22 Thread Vladimir Stavrinov
There is some other strange effect: after some idle time the network in
the container stops working entirely. I see this problem (dropped
connectivity) only on the container newly created for testing, perhaps
because it is idle most of the time, while the other containers are in
production and see continuous activity.

Here is what you asked for (netstat -s):

Ip:
7298988 total packets received
0 forwarded
0 incoming packets discarded
7262036 incoming packets delivered
4830632 requests sent out
Icmp:
13593 ICMP messages received
2 input ICMP message failed.
ICMP input histogram:
destination unreachable: 357
timeout in transit: 34
redirects: 13106
echo requests: 52
echo replies: 44
949 ICMP messages sent
0 ICMP messages failed
ICMP output histogram:
destination unreachable: 719
echo request: 178
echo replies: 52
IcmpMsg:
InType0: 44
InType3: 357
InType5: 13106
InType8: 52
InType11: 34
OutType0: 52
OutType3: 719
OutType8: 178
Tcp:
85940 active connections openings
91936 passive connection openings
303 failed connection attempts
3309 connection resets received
6 connections established
7227101 segments received
4776345 segments send out
32345 segments retransmited
9 bad segments received.
2180 resets sent
Udp:
20616 packets received
726 packets to unknown port received.
0 packet receive errors
20993 packets sent
UdpLite:
TcpExt:
265 invalid SYN cookies received
69 resets received for embryonic SYN_RECV sockets
163 packets pruned from receive queue because of socket buffer overrun
84541 TCP sockets finished time wait in fast timer
26 time wait sockets recycled by time stamp
5 packets rejects in established connections because of timestamp
27496 delayed acks sent
2 delayed acks further delayed because of locked socket
Quick ack mode was activated 8377 times
2255888 packets directly queued to recvmsg prequeue.
1439293771 bytes directly in process context from backlog
3017217537 bytes directly received in process context from prequeue
2255586 packet headers predicted
3511681 packets header predicted and directly queued to user
452075 acknowledgments not containing data payload received
717999 predicted acknowledgments
10 times recovered from packet loss due to fast retransmit
5885 times recovered from packet loss by selective acknowledgements
3 bad SACK blocks received
Detected reordering 10 times using FACK
Detected reordering 5 times using SACK
Detected reordering 3 times using time stamp
4 congestion windows fully recovered without slow start
24 congestion windows partially recovered using Hoe heuristic
181 congestion windows recovered without slow start by DSACK
95 congestion windows recovered without slow start after partial ack
8805 TCP data loss events
TCPLostRetransmit: 495
8 timeouts after reno fast retransmit
1603 timeouts after SACK recovery
1006 timeouts in loss state
14585 fast retransmits
503 forward retransmits
8431 retransmits in slow start
3385 other TCP timeouts
5 classic Reno fast retransmits failed
952 SACK retransmits failed
7885 packets collapsed in receive queue due to low socket buffer
8407 DSACKs sent for old packets
115 DSACKs sent for out of order packets
1623 DSACKs received
29 DSACKs for out of order packets received
27 connections reset due to unexpected data
18 connections reset due to early user close
188 connections aborted due to timeout
TCPDSACKIgnoredOld: 1238
TCPDSACKIgnoredNoUndo: 213
TCPSpuriousRTOs: 49
TCPSackShiftFallback: 49650
IpExt:
InOctets: 495299200
OutOctets: -1995348470
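Two details in these counters are worth noting: `OutOctets` prints as negative, most likely because a 32-bit counter wrapped and was formatted as a signed value, and the Tcp section allows a rough retransmit rate to be derived. A small sketch (not part of the report) illustrating both, using the numbers above:

```python
# Reinterpret the wrapped signed 32-bit OutOctets counter as unsigned,
# and compute a rough TCP retransmit rate from the counters above.

OUT_OCTETS_SIGNED = -1995348470      # "OutOctets" as printed by netstat -s
SEGMENTS_SENT = 4776345              # "segments send out"
SEGMENTS_RETRANSMITTED = 32345       # "segments retransmited"

# Signed 32-bit wrap-around: reduce modulo 2**32 to recover the unsigned value.
out_octets = OUT_OCTETS_SIGNED % 2**32
print(out_octets)                    # → 2299618826 (about 2.1 GiB sent)

retransmit_rate = SEGMENTS_RETRANSMITTED / SEGMENTS_SENT
print(f"{retransmit_rate:.2%}")      # → 0.68%
```

A retransmit rate below one percent would not by itself explain a 5-10x bandwidth drop, which is consistent with the problem lying elsewhere (e.g. in venet routing on the Hardware Node).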



Archive: http://lists.debian.org/20101123074456.ga3...@magus.playfast.oz



Bug#604470: linux-image-2.6.32-5-openvz-amd64: degraded inbound network bandwidth

2010-11-22 Thread Ben Hutchings
On Mon, 2010-11-22 at 16:06 +0300, Vladimir Stavrinov wrote:
> Package: linux-2.6
> Version: 2.6.32-27
> Severity: normal
> 
> 
> For traffic from the Internet to a Container, the bandwidth is cut down 5-10
> times compared to the Hardware Node. And for a connection between two
> containers on different Hardware Nodes in the same Local Network, the inbound
> speed slows down dramatically to 1 times. At the same time the outbound
> bandwidth is close to the Hardware Node's. There are no such problems with
> the kernel I got from the openvz site:
> 
> http://download.openvz.org/kernel/branches/2.6.32/2.6.32-dzhanibekov.1/kernel-2.6.32-dzhanibekov.1.x86_64.rpm
> 
> and with the old Debian kernel linux-image-2.6.26-2-openvz-amd64 there is no
> problem either.
[...]

Please could you send 'netstat -s' output for the containers where this
is happening.

Ben.

-- 
Ben Hutchings
Once a job is fouled up, anything done to improve it makes it worse.




Bug#604470: linux-image-2.6.32-5-openvz-amd64: degraded inbound network bandwidth

2010-11-22 Thread Vladimir Stavrinov
Package: linux-2.6
Version: 2.6.32-27
Severity: normal


For traffic from the Internet to a Container, the bandwidth is cut down 5-10
times compared to the Hardware Node. And for a connection between two
containers on different Hardware Nodes in the same Local Network, the inbound
speed slows down dramatically to 1 times. At the same time the outbound
bandwidth is close to the Hardware Node's. There are no such problems with the
kernel I got from the openvz site:

http://download.openvz.org/kernel/branches/2.6.32/2.6.32-dzhanibekov.1/kernel-2.6.32-dzhanibekov.1.x86_64.rpm

and with the old Debian kernel linux-image-2.6.26-2-openvz-amd64 there is no
problem either.


-- Package-specific info:
** Kernel log: boot messages should be attached

** Model information
sys_vendor: HP
product_name: ProLiant DL180 G6  
product_version:  
chassis_vendor: HP
chassis_version:  
bios_vendor: HP
bios_version: O20

** PCI devices:
00:00.0 Host bridge [0600]: Intel Corporation 5520 I/O Hub to ESI Port 
[8086:3406] (rev 13)
Subsystem: Hewlett-Packard Company Device [103c:330b]
Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr+ 
Stepping- SERR+ FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR- 

00:01.0 PCI bridge [0604]: Intel Corporation 5520/5500/X58 I/O Hub PCI Express 
Root Port 1 [8086:3408] (rev 13) (prog-if 00 [Normal decode])
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ 
Stepping- SERR+ FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR- TAbort- Reset- FastB2B-
PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
Capabilities: [40] Subsystem: Hewlett-Packard Company Device [103c:330b]
Capabilities: [60] MSI: Enable+ Count=1/2 Maskable+ 64bit-
Address: fee1  Data: 4051
Masking: 0003  Pending: 
Capabilities: [90] Express (v2) Root Port (Slot+), MSI 00
DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <64ns, 
L1 <1us
ExtTag+ RBE+ FLReset-
DevCtl: Report errors: Correctable- Non-Fatal+ Fatal+ 
Unsupported-
RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
MaxPayload 256 bytes, MaxReadReq 128 bytes
DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- 
TransPend-
LnkCap: Port #0, Speed 5GT/s, Width x4, ASPM L0s L1, Latency L0 
<512ns, L1 <64us
ClockPM- Surprise+ LLActRep+ BwNot+
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk+ 
DLActive+ BWMgmt+ ABWMgmt-
SltCap: AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- 
Surprise-
Slot #0, PowerLimit 0.000W; Interlock- NoCompl-
SltCtl: Enable: AttnBtn- PwrFlt- MRL- PresDet- CmdCplt- HPIrq- 
LinkChg-
Control: AttnInd Off, PwrInd Off, Power- Interlock-
SltSta: Status: AttnBtn- PowerFlt- MRL- CmdCplt- PresDet+ 
Interlock-
Changed: MRL- PresDet+ LinkState+
RootCtl: ErrCorrectable- ErrNon-Fatal+ ErrFatal+ PMEIntEna- 
CRSVisible-
RootCap: CRSVisible-
RootSta: PME ReqID , PMEStatus- PMEPending-
DevCap2: Completion Timeout: Range BCD, TimeoutDis+ ARIFwd+
DevCtl2: Completion Timeout: 260ms to 900ms, TimeoutDis- ARIFwd+
LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-, 
Selectable De-emphasis: -6dB
 Transmit Margin: Normal Operating Range, 
EnterModifiedCompliance- ComplianceSOS-
 Compliance De-emphasis: -6dB
LnkSta2: Current De-emphasis Level: -6dB
Capabilities: [e0] Power Management version 3
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA 
PME(D0+,D1-,D2-,D3hot+,D3cold+)
Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [100 v1] Advanced Error Reporting
UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- 
RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
UEMsk:  DLP- SDES+ TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt+ 
RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
UESvrt: DLP+ SDES+ TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- 
RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
AERCap: First Error Pointer: 00, GenCap- CGenEn- ChkCap- ChkEn-
Capabilities: [150 v1] Access Control Services
ACSCap: SrcValid+ TransBlk+ ReqRedir+ CmpltRedir+ UpstreamFwd+ 
EgressCtrl- DirectTrans-