On 09.04.2013 at 20:17, Michał Margula wrote:
> And we are analysing it in nfsen. On the same nfsen we also receive
> NetFlow from our Cisco router. The traffic is identical for both
> (because nprobe gets an LSPAN of all L2 ports of that router via a
> 10GE port). But while the router's NetFlow shows 1.8 Gbps of traffic,
> nprobe shows only 300 Mbps. Also, when we measure traffic on the dna0
> interface it shows about 1.8 Gbps. So it seems that something is wrong
> with nprobe.
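
To put a number on the mismatch, here is a quick back-of-the-envelope
check in Python (a sketch only; it assumes the ~734-byte average packet
size implied by the nprobe stats further down is representative of the
SPAN traffic):

# Average packet size implied by nprobe's own figures below:
avg_pkt_bits = 437e6 / 74396          # ~5874 bits (~734 bytes) per packet
expected_pps = 1.8e9 / avg_pkt_bits   # ~306 Kpps expected at 1.8 Gb/s
observed_pps = 74396                  # what nprobe actually reports
print(expected_pps / observed_pps)    # ~4.1x shortfall in packets

So if those averages hold, nprobe sees roughly a quarter of the
packets, not just an under-count of bytes.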

Here are also some stats with -b 1:

Apr  9 21:05:10 sauron nprobe[11855]: 09/Apr/2013 21:05:10
[nprobe.c:2009] ---------------------------------
Apr  9 21:05:10 sauron nprobe[11855]: 09/Apr/2013 21:05:10
[nprobe.c:2010] Average traffic: [74.396 K pps][437 Mb/sec]
Apr  9 21:05:10 sauron nprobe[11855]: 09/Apr/2013 21:05:10
[nprobe.c:2017] Current traffic: [73.392 K pps][436 Mb/sec]
Apr  9 21:05:10 sauron nprobe[11855]: 09/Apr/2013 21:05:10
[nprobe.c:2023] Current flow export rate: [2247.1 flows/sec]
Apr  9 21:05:10 sauron nprobe[11855]: 09/Apr/2013 21:05:10
[nprobe.c:2026] Flow drops: [export queue too long=0][too many flows=0]
Apr  9 21:05:10 sauron nprobe[11855]: 09/Apr/2013 21:05:10
[nprobe.c:2030] Export Queue: 0/512000 [0.0 %]
Apr  9 21:05:10 sauron nprobe[11855]: 09/Apr/2013 21:05:10
[nprobe.c:2035] Flow Buckets:
[active=92212][allocated=92212][toBeExported=0]
Apr  9 21:05:10 sauron nprobe[11855]: 09/Apr/2013 21:05:10
[cache.c:1226] LRUCache L7Cache [find: 0 operations/0.0 find/sec][cache
miss 0/0.0 %][add: 0 operations/0.0 add/sec][tot: 0]
Apr  9 21:05:10 sauron nprobe[11855]: 09/Apr/2013 21:05:10
[cache.c:1226] LRUCache FlowUserCache [find: 0 operations/0.0
find/sec][cache miss 0/0.0 %][add: 0 operations/0.0 add/sec][tot: 0]
Apr  9 21:05:10 sauron nprobe[11855]: 09/Apr/2013 21:05:10
[nprobe.c:1877] Processed packets: 6770230 (max bucket search: 4)
Apr  9 21:05:10 sauron nprobe[11855]: 09/Apr/2013 21:05:10
[nprobe.c:1860] Fragment queue lenght: 61
Apr  9 21:05:10 sauron nprobe[11855]: 09/Apr/2013 21:05:10
[nprobe.c:1886] Flow export stats: [644505126 bytes/980748 pkts][131003
flows/12002 pkts sent]
Apr  9 21:05:10 sauron nprobe[11855]: 09/Apr/2013 21:05:10
[nprobe.c:1896] Flow drop stats:   [0 bytes/0 pkts][0 flows]
Apr  9 21:05:10 sauron nprobe[11855]: 09/Apr/2013 21:05:10
[nprobe.c:1901] Total flow stats:  [644505126 bytes/980748 pkts][131003
flows/12002 pkts sent]
Apr  9 21:05:10 sauron nprobe[11855]: 09/Apr/2013 21:05:10
[nprobe.c:288] Packet stats (pcap): 6770230/0 pkts rcvd/dropped [0.0%]
[Last 79320/0 pkts rcvd/dropped]
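
As a sanity check, those counters are at least internally consistent.
A small Python sketch with the values copied from the log above:

export_bytes, export_pkts = 644505126, 980748
avg_pkt = export_bytes / export_pkts            # ~657 bytes per packet
current_pps = 73392
implied_mbps = current_pps * avg_pkt * 8 / 1e6  # ~386 Mb/s
print(round(avg_pkt), round(implied_mbps))      # same ballpark as the
                                                # reported 436 Mb/sec

So nprobe's accounting agrees with its reported rate; whatever traffic
is missing apparently never reaches it.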

So there seem to be no performance issues: no drops, and CPU load is
low (about 10%). I also tried pfcount, and its output worries me the
most:

Absolute Stats: [5371726 pkts rcvd][5371726 pkts filtered][0 pkts dropped]
Total Pkts=5371726/Dropped=0.0 %
5'371'726 pkts - 4'118'028'981 bytes [78'463.93 pkt/sec - 481.21 Mbit/sec]
=========================
Actual Stats: 33742 pkts [456.98 ms][73'836.28 pps/0.47 Gbps]
=========================
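
The pfcount counters check out the same way. Another short Python
sketch, with the values taken from the output above:

bytes_total, pkts_total = 4118028981, 5371726
avg_pkt = bytes_total / pkts_total   # ~767 bytes per packet
pps = 78463.93
print(pps * avg_pkt * 8 / 1e6)       # ~481 Mbit/s, matching pfcount's figure

So two independent tools on dna0 agree on roughly 0.5 Gbps, while the
interface counters supposedly show 1.8 Gbps.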

I am also running the latest DNA-enabled ixgbe driver from SVN and the
latest pf_ring:

/proc/net/pf_ring/info:

PF_RING Version     : 5.5.3 ($Revision: 6122$)
Ring slots          : 4096
Slot version        : 15
Capture TX          : Yes [RX+TX]
IP Defragment       : No
Socket Mode         : Standard
Transparent mode    : Yes (mode 0)
Total rings         : 0
Total plugins       : 0

Here is our dmesg output as well:

[  297.013753] [PF_RING] Welcome to PF_RING 5.5.3 ($Revision: 6122$)
[  297.013755] (C) 2004-13 ntop.org
[  297.013774] [PF_RING] registered /proc/net/pf_ring/
[  297.013776] NET: Registered protocol family 27
[  297.013850] [PF_RING] Min # ring slots 4096
[  297.013851] [PF_RING] Slot version     15
[  297.013852] [PF_RING] Capture TX       Yes [RX+TX]
[  297.013853] [PF_RING] Transparent Mode 0
[  297.013855] [PF_RING] IP Defragment    No
[  297.013856] [PF_RING] Initialized correctly
[  327.308058] ixgbe: eth5: ixgbe_remove: complete
[  327.308082] ixgbe 0000:09:00.0: PCI INT A disabled
[  330.525598] Intel(R) 10 Gigabit PCI Express Network Driver - version
3.10.16-DNA
[  330.525601] Copyright (c) 1999-2012 Intel Corporation.
[  330.525651] ixgbe 0000:09:00.0: PCI INT A -> GSI 16 (level, low) ->
IRQ 16
[  330.525804] ixgbe 0000:09:00.0: setting latency timer to 64
[  330.525925] ixgbe: 0000:09:00.0: ixgbe_check_options: FCoE Offload
feature enabled
[  330.693455] ixgbe 0000:09:00.0: irq 68 for MSI/MSI-X
[  330.693458] ixgbe 0000:09:00.0: irq 69 for MSI/MSI-X
[  330.693461] ixgbe 0000:09:00.0: irq 70 for MSI/MSI-X
[  330.693464] ixgbe 0000:09:00.0: irq 71 for MSI/MSI-X
[  330.693467] ixgbe 0000:09:00.0: irq 72 for MSI/MSI-X
[  330.759052] ixgbe 0000:09:00.0: (PCI Express:2.5GT/s:Width x8)
90:e2:ba:37:13:02
[  330.759147] ixgbe 0000:09:00.0: dna0: MAC: 2, PHY: 15, SFP+: 5, PBA
No: E68787-006
[  330.759154] ixgbe 0000:09:00.0: dna0: Enabled Features: RxQ: 4 TxQ: 4
FdirHash
[  330.759592] ixgbe 0000:09:00.0: dna0: Intel(R) 10 Gigabit Network
Connection
[  356.095105] ADDRCONF(NETDEV_UP): dna0: link is not ready
[  356.409514] ixgbe 0000:09:00.0: dna0: detected SFP+: 5
[  356.660027] ixgbe 0000:09:00.0: dna0: NIC Link is Up 10 Gbps, Flow
Control: None
[  356.660532] ADDRCONF(NETDEV_CHANGE): dna0: link becomes ready
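
One detail in the dmesg above that I cannot explain away: the driver
reports "RxQ: 4". A purely speculative calculation (it assumes that a
capture opened on the base dna0 device reads only a single RSS queue,
which I have not verified):

span_rate_gbps = 1.8               # what the interface counters report
rx_queues = 4                      # "Enabled Features: RxQ: 4" in dmesg
print(span_rate_gbps / rx_queues)  # 0.45 Gb/s, suspiciously close to the
                                   # ~0.44-0.48 Gb/s nprobe and pfcount see

Could it be that we are only capturing one of the four queues?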

I would be very grateful for any tips: where should I look, and what
should I check?

Thanks!

-- 
Michał Margula, [email protected], http://alchemyx.uznam.net.pl/
"W życiu piękne są tylko chwile" [Ryszard Riedel]