To wrap this thread up: Tamas had both bgp_daemon and bmp_daemon set to
true - but only the BGP thread had peers; there were no BMP peers.
Since both threads use the same lookup algorithm and BMP is evaluated
last, the lookup always came back empty.

Commenting out the bmp_daemon directive solved the problem and had zero
impact for Tamas. I've additionally committed the following change to make
this configuration unsupported (unless a use-case is made for it):

https://github.com/pmacct/pmacct/commit/ee4927672764c39537500ce506d172e870d99b90
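For reference, the workaround on Tamas's side amounts to a one-line change in nfacctd.conf (`!` is pmacct's comment character, as in the `!BGP` section marker below):

```
bgp_daemon: true
!bmp_daemon: true
```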

Paolo 

On Thu, Nov 09, 2017 at 04:19:24PM +0000, Paolo Lucente wrote:
> 
> Hi Tamas,
> 
> Thanks for making things extra sure with the 18+ hours wait :D Let me
> know if it would be possible for me to have a look at the issue myself
> on the box in order to support you further.
> 
> Paolo 
> 
> On Thu, Nov 09, 2017 at 12:25:24PM +0100, Varga Tamas wrote:
> > Hi Paolo,
> > 
> > the config is now simplified as you suggested, but it seems to me
> > that the BGP attributes are still not merged with the NetFlow data.
> > To make sure that the peers were up and running when NetFlow hit
> > nfacctd, I waited 18+ hours before checking the logs.
> > 
> > Thanks,
> > 
> > Tamas
> > 
> > vargat@noc-netflow-aggr:/tmp$ sudo cat test_20171109-1045.txt | head -5
> > TAG,SRC_MAC,DST_MAC,SRC_AS,DST_AS,AS_PATH,PREF,MED,SRC_IP,DST_IP,SRC_PORT,DST_PORT,PROTOCOL,SH_COUNTRY,DH_COUNTRY,SH_POCODE,DH_POCODE,PACKETS,BYTES
> > 0,00:00:00:00:00:00,00:00:00:00:00:00,0,0,,0,0,109.71.164.16,5.159.216.124,65153,51403,udp,LU,LU,,,1,261
> > 0,00:00:00:00:00:00,00:00:00:00:00:00,0,0,,0,0,109.71.162.28,109.71.164.6,61524,1935,tcp,LU,LU,,,20,846
> > 0,00:00:00:00:00:00,00:00:00:00:00:00,0,0,,0,0,109.71.161.162,149.154.157.151,443,7272,tcp,LU,IT,,06049,1,217
> > 0,00:00:00:00:00:00,00:00:00:00:00:00,0,0,,0,0,41.230.1.145,185.13.90.76,54360,443,tcp,TN,LU,,,1,40
> > 
> > vargat@noc-netflow-aggr:/tmp$ sudo grep -r "\"41.230.0" bgp-*-1000.log | head -3
> > bgp-5_159_218_1-1000.log:{"timestamp": "2017-11-09 10:00:00",
> > "peer_ip_src": "5.159.218.1", "event_type": "dump", "afi": 1, "safi":
> > 1, "ip_prefix": "41.230.0.0/16", "bgp_nexthop": "212.3.238.185",
> > "as_path": "3356 6762 2609", "comms": "34655:3 34655:40 34655:406
> > 34655:4060", "origin": 0, "local_pref": 100, "med": 400}
> > bgp-5_159_218_2-1000.log:{"timestamp": "2017-11-09 10:00:00",
> > "peer_ip_src": "5.159.218.2", "event_type": "dump", "afi": 1, "safi":
> > 1, "ip_prefix": "41.230.0.0/16", "bgp_nexthop": "212.3.238.185",
> > "as_path": "3356 6762 2609", "comms": "34655:3 34655:40 34655:406
> > 34655:4060", "origin": 0, "local_pref": 100, "med": 400}
> > bgp-5_159_218_4-1000.log:{"timestamp": "2017-11-09 10:00:00",
> > "peer_ip_src": "5.159.218.4", "event_type": "dump", "afi": 1, "safi":
> > 1, "ip_prefix": "41.230.0.0/16", "bgp_nexthop": "212.3.238.185",
> > "as_path": "3356 6762 2609", "comms": "34655:3 34655:40 34655:406
> > 34655:4060", "origin": 0, "local_pref": 100, "med": 400}
> > 
> > debug: false
> > daemonize: true
> > nfacctd_ip: 216.172.X.X
> > nfacctd_port: 9966
> > logfile: /var/log/nfacctd.log
> > plugins: print[test]
> > !BGP
> > bgp_daemon: true
> > bmp_daemon: true
> > bgp_daemon_ip: 216.172.X.X
> > bgp_daemon_id: 216.172.X.X
> > bgp_peer_as_skip_subas: true
> > bgp_daemon_max_peers: 20
> > bgp_table_dump_file: /tmp/bgp-$peer_src_ip-%H%M.log
> > bgp_table_dump_refresh_time: 36000
> > bgp_follow_default: 5
> > bgp_agent_map: bgp_agents
> > nfacctd_as: bgp
> > pmacctd_as: bgp
> > nfacctd_net: bgp
> > pmacctd_net: bgp
> > bgp_aspath_radius: 3
> > bgp_daemon_msglog: true
> > nfacctd_as_new: bgp
> > bgp_peer_src_as_type: bgp
> > geoipv2_file: /usr/local/share/GeoIP/GeoLite2-City.mmdb
> > aggregate[test]: src_host, dst_host, src_port, dst_port, proto, tag,
> > src_host_country, dst_host_country, src_host_pocode, dst_host_pocode,
> > src_mac, dst_mac, src_as, dst_as, local_pref, med, as_path
> > print_refresh_time[test]: 900
> > print_history[test]: 15m
> > print_output[test]: csv
> > print_output_file[test]: /tmp/test_%Y%m%d-%H%M.txt
> > print_cache_entries[test]: 1310840
> > print_history_roundoff[test]: m
> > !fine tuning
> > plugin_buffer_size:16384
> > plugin_pipe_size:161920000
> > plugin_pipe_size[test]: 80240000
> > 
> > On Wed, Nov 8, 2017 at 3:21 PM, Paolo Lucente <pa...@pmacct.net> wrote:
> > >
> > > Hi Tamas,
> > >
> > > From your outputs it definitely looks like everything is in order. I
> > > wonder, though, since you use the IMT plugin, whether those entries
> > > are created before the BGP session is successfully established. Any
> > > chance, keeping things simple and for the sake of a test, you could
> > > try the same with the print plugin writing to flat file(s)? Failing
> > > that, I'd be glad to have a look at the issue myself if SSH access to
> > > the box is possible (in which case we can follow up by unicast email).
> > >
> > > Paolo
> > >
> > > On Wed, Nov 08, 2017 at 12:33:32PM +0100, Varga Tamas wrote:
> > >> Hi Paolo,
> > >>
> > >> I'm having trouble getting info from the BGP tables merged with
> > >> NetFlow data. Unfortunately, after going through the wiki/FAQ/mailing
> > >> list I still can't figure out what is missing from the config files
> > >> to get this feature working.
> > >>
> > >> I have checked the following things based on suggestions found on the
> > >> mailing list:
> > >> - BGP peer IP == NetFlow agent IP
> > >> - bgp_agent_map
> > >> - Dump bgp tables fetched from the bgp peer
> > >> - Use "bgp" for (pmacctd|nfacctd)_net/as
> > >> - Verified bgp peering and netflow
> > >>
> > >> Thanks,
> > >>
> > >> Tamas
> > >>
> > >> Relevant debug/config info
> > >>
> > >> ### nfacctd.conf
> > >> !BGP
> > >> bgp_daemon: true
> > >> bmp_daemon: true
> > >> bgp_daemon_ip: 216.172.X.X
> > >> bgp_daemon_id: 216.172.X.x
> > >> bgp_peer_as_skip_subas: true
> > >> bgp_daemon_max_peers: 20
> > >> bgp_table_dump_file: /tmp/bgp-$peer_src_ip-%H%M.log
> > >> bgp_table_dump_refresh_time: 3600
> > >> bgp_follow_default: 2
> > >> bgp_agent_map: bgp_agents
> > >> nfacctd_as: bgp
> > >> pmacctd_as: bgp
> > >> nfacctd_net: bgp
> > >> pmacctd_net: bgp
> > >> bgp_aspath_radius: 3
> > >> nfacctd_as_new: bgp
> > >> bgp_peer_src_as_type: bgp
> > >> aggregate[mem]: src_host, dst_host, src_port, dst_port, proto, tag,
> > >> src_host_country, dst_host_country, src_host_pocode, dst_host_pocode,
> > >> src_as, dst_as, med, as_path
> > >>
> > >> ### bgp_agents
> > >> /etc/pmacct# cat bgp_agents
> > >> bgp_ip=5.159.218.5 ip=5.159.218.5
> > >>
> > >> ###Netflow agent
> > >> DEBUG ( default/core ): NfV9 agent         : 5.159.218.5:2097
> > >> DEBUG ( default/core ): NfV9 template type : flow
> > >> DEBUG ( default/core ): NfV9 template ID   : 260
> > >> DEBUG ( default/core ):
> > >> -------------------------------------------------------------
> > >> DEBUG ( default/core ): |    pen     |         field type         |
> > >> offset |  size  |
> > >> DEBUG ( default/core ): | 0          | in packets         [2    ] |
> > >>   0 |      4 |
> > >> DEBUG ( default/core ): | 0          | in bytes           [1    ] |
> > >>   4 |      4 |
> > >> DEBUG ( default/core ): | 0          | IPv4 src addr      [8    ] |
> > >>   8 |      4 |
> > >> DEBUG ( default/core ): | 0          | IPv4 dst addr      [12   ] |
> > >>  12 |      4 |
> > >> DEBUG ( default/core ): | 0          | input snmp         [10   ] |
> > >>  16 |      4 |
> > >> DEBUG ( default/core ): | 0          | output snmp        [14   ] |
> > >>  20 |      4 |
> > >> DEBUG ( default/core ): | 0          | last switched      [21   ] |
> > >>  24 |      4 |
> > >> DEBUG ( default/core ): | 0          | first switched     [22   ] |
> > >>  28 |      4 |
> > >> DEBUG ( default/core ): | 0          | L4 src port        [7    ] |
> > >>  32 |      2 |
> > >> DEBUG ( default/core ): | 0          | L4 dst port        [11   ] |
> > >>  34 |      2 |
> > >> DEBUG ( default/core ): | 0          | src as             [16   ] |
> > >>  36 |      4 |
> > >> DEBUG ( default/core ): | 0          | dst as             [17   ] |
> > >>  40 |      4 |
> > >> DEBUG ( default/core ): | 0          | BGP IPv4 next hop  [18   ] |
> > >>  44 |      4 |
> > >> DEBUG ( default/core ): | 0          | IPv4 src mask      [9    ] |
> > >>  48 |      1 |
> > >> DEBUG ( default/core ): | 0          | IPv4 dst mask      [13   ] |
> > >>  49 |      1 |
> > >> DEBUG ( default/core ): | 0          | L4 protocol        [4    ] |
> > >>  50 |      1 |
> > >> DEBUG ( default/core ): | 0          | tcp flags          [6    ] |
> > >>  51 |      1 |
> > >> DEBUG ( default/core ): | 0          | tos                [5    ] |
> > >>  52 |      1 |
> > >> DEBUG ( default/core ): | 0          | direction          [61   ] |
> > >>  53 |      1 |
> > >> DEBUG ( default/core ): | 0          | forwarding status  [89   ] |
> > >>  54 |      1 |
> > >> DEBUG ( default/core ): | 0          | sampler ID         [48   ] |
> > >>  55 |      2 |
> > >> DEBUG ( default/core ): | 0          | 234                [234  ] |
> > >>  57 |      4 |
> > >> DEBUG ( default/core ): | 0          | 235                [235  ] |
> > >>  61 |      4 |
> > >> DEBUG ( default/core ):
> > >> -------------------------------------------------------------
> > >> DEBUG ( default/core ): Netflow V9/IPFIX record size : 65
> > >>
> > >> ### BGP peer
> > >> INFO ( default/core/BGP ): [5.159.218.5] BGP peers usage: 1/50
> > >> INFO ( default/core/BGP ): [5.159.218.5] Capability: MultiProtocol [1]
> > >> AFI [1] SAFI [1]
> > >> INFO ( default/core/BGP ): [5.159.218.5] Capability: 4-bytes AS [41] ASN 
> > >> [34655]
> > >> INFO ( default/core/BGP ): [5.159.218.5] BGP_OPEN: Local AS: 34655
> > >> Remote AS: 34655 HoldTime: 180
> > >> DEBUG ( default/core/BGP ): [5.159.218.5] BGP_KEEPALIVE received
> > >> DEBUG ( default/core/BGP ): [5.159.218.5] BGP_KEEPALIVE sent
> > >>
> > >> ### pmacct
> > >> /etc/pmacct# pmacct -s -T flows,5 -c dst_host_country
> > >> TAG         SRC_AS      DST_AS      AS_PATH                  MED
> > >> SRC_IP           DST_IP           SRC_PORT  DST_PORT  PROTOCOL
> > >> SH_COUNTRY  DH_COUNTRY  SH_POCODE     DH_POCODE     PACKETS
> > >>    BYTES
> > >> 0           0           0           ^$                       0
> > >> 109.71.162.32    178.187.170.198  1935      4010      tcp         LU
> > >>        RU                        659300        2
> > >> 2904
> > >> 0           0           0           ^$                       0
> > >> 46.188.106.58    93.93.53.199     62090     80        tcp         RU
> > >>        LU          101194                      1
> > >> 40
> > >> 0           0           0           ^$                       0
> > >> 185.13.90.86     85.29.74.98      443       49498     tcp         LU
> > >>        FI                        74130         98
> > >> 124015
> > >> 0           0           0           ^$                       0
> > >> 93.93.51.195     5.228.17.199     80        48324     tcp         LU
> > >>        RU                        101194        1
> > >> 1280
> > >> 0           0           0           ^$                       0
> > >> 109.71.161.146   2.86.239.153     443       58337     tcp         LU
> > >>        GR                                      1
> > >> 1400
> > >>
> > >> ### Example
> > >>
> > >> Json output from bgp dump
> > >> {"timestamp": "2017-11-02 08:41:00", "peer_ip_src": "5.159.218.5",
> > >> "event_type": "dump", "afi": 1, "safi": 1, "ip_prefix":
> > >> "85.29.64.0/18", "bgp_nexthop": "80.81.192.144", "as_path": "6667
> > >> 13170", "comms": "6667:900 6667:1000 6667:2000 6667:3000 6667:4002
> > >> 34655:2 34655:40 34655:401 34655:2002 34655:4010", "origin": 0,
> > >> "local_pref": 110, "med": 206}
> > >>
> > >> pmacct output
> > >> 0           0           0           ^$                       0
> > >> 185.13.90.86     85.29.74.98      443       49498     tcp         LU
> > >>        FI                        74130         98
> > >> 124015
> > >>
> > >> _______________________________________________
> > >> pmacct-discussion mailing list
> > >> http://www.pmacct.net/#mailinglists
> > >