Re: [pmacct-discussion] BGP AS values are 0

2019-10-20 Thread Paolo Lucente


He he he, 'fallback' is the legacy keyword for 'longest'. You should use
'longest', yes. High moments for a developer :-) 
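
For reference, a minimal sketch of the 'longest' setup (the networks_file
path below is a hypothetical example, not something from this thread):

```
! 'longest' (formerly 'fallback') picks, per flow, the most specific
! source of AS/net information among BGP and the networks_file
pmacctd_as: longest
pmacctd_net: longest
! hypothetical path; list your own prefixes and their ASN in this file
networks_file: /etc/pmacct/networks.lst
```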

Paolo

On Sun, Oct 20, 2019 at 10:37:51AM -0400, Brooks Swinnerton wrote:
> I tried switching over the iBGP session to eBGP but it oddly started
> putting my AS as the `dst_as` for quite a few flows. I suspect this may
> have been because of my BGP configuration now that it's not wired up as a
> route reflector. I'll investigate this more, but option two intrigues me.
> Looking at the documentation, it appears that `fallback` is not a valid
> option for `pmacctd_as` and `pmacctd_net`. Is that right?
> 
> On Sun, Oct 20, 2019 at 9:43 AM Paolo Lucente  wrote:
> 
> >
> > Hi Brooks,
> >
> > There would be a few ways to achieve that:
> >
> > 1) change the iBGP session into an eBGP session;
> >
> > 2) set pmacctd_as and pmacctd_net to 'fallback' and add a networks_file
> >where you list (some of your) prefixes and associated ASN. While the
> >map can be refreshed at runtime - no need to restart the daemon - it
> >may involve a manual step. Unless you can generate it automatically
> >and/or the set of prefixes is quite static (again, you want to list
> >there just your own prefixes and perhaps they don't change that much).
> >
> > 3) Use bgp_stdcomm_pattern_to_asn or bgp_lrgcomm_pattern_to_asn: you tag
> >prefixes of interest with certain BGP communities that indicate the
> >ASN to associate the prefix with. While more automatic than #2, it
> >would require messing with actual BGP.
> >
> > Paolo
> >
> >
> > On Sun, Oct 20, 2019 at 08:48:18AM -0400, Brooks Swinnerton wrote:
> > > Hi Paolo,
> > >
> > > One quick follow up question regarding:
> > >
> > > > +1 to Felix's answer. Also maybe two obvious points: 1) with an iBGP
> > > peering setup, AS0 can mean unknown or your own ASN (being a number
> > rather
> > > than a string, null is not an option) and 2) until routes are received,
> > > source/destination IP prefixes can get associated to AS0.
> > >
> > > Is there a way to distinguish between AS0 being my own AS and an unknown
> > > one?
> > >
> > > On Sun, Oct 13, 2019 at 3:39 PM Paolo Lucente  wrote:
> > >
> > > >
> > > > Wonderful. Thank you Brooks for sharing your finding. I will add a note
> > > > to documentation, it seems very relevant.
> > > >
> > > > Paolo
> > > >
> > > > On Sun, Oct 13, 2019 at 12:50:43PM -0400, Brooks Swinnerton wrote:
> > > > > Got it! I think for some reason BIRD didn't like that both BGP
> > instances
> > > > > were sharing the same address. Here is the new configuration on both
> > > > sides
> > > > > which works:
> > > > >
> > > > > ```
> > > > > !
> > > > > ! pmacctd configuration example
> > > > > !
> > > > > ! Did you know CONFIG-KEYS contains the detailed list of all
> > > > configuration
> > > > > keys
> > > > > ! supported by 'nfacctd' and 'pmacctd' ?
> > > > > !
> > > > > ! debug: true
> > > > > daemonize: false
> > > > > pcap_interface: ens3
> > > > > pmacctd_as: bgp
> > > > > pmacctd_net: bgp
> > > > > sampling_rate: 10
> > > > > !
> > > > > bgp_daemon: true
> > > > > bgp_daemon_ip: 127.0.0.2
> > > > > bgp_daemon_port: 180
> > > > > bgp_daemon_max_peers: 10
> > > > > bgp_agent_map: /etc/pmacct/peering_agent.map
> > > > > bgp_table_dump_file: /tmp/bgp-$peer_src_ip-%H%M.log
> > > > > bgp_table_dump_refresh_time: 120
> > > > > !
> > > > > aggregate: src_host, dst_host, src_port, dst_port, src_as, dst_as,
> > proto
> > > > > !
> > > > > plugins: kafka
> > > > > kafka_output: json
> > > > > kafka_broker_host: kafka.fqdn.com
> > > > > kafka_topic: pmacct.acct
> > > > > kafka_refresh_time: 10
> > > > > kafka_history: 5m
> > > > > kafka_history_roundoff: m
> > > > > ```
> > > > >
> > > > > And in BIRD:
> > > > >
> > > > > ```
> > > > > protocol bgp AS00v4c1 from monitor46 {
> > > > >   description "pmacctd";
> > > > >   local 127.0.0.1 as 00;
> > > > >   neighbor 127.0.0.2 port 180 as 00;
> > > > >   rr client;
> > > > > }
> > > > > ```
> > > > >
> > > > > Thank you so much for the tip about 127.0.0.2, Paolo!
> > > > >
> > > > > On Sun, Oct 13, 2019 at 11:35 AM Paolo Lucente 
> > wrote:
> > > > >
> > > > > >
> > > > > > So the session comes up and gets established: this would rule out
> > > > firewall
> > > > > > filters, TCP MD5 or session mis-configurations (AS numbers,
> > > > capabilities,
> > > > > > etc.). This should also mean that the BGP OPEN process is
> > successful
> > > > (this
> > > > > > is also confirmed by pmacct log you sent earlier on).
> > > > > >
> > > > > > Now, from the tcpdump output you sent, looking at the tiny packet
> > > > > > sizes I would almost say those are BGP keepalives; but the timestamps
> > > > > > reveal they take place too frequently (so they are not BGP keepalives).
> > > > > > They could still be BGP UPDATEs and it would take longer to 

Re: [pmacct-discussion] BGP AS values are 0

2019-10-20 Thread Paolo Lucente


Hi Brooks,

There would be a few ways to achieve that:

1) change the iBGP session into an eBGP session;

2) set pmacctd_as and pmacctd_net to 'fallback' and add a networks_file
   where you list (some of your) prefixes and the associated ASN (see the
   sketch after this list). While the map can be refreshed at runtime - no
   need to restart the daemon - it may involve a manual step, unless you
   can generate it automatically and/or the set of prefixes is quite static
   (again, you want to list there just your own prefixes and perhaps they
   don't change that much).

3) Use bgp_stdcomm_pattern_to_asn or bgp_lrgcomm_pattern_to_asn: you tag
   prefixes of interest with certain BGP communities that indicate the
   ASN to associate the prefix with. While more automatic than #2, it
   would require messing with actual BGP.
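
As a sketch of option 2, a networks_file is a plain text file with one entry
per line; assuming the common 'asn,prefix' format (the ASN and prefixes below
are made-up placeholders):

```
! hypothetical /etc/pmacct/networks.lst
65001,192.0.2.0/24
65001,198.51.100.0/24
```

If memory serves, maps (networks_file included) can then be reloaded at
runtime by sending SIGUSR2 to the daemon, e.g. 'kill -USR2 $(pidof pmacctd)',
rather than restarting it.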

Paolo 


On Sun, Oct 20, 2019 at 08:48:18AM -0400, Brooks Swinnerton wrote:
> Hi Paolo,
> 
> One quick follow up question regarding:
> 
> > +1 to Felix's answer. Also maybe two obvious points: 1) with an iBGP
> peering setup, AS0 can mean unknown or your own ASN (being a number rather
> than a string, null is not an option) and 2) until routes are received,
> source/destination IP prefixes can get associated to AS0.
> 
> Is there a way to distinguish between AS0 being my own AS and an unknown
> one?
> 
> On Sun, Oct 13, 2019 at 3:39 PM Paolo Lucente  wrote:
> 
> >
> > Wonderful. Thank you Brooks for sharing your finding. I will add a note
> > to documentation, it seems very relevant.
> >
> > Paolo
> >
> > On Sun, Oct 13, 2019 at 12:50:43PM -0400, Brooks Swinnerton wrote:
> > > Got it! I think for some reason BIRD didn't like that both BGP instances
> > > were sharing the same address. Here is the new configuration on both
> > sides
> > > which works:
> > >
> > > ```
> > > !
> > > ! pmacctd configuration example
> > > !
> > > ! Did you know CONFIG-KEYS contains the detailed list of all
> > configuration
> > > keys
> > > ! supported by 'nfacctd' and 'pmacctd' ?
> > > !
> > > ! debug: true
> > > daemonize: false
> > > pcap_interface: ens3
> > > pmacctd_as: bgp
> > > pmacctd_net: bgp
> > > sampling_rate: 10
> > > !
> > > bgp_daemon: true
> > > bgp_daemon_ip: 127.0.0.2
> > > bgp_daemon_port: 180
> > > bgp_daemon_max_peers: 10
> > > bgp_agent_map: /etc/pmacct/peering_agent.map
> > > bgp_table_dump_file: /tmp/bgp-$peer_src_ip-%H%M.log
> > > bgp_table_dump_refresh_time: 120
> > > !
> > > aggregate: src_host, dst_host, src_port, dst_port, src_as, dst_as, proto
> > > !
> > > plugins: kafka
> > > kafka_output: json
> > > kafka_broker_host: kafka.fqdn.com
> > > kafka_topic: pmacct.acct
> > > kafka_refresh_time: 10
> > > kafka_history: 5m
> > > kafka_history_roundoff: m
> > > ```
> > >
> > > And in BIRD:
> > >
> > > ```
> > > protocol bgp AS00v4c1 from monitor46 {
> > >   description "pmacctd";
> > >   local 127.0.0.1 as 00;
> > >   neighbor 127.0.0.2 port 180 as 00;
> > >   rr client;
> > > }
> > > ```
> > >
> > > Thank you so much for the tip about 127.0.0.2, Paolo!
> > >
> > > On Sun, Oct 13, 2019 at 11:35 AM Paolo Lucente  wrote:
> > >
> > > >
> > > > So the session comes up and gets established: this would rule out
> > firewall
> > > > filters, TCP MD5 or session mis-configurations (AS numbers,
> > capabilities,
> > > > etc.). This should also mean that the BGP OPEN process is successful
> > (this
> > > > is also confirmed by pmacct log you sent earlier on).
> > > >
> > > > Now, from the tcpdump output you sent, looking at the tiny packet sizes
> > > > I would almost say those are BGP keepalives; but the timestamps reveal
> > > > they take place too frequently (so they are not BGP keepalives). They
> > > > could still be BGP UPDATEs, although it would take longer to transfer
> > > > 150k prefixes at that pace, so, yeah, weird. It would be great to confirm
> > > > whether those packets are BGP UPDATEs: perhaps tcpdump sees port 180/tcp
> > > > and does not apply the BGP decoder (and hence you can't see the expected
> > > > BGP cleartext in the tcpdump output); you could save the capture and
> > > > decode it with Wireshark (or set up a 127.0.0.2 and do a
> > > > 127.0.0.1:179 <-> 127.0.0.2:179 peering).
> > > >
> > > > Really not sure what is going on :-? Also, if you prefer, we could
> > continue
> > > > the troubleshooting via unicast email and summarize findings on list
> > later.
> > > >
> > > > Paolo
> > > >
> > > > On Sun, Oct 13, 2019 at 10:55:55AM -0400, Brooks Swinnerton wrote:
> > > > > Oops, sorry I mismatched the tcpdump and bgp table dump values. They
> > were
> > > > > both indeed using 55881 at the time, but here is another capture that
> > > > will
> > > > > make more sense:
> > > > >
> > > > > ```
> > > > > {"timestamp": "2019-10-13 14:48:00", "peer_ip_src": "127.0.0.1",
> > > > > "peer_tcp_port": 36143, "event_type": "dump_close", "entries": 0,
> > > > "tables":
> > > > > 1, "seq": 4}
> > > > > {"timestamp": "2019-10-13 14:50:00", "peer_ip_src": "127.0.0.1",
> > > > > "peer_tcp_port": 36143, 

Re: [pmacct-discussion] BGP AS values are 0

2019-10-13 Thread Paolo Lucente


Wonderful. Thank you Brooks for sharing your finding. I will add a note
to documentation, it seems very relevant.

Paolo

On Sun, Oct 13, 2019 at 12:50:43PM -0400, Brooks Swinnerton wrote:
> Got it! I think for some reason BIRD didn't like that both BGP instances
> were sharing the same address. Here is the new configuration on both sides
> which works:
> 
> ```
> !
> ! pmacctd configuration example
> !
> ! Did you know CONFIG-KEYS contains the detailed list of all configuration
> keys
> ! supported by 'nfacctd' and 'pmacctd' ?
> !
> ! debug: true
> daemonize: false
> pcap_interface: ens3
> pmacctd_as: bgp
> pmacctd_net: bgp
> sampling_rate: 10
> !
> bgp_daemon: true
> bgp_daemon_ip: 127.0.0.2
> bgp_daemon_port: 180
> bgp_daemon_max_peers: 10
> bgp_agent_map: /etc/pmacct/peering_agent.map
> bgp_table_dump_file: /tmp/bgp-$peer_src_ip-%H%M.log
> bgp_table_dump_refresh_time: 120
> !
> aggregate: src_host, dst_host, src_port, dst_port, src_as, dst_as, proto
> !
> plugins: kafka
> kafka_output: json
> kafka_broker_host: kafka.fqdn.com
> kafka_topic: pmacct.acct
> kafka_refresh_time: 10
> kafka_history: 5m
> kafka_history_roundoff: m
> ```
> 
> And in BIRD:
> 
> ```
> protocol bgp AS00v4c1 from monitor46 {
>   description "pmacctd";
>   local 127.0.0.1 as 00;
>   neighbor 127.0.0.2 port 180 as 00;
>   rr client;
> }
> ```
> 
> Thank you so much for the tip about 127.0.0.2, Paolo!
> 
> On Sun, Oct 13, 2019 at 11:35 AM Paolo Lucente  wrote:
> 
> >
> > So the session comes up and gets established: this would rule out firewall
> > filters, TCP MD5 or session mis-configurations (AS numbers, capabilities,
> > etc.). This should also mean that the BGP OPEN process is successful (this
> > is also confirmed by the pmacct log you sent earlier on).
> >
> > Now, from the tcpdump output you sent, looking at the tiny packet sizes
> > I would almost say those are BGP keepalives; but the timestamps reveal they
> > take place too frequently, so they are not BGP keepalives. They could
> > still be BGP UPDATEs, although it would take longer to transfer 150k
> > prefixes at that pace, so, yeah, weird. It would be great to confirm whether
> > those packets are BGP UPDATEs: perhaps tcpdump sees port 180/tcp and does
> > not apply the BGP decoder (and hence you can't see the expected BGP
> > cleartext in the tcpdump output); you could save the capture and decode it
> > with Wireshark (or set up a 127.0.0.2 and do a 127.0.0.1:179 <->
> > 127.0.0.2:179 peering).
> >
> > Really not sure what is going on :-? Also, if you prefer, we could continue
> > the troubleshooting via unicast email and summarize findings on list later.
> >
> > Paolo
> >
> > On Sun, Oct 13, 2019 at 10:55:55AM -0400, Brooks Swinnerton wrote:
> > > Oops, sorry I mismatched the tcpdump and bgp table dump values. They were
> > > both indeed using 55881 at the time, but here is another capture that
> > will
> > > make more sense:
> > >
> > > ```
> > > {"timestamp": "2019-10-13 14:48:00", "peer_ip_src": "127.0.0.1",
> > > "peer_tcp_port": 36143, "event_type": "dump_close", "entries": 0,
> > "tables":
> > > 1, "seq": 4}
> > > {"timestamp": "2019-10-13 14:50:00", "peer_ip_src": "127.0.0.1",
> > > "peer_tcp_port": 36143, "event_type": "dump_init", "dump_period": 120,
> > > "seq": 5}
> > > {"timestamp": "2019-10-13 14:50:00", "peer_ip_src": "127.0.0.1",
> > > "peer_tcp_port": 36143, "event_type": "dump_close", "entries": 0,
> > "tables":
> > > 1, "seq": 5}
> > > {"timestamp": "2019-10-13 14:52:00", "peer_ip_src": "127.0.0.1",
> > > "peer_tcp_port": 36143, "event_type": "dump_init", "dump_period": 120,
> > > "seq": 6}
> > > {"timestamp": "2019-10-13 14:52:00", "peer_ip_src": "127.0.0.1",
> > > "peer_tcp_port": 36143, "event_type": "dump_close", "entries": 0,
> > "tables":
> > > 1, "seq": 6}
> > > {"timestamp": "2019-10-13 14:54:00", "peer_ip_src": "127.0.0.1",
> > > "peer_tcp_port": 36143, "event_type": "dump_init", "dump_period": 120,
> > > "seq": 7}
> > > {"timestamp": "2019-10-13 14:54:00", "peer_ip_src": "127.0.0.1",
> > > "peer_tcp_port": 36143, "event_type": "dump_close", "entries": 0,
> > "tables":
> > > 1, "seq": 7}
> > > ```
> > >
> > > ```
> > > $ sudo tcpdump -nv -i any tcp port 180 and host 127.0.0.1
> > > tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture
> > size
> > > 262144 bytes
> > > 14:55:05.277665 IP (tos 0xc0, ttl 64, id 3378, offset 0, flags [DF],
> > proto
> > > TCP (6), length 79)
> > > 127.0.0.1.36143 > 127.0.0.1.180: Flags [P.], cksum 0xfe43 (incorrect
> > ->
> > > 0x58be), seq 100499878:100499905, ack 1900709696, win 342, options
> > > [nop,nop,TS val 1973199584 ecr 1973197086], length 27
> > > 14:55:05.277694 IP (tos 0x0, ttl 64, id 35509, offset 0, flags [DF],
> > proto
> > > TCP (6), length 52)
> > > 127.0.0.1.180 > 127.0.0.1.36143: Flags [.], cksum 0xfe28 (incorrect
> > ->
> > > 0x383e), ack 27, win 342, options [nop,nop,TS val 1973199584 ecr
> > > 1973199584], length 0
> > > 14:55:05.282623 IP (tos 0xc0, ttl 64, id 3379, 

Re: [pmacct-discussion] BGP AS values are 0

2019-10-13 Thread Paolo Lucente


So the session comes up and gets established: this would rule out firewall
filters, TCP MD5 or session mis-configurations (AS numbers, capabilities,
etc.). This should also mean that the BGP OPEN process is successful (this
is also confirmed by the pmacct log you sent earlier on).

Now, from the tcpdump output you sent, looking at the tiny packet sizes
I would almost say those are BGP keepalives; but the timestamps reveal they
take place too frequently, so they are not BGP keepalives. They could
still be BGP UPDATEs, although it would take longer to transfer 150k prefixes
at that pace, so, yeah, weird. It would be great to confirm whether those
packets are BGP UPDATEs: perhaps tcpdump sees port 180/tcp and does not apply
the BGP decoder (and hence you can't see the expected BGP cleartext in the
tcpdump output); you could save the capture and decode it with Wireshark (or
set up a 127.0.0.2 and do a 127.0.0.1:179 <-> 127.0.0.2:179 peering).
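
As a concrete way to do that (a sketch; adjust interface and file names as
needed, and Wireshark's "Decode As..." dialog achieves the same interactively):

```
# capture the loopback BGP-on-port-180 traffic to a file ...
sudo tcpdump -i lo -w /tmp/bgp180.pcap 'tcp port 180'
# ... then decode it forcing the BGP dissector on the non-standard port
tshark -r /tmp/bgp180.pcap -d tcp.port==180,bgp -O bgp
```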

Really not sure what is going on :-? Also, if you prefer, we could continue
the troubleshooting via unicast email and summarize findings on list later.

Paolo
 
On Sun, Oct 13, 2019 at 10:55:55AM -0400, Brooks Swinnerton wrote:
> Oops, sorry I mismatched the tcpdump and bgp table dump values. They were
> both indeed using 55881 at the time, but here is another capture that will
> make more sense:
> 
> ```
> {"timestamp": "2019-10-13 14:48:00", "peer_ip_src": "127.0.0.1",
> "peer_tcp_port": 36143, "event_type": "dump_close", "entries": 0, "tables":
> 1, "seq": 4}
> {"timestamp": "2019-10-13 14:50:00", "peer_ip_src": "127.0.0.1",
> "peer_tcp_port": 36143, "event_type": "dump_init", "dump_period": 120,
> "seq": 5}
> {"timestamp": "2019-10-13 14:50:00", "peer_ip_src": "127.0.0.1",
> "peer_tcp_port": 36143, "event_type": "dump_close", "entries": 0, "tables":
> 1, "seq": 5}
> {"timestamp": "2019-10-13 14:52:00", "peer_ip_src": "127.0.0.1",
> "peer_tcp_port": 36143, "event_type": "dump_init", "dump_period": 120,
> "seq": 6}
> {"timestamp": "2019-10-13 14:52:00", "peer_ip_src": "127.0.0.1",
> "peer_tcp_port": 36143, "event_type": "dump_close", "entries": 0, "tables":
> 1, "seq": 6}
> {"timestamp": "2019-10-13 14:54:00", "peer_ip_src": "127.0.0.1",
> "peer_tcp_port": 36143, "event_type": "dump_init", "dump_period": 120,
> "seq": 7}
> {"timestamp": "2019-10-13 14:54:00", "peer_ip_src": "127.0.0.1",
> "peer_tcp_port": 36143, "event_type": "dump_close", "entries": 0, "tables":
> 1, "seq": 7}
> ```
> 
> ```
> $ sudo tcpdump -nv -i any tcp port 180 and host 127.0.0.1
> tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size
> 262144 bytes
> 14:55:05.277665 IP (tos 0xc0, ttl 64, id 3378, offset 0, flags [DF], proto
> TCP (6), length 79)
> 127.0.0.1.36143 > 127.0.0.1.180: Flags [P.], cksum 0xfe43 (incorrect ->
> 0x58be), seq 100499878:100499905, ack 1900709696, win 342, options
> [nop,nop,TS val 1973199584 ecr 1973197086], length 27
> 14:55:05.277694 IP (tos 0x0, ttl 64, id 35509, offset 0, flags [DF], proto
> TCP (6), length 52)
> 127.0.0.1.180 > 127.0.0.1.36143: Flags [.], cksum 0xfe28 (incorrect ->
> 0x383e), ack 27, win 342, options [nop,nop,TS val 1973199584 ecr
> 1973199584], length 0
> 14:55:05.282623 IP (tos 0xc0, ttl 64, id 3379, offset 0, flags [DF], proto
> TCP (6), length 83)
> 127.0.0.1.36143 > 127.0.0.1.180: Flags [P.], cksum 0xfe47 (incorrect ->
> 0x64b5), seq 27:58, ack 1, win 342, options [nop,nop,TS val 1973199589 ecr
> 1973199584], length 31
> 14:55:05.282652 IP (tos 0x0, ttl 64, id 35510, offset 0, flags [DF], proto
> TCP (6), length 52)
> 127.0.0.1.180 > 127.0.0.1.36143: Flags [.], cksum 0xfe28 (incorrect ->
> 0x3815), ack 58, win 342, options [nop,nop,TS val 1973199589 ecr
> 1973199589], length 0
> 14:55:05.436293 IP (tos 0xc0, ttl 64, id 3380, offset 0, flags [DF], proto
> TCP (6), length 91)
> 127.0.0.1.36143 > 127.0.0.1.180: Flags [P.], cksum 0xfe4f (incorrect ->
> 0xb7db), seq 58:97, ack 1, win 342, options [nop,nop,TS val 1973199742 ecr
> 1973199589], length 39
> 14:55:05.436330 IP (tos 0x0, ttl 64, id 35511, offset 0, flags [DF], proto
> TCP (6), length 52)
> 127.0.0.1.180 > 127.0.0.1.36143: Flags [.], cksum 0xfe28 (incorrect ->
> 0x36bb), ack 97, win 342, options [nop,nop,TS val 1973199743 ecr
> 1973199742], length 0
> ```
> 
> On Sun, Oct 13, 2019 at 10:53 AM Brooks Swinnerton 
> wrote:
> 
> > > 1) as super extra check, can you capture stuff with Wireshark and see
> > what is going on 'on the wire'? Do you see the routes being sent and
> > landing onto pmacct, etc.?
> >
> > Gosh, I'm stumped. I do see traffic on port 180 and the port that is
> > referenced in the table dump, but it's not the cleartext BGP traffic I was
> > expecting:
> >
> > ```
> > 127.0.0.1.36143 > 127.0.0.1.180: Flags [P.], cksum 0xfe4b (incorrect
> > -> 0xfc0d), seq 9200:9235, ack 234, win 342, options [nop,nop,TS val
> > 1972706066 ecr 1972699157], length 35
> > 14:46:51.746910 IP (tos 0x0, ttl 64, id 34830, offset 0, flags [DF], proto
> > TCP (6), 

Re: [pmacct-discussion] BGP AS values are 0

2019-10-13 Thread Brooks Swinnerton
Oops, sorry I mismatched the tcpdump and bgp table dump values. They were
both indeed using 55881 at the time, but here is another capture that will
make more sense:

```
{"timestamp": "2019-10-13 14:48:00", "peer_ip_src": "127.0.0.1",
"peer_tcp_port": 36143, "event_type": "dump_close", "entries": 0, "tables":
1, "seq": 4}
{"timestamp": "2019-10-13 14:50:00", "peer_ip_src": "127.0.0.1",
"peer_tcp_port": 36143, "event_type": "dump_init", "dump_period": 120,
"seq": 5}
{"timestamp": "2019-10-13 14:50:00", "peer_ip_src": "127.0.0.1",
"peer_tcp_port": 36143, "event_type": "dump_close", "entries": 0, "tables":
1, "seq": 5}
{"timestamp": "2019-10-13 14:52:00", "peer_ip_src": "127.0.0.1",
"peer_tcp_port": 36143, "event_type": "dump_init", "dump_period": 120,
"seq": 6}
{"timestamp": "2019-10-13 14:52:00", "peer_ip_src": "127.0.0.1",
"peer_tcp_port": 36143, "event_type": "dump_close", "entries": 0, "tables":
1, "seq": 6}
{"timestamp": "2019-10-13 14:54:00", "peer_ip_src": "127.0.0.1",
"peer_tcp_port": 36143, "event_type": "dump_init", "dump_period": 120,
"seq": 7}
{"timestamp": "2019-10-13 14:54:00", "peer_ip_src": "127.0.0.1",
"peer_tcp_port": 36143, "event_type": "dump_close", "entries": 0, "tables":
1, "seq": 7}
```

```
$ sudo tcpdump -nv -i any tcp port 180 and host 127.0.0.1
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size
262144 bytes
14:55:05.277665 IP (tos 0xc0, ttl 64, id 3378, offset 0, flags [DF], proto
TCP (6), length 79)
127.0.0.1.36143 > 127.0.0.1.180: Flags [P.], cksum 0xfe43 (incorrect ->
0x58be), seq 100499878:100499905, ack 1900709696, win 342, options
[nop,nop,TS val 1973199584 ecr 1973197086], length 27
14:55:05.277694 IP (tos 0x0, ttl 64, id 35509, offset 0, flags [DF], proto
TCP (6), length 52)
127.0.0.1.180 > 127.0.0.1.36143: Flags [.], cksum 0xfe28 (incorrect ->
0x383e), ack 27, win 342, options [nop,nop,TS val 1973199584 ecr
1973199584], length 0
14:55:05.282623 IP (tos 0xc0, ttl 64, id 3379, offset 0, flags [DF], proto
TCP (6), length 83)
127.0.0.1.36143 > 127.0.0.1.180: Flags [P.], cksum 0xfe47 (incorrect ->
0x64b5), seq 27:58, ack 1, win 342, options [nop,nop,TS val 1973199589 ecr
1973199584], length 31
14:55:05.282652 IP (tos 0x0, ttl 64, id 35510, offset 0, flags [DF], proto
TCP (6), length 52)
127.0.0.1.180 > 127.0.0.1.36143: Flags [.], cksum 0xfe28 (incorrect ->
0x3815), ack 58, win 342, options [nop,nop,TS val 1973199589 ecr
1973199589], length 0
14:55:05.436293 IP (tos 0xc0, ttl 64, id 3380, offset 0, flags [DF], proto
TCP (6), length 91)
127.0.0.1.36143 > 127.0.0.1.180: Flags [P.], cksum 0xfe4f (incorrect ->
0xb7db), seq 58:97, ack 1, win 342, options [nop,nop,TS val 1973199742 ecr
1973199589], length 39
14:55:05.436330 IP (tos 0x0, ttl 64, id 35511, offset 0, flags [DF], proto
TCP (6), length 52)
127.0.0.1.180 > 127.0.0.1.36143: Flags [.], cksum 0xfe28 (incorrect ->
0x36bb), ack 97, win 342, options [nop,nop,TS val 1973199743 ecr
1973199742], length 0
```

On Sun, Oct 13, 2019 at 10:53 AM Brooks Swinnerton 
wrote:

> > 1) as super extra check, can you capture stuff with Wireshark and see
> what is going on 'on the wire'? Do you see the routes being sent and
> landing onto pmacct, etc.?
>
> Gosh, I'm stumped. I do see traffic on port 180 and the port that is
> referenced in the table dump, but it's not the cleartext BGP traffic I was
> expecting:
>
> ```
> 127.0.0.1.36143 > 127.0.0.1.180: Flags [P.], cksum 0xfe4b (incorrect
> -> 0xfc0d), seq 9200:9235, ack 234, win 342, options [nop,nop,TS val
> 1972706066 ecr 1972699157], length 35
> 14:46:51.746910 IP (tos 0x0, ttl 64, id 34830, offset 0, flags [DF], proto
> TCP (6), length 52)
> 127.0.0.1.180 > 127.0.0.1.36143: Flags [.], cksum 0xfe28 (incorrect ->
> 0x9b98), ack 9235, win 342, options [nop,nop,TS val 1972706066 ecr
> 1972706066], length 0
> 14:46:51.988455 IP (tos 0xc0, ttl 64, id 2700, offset 0, flags [DF], proto
> TCP (6), length 91)
> 127.0.0.1.36143 > 127.0.0.1.180: Flags [P.], cksum 0xfe4f (incorrect
> -> 0x1b06), seq 9235:9274, ack 234, win 342, options [nop,nop,TS val
> 1972706308 ecr 1972706066], length 39
> 14:46:51.988488 IP (tos 0x0, ttl 64, id 34831, offset 0, flags [DF], proto
> TCP (6), length 52)
> 127.0.0.1.180 > 127.0.0.1.36143: Flags [.], cksum 0xfe28 (incorrect ->
> 0x998d), ack 9274, win 342, options [nop,nop,TS val 1972706308 ecr
> 1972706308], length 0
> ```
>
> Which aligns with:
>
> ```
> {"timestamp": "2019-10-13 14:40:00", "peer_ip_src": "127.0.0.1",
> "peer_tcp_port": 55881, "event_type": "dump_init", "dump_period": 120,
> "seq": 0}
> {"timestamp": "2019-10-13 14:40:00", "peer_ip_src": "127.0.0.1",
> "peer_tcp_port": 55881, "event_type": "dump_close", "entries": 0, "tables":
> 1, "seq": 0}
> {"timestamp": "2019-10-13 14:42:00", "peer_ip_src": "127.0.0.1",
> "peer_tcp_port": 55881, "event_type": "dump_init", "dump_period": 120,
> "seq": 1}
> {"timestamp": "2019-10-13 14:42:00", "peer_ip_src": "127.0.0.1",
> "peer_tcp_port": 55881, 

Re: [pmacct-discussion] BGP AS values are 0

2019-10-13 Thread Brooks Swinnerton
> 1) as super extra check, can you capture stuff with Wireshark and see
what is going on 'on the wire'? Do you see the routes being sent and
landing onto pmacct, etc.?

Gosh, I'm stumped. I do see traffic on port 180 and the port that is
referenced in the table dump, but it's not the cleartext BGP traffic I was
expecting:

```
127.0.0.1.36143 > 127.0.0.1.180: Flags [P.], cksum 0xfe4b (incorrect ->
0xfc0d), seq 9200:9235, ack 234, win 342, options [nop,nop,TS val
1972706066 ecr 1972699157], length 35
14:46:51.746910 IP (tos 0x0, ttl 64, id 34830, offset 0, flags [DF], proto
TCP (6), length 52)
127.0.0.1.180 > 127.0.0.1.36143: Flags [.], cksum 0xfe28 (incorrect ->
0x9b98), ack 9235, win 342, options [nop,nop,TS val 1972706066 ecr
1972706066], length 0
14:46:51.988455 IP (tos 0xc0, ttl 64, id 2700, offset 0, flags [DF], proto
TCP (6), length 91)
127.0.0.1.36143 > 127.0.0.1.180: Flags [P.], cksum 0xfe4f (incorrect ->
0x1b06), seq 9235:9274, ack 234, win 342, options [nop,nop,TS val
1972706308 ecr 1972706066], length 39
14:46:51.988488 IP (tos 0x0, ttl 64, id 34831, offset 0, flags [DF], proto
TCP (6), length 52)
127.0.0.1.180 > 127.0.0.1.36143: Flags [.], cksum 0xfe28 (incorrect ->
0x998d), ack 9274, win 342, options [nop,nop,TS val 1972706308 ecr
1972706308], length 0
```

Which aligns with:

```
{"timestamp": "2019-10-13 14:40:00", "peer_ip_src": "127.0.0.1",
"peer_tcp_port": 55881, "event_type": "dump_init", "dump_period": 120,
"seq": 0}
{"timestamp": "2019-10-13 14:40:00", "peer_ip_src": "127.0.0.1",
"peer_tcp_port": 55881, "event_type": "dump_close", "entries": 0, "tables":
1, "seq": 0}
{"timestamp": "2019-10-13 14:42:00", "peer_ip_src": "127.0.0.1",
"peer_tcp_port": 55881, "event_type": "dump_init", "dump_period": 120,
"seq": 1}
{"timestamp": "2019-10-13 14:42:00", "peer_ip_src": "127.0.0.1",
"peer_tcp_port": 55881, "event_type": "dump_close", "entries": 0, "tables":
1, "seq": 1}
```

Is it normal that `peer_tcp_port` is a random port and not 179? I know
pmacctd's BGP daemon is listening on port 180, but the peer port (BIRD) is 179:

```
bgp_daemon: true
bgp_daemon_ip: 127.0.0.1
bgp_daemon_port: 180
```

I've also gone ahead and removed any filters that were previously in place
in BIRD, so it should be sending all prefixes.

On Sun, Oct 13, 2019 at 10:08 AM Paolo Lucente  wrote:

>
> Hi Brooks,
>
> Wow, interesting, yes. Your decoding is right: the BGP table is empty. May I
> ask you two things: 1) as a super extra check, can you capture stuff with
> Wireshark and see what is going on 'on the wire'? Do you see the routes
> being sent and landing on pmacct, etc.? 2) Should that be the case,
> i.e. all looks good, could you try the master code on GitHub? Should that
> one also not work, we should find a way for me to reproduce this (as I
> tested the scenario and it appears to work for me against an ExaBGP) or,
> let me just mention it, troubleshoot stuff on your box.
>
> Paolo
>
> On Sun, Oct 13, 2019 at 08:46:47AM -0400, Brooks Swinnerton wrote:
> > Thank you Paolo,
> >
> > Interesting, it looks like the pmacctd end of the BGP session isn't
> picking
> > up the routes if I'm reading the `bgp_table_dump_file` correctly:
> >
> > ```
> > {"timestamp": "2019-10-13 12:42:00", "peer_ip_src": "127.0.0.1",
> > "peer_tcp_port": 39587, "event_type": "dump_init", "dump_period": 120,
> > "seq": 0}
> > {"timestamp": "2019-10-13 12:42:00", "peer_ip_src": "127.0.0.1",
> > "peer_tcp_port": 39587, "event_type": "dump_close", "entries": 0,
> "tables":
> > 1, "seq": 0}
> > ```
> >
> > But looking at the BIRD side of things, I can see the routes are indeed
> > being exported:
> >
> > ```
> > bird> show route export AS00v4 count
> > 172973 of 336950 routes for 173059 networks in table master4
> > ```
> >
> > On Sun, Oct 13, 2019 at 8:30 AM Paolo Lucente  wrote:
> >
> > >
> > > Hi Brooks,
> > >
> > > +1 to Felix's answer. Also maybe two obvious points: 1) with an iBGP
> > > peering setup, AS0 can mean unknown or your own ASN (being a number
> > > rather than a string, null is not an option) and 2) until routes are
> > > received, source/destination IP prefixes can get associated to AS0.
> > >
> > > Config looks good as well as the log extract you posted. For more debug
> > > info you can perhaps dump routes received via BGP just to make extra
> > > sure all is well on that side of the things too, ie.:
> > >
> > > bgp_table_dump_file: /path/to/spool/bgp-$peer_src_ip-%H%M.log
> > > bgp_table_dump_refresh_time: 120
> > >
> > > Let us know how it goes.
> > >
> > > Paolo
> > >
> > > On Sat, Oct 12, 2019 at 11:36:12PM -0400, Brooks Swinnerton wrote:
> > > > Hello there!
> > > >
> > > > I have pmacctd working with the Kafka addon and am attempting to
> include
> > > > `src_as` and `dst_as` information based on the BGP sessions running
> on
> > > the
> > > > same machine using the [BIRD router](https://bird.network.cz).
> > > >
> > > > I was able to successfully get the BGP session stood up using a
> loopback
> > > > address, but 

Re: [pmacct-discussion] BGP AS values are 0

2019-10-13 Thread Paolo Lucente


Hi Brooks,

Wow, interesting, yes. Your decoding is right: the BGP table is empty. May I
ask you two things: 1) as a super extra check, can you capture stuff with
Wireshark and see what is going on 'on the wire'? Do you see the routes
being sent and landing on pmacct, etc.? 2) Should that be the case,
i.e. all looks good, could you try the master code on GitHub? Should that
one also not work, we should find a way for me to reproduce this (as I
tested the scenario and it appears to work for me against an ExaBGP) or,
let me just mention it, troubleshoot stuff on your box.
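
For reference, building master would be roughly along these lines (a sketch;
the configure switches simply mirror the ones visible in your startup log):

```
git clone https://github.com/pmacct/pmacct.git
cd pmacct
./autogen.sh
./configure --enable-kafka --enable-jansson --enable-l2 --enable-64bit --enable-traffic-bins
make && sudo make install
```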

Paolo

On Sun, Oct 13, 2019 at 08:46:47AM -0400, Brooks Swinnerton wrote:
> Thank you Paolo,
> 
> Interesting, it looks like the pmacctd end of the BGP session isn't picking
> up the routes if I'm reading the `bgp_table_dump_file` correctly:
> 
> ```
> {"timestamp": "2019-10-13 12:42:00", "peer_ip_src": "127.0.0.1",
> "peer_tcp_port": 39587, "event_type": "dump_init", "dump_period": 120,
> "seq": 0}
> {"timestamp": "2019-10-13 12:42:00", "peer_ip_src": "127.0.0.1",
> "peer_tcp_port": 39587, "event_type": "dump_close", "entries": 0, "tables":
> 1, "seq": 0}
> ```
> 
> But looking at the BIRD side of things, I can see the routes are indeed
> being exported:
> 
> ```
> bird> show route export AS00v4 count
> 172973 of 336950 routes for 173059 networks in table master4
> ```
> 
> On Sun, Oct 13, 2019 at 8:30 AM Paolo Lucente  wrote:
> 
> >
> > Hi Brooks,
> >
> > +1 to Felix's answer. Also maybe two obvious points: 1) with an iBGP
> > peering setup, AS0 can mean unknown or your own ASN (being a number
> > rather than a string, null is not an option) and 2) until routes are
> > received, source/destination IP prefixes can get associated to AS0.
> >
> > Config looks good as well as the log extract you posted. For more debug
> > info you can perhaps dump routes received via BGP just to make extra
> > sure all is well on that side of the things too, ie.:
> >
> > bgp_table_dump_file: /path/to/spool/bgp-$peer_src_ip-%H%M.log
> > bgp_table_dump_refresh_time: 120
> >
> > Let us know how it goes.
> >
> > Paolo
> >
> > On Sat, Oct 12, 2019 at 11:36:12PM -0400, Brooks Swinnerton wrote:
> > > Hello there!
> > >
> > > I have pmacctd working with the Kafka addon and am attempting to include
> > > `src_as` and `dst_as` information based on the BGP sessions running on
> > the
> > > same machine using the [BIRD router](https://bird.network.cz).
> > >
> > > I was able to successfully get the BGP session stood up using a loopback
> > > address, but in both the Kafka consumer and `pmacct -s`, I do not see the
> > > AS values:
> > >
> > > ```
> > > {"event_type": "purge", "as_src": 0, "as_dst": 0, "ip_src": "1.1.1.138",
> > > "ip_dst": "5.9.43.211", "port_src": 443, "port_dst": 48268, "ip_proto":
> > > "tcp", "stamp_inserted": "2019-10-13 02:50:00", "stamp_updated":
> > > "2019-10-13 02:53:31", "packets": 1, "bytes": 52, "writer_id":
> > > "default_kafka/3725"}
> > > ```
> > >
> > > The pmacct log seems good:
> > >
> > > ```
> > > Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
> > Promiscuous
> > > Mode Accounting Daemon, pmacctd 1.7.3-git (20190418-00+c4)
> > > Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
> > >  '--enable-kafka' '--enable-jansson' '--enable-l2' '--enable-64bit'
> > > '--enable-traffic-bins' '-
> > > Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default/core ): Reading
> > > configuration file '/etc/pmacct/pmacctd.peering.conf'.
> > > Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ):
> > > cache entries=16411 base cache memory=54878384 bytes
> > > Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core ): [ens3,0]
> > > link type is: 1
> > > Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
> > > [/etc/pmacct/peering_agent.map] (re)loading map.
> > > Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
> > > [/etc/pmacct/peering_agent.map] map successfully (re)loaded.
> > > Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ):
> > > JSON: setting object handlers.
> > > Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> > maximum
> > > BGP peers allowed: 2
> > > Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> > waiting
> > > for BGP data on 127.0.0.1:180
> > > Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ):
> > ***
> > > Purging cache - START (PID: 3673) ***
> > > Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ):
> > ***
> > > Purging cache - END (PID: 3673, QN: 0/0, ET: 0) ***
> > > Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> > > [127.0.0.1] BGP peers usage: 1/2
> > > Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> > > [1.1.1.1] Capability: MultiProtocol [1] AFI [1] SAFI [1]
> > > Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> > > [1.1.1.1] Capability: 4-bytes AS [41] ASN [30]
> > > Oct 13 

Re: [pmacct-discussion] BGP AS values are 0

2019-10-13 Thread Brooks Swinnerton
Thank you Paolo,

Interesting, it looks like the pmacctd end of the BGP session isn't picking
up the routes if I'm reading the `bgp_table_dump_file` correctly:

```
{"timestamp": "2019-10-13 12:42:00", "peer_ip_src": "127.0.0.1",
"peer_tcp_port": 39587, "event_type": "dump_init", "dump_period": 120,
"seq": 0}
{"timestamp": "2019-10-13 12:42:00", "peer_ip_src": "127.0.0.1",
"peer_tcp_port": 39587, "event_type": "dump_close", "entries": 0, "tables":
1, "seq": 0}
```

But looking at the BIRD side of things, I can see the routes are indeed
being exported:

```
bird> show route export AS00v4 count
172973 of 336950 routes for 173059 networks in table master4
```
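
For completeness, the session state and per-protocol route counters can also be
checked from the BIRD CLI (a sketch, using the same protocol name as above):

```
birdc show protocols all AS00v4
```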

On Sun, Oct 13, 2019 at 8:30 AM Paolo Lucente  wrote:

>
> Hi Brooks,
>
> +1 to Felix's answer. Also maybe two obvious points: 1) with an iBGP
> peering setup, AS0 can mean unknown or your own ASN (being a number
> rather than a string, null is not an option) and 2) until routes are
> received, source/destination IP prefixes can get associated to AS0.
>
> Config looks good as well as the log extract you posted. For more debug
> info you can perhaps dump routes received via BGP just to make extra
> sure all is well on that side of the things too, ie.:
>
> bgp_table_dump_file: /path/to/spool/bgp-$peer_src_ip-%H%M.log
> bgp_table_dump_refresh_time: 120
>
> Let us know how it goes.
>
> Paolo
>
> On Sat, Oct 12, 2019 at 11:36:12PM -0400, Brooks Swinnerton wrote:
> > Hello there!
> >
> > I have pmacctd working with the Kafka addon and am attempting to include
> > `src_as` and `dst_as` information based on the BGP sessions running on
> the
> > same machine using the [BIRD router](https://bird.network.cz).
> >
> > I was able to successfully get the BGP session stood up using a loopback
> > address, but in both the Kafka consumer and `pmacct -s`, I do not see the
> > AS values:
> >
> > ```
> > {"event_type": "purge", "as_src": 0, "as_dst": 0, "ip_src": "1.1.1.138",
> > "ip_dst": "5.9.43.211", "port_src": 443, "port_dst": 48268, "ip_proto":
> > "tcp", "stamp_inserted": "2019-10-13 02:50:00", "stamp_updated":
> > "2019-10-13 02:53:31", "packets": 1, "bytes": 52, "writer_id":
> > "default_kafka/3725"}
> > ```
> >
> > The pmacct log seems good:
> >
> > ```
> > Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
> Promiscuous
> > Mode Accounting Daemon, pmacctd 1.7.3-git (20190418-00+c4)
> > Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
> >  '--enable-kafka' '--enable-jansson' '--enable-l2' '--enable-64bit'
> > '--enable-traffic-bins' '-
> > Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default/core ): Reading
> > configuration file '/etc/pmacct/pmacctd.peering.conf'.
> > Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ):
> > cache entries=16411 base cache memory=54878384 bytes
> > Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core ): [ens3,0]
> > link type is: 1
> > Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
> > [/etc/pmacct/peering_agent.map] (re)loading map.
> > Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
> > [/etc/pmacct/peering_agent.map] map successfully (re)loaded.
> > Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ):
> > JSON: setting object handlers.
> > Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> maximum
> > BGP peers allowed: 2
> > Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> waiting
> > for BGP data on 127.0.0.1:180
> > Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ):
> ***
> > Purging cache - START (PID: 3673) ***
> > Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ):
> ***
> > Purging cache - END (PID: 3673, QN: 0/0, ET: 0) ***
> > Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> > [127.0.0.1] BGP peers usage: 1/2
> > Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> > [1.1.1.1] Capability: MultiProtocol [1] AFI [1] SAFI [1]
> > Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> > [1.1.1.1] Capability: 4-bytes AS [41] ASN [30]
> > Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> > [1.1.1.1] BGP_OPEN: Local AS: 30 Remote AS: 397143 HoldTime: 90
> > Oct 13 02:51:51 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ):
> ***
> > Purging cache - START (PID: 3678) ***
> > Oct 13 02:51:53 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ):
> ***
> > Purging cache - END (PID: 3678, QN: 679/679, ET: 0) ***
> > ```
> >
> > And the configuration is as follows:
> >
> > ```
> > !
> > ! pmacctd configuration example
> > !
> > ! Did you know CONFIG-KEYS contains the detailed list of all
> configuration
> > keys
> > ! supported by 'nfacctd' and 'pmacctd' ?
> > !
> > ! debug: true
> > daemonize: false
> > pcap_interface: ens3
> > aggregate: src_host, dst_host, src_port, dst_port, src_as, dst_as, proto
> > sampling_rate: 10
> > !
> > plugins: kafka
> > 

Re: [pmacct-discussion] BGP AS values are 0

2019-10-13 Thread Paolo Lucente


Hi Brooks,

+1 to Felix's answer. Also maybe two obvious points: 1) with an iBGP
peering setup, AS0 can mean unknown or your own ASN (being a number
rather than a string, null is not an option) and 2) until routes are
received, source/destination IP prefixes can get associated to AS0.

Config looks good as well as the log extract you posted. For more debug
info you can perhaps dump routes received via BGP just to make extra
sure all is well on that side of the things too, ie.:

bgp_table_dump_file: /path/to/spool/bgp-$peer_src_ip-%H%M.log
bgp_table_dump_refresh_time: 120 

Let us know how it goes.

Paolo

On Sat, Oct 12, 2019 at 11:36:12PM -0400, Brooks Swinnerton wrote:
> Hello there!
> 
> I have pmacctd working with the Kafka addon and am attempting to include
> `src_as` and `dst_as` information based on the BGP sessions running on the
> same machine using the [BIRD router](https://bird.network.cz).
> 
> I was able to successfully get the BGP session stood up using a loopback
> address, but in both the Kafka consumer and `pmacct -s`, I do not see the
> AS values:
> 
> ```
> {"event_type": "purge", "as_src": 0, "as_dst": 0, "ip_src": "1.1.1.138",
> "ip_dst": "5.9.43.211", "port_src": 443, "port_dst": 48268, "ip_proto":
> "tcp", "stamp_inserted": "2019-10-13 02:50:00", "stamp_updated":
> "2019-10-13 02:53:31", "packets": 1, "bytes": 52, "writer_id":
> "default_kafka/3725"}
> ```
> 
> The pmacct log seems good:
> 
> ```
> Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default/core ): Promiscuous
> Mode Accounting Daemon, pmacctd 1.7.3-git (20190418-00+c4)
> Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
>  '--enable-kafka' '--enable-jansson' '--enable-l2' '--enable-64bit'
> '--enable-traffic-bins' '-
> Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default/core ): Reading
> configuration file '/etc/pmacct/pmacctd.peering.conf'.
> Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ):
> cache entries=16411 base cache memory=54878384 bytes
> Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core ): [ens3,0]
> link type is: 1
> Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
> [/etc/pmacct/peering_agent.map] (re)loading map.
> Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
> [/etc/pmacct/peering_agent.map] map successfully (re)loaded.
> Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ):
> JSON: setting object handlers.
> Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ): maximum
> BGP peers allowed: 2
> Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ): waiting
> for BGP data on 127.0.0.1:180
> Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ): ***
> Purging cache - START (PID: 3673) ***
> Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ): ***
> Purging cache - END (PID: 3673, QN: 0/0, ET: 0) ***
> Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> [127.0.0.1] BGP peers usage: 1/2
> Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> [1.1.1.1] Capability: MultiProtocol [1] AFI [1] SAFI [1]
> Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> [1.1.1.1] Capability: 4-bytes AS [41] ASN [30]
> Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> [1.1.1.1] BGP_OPEN: Local AS: 30 Remote AS: 397143 HoldTime: 90
> Oct 13 02:51:51 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ): ***
> Purging cache - START (PID: 3678) ***
> Oct 13 02:51:53 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ): ***
> Purging cache - END (PID: 3678, QN: 679/679, ET: 0) ***
> ```
> 
> And the configuration is as follows:
> 
> ```
> !
> ! pmacctd configuration example
> !
> ! Did you know CONFIG-KEYS contains the detailed list of all configuration
> keys
> ! supported by 'nfacctd' and 'pmacctd' ?
> !
> ! debug: true
> daemonize: false
> pcap_interface: ens3
> aggregate: src_host, dst_host, src_port, dst_port, src_as, dst_as, proto
> sampling_rate: 10
> !
> plugins: kafka
> kafka_output: json
> kafka_broker_host: kafka-broker.fqdn.com
> kafka_topic: pmacct.acct
> kafka_refresh_time: 10
> kafka_history: 5m
> kafka_history_roundoff: m
> !
> bgp_daemon: true
> bgp_daemon_ip: 127.0.0.1
> bgp_daemon_port: 180
> bgp_daemon_max_peers: 1
> bgp_agent_map: /etc/pmacct/peering_agent.map
> pmacctd_as: bgp
> ```
> 
> With the /etc/pmacct/peering_agent.map as:
> 
> ```
> bgp_ip=1.1.1.1 ip=0.0.0.0/0
> ```
> 
> And the other end of the BGP configuration (in BIRD) being:
> 
> ```
> protocol bgp AS30v4c1 from transit_customer4 {
>   description "pmacctd";
>   local 127.0.0.1 port 179 as 30;
>   neighbor 127.0.0.1 port 180 as 30;
>   rr client;
> }
> ```
> 
> And it has exported ~150k routes.
> 
> Is there anything obvious that I'm doing wrong or perhaps a way that I can
> turn on more debugging to lead me on the right trail?


Re: [pmacct-discussion] BGP AS values are 0

2019-10-13 Thread Brooks Swinnerton
> assuming BIRD's Router ID is 1.1.1.1

Yep, that's correct (I obviously modified that before submitting the email).

> Just a thought, as per the docs it’s recommended to set pmacctd_net to
the same value as pmacctd_as (bgp in this case).

That's a good point; it looks like the default is not BGP, so I've updated my
config to be:

```
!
! pmacctd configuration example
!
! Did you know CONFIG-KEYS contains the detailed list of all configuration
keys
! supported by 'nfacctd' and 'pmacctd' ?
!
! debug: true
daemonize: false
pcap_interface: ens3
aggregate: src_host, dst_host, src_port, dst_port, src_as, dst_as, src_net,
dst_net, proto
sampling_rate: 10
!
plugins: kafka
kafka_output: json
kafka_broker_host: kafka.neptunenetworks.org
kafka_topic: pmacct.acct
kafka_refresh_time: 10
kafka_history: 5m
kafka_history_roundoff: m
!
bgp_daemon: true
bgp_daemon_ip: 127.0.0.1
bgp_daemon_port: 180
bgp_daemon_max_peers: 1
bgp_agent_map: /etc/pmacct/peering_agent.map
pmacctd_as: bgp
pmacctd_net: bgp
```

(adding in the `src_net` and `dst_net` as you suggested), but it looks like
that's not working either:

```
{"event_type": "purge", "as_src": 0, "as_dst": 0, "ip_src":
"205.185.117.149", "net_src": "0.0.0.0", "ip_dst": "23.157.160.138",
"net_dst": "0.0.0.0", "port_src": 443, "port_dst": 34345, "ip_proto":
"tcp", "stamp_inserted": "2019-10-13 12:25:00", "stamp_updated":
"2019-10-13 12:26:21", "packets": 40, "bytes": 45332
, "writer_id": "default_kafka/6271"}
```

Very curious!

On Sun, Oct 13, 2019 at 7:12 AM Felix Stolba  wrote:

> Hey Brooks,
>
>
>
> I can confirm I have a similar setup collecting Netflow so in principle
> this should do what you want. The bgp_agent_map also looks fine, assuming
> BIRD's Router ID is 1.1.1.1?
>
> Just a thought, as per the docs it’s recommended to set pmacctd_net to the
> same value as pmacctd_as (bgp in this case). Can you add src_net and
> dst_net to your aggregates and check if they also show up as zero?
>
>
>
> Greetings,
>
> Felix
>
>
>
> *From: *pmacct-discussion  on behalf
> of Brooks Swinnerton 
> *Reply to: *"pmacct-discussion@pmacct.net" <
> pmacct-discussion@pmacct.net>
> *Date: *Sunday, 13 October 2019 at 05:38
> *To: *"pmacct-discussion@pmacct.net" 
> *Subject: *[pmacct-discussion] BGP AS values are 0
>
>
>
> Hello there!
>
> I have pmacctd working with the Kafka addon and am attempting to include
> `src_as` and `dst_as` information based on the BGP sessions running on the
> same machine using the [BIRD router](https://bird.network.cz).
>
> I was able to successfully get the BGP session stood up using a loopback
> address, but in both the Kafka consumer and `pmacct -s`, I do not see the
> AS values:
>
> ```
> {"event_type": "purge", "as_src": 0, "as_dst": 0, "ip_src": "1.1.1.138",
> "ip_dst": "5.9.43.211", "port_src": 443, "port_dst": 48268, "ip_proto":
> "tcp", "stamp_inserted": "2019-10-13 02:50:00", "stamp_updated":
> "2019-10-13 02:53:31", "packets": 1, "bytes": 52, "writer_id":
> "default_kafka/3725"}
> ```
>
> The pmacct log seems good:
>
> ```
> Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
> Promiscuous Mode Accounting Daemon, pmacctd 1.7.3-git (20190418-00+c4)
> Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
>  '--enable-kafka' '--enable-jansson' '--enable-l2' '--enable-64bit'
> '--enable-traffic-bins' '-
> Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default/core ): Reading
> configuration file '/etc/pmacct/pmacctd.peering.conf'.
> Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ):
> cache entries=16411 base cache memory=54878384 bytes
> Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core ): [ens3,0]
> link type is: 1
> Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
> [/etc/pmacct/peering_agent.map] (re)loading map.
> Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
> [/etc/pmacct/peering_agent.map] map successfully (re)loaded.
> Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ):
> JSON: setting object handlers.
> Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> maximum BGP peers allowed: 2
> Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> waiting for BGP data on 127.0.0.1:180
> Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ): ***
> Purging cache - START (PID: 3673) ***
> Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ): ***
> Purging cache - END (PID: 3673, QN: 0/0, ET: 0) ***
> Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> [127.0.0.1] BGP peers usage: 1/2
> Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> [1.1.1.1] Capability: MultiProtocol [1] AFI [1] SAFI [1]
> Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> [1.1.1.1] Capability: 4-bytes AS [41] ASN [30]
> Oct 13 02:51:41 bdr-nyiix 

Re: [pmacct-discussion] BGP AS values are 0

2019-10-13 Thread Felix Stolba
Hey Brooks,

I can confirm I have a similar setup collecting Netflow so in principle this 
should do what you want. The bgp_agent_map also looks fine, assuming BIRD's
Router ID is 1.1.1.1?
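
(For reference, that is the value set, or auto-selected, as BIRD's router ID;
a sketch of the relevant bird.conf line, with 1.1.1.1 matching bgp_ip in the
agent map:)

```
# must match bgp_ip in pmacct's bgp_agent_map
router id 1.1.1.1;
```
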
Just a thought, as per the docs it’s recommended to set pmacctd_net to the same 
value as pmacctd_as (bgp in this case). Can you add src_net and dst_net to your 
aggregates and check if they also show up as zero?

Greetings,
Felix

From: pmacct-discussion  on behalf of 
Brooks Swinnerton 
Reply to: "pmacct-discussion@pmacct.net" 
Date: Sunday, 13 October 2019 at 05:38
To: "pmacct-discussion@pmacct.net" 
Subject: [pmacct-discussion] BGP AS values are 0

Hello there!

I have pmacctd working with the Kafka addon and am attempting to include 
`src_as` and `dst_as` information based on the BGP sessions running on the same 
machine using the [BIRD 
router](https://bird.network.cz).

I was able to successfully get the BGP session stood up using a loopback 
address, but in both the Kafka consumer and `pmacct -s`, I do not see the AS 
values:

```
{"event_type": "purge", "as_src": 0, "as_dst": 0, "ip_src": "1.1.1.138", 
"ip_dst": "5.9.43.211", "port_src": 443, "port_dst": 48268, "ip_proto": "tcp", 
"stamp_inserted": "2019-10-13 02:50:00", "stamp_updated": "2019-10-13 
02:53:31", "packets": 1, "bytes": 52, "writer_id": "default_kafka/3725"}
```

The pmacct log seems good:

```
Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default/core ): Promiscuous 
Mode Accounting Daemon, pmacctd 1.7.3-git (20190418-00+c4)
Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default/core ):  
'--enable-kafka' '--enable-jansson' '--enable-l2' '--enable-64bit' 
'--enable-traffic-bins' '-
Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default/core ): Reading 
configuration file '/etc/pmacct/pmacctd.peering.conf'.
Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ): cache 
entries=16411 base cache memory=54878384 bytes
Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core ): [ens3,0] link 
type is: 1
Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core ): 
[/etc/pmacct/peering_agent.map] (re)loading map.
Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core ): 
[/etc/pmacct/peering_agent.map] map successfully (re)loaded.
Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ): JSON: 
setting object handlers.
Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ): maximum BGP 
peers allowed: 2
Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ): waiting for 
BGP data on 127.0.0.1:180
Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ): *** 
Purging cache - START (PID: 3673) ***
Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ): *** 
Purging cache - END (PID: 3673, QN: 0/0, ET: 0) ***
Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ): [127.0.0.1] 
BGP peers usage: 1/2
Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ): [1.1.1.1] 
Capability: MultiProtocol [1] AFI [1] SAFI [1]
Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ): [1.1.1.1] 
Capability: 4-bytes AS [41] ASN [30]
Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ): [1.1.1.1] 
BGP_OPEN: Local AS: 30 Remote AS: 397143 HoldTime: 90
Oct 13 02:51:51 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ): *** 
Purging cache - START (PID: 3678) ***
Oct 13 02:51:53 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ): *** 
Purging cache - END (PID: 3678, QN: 679/679, ET: 0) ***
```

And the configuration is as follows:

```
!
! pmacctd configuration example
!
! Did you know CONFIG-KEYS contains the detailed list of all configuration keys
! supported by 'nfacctd' and 'pmacctd' ?
!
! debug: true
daemonize: false
pcap_interface: ens3
aggregate: src_host, dst_host, src_port, dst_port, src_as, dst_as, proto
sampling_rate: 10
!
plugins: kafka
kafka_output: json
kafka_broker_host: kafka-broker.fqdn.com
kafka_topic: pmacct.acct
kafka_refresh_time: 10
kafka_history: 5m
kafka_history_roundoff: m
!
bgp_daemon: true
bgp_daemon_ip: 127.0.0.1
bgp_daemon_port: 180
bgp_daemon_max_peers: 1
bgp_agent_map: /etc/pmacct/peering_agent.map
pmacctd_as: bgp
```

With the /etc/pmacct/peering_agent.map as:

```
bgp_ip=1.1.1.1 ip=0.0.0.0/0
```

And the other end of the BGP configuration (in BIRD) being:

```
protocol bgp AS30v4c1 from transit_customer4 {
  description "pmacctd";
  local 127.0.0.1 port 179 as 30;
  neighbor 127.0.0.1 port 180 as 30;
  rr client;
}
```

And it has exported ~150k routes.

Is there anything obvious that I'm doing wrong or perhaps a way that I can turn 
on more debugging to lead me on the right trail?
___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists