Re: [pmacct-discussion] peer_src_as vs src_as

2023-11-24 Thread Paolo Lucente

Hi Benedikt,

Yes, the fields are directly populated with what is in the NetFlow packet. 
It is very strange that the Cisco is putting the Source AS in PeerSrcAS 
(confirmed also with tcpdump); maybe a bug?


You could probably work around it by defining a custom primitive, but it 
would be very dirty. I would rather make the Cisco device export the 
right information instead.
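
For the record, the custom-primitive route would look roughly like an 
aggregate_primitives map that reads the element the Cisco actually fills. 
The map below is a hypothetical, untested sketch: the name cisco_src_as is 
made up, and field_type=129 (bgpPrevAdjacentAsNumber, i.e. the peer source 
AS element) is an assumption about which element the router is mis-using 
that would need to be confirmed against the tcpdump.

```
! hypothetical /etc/pmacct/primitives.map -- untested sketch
! read the AS number the Cisco exports (assumed to land in IPFIX
! element 129, bgpPrevAdjacentAsNumber) into a custom field
name=cisco_src_as  field_type=129  len=4  semantics=u_int
```

You would then point aggregate_primitives at that file in the nfacctd 
config and add cisco_src_as to the aggregate list.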


Paolo


On 22/11/23 10:49, Benedikt Sveinsson wrote:

Hi (hope this is not a duplicate email)

I’m running a new build of nfacctd (version below).

Exporting into Kafka

Collecting from two platforms: Cisco and Huawei.

I’m doing this nfacctd -> Kafka -> flow-exporter -> Prometheus -> Grafana 
setup; it was initially in containers, but it is now a fresh native setup 
on the Ubuntu host.


I hit a snag straight away: using the same nfacctd config files as in the 
old setup, as_src is now always 0.


When looking at the Kafka entries I noticed I have two source AS fields: 
peer_as_src and as_src.


{"event_type":"purge","label":"dublin","as_src":0,"as_dst":12969,"peer_as_src":32934,"peer_as_dst":0,"ip_src":"x.x.x.x","ip_dst":"x.x.x.x","port_src":443,"port_dst":59073,"stamp_inserted":"2023-11-09 11:50:00","stamp_updated":"2023-11-09 12:32:36","packets":100,"bytes":5200,"writer_id":"default_kafka/592569"}

Our AS is 12969 – I have a networks file for our own networks etc.

I’m seeing the source AS populated in peer_as_src, but as_src is always 0.


Now, to my confusion, when I added the Huawei router to the collector:

{"event_type":"purge","label":"arbaer","as_src":24940,"as_dst":12969,"peer_as_src":0,"peer_as_dst":0,"ip_src":"x.x.x.x","ip_dst":"x.x.x.x","port_src":50196,"port_dst":443,"stamp_inserted":"1995-01-29 09:45:00","stamp_updated":"2023-11-09 12:36:36","packets":200,"bytes":12000,"writer_id":"default_kafka/592828"}

Here I get as_src and as_dst correct. This is a problem, as I had already 
modified the flow-exporter code to pick up peer_as_src.
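
Rather than hard-coding peer_as_src on the consumer side, a fallback that 
prefers as_src and only uses peer_as_src when as_src is 0 would handle both 
routers. A sketch (not the actual flow-exporter code; the field names are 
just the pmacct JSON keys shown above):

```python
import json

def normalize_src_as(record: dict) -> int:
    """Return the source AS for a pmacct Kafka record.

    One exporter here puts the source AS into peer_as_src and leaves
    as_src at 0, while the other fills as_src directly; prefer as_src
    and fall back to peer_as_src when it is 0 or missing.
    """
    return record.get("as_src") or record.get("peer_as_src", 0)

# The two records from this thread, abridged to the relevant keys:
cisco = json.loads('{"label":"dublin","as_src":0,"peer_as_src":32934}')
huawei = json.loads('{"label":"arbaer","as_src":24940,"peer_as_src":0}')

print(normalize_src_as(cisco))   # 32934
print(normalize_src_as(huawei))  # 24940
```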


Now, looking at a tcpdump of the NetFlow packets from the Cisco router, it 
uses the field name PeerSrcAS (I have not been able to decode the Huawei 
packets for some reason).


Can someone help me understand the Kafka fields and where they are 
populated from? Is this directly related to what is in the actual NetFlow 
packet from the device, or is it something config-related in nfacctd? 
Sorry if I’m missing something from the documentation; I’m scrambling a 
bit to get this running.


Benedikt


root@netflow:/etc/pmacct# nfacctd -V

NetFlow Accounting Daemon, nfacctd 1.7.9-git [20231101-0 (a091a85e)]

Arguments:

'--enable-kafka' '--enable-jansson' '--enable-l2' 
'--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins' 
'--enable-st-bins'


Libs:

cdada 0.5.0

libpcap version 1.10.1 (with TPACKET_V3)

rdkafka 1.8.0

jansson 2.13.1

Plugins:

memory

print

nfprobe

sfprobe

tee

kafka

System:

Linux 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64

Compiler:

gcc 11.4.0

config of nfacctd:

!daemonize: true

!syslog: daemon

pre_tag_map: /etc/pmacct/pretag.map

nfacctd_as: longest

nfacctd_net: longest

networks_file: /etc/pmacct/networks.lst

networks_file_no_lpm: true

aggregate: peer_src_as, peer_dst_as, src_host, dst_host, src_port, 
dst_port, src_as, dst_as, label


snaplen: 700

!sampling_rate: 100

!

bgp_daemon: true

bgp_daemon_ip: 10.131.24.11

bgp_daemon_port: 179

bgp_daemon_max_peers: 10

bgp_agent_map: /etc/pmacct/peering_agent.map

!

plugins: kafka

!bgp_table_dump_kafka_topic: pmacct.bgp

!bgp_table_dump_refresh_time: 300

kafka_cache_entries: 1

kafka_topic: netflow

kafka_max_writers: 10

kafka_output: json

kafka_broker_host: localhost

kafka_refresh_time: 5

kafka_history: 5m

kafka_history_roundoff: m

!print_refresh_time: 300

!print_history: 300

!print_history_roundoff: m

!print_output_file_append: true

!print_output_file: /var/netflow/flow_%s

!print_output: csv

nfacctd_ext_sampling_rate: 100

nfacctd_renormalize: true

nfacctd_port: 

nfacctd_time_secs: true

nfacctd_time_new: true


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists



