Re: [pmacct-discussion] pmbgpd -> Kafka Local Queue Full

2020-09-02 Thread Paolo Lucente


Hi Andy,

I may suggest checking the Kafka logs and perhaps seeing if anything useful 
comes out of librdkafka stats (ie. set "global, statistics.interval.ms, 
6" in your librdkafka.conf). Check also that, if you are adding load 
on top of existing load, the Kafka broker is not pegging 100% CPU or maxing 
out some thread count (or perhaps, if this is a testing environment, remove 
the existing load and test with only the pmbgpd export .. that may prove 
something too).
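
For reference, the corresponding line in librdkafka.conf would look like the 
below (the interval value here is only an illustrative assumption; it is 
expressed in milliseconds, ie. 60000 = one minute):

global, statistics.interval.ms, 60000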


My first suggestion would have been to tune buffers in librdkafka, but 
you did that already. In any case this is, yes, an interaction between 
librdkafka and the Kafka broker; i am guessing a bit here: make sure you 
have recent versions of both the library and the broker.


Especially if the topic is newly provisioned, i may also suggest trying 
to produce / consume some data "by hand", like using the 
kafka-console-producer.sh and kafka-console-consumer.sh scripts shipped 
with Kafka, to prove data passes through without problems.
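
For instance, something along these lines (the broker host is a placeholder 
and the 'bgptest' topic is taken from your config; older Kafka releases want 
--broker-list instead of --bootstrap-server for the producer):

bin/kafka-console-producer.sh --bootstrap-server <broker>:9092 --topic bgptest
bin/kafka-console-consumer.sh --bootstrap-server <broker>:9092 --topic bgptest --from-beginning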


Paolo

On 02/09/2020 09:09, Andy Davidson wrote:

Hello!

I am feeding some BMP feeds via pmbmpd into Kafka and it’s working well.  I now 
want to feed some BGP feeds into a Kafka topic using pmbgpd but a similar 
configuration is causing a different behaviour.

Sep  1 22:48:00 bump pmbgpd[10992]: INFO ( default/core ): Reading 
configuration file '/etc/pmacct/pmbgpd.conf'.
Sep  1 22:48:00 bump pmbgpd[10992]: INFO ( default/core ): maximum BGP peers 
allowed: 100
Sep  1 22:48:00 bump pmbgpd[10992]: INFO ( default/core ): waiting for BGP data 
on 185.1.94.6:179
Sep  1 22:48:03 bump pmbgpd[10992]: INFO ( default/core ): [185.1.94.1] BGP 
peers usage: 1/100
Sep  1 22:48:03 bump pmbgpd[10992]: INFO ( default/core ): [185.1.94.1] 
Capability: MultiProtocol [1] AFI [1] SAFI [1]
Sep  1 22:48:03 bump pmbgpd[10992]: INFO ( default/core ): [185.1.94.1] 
Capability: 4-bytes AS [41] ASN [59964]
Sep  1 22:48:03 bump pmbgpd[10992]: INFO ( default/core ): [185.1.94.1] 
BGP_OPEN: Local AS: 43470 Remote AS: 59964 HoldTime: 240
Sep  1 22:48:05 bump pmbgpd[10992]: ERROR ( default/core ): Failed to produce 
to topic bgptest partition -1: Local: Queue full
Sep  1 22:48:05 bump pmbgpd[10992]: ERROR ( default/core ): Connection failed 
to Kafka: p_kafka_close()
Sep  1 22:48:05 bump systemd[1]: pmbgpd.service: Main process exited, 
code=killed, status=11/SEGV
Sep  1 22:48:05 bump systemd[1]: pmbgpd.service: Failed with result 'signal'.

I have verified that it's not connectivity - the topic is created at the Kafka 
end of the link, and I can open a tcp socket with telnet from the computer 
running pmbgpd to the Kafka server's port 9092

I have of course read some Github issues and list archives about the Local: 
Queue full fault and they suggest some librdkafka buffer and timer tweaking. I 
have played with various values (some of them insane) and I don't see any 
different behaviour logged by pmbgpd:

root@bump:/home/andy# cat /etc/pmacct/pmbgpd.conf
bgp_daemon_ip: 185.1.94.6
bgp_daemon_max_peers: 100
bgp_daemon_as: 43470
!
syslog: user
daemonize: true
!
kafka_config_file: /etc/pmacct/librdkafka.conf
!
bgp_daemon_msglog_kafka_output: json
bgp_daemon_msglog_kafka_broker_host: .hostname
bgp_daemon_msglog_kafka_broker_port: 9092
bgp_daemon_msglog_kafka_topic: bgptest

root@bump:/home/andy# cat /etc/pmacct/librdkafka.conf
global, queue.buffering.max.messages, 800
global, batch.num.messages, 10
global, queue.buffering.max.messages, 2
global, queue.buffering.max.ms, 100
global, queue.buffering.max.kbytes, 900
global, linger.ms, 100
global, socket.request.max.bytes, 104857600
global, socket.receive.buffer.bytes, 10485760
global, socket.send.buffer.bytes, 10485760
global, queued.max.requests, 1000

Any advice on where to troubleshoot next?


Thanks
Andy

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Capturing interface traffic with pmacct and inserting the data in PostgreSQL

2020-08-26 Thread Paolo Lucente


Hi Arda,

I see that in your config you have 'daemonize: true' but no logfile 
statement set, ie. 'logfile: /tmp/pmacctd.log': this is preventing you 
from seeing any errors / warnings that pmacctd is logging and that may 
put you on the right path - is it an auth issue, is it a schema issue, 
etc. So that would be my first and foremost advice.
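
As a minimal sketch (the path is just an example):

daemonize: true
logfile: /tmp/pmacctd.log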


A second piece of advice i may give you, since you ask 'Should I expect the 
same level of detail that I see when I use tshark or tcpdump?', is to get 
started with the 'print' plugin and follow 
https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L2521-#L2542 . 
For example, given your config:


[..]
!
plugins: print[in], print[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net 10.10.10.0/24
aggregate_filter[out]: src net 10.10.10.0/24
!
print_refresh_time: 60
print_history: 1h
print_history_roundoff: h
print_output: csv
!
print_output_file[in]: /path/to/file-in-%Y%m%d-%H%M.csv
print_output_file[out]: /path/to/file-out-%Y%m%d-%H%M.csv
!
pcap_interfaces_map: /usr/local/share/pmacct/pcap_interfaces.map

This way, although in CSV format in a file, by playing with 'aggregate' 
you can get an idea of what pmacct can give you compared to tcpdump/tshark 
(it will be pretty immediate to realise from the output).


Once you establish pmacct is the tool for you and you get familiar with 
it, i guess you can complicate things by putting a SQL database in the way.
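
Should you then switch, say, the [in] plugin from 'print' to 'pgsql', a rough 
sketch could look like the below (host, credentials and table name are only 
assumptions; the table itself has to be created first, ie. using / adapting 
the schema scripts shipped in the sql/ directory of the tarball):

plugins: pgsql[in]
aggregate[in]: dst_host
sql_host[in]: localhost
sql_db[in]: pmacct
sql_table[in]: acct_in
sql_user[in]: pmacct
sql_passwd[in]: secret
sql_refresh_time[in]: 60
sql_history[in]: 1h
sql_history_roundoff[in]: h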


Paolo


On 26/08/2020 19:30, Arda Savran wrote:
I just installed pmacct with postgres support on CentOS8 from GitHub; 
and I think it was a successful installation based on the following:


[root@pcap pmacct]# pmacct -V

pmacct IMT plugin client, pmacct 1.7.6-git (20200826-0 (57a0334d))

'--enable-pgsql' '--enable-l2' '--enable-traffic-bins' 
'--enable-bgp-bins' '--enable-bmp-bins' '--enable-st-bins'

For suggestions, critics, bugs, contact me: Paolo Lucente .


[root@pcap pmacct]# pmacctd -V

Promiscuous Mode Accounting Daemon, pmacctd 1.7.6-git [20200826-0 
(57a0334d)]

Arguments:
'--enable-pgsql' '--enable-l2' '--enable-traffic-bins' 
'--enable-bgp-bins' '--enable-bmp-bins' '--enable-st-bins'

Libs:
libpcap version 1.9.0-PRE-GIT (with TPACKET_V3)
PostgreSQL 120001

System:
Linux 4.18.0-193.14.2.el8_2.x86_64 #1 SMP Sun Jul 26 03:54:29 UTC 2020 
x86_64

Compiler:
gcc 8.3.1

For suggestions, critics, bugs, contact me: Paolo Lucente .


My goal is to capture the in/out network traffic on this machine’s 
interfaces and record it in PostgreSQL. I created myself a 
pmacctd.conf file under the /usr/local/share/pmacct folder and a 
pcap_interfaces.map under the same folder. Before my question, can 
someone please confirm that my expectations of pmacct are accurate:


  * Pmacct can capture all the network traffic on the local interface
(ens192) and record it in PostgreSQL. Should I expect the same level of
detail that I see when I use tshark or tcpdump?
  * Pmacct can store all the packet details in PostgreSQL if needed. If
this is not supported, does this mean that I am obligated to
aggregate the interface traffic before it is inserted into PostgreSQL?

My issue is that I am not seeing any data being written into any of the 
following tables:


pmacct=# \dt

         List of relations
 Schema |   Name   | Type  |  Owner
--------+----------+-------+----------
 public | acct     | table | postgres
 public | acct_as  | table | postgres
 public | acct_uni | table | postgres
 public | acct_v9  | table | postgres
 public | proto    | table | postgres

I started the daemon by running: pmacctd -f pmacctd.conf

My conf file is based on what I read on the WiKi page:

!
daemonize: true
plugins: pgsql[in], pgsql[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net 10.10.10.0/24
aggregate_filter[out]: src net 10.10.10.0/24
sql_table[in]: acct_in
sql_table[out]: acct_out
sql_refresh_time: 60
sql_history: 1h
sql_history_roundoff: h
pcap_interfaces_map: /usr/local/share/pmacct/pcap_interfaces.map
! ...

I am not sure how to proceed from here. I don’t know if I am supposed to 
create a table in PostgreSQL manually first based on my aggregation 
settings and somehow include that in the config file.


Can someone please point me in the right direction.

Thanks,




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] tee plugin ipv6 problem

2020-07-28 Thread Paolo Lucente


Hey Alexander,

Can you send me a sample of the IPv6 packets by unicast email? Ideally 
two tcpdump captures, ie. 'tcpdump -i lo -n -w  port ' 
and 'tcpdump -i  -n -w  port 2101', taken in 
parallel. Should you be positive on generating a sample, please do 
not do one single capture with '-i any' as that would cut out some of 
the lower-layer data which could be of interest for the analysis.
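
For example, something like the below (the interface and file names are 
placeholders to adapt to your setup; port 2101 is the sfacctd port from your 
config):

tcpdump -i eth0 -n -w /tmp/ext.pcap port 2101
tcpdump -i lo -n -w /tmp/lo.pcap port 2101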


Paolo

On 28/07/2020 13:07, Alexander Brusilov wrote:

I've tested with the latest code 1.7.5-git (20200510-00) with the same result.
Some clarification to my previous message:
In IPv4 all checksums and lengths in all packets are fine.
About the IPv6 bad packet example:
BAD UDP LENGTH 1332 > IP PAYLOAD LENGTH] Len=1324 [ILLEGAL CHECKSUM (0)
Data (1304 bytes)
UDP header: Length: 1332 (bogus, payload length 1312)    <<< in my 
understanding the length should be 1312 (data + 8 bytes)
IPV6 header: Length: 1332 (bogus, payload length 1312)   <<< in my 
understanding the length should be 1372 (data + 8 bytes + 40)


On Tue, 28 Jul 2020 at 12:34, Alexander Brusilov wrote:


Hi all,
I use the following scenario in IPv4 and it works fine:
the tee plugin listens on the external interface and replicates sFlow data in
two streams via the loopback interface; here is part of the configs:
/opt/etc/sf_tee.conf
promisc: false
interface: 
!
sfacctd_port: 2101
sfacctd_ip: 
!
plugins: tee[sf]
tee_receivers[sf]: /opt/etc/tee_receivers_sf.lst
tee_transparent: true
!
pre_tag_map: /opt/etc/pretag.map
!

/opt/etc/tee_receivers_sf.lst
id=2101 ip=127.0.0.1:2101 
id=111 ip=127.0.0.1:20111  tag=111

/opt/etc/pretag.map
set_tag=111 ip=

I am trying to do the same with IPv6, but with no success; here are the configs:
/opt/etc/sf_tee_v6.conf
promisc: false
interface: 
!
sfacctd_port: 2101
sfacctd_ip: 
!
plugins: tee[sf]
tee_receivers[sf]: /opt/etc/tee_receivers_sf_v6.lst
tee_transparent: true
!
pre_tag_map: /opt/etc/pretag.map
!

/opt/etc/tee_receivers_sf_v6.lst
id=2101 ip=[::1]:2101
id=111 ip=[::1]:20111 tag=111

The IPv6 sFlow data stream is replicated according to the configs, but
the sfacctd backend (and some other software too) ignores these replicated
packets.
I've run tcpdump on the external and lo interfaces and see that packets
on the lo interface (replicated by the tee plugin) have a wrong payload length
in the IPv6 header (maybe in UDP too). In IPv4 all checksums in
all packets are fine.
Is this normal behaviour or not? Can this cause the sfacctd backend to
ignore these packets? Or maybe i am missing something?

Here is some example info of a bad packet from wireshark:
BAD UDP LENGTH 1332 > IP PAYLOAD LENGTH] Len=1324 [ILLEGAL CHECKSUM (0)
Data (1304 bytes)
UDP: Length: 1332 (bogus, payload length 1312)
IPV6: Length: 1332 (bogus, payload length 1312)   <<< in my
understanding length should be 1372

# /opt/sbin/sfacctd -V
sFlow Accounting Daemon, sfacctd 1.7.4-git (20191126-01+c6)

Arguments:
  '--prefix=/opt' '--enable-geoipv2' '--enable-jansson'
'--enable-zmq' '--enable-pgsql'
'PKG_CONFIG_PATH=/usr/pgsql-11/lib/pkgconfig' '--enable-l2'
'--enable-64bit' '--enable-traffic-bins' '--enable-bgp-bins'
'--enable-bmp-bins' '--enable-st-bins'

System:
Linux 3.10.0-1127.13.1.el7.x86_64 #1 SMP Tue Jun 23 15:46:38 UTC
2020 x86_64

# cat /etc/redhat-release
CentOS Linux release 7.8.2003 (Core)


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] master - ndpi on 32bit CentOS 6

2020-07-09 Thread Paolo Lucente


I did test on a Debian 10:

4.19.0-8-686-pae #1 SMP Debian 4.19.98-1 (2020-01-26) i686 GNU/Linux

As i was suspecting, passing the pcap you sent me through a daemon 
compiled on this box went fine (that is, i can't reproduce the issue).

From what i see, by the way, this is not something related to nDPI.

Paolo

On 09/07/2020 18:19, Steve Clark wrote:

Thanks for checking, could you tell what distro and version you tested on?

Also when I compile on 32 bit I get a lot of warnings about redefines 
between ndpi.h and pmacct.h

do you get those also?




On 07/09/2020 11:55 AM, Paolo Lucente wrote:

Hi Steve,

I do have avail of a i686-based VM. I can't say everything is tested on
i686 but i tend to check every now and then that nothing fundamental is
broken. I took the example config you used, compiled master code with
the same config switches as you did (essentially --enable-ndpi) and had
no joy reproducing the issue.

You could send me privately your capture and i may try with that one
(although i am not highly positive it will be a successful test); or you
could arrange me access to your box to read the pcap. Let me know.

Paolo

On 09/07/2020 14:54, Steve Clark wrote:

Hi Paolo,

I have compiled master with nDPI on both 32bit and 64bit CentOS 6
systems. The 64 bit pmacctd seems
to work fine. But I get bogus byte counts when I run the 32bit version
against the same pcap file.

Just wondered if you have done any testing on a 32bit Intel system with
the above combination.

below is the output when using 32bit pmacctd - first the pmacctd
invocation then the nfacctd output
pmacct/src/pmacctd -f ./mypaolo.conf -I v1.7.5_v9_ndpi_class_paolo.pcap
INFO ( default/core ): Promiscuous Mode Accounting Daemon, pmacctd
1.7.6-git (20200707-01)
INFO ( default/core ):  '--enable-ndpi'
'--with-ndpi-static-lib=/usr/local/lib/' '--enable-l2'
'--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
'--enable-st-bins'
INFO ( default/core ): Reading configuration file
'/var/lib/pgsql/sclark/mypaolo.conf'.
INFO ( p4p1/nfprobe ): NetFlow probe plugin is originally based on
softflowd 0.9.7 software, Copyright 2002 Damien Miller 
All rights reserved.
INFO ( p4p1/nfprobe ):   TCP timeout: 3600s
INFO ( p4p1/nfprobe ):  TCP post-RST timeout: 120s
INFO ( p4p1/nfprobe ):  TCP post-FIN timeout: 300s
INFO ( p4p1/nfprobe ):   UDP timeout: 300s
INFO ( p4p1/nfprobe ):  ICMP timeout: 300s
INFO ( p4p1/nfprobe ):   General timeout: 3600s
INFO ( p4p1/nfprobe ):  Maximum lifetime: 604800s
INFO ( p4p1/nfprobe ):   Expiry interval: 60s
INFO ( default/core ): PCAP capture file, sleeping for 2 seconds
INFO ( p4p1/nfprobe ): Exporting flows to [172.24.109.157]:rrac
WARN ( p4p1/nfprobe ): Shutting down on user request.
INFO ( default/core ): OK, Exiting ...

src/nfacctd -f examples/nfacctd-print.conf.example
INFO ( default/core ): NetFlow Accounting Daemon, nfacctd 1.7.6-git
(20200623-00)
INFO ( default/core ):  '--enable-ndpi'
'--with-ndpi-static-lib=/usr/local/lib/' '--enable-l2'
'--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
'--enable-st-bins'
INFO ( default/core ): Reading configuration file
'/var/lib/pgsql/sclark/pmacct/examples/nfacctd-print.conf.example'.
INFO ( default/core ): waiting for NetFlow/IPFIX data on :::5678
INFO ( foo/print ): cache entries=16411 base cache memory=56322552 bytes
WARN ( foo/print ): no print_output_file and no print_output_lock_file
defined.
INFO ( foo/print ): *** Purging cache - START (PID: 21926) ***
CLASS SRC_IP
DST_IP SRC_PORT  DST_PORT
PROTOCOL    PACKETS   BYTES
NetFlow   172.24.110.104
172.24.109.247 41900 2055
udp 26 1576253010996
NetFlow   172.24.110.104
172.24.109.247 58131 2055
udp 21    1576253008620
INFO ( foo/print ): *** Purging cache - END (PID: 21926, QN: 2/2, ET: 
0) ***

^CINFO ( foo/print ): *** Purging cache - START (PID: 21559) ***
INFO ( foo/print ): *** Purging cache - END (PID: 21559, QN: 0/0, ET: 
X) ***

INFO ( default/core ): OK, Exiting ...

Now the output when using the same .pcap file and the 64bit version of 
pmacctd


sudo /root/pmacctd-176 -f ./mypaolo.conf -I 
v1.7.5_v9_ndpi_class_paolo.pcap

INFO ( default/core ): Promiscuous Mode Accounting Daemon, pmacctd
1.7.6-git (20200623-00)
INFO ( default/core ):  '--enable-ndpi'
'--with-ndpi-static-lib=/usr/local/lib/' '--enable-l2'
'--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
'--enable-st-bins'
INFO ( default/core ): Reading configuration file
'/var/lib/pgsql/sclark/mypaolo.conf'.
INFO ( p4p1/nfprobe ): NetFlow probe plugin is originally based on
softflowd 0.9.7 software, Copyright 2002 Damien Miller 
All rights reserved.
INFO ( default/core ): PCAP capture file, sleeping for 2 seconds
INFO ( p4p1/nfprobe ):   TCP timeout: 3600s
INFO ( p4p1/nfprobe ):  TCP post-RST timeout: 120s

Re: [pmacct-discussion] master - ndpi on 32bit CentOS 6

2020-07-09 Thread Paolo Lucente


Hi Steve,

I do have avail of a i686-based VM. I can't say everything is tested on 
i686 but i tend to check every now and then that nothing fundamental is 
broken. I took the example config you used, compiled master code with 
the same config switches as you did (essentially --enable-ndpi) and had 
no joy reproducing the issue.


You could send me privately your capture and i may try with that one 
(although i am not highly positive it will be a successful test); or you 
could arrange me access to your box to read the pcap. Let me know.


Paolo

On 09/07/2020 14:54, Steve Clark wrote:

Hi Paolo,

I have compiled master with nDPI on both 32bit and 64bit CentOS 6 
systems. The 64 bit pmacctd seems
to work fine. But I get bogus byte counts when I run the 32bit version 
against the same pcap file.


Just wondered if you have done any testing on a 32bit Intel system with 
the above combination.


below is the output when using 32bit pmacctd - first the pmacctd 
invocation then the nfacctd output

pmacct/src/pmacctd -f ./mypaolo.conf -I v1.7.5_v9_ndpi_class_paolo.pcap
INFO ( default/core ): Promiscuous Mode Accounting Daemon, pmacctd 
1.7.6-git (20200707-01)
INFO ( default/core ):  '--enable-ndpi' 
'--with-ndpi-static-lib=/usr/local/lib/' '--enable-l2' 
'--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins' 
'--enable-st-bins'
INFO ( default/core ): Reading configuration file 
'/var/lib/pgsql/sclark/mypaolo.conf'.
INFO ( p4p1/nfprobe ): NetFlow probe plugin is originally based on 
softflowd 0.9.7 software, Copyright 2002 Damien Miller  
All rights reserved.

INFO ( p4p1/nfprobe ):   TCP timeout: 3600s
INFO ( p4p1/nfprobe ):  TCP post-RST timeout: 120s
INFO ( p4p1/nfprobe ):  TCP post-FIN timeout: 300s
INFO ( p4p1/nfprobe ):   UDP timeout: 300s
INFO ( p4p1/nfprobe ):  ICMP timeout: 300s
INFO ( p4p1/nfprobe ):   General timeout: 3600s
INFO ( p4p1/nfprobe ):  Maximum lifetime: 604800s
INFO ( p4p1/nfprobe ):   Expiry interval: 60s
INFO ( default/core ): PCAP capture file, sleeping for 2 seconds
INFO ( p4p1/nfprobe ): Exporting flows to [172.24.109.157]:rrac
WARN ( p4p1/nfprobe ): Shutting down on user request.
INFO ( default/core ): OK, Exiting ...

src/nfacctd -f examples/nfacctd-print.conf.example
INFO ( default/core ): NetFlow Accounting Daemon, nfacctd 1.7.6-git 
(20200623-00)
INFO ( default/core ):  '--enable-ndpi' 
'--with-ndpi-static-lib=/usr/local/lib/' '--enable-l2' 
'--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins' 
'--enable-st-bins'
INFO ( default/core ): Reading configuration file 
'/var/lib/pgsql/sclark/pmacct/examples/nfacctd-print.conf.example'.

INFO ( default/core ): waiting for NetFlow/IPFIX data on :::5678
INFO ( foo/print ): cache entries=16411 base cache memory=56322552 bytes
WARN ( foo/print ): no print_output_file and no print_output_lock_file 
defined.

INFO ( foo/print ): *** Purging cache - START (PID: 21926) ***
CLASS SRC_IP 
DST_IP SRC_PORT  DST_PORT  
PROTOCOL    PACKETS   BYTES
NetFlow   172.24.110.104 
172.24.109.247 41900 2055  
udp 26 1576253010996
NetFlow   172.24.110.104 
172.24.109.247 58131 2055  
udp 21    1576253008620

INFO ( foo/print ): *** Purging cache - END (PID: 21926, QN: 2/2, ET: 0) ***
^CINFO ( foo/print ): *** Purging cache - START (PID: 21559) ***
INFO ( foo/print ): *** Purging cache - END (PID: 21559, QN: 0/0, ET: X) ***
INFO ( default/core ): OK, Exiting ...

Now the output when using the same .pcap file and the 64bit version of pmacctd

sudo /root/pmacctd-176 -f ./mypaolo.conf -I v1.7.5_v9_ndpi_class_paolo.pcap
INFO ( default/core ): Promiscuous Mode Accounting Daemon, pmacctd 
1.7.6-git (20200623-00)
INFO ( default/core ):  '--enable-ndpi' 
'--with-ndpi-static-lib=/usr/local/lib/' '--enable-l2' 
'--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins' 
'--enable-st-bins'
INFO ( default/core ): Reading configuration file 
'/var/lib/pgsql/sclark/mypaolo.conf'.
INFO ( p4p1/nfprobe ): NetFlow probe plugin is originally based on 
softflowd 0.9.7 software, Copyright 2002 Damien Miller  
All rights reserved.

INFO ( default/core ): PCAP capture file, sleeping for 2 seconds
INFO ( p4p1/nfprobe ):   TCP timeout: 3600s
INFO ( p4p1/nfprobe ):  TCP post-RST timeout: 120s
INFO ( p4p1/nfprobe ):  TCP post-FIN timeout: 300s
INFO ( p4p1/nfprobe ):   UDP timeout: 300s
INFO ( p4p1/nfprobe ):  ICMP timeout: 300s
INFO ( p4p1/nfprobe ):   General timeout: 3600s
INFO ( p4p1/nfprobe ):  Maximum lifetime: 604800s
INFO ( p4p1/nfprobe ):   Expiry interval: 60s
INFO ( p4p1/nfprobe ): Exporting flows to [172.24.109.157]:rrac
WARN ( p4p1/nfprobe ): Shutting down on user request.
INFO ( default/core 

Re: [pmacct-discussion] 1.7.5 with static ndpi

2020-06-24 Thread Paolo Lucente


Hi Steve,

Apart from asking the obvious - personal curiosity! - why do you want to
link against a static nDPI library? There are a couple of main avenues i
can point you to depending on your goal:

1) You can supply configure with a --with-ndpi-static-lib knob (see the
sketch after point 2); if the static lib and the dynamic lib are in
different places, you should be game. Even simplifying further: should you
make the 'shared object' library disappear, then things will be forced onto
the static library;

2) did you see the "pmacct & Docker" email that just circulated on
the list? If you are in search of a static library, perhaps it is time to
look into a container instead? :-D
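
For reference, a hypothetical configure line for avenue #1 (the library path
is only an example):

./configure --enable-ndpi --with-ndpi-static-lib=/usr/local/lib/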

Paolo 

On Tue, Jun 23, 2020 at 01:44:32PM -0400, Stephen Clark wrote:
> Hello,
> 
> Can anyone give the magic configuration items I need to build using a static
> libndpi.a
> 
> I have spend all day trying to do this without any success. It seem like I
> tried every combination
> that ./configure --help displays.
> 
> Any help would be appreciated.
> 
> Thanks,
> Steve
> 
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


[pmacct-discussion] pmacct & Docker

2020-06-24 Thread Paolo Lucente


Dears,

A brief email to say that, thanks to the monumental efforts of Marc Sune
and Claudio Ortega, we could bring pmacct a bit closer to the Docker
universe. As of today we are shipping official pmacct containers on
Docker Hub ( https://hub.docker.com/u/pmacct ) organized as follows:

* A special container, the base container, that is the basis of the rest of
the containers, with all pmacct daemons installed and bash as the entry point.
It can be useful for debugging and for creating your own customized Docker image.

* One container per daemon (pmacctd, nfacctd, sfacctd, uacctd, pmbgpd,
pmbmpd, pmtelemetryd) where the entry point is the daemon itself and a
config file is expected in /etc/pmacct . For more info you can read the
'How to use it' section of the description on Docker Hub (ie.
https://hub.docker.com/r/pmacct/nfacctd ).

Three tags are being offered:

* latest: latest stable image of that container
* vX.Y.Z: version specific tag
* bleeding-edge: only for the brave. Latest commit on master
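
For example, a hypothetical way to run the nfacctd container (the exact
config file name expected inside the container is documented in the 'How to
use it' section mentioned above; the paths and port here are assumptions):

docker pull pmacct/nfacctd:latest
docker run -v /path/to/nfacctd.conf:/etc/pmacct/nfacctd.conf -p 2055:2055/udp pmacct/nfacctd:latest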

We also created a docker-doct...@pmacct.net email address which is going
to be used for maintenance and development. Should you have any
comments, questions, critiques or bug reports, please write us there. Marc
and myself will be reading. We are eager to hear the good and the bad from you. 

Finally, although fragmentation is not always avoidable, in an effort to
prevent confusion among users, if you had your Dockerfile published on,
say, GitHub or Docker Hub we would much appreciate if you could make it
explicit / clear that it is an unofficial effort. You are very welcome
to join effort with us if you have an interest in pmacct & Docker!

Regards,
Paolo 

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


[pmacct-discussion] pmacct 1.7.5 released !

2020-06-17 Thread Paolo Lucente
VERSION.
1.7.5


DESCRIPTION.
pmacct is a small set of multi-purpose passive network monitoring tools. It
can account, classify, aggregate, replicate and export forwarding-plane data,
ie. IPv4 and IPv6 traffic; collect and correlate control-plane data via BGP
and BMP; collect and correlate RPKI data; collect infrastructure data via
Streaming Telemetry. Each component works both as a standalone daemon and
as a thread of execution for correlation purposes (ie. enrich NetFlow with
BGP data).

A pluggable architecture allows storing collected forwarding-plane data into
memory tables, RDBMS (MySQL, PostgreSQL, SQLite), noSQL databases (MongoDB,
BerkeleyDB), AMQP (RabbitMQ) and Kafka message exchanges and flat-files.
pmacct offers customizable historical data breakdown, data enrichments like
BGP and IGP correlation and GeoIP lookups, filtering, tagging and triggers.
Libpcap, Linux Netlink/NFLOG, sFlow v2/v4/v5, NetFlow v5/v8/v9 and IPFIX are
all supported as inputs for forwarding-plane data. Replication of incoming
NetFlow, IPFIX and sFlow datagrams is also available. Statistics can be
easily exported to time-series databases like ElasticSearch and InfluxDB and
traditional tools like Cacti, RRDtool, MRTG, Net-SNMP, GNUplot, etc.

Control-plane and infrastructure data, collected via BGP, BMP and Streaming
Telemetry, can be all logged real-time or dumped at regular time intervals
to AMQP (RabbitMQ) and Kafka message exchanges and flat-files.


HOMEPAGE.
http://www.pmacct.net/


DOWNLOAD.
http://www.pmacct.net/pmacct-1.7.5.tar.gz


CHANGELOG.
+ pmacct & Redis: pmacct daemons can now connect to a Redis cache.
  The main use-case currenly covered is: registering every stable
  daemon component in a table so to have, when running a cluster
  comprising several daemons / components, an olistic view of what
  is currently running and where; shall a component stop running
  or crash it will disappear from the inventory.
+ BMP daemon: as part of the IETF 107 vHackathon, preliminary support
  for draft-xu-grow-bmp-route-policy-attr-trace and draft-lucente-
  grow-bmp-tlv-ebit was introduced. Also added support for the Peer
  Distinguisher field in the BMP Per-Peer Header.
+ BMP daemon: added support for reading from savefiles in libpcap
  format (pcap_savefile, pcap_savefile_delay, pcap_savefile_replay,
  pcap_filter) as an alternative to the use of bmp_play.py.
+ BMP daemon: re-worked, improved and generalized support for TLVs
  at the end of BMP messages. In this context, unknown Stats data
  is handled as a generic TLV. 
+ BMP daemon: added SO_KEEPALIVE TCP socket option (ie. to keep the
  sessions alive via a firewall / NAT kind of device). Thanks to
  Jared Mauch ( @jaredmauch ) for his patch. 
+ nfacctd, nfprobe plugin: added usec timestamp resolution to IPFIX
  collector and export via IEs #154, #155. For export, this can be
  configured via the new nfprobe_tstamp_usec knob.
+ nfacctd: new nfacctd_templates_receiver and nfacctd_templates_port
  config directives allow respectively to specify a destination
  where to copy NetFlow v9/IPFIX templates to and a port where to
  listen for templates from. If nfacctd_templates_receiver points to
  a replicator and the replicator exports to nfacctd_templates_port
  of a set of collectors then, for example, it gets possible to share
  templates among collectors in a cluster for the purpose of seamless
  scale-out.
+ pmtelemetryd: in addition to existing TCP, UDP and ZeroMQ inputs,
  the daemon can now read Streaming Telemetry data in JSON format
  from a Kafka broker (telemetry_daemon_kafka_* config knobs).
+ pmgrpcd.py: Use of multiple processes for the Kafka Avro exporter
  to leverage the potential of multi-core/processors architectures.
  Code is from Raphael P. Barazzutti ( @rbarazzutti ).
+ pmgrpcd.py: added -F / --no-flatten command-line option to disable
  object flattening (default true for backward compatibility); also
  export to a Kafka broker for (flattened) JSON objects was added (in
  addition to existing export to ZeroMQ).
+ nDPI: introduced support for nDPI 3.2 and dropped support for all
  earlier versions of the library due to changes to the API.
+ Docker: embraced the technology for CI purposes; added a docker/
  directory in the file distribution where Dockerfile and scripts to
  build pmacct and dependencies are shared. Thanks to Claudio Ortega
  ( @claudio-ortega ) for contributing his excellent work in the area.
! fix, pmacctd: pcap_setdirection() enabled and moved to the right
  place in code. Libpcap tested for function presence. Thanks to
  Mikhail Sennikovsky for his patch.
! fix, pmacctd: SEGV has been detected if passing messages with an
  unsupported link layer. 
! fix, uacctd: handle non-ethernet packets correctly. Use mac_len = 0
  for non-ethernet packets in which case a zeroed ethernet header is
  used. Thanks to @aleksandrgilfanov for his patch.
! fix, BGP daemon: improved handling of withdrawals for label-unicast
  and mpls-vpn NLRIs.
! fix, BGP 

Re: [pmacct-discussion] networks_file reload

2020-06-08 Thread Paolo Lucente


Hi Olaf,

To confirm: the file is reloaded. Unfortunately all log messages involved in
loading up a networks_file are related to errors, warnings and debug; no
info message to say that simply all went well. So i just added one as an
action item for the issue you raised:

https://github.com/pmacct/pmacct/commit/5f4c424f86d20821b4c028d9d180aa506f76

Now you can see the file is loaded upon startup and also upon sending a
SIGUSR2 to the process(es). Thank you!
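
For reference, that is the same mechanism you used, ie. something like:

pkill -SIGUSR2 nfacctd

With the commit above, an info log line will now confirm the (re)load.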

Paolo

On Fri, Jun 05, 2020 at 11:16:19AM +0100, Olaf de Bree wrote:
> Hi all,
> 
> hoping someone can help.
> 
> I am using networks_file to map ASNs to prefixes under nfacctd version 1.7.5
> 
> The pmacct documentation suggests under the maps_refresh directive that
> the networks_file is reloadable via -SIGUSR2 but when I issue a "pkill
> -SIGUSR2 nfacctd" while running debug I see evidence that pre_tag_map is
> reloaded in the logs but not the networks_file.
> 
> Is the networks_file silently reloaded with no log? or could this be a bug?
> 
> Thanks in advance
> Olaf

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] pmacctd and src_std_comm aggregation

2020-05-26 Thread Paolo Lucente

Ciao Simone,

You see, that is the thing with the example you proposed: it is very
likely that one of the two paths is not in the RIB (like you said) so
you can't really make 'suggestions', you have to force it through a
static mapping; i do see people being fine with that for the Peer
Source ASN because in majority of cases (not all, ie. some remote
peerings may escape this scheme depending how they are built) you may
map it 1:1 to an interface, a VLAN or a MAC address. But being the
communities bound to a NLRI, anything static does not really scale (and
that is the main reason the knob is not there today).

So your alternate idea, to make the path visible via ADD-PATH: sure,
that would actually be a nice test to use your lab for. A possible
logics could be (again as you suggested): 1) a bgp_peer_src_as_map
compiled as you do today + bgp_nexthop (key already supported so zero
work there), to associate ASNs, the expected next-hop and interface /
VLAN / MAC address; 2) have a BGP session with ADD-PATH so that the
multiple paths remain visible (also here zero work to do) and 3) use the
BGP next-hop info in the bgp_peer_src_as_map as selector in the ADD-PATH
vector (also this is logics already existing BUT NOT in conjunction with
a BGP next-hop fed by a bgp_peer_src_as_map or, let me re-phrase, at
least this is untested / uncharted territory). 
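
Just to make the above a bit more concrete, a purely hypothetical
bgp_peer_src_as_map entry along the lines of point 1 (all values are made up;
point 3, ie. using this BGP next-hop as the selector in the ADD-PATH vector,
is the untested part):

id=100 bgp_nexthop=192.0.2.1 src_mac=00:11:22:33:44:55 ip=0.0.0.0/0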

Sounds like fun. Shall we move to unicast email for lab access and
arranging all the different pieces? Since it's not urgent, doing some
spare-time work on it, i guess we can converge on this in a week or a
couple.

Paolo

On Mon, May 25, 2020 at 06:21:56PM +0200, Simone Ricci wrote:
> Ciao Paolo,
> 
> > Il giorno 25 mag 2020, alle ore 16:03, Paolo Lucente  ha 
> > scritto:
> > 
> > Ciao Simone,
> > 
> > If i got it correct you are after static mapping of communities to input
> > traffic - given an input interface / vlan or an ingress router or a
> > source MAC address.
> 
> Yes, just to be clear imagine this scenario:
> 
> route 192.0.2.0/24, originated by AS1000, coming in from two upstreams 
> (AS100, AS200) announced as follows:
> 
> 192.0.2.0/24 100 500 1000 (100:100)
> 192.0.2.0/24 200 1000 (200:100)
> 
> Obviously the path via AS200 will be the best one, so pmacctd always attaches 
> community 200:100 to inbound traffic…even if it enters via AS100 (it can 
> discriminate the peer src AS thanks to the relevant map)
> 
> > It seems doable, like you said, adding a machinery
> > like it exists for the source peer ASN.
> 
> If I understand correctly, in nfacctd/sfacctd the association is done looking 
> at the BGP next-hop attribute; maybe it’s possible to sort it out just by 
> “extending” the existing map to let the user “suggest” that traffic matching 
> a relevant filter has a specific bgp next-hop as well as a peer src AS…but 
> I’m just thinking out loud here.
> 
> > I'd have one question for you:
> > 
> > How would the 'output' look like: one single community or a list of
> > communities (this may make less sense but still i'd like to double-check
> > with you)?
> 
> Regarding the format, the actual output will be OK (single string, 
> communities separated by _), as I push everything via AMQP and parsing gets 
> done higher up the stack; it’s not a bad thing, when considering that not all 
> databases are going to accept arrays of objects and that transformation is 
> easily supported by a lot of tooling (be it logstash, telegraf, fluentd…)
> 
> > I guess you may be interested in either standard or large
> > communities but not extended, true? And, if true, would you have any
> > preferences among the two? Perhaps the standard ones since you mention
> > 'src_std_comm’?
> 
> At the moment the support for the standard ones will suffice, for me
> 
> > It's not a biggie and i guess i can converge on this relatively soon;
> > can you confirm your priority / urgency? 
> 
> Oh it’s not urgent at all, but it would be a very nice-to-have feature which 
> helps getting a lot of interesting insights.
> Just one thing: as you may remember I’ve got a nice testing environment that 
> you’re welcome to use if it helps.
> 
> Thank you!
> Simone.
> 

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] pmacctd and src_std_comm aggregation

2020-05-25 Thread Paolo Lucente

Ciao Simone,

If i got it correct you are after static mapping of communities to input
traffic - given an input interface / vlan or an ingress router or a
source MAC address. It seems doable, like you said, adding a machinery
like it exists for the source peer ASN. I'd have one question for you:

How would the 'output' look like: one single community or a list of
communities (this may make less sense but still i'd like to double-check
with you)? I guess you may be interested in either standard or large
communities but not extended, true? And, if true, would you have any
preferences among the two? Perhaps the standard ones since you mention
'src_std_comm'?

It's not a biggie and i guess i can converge on this relatively soon;
can you confirm your priority / urgency? 

Paolo

On Sun, May 24, 2020 at 02:09:24PM +0200, Simone Ricci wrote:
> Good Evening,
> 
> I’m trying to configure pmacctd to aggregate inbound traffic by these 
> primitives:
> 
> peer_src_as, src_as, src_std_comm
> 
> The goal is to see if traffic from certain networks announced by carrier X 
> with specific communities comes in from X or another hypothetical path (in 
> that case the communities are not relevant, but that’s another story).
> 
> My configuration is the following: machine running pmacctd sees the traffic 
> thru 2 NICs, connected to SPAN port on core switches (where the carriers are 
> linked); only inbound traffic is presented. I also setup a bgp peering with 
> the border router, and enabled ADD-PATH capability on the session.
> 
> The setup seems to work, the problem being that the community list always 
> refers to the bgp best path; digging thru the documentation I see that in the 
> ADD-PATH case, the method to select the relevant entry is looking at the 
> bgp_next_hop of the flow…but I think that's actually applicable only to 
> netflow/sflow collectors, right? I were wondering if it’s possible to extend 
> bgp_peer_src_as_map to set the relevant information, so that every flow will 
> have the community field populated by leveraging the same mechanics actually 
> used to populate the peer_src_as field.
> 
> Thank you
> Simone
> 
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] BGP correlation not working with nfacctd, all BGP set to 0

2020-05-19 Thread Paolo Lucente


Hi Wilfrid,

"we already capture all the flows matching the different rd because
of our netflow setup.". Although i may appreciate if you could elaborate
more on your netflow setup (it makes the exercise less a treasure hunt
for me), I am sure you do: can you paste me the content of one of your
flows with any indication that would point a collector, ie. a RD field,
to the right VPN RIB? You know, we need a linking pin - it that's there
(i hint it is not) then we are all good. You can take a capture of your
flows with tcpdump and inspect them conveniently with Wireshark. 

So the exercise for you is the following, take this record for example:

{
   "seq": 3,
   "timestamp": "2020-05-19 07:15:00",
   "peer_ip_src": "w.x.y.z",
   "ip_prefix": "a.b.c.d/27",
   "rd": "0:ASN:900290024",
   "label": "63455"
}

We need to match . peer_ip_src is the IP of
the device exporting NetFlow, easy, check. ip_prefix is contained in the
NetFlow record, easy, check. Where is 0:ASN:900290024 being mentioned in
the flow? Hint, hint: nowhere, and you need to help the collector
derive this information with a flow_to_rd_map.
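
For reference, a hypothetical flow_to_rd_map entry (the RD, exporter address
and input ifindex are all made up; see examples/flow_to_rd.map.example in the
release for the full syntax):

id=0:65512:900290024 ip=w.x.y.z in=100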

Paolo

On Tue, May 19, 2020 at 03:21:53PM +, Grassot, Wilfrid wrote:
> Hi Paolo,
> 
> Unless I misunderstand the flow_to_rd_map, but this one would not help in
> our case.
> Indeed we already capture all the flows matching the different rd because
> of our netflow setup.
> nfacctd already receives only flows from the specific RDs involved in the
> monitored L3VPN .
> 
> My concern is about the correlation of a flow to a src_as, dst_as,
> dst_peer... retrieved from the captured BGP RIB  of this L3VPN and dumped
> to  bgp-$peer_src_ip-%H%M.log,
> 
> Would you please confirm that the flow, BGP correlation can only work only
> if router only advertise to the pmbgpd the best path
> 
> In other words, would you please confirm that our setup is not supported
> because for each prefixes there are not a unique entry in the captured BGP
> RIB, but at least 2 or 3 entries (no best path selection at the
> route-reflector because each vpnv4 address are seen as unique because of
> the different RDs involved) ?
> 
> Please see below an example of the pmbgpd dump file:
> 
> sudo jq '. | select(."ip_prefix" | contains ("a.b.c.d"))'
> bgp-w_x_y_z-0715.log | more
> {
>   "seq": 3,
>   "timestamp": "2020-05-19 07:15:00",
>   "peer_ip_src": "w.x.y.z",
>   "ip_prefix": "a.b.c.d/27",
>   "rd": "0:ASN:900290024",
>   "label": "63455"
> }
> {
>   "seq": 3,
>   "timestamp": "2020-05-19 07:15:00",
>   "peer_ip_src": " w.x.y.z ",
>   "ip_prefix": "a.b.c.d/27",
>   "rd": "0:ASN:911790015",
>   "label": "49061"
> }
> {
>   "seq": 3,
>   "timestamp": "2020-05-19 07:15:00",
>   "peer_ip_src": " w.x.y.z ",
>   "ip_prefix": "a.b.c.d/27",
>   "rd": "0:ASN:911790023",
>   "label": "49059"
> }
> 
> Thank you
> Wilfrid
> 
> -Original Message-
> From: Paolo Lucente 
> Sent: Tuesday, 19 May 2020 16:01
> To: Grassot, Wilfrid 
> Cc: pmacct-discussion@pmacct.net
> Subject: Re: [pmacct-discussion] BGP correlation not working with nfacctd,
> all BGP set to 0
> 
> 
> Hi Wilfrid,
> 
> This is very possibly point #1 of my previous email. The need for a
> flow_to_rd_map to associate flows to the right RD. You can find some
> examples here on how to compose it:
> 
> https://github.com/pmacct/pmacct/blob/1.7.5/examples/flow_to_rd.map.exampl
> e
> 
> Paolo
> 
> On Tue, May 19, 2020 at 08:17:44AM +, Grassot, Wilfrid wrote:
> > Hi Paolo,
> >
> > Could the issue be that correlation does not work because for each
> > "ip_prefix" there is not one, but two or three routes collected by
> > pmbgpd ?
> > Indeed because of redundancies, each prefixes are received by several
> > different routers in our network and by design each of the routers use
> > different route distinguisher (rd).
> > Hence the pmbgpd does not receive a unique route corresponding to best
> > path selected by the route-reflector, but the two or three different
> > vpnv4 addresses (rd:a.b.c.d) corresponding to ip_prefix = a.b.c.d ?
> >
> > Wilfrid
> >
> >
> > -Original Message-
> > From: Grassot, Wilfrid 
> > Sent: Monday, 18 May 2020 17:05
> > To: Paolo Lucente ; 

Re: [pmacct-discussion] help configuration cisco 4948E-F netflow-lite

2020-05-19 Thread Paolo Lucente

Hi Ionut,

Thanks for getting in touch with this.

From the log file you sent apparently the switch sends element #104
(layer2packetSectionData) to include portion of the sampled frame.
Unfortunately such element has been "deprecated in favor of 315
dataLinkFrameSection. Layer 2 packet section data." according to
IANA. Element #315, implemented by some Nexus-family kits, is instead
supported by pmacct just fine.

The above is just to say that implementing element #104 should be
relatively straightforward: pretty much handle it the same as #315, which is
already implemented; we are talking about adding a couple of lines of code.

Can you send me privately a sample of your data to see if the theory
holds and we are not set off by any pesky details? 

Paolo
 
On Tue, May 19, 2020 at 04:45:34PM +0300, Ionuț Bîru wrote:
> Hi guys,
> 
> I'm struggling a bit to collect netflow-v9 lite from this particular device.
> 
> cisco configuration: https://paste.xinu.at/QLW0j/
> nfacctd config: https://paste.xinu.at/bnKHc3/
> nfacctd -f netflow.conf -d log: https://paste.xinu.at/oaJ/
> 
> pmacct -s doesn't have any information, is like nfacctd doesn't receive any
> information from cisco related to src_ip and so on.
> 
> Is somebody that managed to collect flow information using netflow-lite?

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] BGP correlation not working with nfacctd, all BGP set to 0

2020-05-19 Thread Paolo Lucente


Hi Wilfrid,

This is very possibly point #1 of my previous email. The need for a
flow_to_rd_map to associate flows to the right RD. You can find some
examples here on how to compose it:

https://github.com/pmacct/pmacct/blob/1.7.5/examples/flow_to_rd.map.example

Paolo 

On Tue, May 19, 2020 at 08:17:44AM +, Grassot, Wilfrid wrote:
> Hi Paolo,
> 
> Could the issue be that correlation does not work because for each
> "ip_prefix" there is not one, but two or three routes collected by pmbgpd
> ?
> Indeed because of redundancies, each prefixes are received by several
> different routers in our network and by design each of the routers use
> different route distinguisher (rd).
> Hence the pmbgpd does not receive a unique route corresponding to best
> path selected by the route-reflector, but the two or three different vpnv4
> addresses (rd:a.b.c.d) corresponding to ip_prefix = a.b.c.d ?
> 
> Wilfrid
> 
> 
> -Original Message-
> From: Grassot, Wilfrid 
> Sent: Monday, 18 May 2020 17:05
> To: Paolo Lucente ; pmacct-discussion@pmacct.net
> Subject: RE: [pmacct-discussion] BGP correlation not working with nfacctd,
> all BGP set to 0
> 
> Hi Paolo,
> 
> Thank you for your answer.
> 
> My bad in the description of the issue:
> w.x.y.z is indeed the ipv4 address of the router loop0 which is also its
> router-id.
> 
> Currently our setup is to iBGP peer with the router (router-id w.x.y.z) at
> the address-family vpnv4.
> We already filter out using route-target on the router for nfacctd  to
> receive only ipv4 routes from the monitored L3VPN.
> So the BGP daemon is only collecting routes of the monitored L3VPN
> 
> On nfacctd collector we also receive only the netflow from routers
> interfaces configured on this vrf.
> If I manually make the correlation of the captured netflow, I can see in
> the BGP dump files the corresponding src_as, dest_as, peer_dst_ip
> 
> So netflow and BGP are fine and bgp_agent_map file is  bgp_ip=w.x.y.z.
> ip=0.0.0.0/0 where w.x.y.z is the loopback0 (router-id) of the router,
> and nfacctd is peering with it (sorry again for the mishap).
> 
> I use the latest pmacctd 1.7.4 and I compile with ./configure
> --enable-jansson  (--enable-threads is not available)
> 
> And yes our network is a confederation of 6 sub_as.
> 
> Thank you
> 
> Wilfrid Grassot
> 
> 
> 
> 
> 
> 
> -Original Message-
> From: Paolo Lucente 
> Sent: Monday, 18 May 2020 16:30
> To: pmacct-discussion@pmacct.net; Grassot, Wilfrid
> 
> Subject: Re: [pmacct-discussion] BGP correlation not working with nfacctd,
> all BGP set to 0
> 
> 
> Hi Wilfrid,
> 
> Thanks for getting in touch. A couple of notes:
> 
> 1) if you are sending vpnv4 routes - and if that is a requirement - then
> you will need a flow_to_rd_map to map flows to the right VPN (maybe basing
> on the input interface at the ingress router? just an idea);
> 
> 2) Confederations always do add up to the fun :-) I may not have the
> complete info at the moment in order to comment further on this;
> 
> 3) bgp_ip in the bgp_agent_map may have been set incorrectly; in the
> comment you say "where w.x.y.z is the IP address of the nfacctd collector"
> but, according to docs, it should be set to the "IPv4/IPv6 session address
> or Router ID of the BGP peer.".
> 
> You may start working on #1 and #3. Probably more info is needed for #2
> and for this reason I suggest that, if things do not just work out at this
> round, we move the conversation to unicast email.
> 
> Paolo
> 
> 
> On 17/05/2020 16:24, Grassot, Wilfrid wrote:
> > Good afternoon
> >
> > I cannot have my netflow augmented with bgp data (src_as, dst_as,
> > peer_dst_ip…) all of the BGP data stay 0 or are empty
> >
> > An output of the csv file is:
> >
> > 0,0,63.218.164.15,,62.140.128.166,220.206.187.242,2123,2123,udp,1,40
> >
> > Where 0,0 are the missing src_as, dst_as  and , , is the missing
> > peer_dst_ip
> >
> > I try to monitor traffic of a L3VPN by having all routers sending
> > netflow to nfacctd and augment them with BGP data.
> >
> > The nfacctd collector peers with the route-reflector on address-family
> > vpnv4.
> >
> > _Please mind the network is a confederation network with sub-as_
> >
> > __
> >
> > I cannot figure out what is wrong
> >
> > __
> >
> > BGP session is up,
> >
> > bgp_table_dump_file collects properly all routes from the vrf
> >
> > netflow is properly collected by nfacctd
> >
> > But all aggregate values that should augment the data stay at zero for
> > the 

Re: [pmacct-discussion] BGP correlation not working with nfacctd, all BGP set to 0

2020-05-18 Thread Paolo Lucente



Hi Wilfrid,

Thanks for getting in touch. A couple of notes:

1) if you are sending vpnv4 routes - and if that is a requirement - then 
you will need a flow_to_rd_map to map flows to the right VPN (maybe 
basing on the input interface at the ingress router? just an idea);


2) Confederations always do add up to the fun :-) I may not have the 
complete info at the moment in order to comment further on this;


3) bgp_ip in the bgp_agent_map may have been set incorrectly; in the 
comment you say "where w.x.y.z is the IP address of the nfacctd 
collector" but, according to docs, it should be set to the "IPv4/IPv6 
session address or Router ID of the BGP peer.".
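
For reference, a hypothetical entry along the lines of #3, where w.x.y.z is
the Router ID / session address of the router (not of the collector):

bgp_ip=w.x.y.z ip=0.0.0.0/0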


You may start working on #1 and #3. Probably more info is needed for #2 
and for this reason I suggest that, if things do not just work out at 
this round, we move the conversation to unicast email.


Paolo


On 17/05/2020 16:24, Grassot, Wilfrid wrote:

Good afternoon

I cannot have my netflow augmented with bgp data (src_as, dst_as, 
peer_dst_ip…); all of the BGP data stays at 0 or is empty


An output of the csv file is:

0,0,63.218.164.15,,62.140.128.166,220.206.187.242,2123,2123,udp,1,40

Where 0,0 are the missing src_as, dst_as  and , , is the missing peer_dst_ip

I try to monitor traffic of a L3VPN by having all routers sending 
netflow to nfacctd and augment them with BGP data.


The nfacctd collector peers with the route-reflector on address-family 
vpnv4.


_Please mind the network is a confederation network with sub-as_

__

I cannot figure out what is wrong

__

BGP session is up,

bgp_table_dump_file collects properly all routes from the vrf

netflow is properly collected by nfacctd

But all aggregate values that should augment the data stay at zero for 
the AS, or empty like peer_dst_ip


My bgp_agent_map file has the below entry

bgp_ip=w.x.y.z.   ip=0.0.0.0/0     where w.x.y.z is the IP address of 
the nfacctd collector


my nfacctd config file is:

daemonize: false

debug: true

bgp_peer_as_skip_subas: true

bgp_src_std_comm_type: bgp

bgp_src_ext_comm_type: bgp

bgp_src_as_path_type: bgp

bgp_agent_map: /usr/local/etc/pmacct/map.txt

nfacctd_as_new: bgp

nfacctd_net: bgp

nfacctd_as: bgp

nfacctd_port: 2055

nfacctd_templates_file: /usr/local/etc/pmacct/nfacctd-template.txt

nfacctd_time_new: true

plugin_buffer_size: 70240

plugin_pipe_size: 2024000

bgp_daemon: true

bgp_daemon_ip: w.x.y.z

bgp_daemon_id: w.x.y.z

bgp_daemon_max_peers: 100

bgp_table_dump_file: /var/spool/bgp-$peer_src_ip-%H%M.log

plugins: print

print_output_file: /var/spool/plugin.log

print_output_file_append: true

print_refresh_time: 3

print_output: cvs

aggregate: proto, src_host, src_port, dst_host, dst_port, src_as, 
dst_as, peer_src_ip, peer_dst_ip


Thank you in advance

Wilfrid


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


[pmacct-discussion] pmacct 1.7.5 code freeze

2020-05-10 Thread Paolo Lucente


Dears,

pmacct 1.7.5 has entered code freeze today with the outlook of having the
official release wrapped up in approx one month. The code has been
branched out on GitHub:

https://github.com/pmacct/pmacct/tree/1.7.5

Code freeze means that until release time only capital bug fixes will be
committed to this code branch. To allow us all to benefit from an improved
quality of released code, i encourage everybody to test this code (should
you have a non-production environment available) and report any issues
you may stumble upon.

To clone code in a specific branch you need to use the -b knob, ie.:

git clone -b 1.7.5 https://github.com/pmacct/pmacct.git

Regards,
Paolo


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Tracking ingress throughput

2020-04-30 Thread Paolo Lucente


Hi,

By sending a SIGUSR1 to the daemon you are returned some stats information
in the log. Please see here:

https://github.com/pmacct/pmacct/blob/1.7.4/docs/SIGNALS#L17-#L40
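
For example, something like the below on the box running the daemon (the
process name depends on which daemon you run):

pkill -USR1 nfacctd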

Paolo 

On Wed, Apr 29, 2020 at 10:12:53AM +0530, HEMA CHANDRA YEDDULA wrote:
> 
> Hi paolo,
> 
> Is there any way to track the amount of data pmacct is receiving? Is there 
> any counter for this in the code?
> 
> Any help regarding the query is appreciated.
> 
> Thanks & Regards,
> Hema Chandra

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


[pmacct-discussion] Test

2020-04-23 Thread Paolo Lucente


Please ignore

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists



Re: [pmacct-discussion] BGP attributes are empty for almost all the data

2020-04-17 Thread Paolo Lucente


Hi Alexandre,

Why don't you try to do a dump of routes received by pmacct? Like:

https://github.com/pmacct/pmacct/blob/1.7.4/QUICKSTART#L1780-#L1781

This test may require you to compile pmacct with JSON / Jansson support.
Also, for a test, you could add 'dst_host' to your 'aggregate'
config directive so you can see what comes in via flows (that is, before
pmacct tries to perform the magics with network masking). With all of
this you should have a full view: what you get from BGP, what you get
from flows, what works and what does not work, and perhaps establish a
pattern (if not already finding a cause, ie. a partial BGP view and
such).
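
For example, on top of your existing config, something along these lines
(the path is a placeholder; the bgp_table_dump_* directives you have already,
dst_host is the extra bit for the test):

bgp_table_dump_file: /path/to/bgp-$peer_src_ip-%Y_%m_%dT%H_%M_%S.txt
bgp_table_dump_refresh_time: 300
!
aggregate: dst_net, dst_mask, dst_as, dst_host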

Paolo
 
On Thu, Apr 16, 2020 at 05:17:55PM +0200, alexandre S. wrote:
> Hello,
> 
> I am trying to log sflow data in a sqlite database, aggregated by
> destination AS and prefix.
> 
> Currently I have configured the bgp_deamon plugin, along with the sqlite3
> one to run with sfacctd.
> 
> 
> The problem is that almost all the traffic has an AS set to 0
> and a destination prefix set to 0.0.0.0/0. By looking at the database entries
> I found that a small part of the data is saved with the right values, and I
> can't find out why.
> 
> My guess was that the first hour had wrong values because of the time it
> took to receive all bgp information, so I let the daemon run for a few hours
> but it has happened each time the data was saved.
> 
> The configuration file look like this :
> 
> 
> 
> #daemonize: true
> 
> #debug: true
> #debug_internal_msg: true
> 
> bgp_daemon: true
> bgp_daemon_ip: 127.0.0.1
> bgp_daemon_port: 1180
> bgp_daemon_as: 
> bgp_daemon_max_peers: 1
> bgp_table_dump_file:
> /pmacct-1.7.3/output/bgp-$peer_src_ip-%Y_%m_%dT%H_%M_%S.txt
> bgp_table_dump_refresh_time: 3600
> bgp_agent_map: /pmacct-1.7.3/etc/sfacctd_bgp.map
> 
> plugins: sqlite3[simple]
> 
> sql_db[simple]: /pmacct-1.7.3/output/pmacct.db
> sql_refresh_time[simple]: 3600
> sql_history[simple]: 60m
> sql_history_roundoff[simple]: h
> sql_table[simple]: acct
> sql_table_version[simple]: 9
> 
> aggregate: dst_net, dst_mask, dst_as
> 
> sfacctd_as: bgp
> sfacctd_net: bgp
> sfacctd_port: 2602
> sfacctd_ip: 
> sfacctd_time_new: true
> 
> --
> 
> BGP agent map:
> 
> bgp_ip=127.0.0.1 ip=
> 
> --
> 
> Regards, Alexandre
> 
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Multiples nfacctd deamons writing to same Kafka topic

2020-04-15 Thread Paolo Lucente


Hey Emanuel,

The config is correct and I did try your same config and that does work
for me, ie.:

$ ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic 
pmacct.flows
{"event_type": "purge", "tag": 1, [ .. ]}

What version of the software are you using? Is it 1.7.4p1 (latest
stable) or master code from GitHub? Also, is it possible that an old running
nfacctd process is reading the data instead of the newly configured one?
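
A quick way to check for stragglers (plain shell, nothing pmacct-specific):

$ ps aux | grep [n]facctd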

Paolo 

On Wed, Apr 15, 2020 at 12:17:43AM -0400, Emanuel dos Reis Rodrigues wrote:
> I tried, follow my config:
> 
> kafka_topic: netflow
> kafka_broker_host: 192.168100.105
> kafka_broker_port: 9092
> kafka_refresh_time: 1
> #daemonize: true
> plugins: kafka
> nfacctd_port: 9995
> post_tag: 1
> aggregate: tag, peer_src_ip, src_host, dst_host, timestamp_start,
> timestamp_end, src_port, dst_port, proto
> 
> 
> I kept the peer_src_ip, but the tag one is not being posted to Kafka.
> 
> {'event_type': 'purge', 'peer_ip_src': '172.18.0.2', 'ip_src':
> '192.168.1.100', 'ip_dst': 'x.46.x.245', 'port_src': 51184, 'port_dst':
> 443, 'ip_proto': 'tcp', 'timestamp_start': '2020-04-14 14:15:39.00',
> 'timestamp_end': '2020-04-14 14:15:54.00', 'packets': 5, 'bytes': 260,
> 'writer_id': 'default_kafka/75091'}
> 
> Did I miss anything ?
> 
> 
> Thanks !
> 
> 
> 
> On Tue, Apr 14, 2020 at 10:26 AM Paolo Lucente  wrote:
> 
> >
> > I may have skipped the important detail you need to add the 'tag' key to
> > your 'aggregate' line in the config, my bad. This is in addition to, say,
> > 'post_tag: 1' to identify collector 1. Let me know how it goes.
> >
> > Paolo
> >
> > On Tue, Apr 14, 2020 at 10:18:55AM -0400, Emanuel dos Reis Rodrigues wrote:
> > > Thank you man, I did this test but I did not see the id being pushed
> > along
> > > with the Netflow info to Kafka topic. Is there the place the information
> > > would show up ?
> > >
> > >
> > > On Tue, Apr 14, 2020 at 9:15 AM Paolo Lucente  wrote:
> > >
> > > >
> > > > Hi Emanuel,
> > > >
> > > > Apologies i did not get you wanted and ID for the collector. The
> > > > simplest way of achieving that is 'post_tag' as you just have to supply
> > > > a number as ID; pre_tag_map expects a map and may be better to be
> > > > reserved for more complex use-cases.
> > > >
> > > > Paolo
> > > >
> > > > On Mon, Apr 13, 2020 at 03:35:52PM -0400, Emanuel dos Reis Rodrigues
> > wrote:
> > > > > Thank you for your help. Appreciate it !
> > > > >
> > > > > See, I did use it for testing after I sent this email. However, the
> > ip
> > > > > showed there was the IP from my nfacctd machine, the collector
> > itself.
> > > > Not
> > > > > the exporter.
> > > > >
> > > > > peer_src_ip  : IP address or identificator of
> > > > telemetry
> > > > > exporting device
> > > > >
> > > > > In fact, it may have todo with the fact I currently have an SSH
> > tunnel
> > > > with
> > > > > socat with the remote machine in order to collect the data. This may
> > be
> > > > the
> > > > > reason why which is definitively not a ordinary condition. :)
> > > > >
> > > > > I am wondering if I could use this one to include a different tag on
> > it
> > > > > process/collector, but have not yet figured out how. Any thoughts ?
> > > > >
> > > > > label: String label, ie. as result of
> > > > > pre_tag_map evaluation
> > > > >
> > > > >
> > > > > Thank you again.
> > > > >
> > > > > On Mon, Apr 13, 2020 at 9:07 AM Paolo Lucente 
> > wrote:
> > > > >
> > > > > >
> > > > > > Hi Emanuel,
> > > > > >
> > > > > > I think you are looking for (i admit, non-intuitive) 'peer_src_ip'
> > > > > > primitive:
> > > > > >
> > > > > > $ nfacctd -a | grep peer_src_ip
> > > > > > peer_src_ip  : IP address or identificator of
> > > > > > telemetry exporting device
> > > > > >
> > > > > > Without the grep you can see all supported primitives by the
> > nfacctd
> > > > > > release you are us

Re: [pmacct-discussion] Multiples nfacctd deamons writing to same Kafka topic

2020-04-14 Thread Paolo Lucente


I may have skipped the important detail you need to add the 'tag' key to
your 'aggregate' line in the config, my bad. This is in addition to, say,
'post_tag: 1' to identify collector 1. Let me know how it goes.
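
To make it concrete, a rough sketch of the two pieces together (values are
placeholders to adapt to your setup):

post_tag: 1
aggregate: tag, peer_src_ip, src_host, dst_host, timestamp_start, timestamp_end, src_port, dst_port, proto

With both in place, each record exported to Kafka should carry a "tag": 1
field identifying the collector.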

Paolo

On Tue, Apr 14, 2020 at 10:18:55AM -0400, Emanuel dos Reis Rodrigues wrote:
> Thank you man, I did this test but I did not see the id being pushed along
> with the Netflow info to the Kafka topic. Where would the information
> show up?
> 
> 
> On Tue, Apr 14, 2020 at 9:15 AM Paolo Lucente  wrote:
> 
> >
> > Hi Emanuel,
> >
> > Apologies i did not get you wanted and ID for the collector. The
> > simplest way of achieving that is 'post_tag' as you just have to supply
> > a number as ID; pre_tag_map expects a map and may be better to be
> > reserved for more complex use-cases.
> >
> > Paolo
> >
> > On Mon, Apr 13, 2020 at 03:35:52PM -0400, Emanuel dos Reis Rodrigues wrote:
> > > Thank you for your help. Appreciate it !
> > >
> > > See, I did use it for testing after I sent this email. However, the ip
> > > showed there was the IP from my nfacctd machine, the collector itself.
> > Not
> > > the exporter.
> > >
> > > peer_src_ip  : IP address or identificator of
> > telemetry
> > > exporting device
> > >
> > > In fact, it may have todo with the fact I currently have an SSH tunnel
> > with
> > > socat with the remote machine in order to collect the data. This may be
> > the
> > > reason why which is definitively not a ordinary condition. :)
> > >
> > > I am wondering if I could use this one to include a different tag on it
> > > process/collector, but have not yet figured out how. Any thoughts ?
> > >
> > > label: String label, ie. as result of
> > > pre_tag_map evaluation
> > >
> > >
> > > Thank you again.
> > >
> > > On Mon, Apr 13, 2020 at 9:07 AM Paolo Lucente  wrote:
> > >
> > > >
> > > > Hi Emanuel,
> > > >
> > > > I think you are looking for (i admit, non-intuitive) 'peer_src_ip'
> > > > primitive:
> > > >
> > > > $ nfacctd -a | grep peer_src_ip
> > > > peer_src_ip  : IP address or identificator of
> > > > telemetry exporting device
> > > >
> > > > Without the grep you can see all supported primitives by the nfacctd
> > > > release you are using along with a text explanation.
> > > >
> > > > Paolo
> > > >
> > > > On Sun, Apr 12, 2020 at 06:55:26PM -0400, Emanuel dos Reis Rodrigues
> > wrote:
> > > > > Hello guys,
> > > > >
> > > > > I implemented nfacctd acting as a Netflow collector using pmacct. It
> > is
> > > > > working perfectly and writing the flows to a Kafka topic which I
> > have an
> > > > > application processing it.
> > > > >
> > > > > Following is my configuration:
> > > > >
> > > > > kafka_topic: netflow
> > > > > kafka_broker_host: Kafka-host
> > > > > kafka_broker_port: 9092
> > > > > kafka_refresh_time: 1
> > > > > daemonize: true
> > > > > plugins: kafka
> > > > > pcap_interface: enp0s8
> > > > > nfacctd_ip: 192.168.1.100
> > > > > nfacctd_port: 9995
> > > > > aggregate: src_host, dst_host, timestamp_start, timestamp_end,
> > src_port,
> > > > > dst_port, proto
> > > > >
> > > > > Currently, there is only one Netflow exporter sending data to this
> > > > > demon and I would like to add another exporter. The problem is that
> > I am
> > > > > not finding a way to differentiate the flows coming from different
> > > > > exporters.
> > > > >
> > > > > Let's say I have the exporter A currently sending data to nfacctd
> > running
> > > > > at port 9995 and the data is being written to Kafka topic Netflow.
> > > > >
> > > > > Now I want a new exporter B to start sending data to nfacctd port
> > 9996
> > > > which
> > > > > will be running as a separate demon ( just because I though so, not
> > sure
> > > > > yet if it is a necessary approach)  and writing the data to the
> > > > > same Netflow topic in Kafka.
> > > > >
> > > > > When the data comes from Kafka to my application, I cannot tell from
> > > > > which exporter the data came from. I would need some sort of
> > > > identification
> > > > > in order to make this differentiation. It is important for me,
> > because my
> > > > > application may treat differently Netflow traffic coming from these
> > > > > two Netflow exporters.
> > > > >
> > > > > Thanks in advance.
> > > > >
> > > > > Emanuel
> > > >
> > > > > ___
> > > > > pmacct-discussion mailing list
> > > > > http://www.pmacct.net/#mailinglists
> > > >
> > > >
> > > > ___
> > > > pmacct-discussion mailing list
> > > > http://www.pmacct.net/#mailinglists
> > > >
> >

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] How to record ICMP and ICMP6 types/codes by pmacctd?

2020-04-14 Thread Paolo Lucente


Hi,

I see. I am sorry to confirm that, yes, the feature is not there right
now. It's not a biggie but it would still require a bit of work in order
to converge. I can gladly put it on my todo list but it may take a few
weeks to get it out; or if you could do some small, fun C coding work
on your own, please get in touch by unicast email, i'd be happy to
assist.

Paolo
 
On Mon, Apr 13, 2020 at 03:59:48PM -0400, fireballiso wrote:
> Hi Paolo,
> 
> Sorry, I should have said I was replacing the netflow *generators*, not
> collectors. My mistake!
> 
> Yes, I posted the config that generates the netflow 9 flows, since I hoped
> to see if it was missing something for including the ICMP and ICMP6
> types/codes.
> 
> -Indy
> 
> On 4/13/2020 8:59 AM, Paolo Lucente wrote:
> > Hi,
> > 
> > Let me confirm that collecting the ICMP type is partially supported; the
> > native dst_port primitive is locked to UDP and TCP only - making this
> > not suitable for NetFlow v5 kind of scenarios; but if using NetFlow v9
> > and/or IPFIX you could define your own custom primitive via the
> > aggregate_primitives infrastructure, see also an example here:
> > 
> > https://github.com/pmacct/pmacct/blob/1.7.4/examples/primitives.lst.example
> > 
> > By the way: you speak collecting NetFlow but your config example is
> > actually about the 'nfprobe' plugin, that is, generating NetFlow out of
> > raw traffic. Is that what you are after?
> > 
> > Paolo
> > 
> > On Sun, Apr 12, 2020 at 04:20:08PM -0400, fireballiso wrote:
> > > Hi! I've started using pmacctd to replace old netflow collectors for my
> > > main and test networks, which run both IPv6 and IPv4. It works very
> > > well, except that I haven't yet found a way to record the ICMP and ICMP6
> > > types and codes.
> > > 
> > > In other collectors, these are often stored in the destination port
> > > (otherwise unused for ICMP/ICMP6), in the format "A.B", where A is the
> > > type and B is the code. For example, "3.1" would represent ICMP type 3
> > > (Destination Unreachable), code 1 (Host Unreachable). I see lots of ICMP
> > > and ICMP6 flows, but unfortunately, the destination port is always set
> > > to "0.0", as if nothing is being recorded there.
> > > 
> > > A simple config:
> > > 
> > > daemonize: true
> > > !
> > > interface: net1
> > > aggregate: src_host, dst_host, src_port, dst_port, proto, tos
> > > plugins: nfprobe
> > > nfprobe_receiver: 192.168.14.2:9997
> > > nfprobe_version: 9
> > > 
> > > I haven't found documentation or examples that show how to enable
> > > recording the types and codes, and no relevant primitives to add to the
> > > aggregate statement. Would someone be able to tell me how to do this?
> > > 
> > > Thank you!
> > > 
> > > -Indy
> > > 
> > > ___
> > > pmacct-discussion mailing list
> > > http://www.pmacct.net/#mailinglists
> 
> -- 
> -Indy
> fireball...@yahoo.com

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Multiples nfacctd deamons writing to same Kafka topic

2020-04-14 Thread Paolo Lucente


Hi Emanuel,

Apologies, i did not get that you wanted an ID for the collector. The
simplest way of achieving that is 'post_tag', as you just have to supply
a number as ID; pre_tag_map expects a map and may be better
reserved for more complex use-cases.

Paolo

On Mon, Apr 13, 2020 at 03:35:52PM -0400, Emanuel dos Reis Rodrigues wrote:
> Thank you for your help. Appreciate it !
> 
> See, I did use it for testing after I sent this email. However, the ip
> showed there was the IP from my nfacctd machine, the collector itself. Not
> the exporter.
> 
> peer_src_ip  : IP address or identificator of telemetry
> exporting device
> 
> In fact, it may have to do with the fact I currently have an SSH tunnel with
> socat to the remote machine in order to collect the data. This may be the
> reason, which is definitely not an ordinary condition. :)
> 
> I am wondering if I could use this one to include a different tag per
> process/collector, but have not yet figured out how. Any thoughts?
> 
> label: String label, ie. as result of
> pre_tag_map evaluation
> 
> 
> Thank you again.
> 
> On Mon, Apr 13, 2020 at 9:07 AM Paolo Lucente  wrote:
> 
> >
> > Hi Emanuel,
> >
> > I think you are looking for (i admit, non-intuitive) 'peer_src_ip'
> > primitive:
> >
> > $ nfacctd -a | grep peer_src_ip
> > peer_src_ip  : IP address or identificator of
> > telemetry exporting device
> >
> > Without the grep you can see all supported primitives by the nfacctd
> > release you are using along with a text explanation.
> >
> > Paolo
> >
> > On Sun, Apr 12, 2020 at 06:55:26PM -0400, Emanuel dos Reis Rodrigues wrote:
> > > Hello guys,
> > >
> > > I implemented nfacctd acting as a Netflow collector using pmacct. It is
> > > working perfectly and writing the flows to a Kafka topic which I have an
> > > application processing it.
> > >
> > > Following is my configuration:
> > >
> > > kafka_topic: netflow
> > > kafka_broker_host: Kafka-host
> > > kafka_broker_port: 9092
> > > kafka_refresh_time: 1
> > > daemonize: true
> > > plugins: kafka
> > > pcap_interface: enp0s8
> > > nfacctd_ip: 192.168.1.100
> > > nfacctd_port: 9995
> > > aggregate: src_host, dst_host, timestamp_start, timestamp_end, src_port,
> > > dst_port, proto
> > >
> > > Currently, there is only one Netflow exporter sending data to this
> > > demon and I would like to add another exporter. The problem is that I am
> > > not finding a way to differentiate the flows coming from different
> > > exporters.
> > >
> > > Let's say I have the exporter A currently sending data to nfacctd running
> > > at port 9995 and the data is being written to Kafka topic Netflow.
> > >
> > > Now I want a new exporter B to start sending data to nfacctd port 9996
> > which
> > > will be running as a separate demon ( just because I though so, not sure
> > > yet if it is a necessary approach)  and writing the data to the
> > > same Netflow topic in Kafka.
> > >
> > > When the data comes from Kafka to my application, I cannot tell from
> > > which exporter the data came from. I would need some sort of
> > identification
> > > in order to make this differentiation. It is important for me, because my
> > > application may treat differently Netflow traffic coming from these
> > > two Netflow exporters.
> > >
> > > Thanks in advance.
> > >
> > > Emanuel
> >
> > > ___
> > > pmacct-discussion mailing list
> > > http://www.pmacct.net/#mailinglists
> >
> >
> > ___
> > pmacct-discussion mailing list
> > http://www.pmacct.net/#mailinglists
> >

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Multiples nfacctd deamons writing to same Kafka topic

2020-04-13 Thread Paolo Lucente


Hi Emanuel,

I think you are looking for (i admit, non-intuitive) 'peer_src_ip'
primitive:

$ nfacctd -a | grep peer_src_ip
peer_src_ip  : IP address or identificator of telemetry 
exporting device

Without the grep you can see all supported primitives by the nfacctd
release you are using along with a text explanation.

Paolo

On Sun, Apr 12, 2020 at 06:55:26PM -0400, Emanuel dos Reis Rodrigues wrote:
> Hello guys,
> 
> I implemented nfacctd acting as a Netflow collector using pmacct. It is
> working perfectly and writing the flows to a Kafka topic which I have an
> application processing it.
> 
> Following is my configuration:
> 
> kafka_topic: netflow
> kafka_broker_host: Kafka-host
> kafka_broker_port: 9092
> kafka_refresh_time: 1
> daemonize: true
> plugins: kafka
> pcap_interface: enp0s8
> nfacctd_ip: 192.168.1.100
> nfacctd_port: 9995
> aggregate: src_host, dst_host, timestamp_start, timestamp_end, src_port,
> dst_port, proto
> 
> Currently, there is only one Netflow exporter sending data to this
> daemon and I would like to add another exporter. The problem is that I am
> not finding a way to differentiate the flows coming from different
> exporters.
> 
> Let's say I have the exporter A currently sending data to nfacctd running
> at port 9995 and the data is being written to Kafka topic Netflow.
> 
> Now I want a new exporter B to start sending data to nfacctd port 9996 which
> will be running as a separate daemon (just because I thought so, not sure
> yet if it is a necessary approach)  and writing the data to the
> same Netflow topic in Kafka.
> 
> When the data comes from Kafka to my application, I cannot tell from
> which exporter the data came from. I would need some sort of identification
> in order to make this differentiation. It is important for me, because my
> application may treat differently Netflow traffic coming from these
> two Netflow exporters.
> 
> Thanks in advance.
> 
> Emanuel

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] How to record ICMP and ICMP6 types/codes by pmacctd?

2020-04-13 Thread Paolo Lucente


Hi,

Let me confirm that collecting the ICMP type is partially supported; the
native dst_port primitive is locked to UDP and TCP only - making this
not suitable for NetFlow v5 kind of scenarios; but if using NetFlow v9
and/or IPFIX you could define your own custom primitive via the
aggregate_primitives infrastructure, see also an example here:  

https://github.com/pmacct/pmacct/blob/1.7.4/examples/primitives.lst.example
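
As a rough sketch of what such a custom primitive could look like on the
collection side (hypothetical names; element 32 is icmpTypeCodeIPv4 and 139
is icmpTypeCodeIPv6 in the IANA IPFIX registry - verify against your
exporter's templates that these elements are actually sent):

name=icmp_type_code field_type=32 len=2 semantics=u_int
name=icmp6_type_code field_type=139 len=2 semantics=u_int

Then point aggregate_primitives to that file and add the new names to
'aggregate', ie. 'aggregate: ..., icmp_type_code'.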

By the way: you speak collecting NetFlow but your config example is
actually about the 'nfprobe' plugin, that is, generating NetFlow out of
raw traffic. Is that what you are after?

Paolo 

On Sun, Apr 12, 2020 at 04:20:08PM -0400, fireballiso wrote:
> Hi! I've started using pmacctd to replace old netflow collectors for my
> main and test networks, which run both IPv6 and IPv4. It works very
> well, except that I haven't yet found a way to record the ICMP and ICMP6
> types and codes.
> 
> In other collectors, these are often stored in the destination port
> (otherwise unused for ICMP/ICMP6), in the format "A.B", where A is the
> type and B is the code. For example, "3.1" would represent ICMP type 3
> (Destination Unreachable), code 1 (Host Unreachable). I see lots of ICMP
> and ICMP6 flows, but unfortunately, the destination port is always set
> to "0.0", as if nothing is being recorded there.
> 
> A simple config:
> 
> daemonize: true
> !
> interface: net1
> aggregate: src_host, dst_host, src_port, dst_port, proto, tos
> plugins: nfprobe
> nfprobe_receiver: 192.168.14.2:9997
> nfprobe_version: 9
> 
> 
> I haven't found documentation or examples that show how to enable
> recording the types and codes, and no relevant primitives to add to the
> aggregate statement. Would someone be able to tell me how to do this?
> 
> Thank you!
> 
> -Indy
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Looking for value suggestions

2020-04-06 Thread Paolo Lucente


Hi Mark,

Since 'nfprobe' plugin would generate sequence numbers on output of
IPFIX packets, that would rule out any buffering issue (not saying
buffers should not be looked at, just saying buffering considerations
are disjoint from the sequencing issue).

Do you have any multi-paths between your routers (where you generate
IPFIX packets) and your collector? Is it possible, in other words, that we
may be looking at an out-of-order delivery issue? If you are not doing
that already, can you take a pcap sample on the router (that is, where
IPFIX data is generated)? Depending on whether the sequencing issue shows up
there, we can pin-point (or rule out!) out-of-order delivery. If it's not an
out-of-order issue it would help a lot if you could send me the pcap trace
privately so i can inspect it and get further clues.

Wrt buffers: if there is a buffering issue inside pmacct then you are
notified by the means of warning/error messages so it's easy to spot;
trickier could be to spot issues between the kernel and pmacct for the
case of libpcap: you should send a SIGUSR1 to pmacctd ('killall -USR1
pmacctd' would do) and read output back in the log file. Here it is
documented what kind of output you should expect (see 'dropped_packets'):

https://github.com/pmacct/pmacct/blob/master/docs/SIGNALS#L17-#L40
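
For example (the log location below is just an assumption; it depends on your
syslog configuration):

$ killall -USR1 pmacctd
$ grep dropped_packets /var/log/daemon.log | tail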
 
Paolo

On Fri, Apr 03, 2020 at 03:46:53PM +0200, Mark Schouten wrote:
> 
> Hi,
> 
> I'm from Tuxis, a small ISP from the Netherlands. We run a small network, 
> with three transits and one peering network. Our routers are 
> Debian/Bird-based, and perform pretty well, if I may say so myself.
> 
> On each router, we run a pmacct per external interface. Primary goal is 
> invoicing (through amqp), secondary goal is sFlow and Netflow for obvious 
> reasons.
> 
> I have sFlow sampling at 1:512 and all is working well there. Netflow 
> however, is an issue.
> 
> We export Netflow to nprobe, and nprobe complains about Flow Collection 
> Drops. And I can confirm that with Wireshark, which reports:
>     FlowSequence: 889946358 (expected 888912934)
>         [Expert Info (Warning/Sequence): Unexpected flow sequence for domain 
> ID 77 (expected 888912934, got 889946358)]
>             [Unexpected flow sequence for domain ID 77 (expected 888912934, 
> got 889946358)]
>             [Severity level: Warning]
>             [Group: Sequence]
> 
> 
> So it seems that pmacct is missing data on flows, probably due to my 
> configuration. Two questions:
> 
> 1: How can I see that pmacct buffers or buckets are overrunning?
> 2: Looking at my current configuration, which values should I alter to 
> achieve both (mostly) complete flowdata and not too much CPU usage?
> 
> 
> Thanks in advance, also for the cool product that pmacctd is!
> 
> 
> daemonize: true
> pidfile: /var/run/pmacctd.pid
> syslog: daemon
> 
> interface: v-amsix
> 
> pmacctd_flow_buffer_size: 268435456 !256MB
> pmacctd_flow_buffer_buckets: 65536
> pmacctd_conntrack_buffer_size: 134217728 !128MB
> pmacctd_flow_tcp_lifetime: 3600
> 
> plugins: nfprobe[in],nfprobe[out],sfprobe[fnm],amqp
> !
> plugin_buffer_size: 102400
> plugin_pipe_size: 10240
> 
> sampling_rate: 1
> sampling_rate[fnm]: 512
> timestamps_since_epoch: true
> geoipv2_file: /var/lib/GeoIP/GeoLite2-Country.mmdb
> aggregate: src_host_country, dst_host_country
> aggregate: 
> src_host,dst_host,src_port,dst_port,proto,src_mac,dst_mac,src_host_country,dst_host_country
> amqp_exchange: pmacct
> amqp_routing_key: acct
> amqp_refresh_time: 300
> amqp_history_roundoff: m
> amqp_host: 
> amqp_user: 
> amqp_passwd: 
> amqp_exchange_type: direct
> amqp_persistent_msg: false
> amqp_cache_entries: 262147
> amqp_multi_values: 65536
> amqp_history: 5m
> 
> sfprobe_agentsubid: 1402
> sfprobe_receiver: 
> 
> nfprobe_receiver: 
> nfprobe_version: 10
> nfprobe_source_ip: 
> nfprobe_direction[in]: in
> nfprobe_direction[out]: out
> nfprobe_ifindex[in]: 77
> nfprobe_ifindex[out]: 77
> nfprobe_engine: 77
> 
> 
> --
> Mark Schouten 
> 
> Tuxis, Ede, https://www.tuxis.nl
> 
> T: +31 318 200208 
>  
> 

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Besoin d’aide: Unknown plug-in type: mysql. Ignoring, No plug-in has been activated; defaulting to in memory table.

2020-03-31 Thread Paolo Lucente

Hi,

You can review the following section of the QUICKSTART document on how
to configure pmbmpd to collect BMP data and send it to a Kafka broker:

https://github.com/pmacct/pmacct/blob/1.7.4/QUICKSTART#L2226-#L2378

Once data is in Kafka in JSON format you can choose what to do with it:
you can Google how to import it into MySQL via a connector; or you can
use Big Data tools perhaps better suited to the task, like ElasticSearch
or Apache Druid. Alternatively to Kafka, should you not be familiar with
the tool, you can save data in JSON format to flat files.
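
As a rough sketch of such a pmbmpd config (directive names to be
double-checked against CONFIG-KEYS and the QUICKSTART section above; the
broker host and topic are placeholders):

daemonize: true
bmp_daemon_ip: 0.0.0.0
bmp_daemon_max_peers: 10
!
bmp_daemon_msglog_output: json
bmp_daemon_msglog_kafka_broker_host: localhost
bmp_daemon_msglog_kafka_broker_port: 9092
bmp_daemon_msglog_kafka_topic: pmacct.bmp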

Paolo

On Mon, Mar 30, 2020 at 03:04:39PM +0200, Abdul Rahim barry wrote:
> Hello everyone
> I am working on my end-of-studies project; I would like to collect BMP
> (BGP Monitoring Protocol) data.
> Here is the content of my pmbmp.conf file
> 
> !
> daemonize: true
> pidfile: /var/pmbmpd.pid
> syslog:daemon
> aggregate: src_host, as_path, peer_src_ip, peer_dst_ip, peer_src_as,
> peer_dst_as, local_pref, sum_net, sum_as
> pcap_filter: net 10.75.9.4
> interface: lo
> plugins: mysql
> sql_host: localhost
> sql_passwd:  sql_refresh_time:60
> sql_history: 1h
> sql_history_roundoff: mhd
> sql_table_version: 1
> !
> 
> I would like to know:
> 1- how to recover BMP data
> 2- how to feed the database (mysql) with BMP data
> 
> thank you in advance for your help
> I am in a hurry so that I can move forward thank you.
> 
> On Thu, Mar 26, 2020 at 14:05, Paolo Lucente  wrote:
> 
> >
> > Hi,
> >
> > You need to compile pmacct with MySQL support, --enable-mysql. You may
> > profit from the following section of the QUICKSTART document:
> >
> > https://github.com/pmacct/pmacct/blob/1.7.4/QUICKSTART#L109-#L167
> >
> > Bu perhaps the whole chapter I and II are good readings to start.
> >
> > Paolo
> >
> > On Wed, Mar 25, 2020 at 04:25:14PM +0100, Abdul Rahim barry wrote:
> > > Bonjour tout le monde
> > > Je travail sur mon projet de stage de fin cycle, le but est de collecté
> > les
> > > routes bgp et de les stocké dans une base de donnée mysql.
> > > Selon mes recherches pmacct peut le faire, j’ai installé pmacct sur un
> > > sever Ubuntu, j’ai configuré le fichier nfacct.conf.
> > > En compilant: pmacctd -f /etc/pmacct/pmacctd.com .
> > >  J’ai cette erreur :
> > > ERROR: (/etc/pmacctd/pmacctd.conf) Unknown plug-in type: mysql. Ignoring.
> > > WARN:  (/etc/pmacctd/pmacctd.conf) No plug-in has been activated;
> > > defaulting to in memory table.
> > > Je besoin de votre aide, je sais pas quoi faire.
> > > Merci d’avance pour votre aide
> >
> > > ___
> > > pmacct-discussion mailing list
> > > http://www.pmacct.net/#mailinglists
> >
> >
> > ___
> > pmacct-discussion mailing list
> > http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

Re: [pmacct-discussion] Besoin d’aide: Unknown plug-in type: mysql. Ignoring, No plug-in has been activated; defaulting to in memory table.

2020-03-26 Thread Paolo Lucente

Hi,

You need to compile pmacct with MySQL support, --enable-mysql. You may
profit from the following section of the QUICKSTART document:

https://github.com/pmacct/pmacct/blob/1.7.4/QUICKSTART#L109-#L167

But perhaps chapters I and II as a whole are a good read to start with.
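
As a sketch, run from the source tree (./autogen.sh is only needed when
building from git; the MySQL client library and headers must be installed
first):

$ ./autogen.sh
$ ./configure --enable-mysql
$ make && sudo make install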

Paolo

On Wed, Mar 25, 2020 at 04:25:14PM +0100, Abdul Rahim barry wrote:
> Hello everyone
> I am working on my end-of-studies internship project; the goal is to collect the
> bgp routes and store them in a mysql database.
> According to my research pmacct can do this; I installed pmacct on an
> Ubuntu server and configured the nfacct.conf file.
> When running: pmacctd -f /etc/pmacct/pmacctd.com .
>  I get this error:
> ERROR: (/etc/pmacctd/pmacctd.conf) Unknown plug-in type: mysql. Ignoring.
> WARN:  (/etc/pmacctd/pmacctd.conf) No plug-in has been activated;
> defaulting to in memory table.
> I need your help, I don't know what to do.
> Thank you in advance for your help

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

Re: [pmacct-discussion] tee and other plugins simultaneously in nfacctd?

2020-03-25 Thread Paolo Lucente

Hi Jason,

Yes, you should use multiple nfacctd instances; one to replicate, one to
collect. I intend in future to allow these two distinct functions to run
within the same daemon but that's not yet possible at the moment (some
coding needed).

Paolo
 
On Wed, Mar 25, 2020 at 08:03:39AM -0400, Jason Lixfeld wrote:
> Hi,
> 
> I had tried to add a tee plugin to my existing kafka plugin under nfacctd, 
> but it didn’t seem to like that:
> 
> ERROR ( default/core ): 'tee' plugins are not compatible with data 
> (memory/mysql/pgsql/etc.) plugins. Exiting…
> 
> Am I missing something, or would I need to use multiple nfacctd instances, 
> one for tee and one for whatever else?
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

Re: [pmacct-discussion] TCP segments handling

2020-03-05 Thread Paolo Lucente


Hi,

Would you have a packet trace in pcap format to share via unicast email
in order to reproduce the issue at my end? If not, i am afraid i can't
do much.

Paolo
 
On Wed, Mar 04, 2020 at 12:27:36PM +0530, HEMA CHANDRA YEDDULA wrote:
> Hi
> 
> We have a case where the packet has reassembled tcp segments. In such a
> case the payload_ptr is pointing to some random lines though the offset is
> set to 0. What can
> be the reason for this?
> 
> Thanks & Regards,
> Hema Chandra
> 
> ---
> ::Disclaimer::
> ---
> 
> The contents of this email and any attachment(s) are confidential and intended
> for the named recipient(s) only. It shall not attach any liability on C-DOT.
> Any views or opinions presented in this email are solely those of the author
> and  may  not  necessarily  reflect  the  opinions  of  C-DOT.  Any  form of
> reproduction, dissemination, copying, disclosure, modification, distribution
> and / or publication of this message without the prior written consent of the
> author of this e-mail is strictly prohibited. If you have received this email
> in error please delete it and notify the sender immediately.
> 
> ---
> 

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Pmacct configuration with direction of traffic

2020-02-27 Thread Paolo Lucente


Hi Alex,

Ack. The other way you could "filter" out is with a networks_file: in
there you specify the network(s) you are interested in, following the
example here:

https://github.com/pmacct/pmacct/blob/master/examples/networks.lst.example

In the simplest case, you just want to list networks of interest one per
line. Then in the config you want to set 'networks_file_filter: true' as
well. This is kind-of filtering: networks / IPs not of interest will be
just zeroed out and rolled up as a 0.0.0.0 src_host / dst_host.
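
As a quick sketch (file path is a placeholder; prefixes taken from your
pretag map), networks.lst would contain one prefix per line:

192.168.28.0/24
192.168.100.0/24

and the config would gain:

networks_file: /path/to/networks.lst
networks_file_filter: true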

Paolo

On Wed, Feb 26, 2020 at 11:32:31AM +0200, Alex K wrote:
> Hi Paolo,
> 
> On Tue, Feb 25, 2020 at 6:41 PM Paolo Lucente  wrote:
> 
> >
> > Hi Alex,
> >
> > Thanks for your feedback. I see you did run "tcpdump -n -vv -i nflog:1"
> > which is equivalent to run uacctd without any filters; as you may know,
> > you can append a BPF-style filter to the tcpdump command-line, precisely
> > as you express it in pre_tag_map. Can you give that a try and see if you
> > get any luck?
> >
> Bad luck... I get:
> tcpdump -nvv -i  nflog:1 src net 192.168.28.0/24
> tcpdump: NFLOG link-layer type filtering not implemented
> It seems that filtering at nflog interface is not supported.
> Running tcpdump -nvv -i eth0 src net 192.168.28.0/24 does capture traffic
> normally.
> Is there any other way I could apply some filtering with uacctd? I need to
> use uacctd since I get all the pre-nat, post-nat details of the flows, so
> as to account traffic at the WAN interfaces with the real source details.
> 
> 
> > My expextation is: if something does not work with pre_tag_map, it
> > should also not work with tcpdump; if you work out a filter to work
> > against tcpdump, that should work in pre_tag_map as well. Any disconnect
> > among the two may bring the scent of a bug.
> >
> > Paolo
> >
> > On Tue, Feb 25, 2020 at 11:20:21AM +0200, Alex K wrote:
> > > Here is the output when running in debug mode:
> > >
> > > INFO ( default/core ): Linux NetFilter NFLOG Accounting Daemon, uacctd
> > > (20200222-01)
> > > INFO ( default/core ):  '--prefix=/usr' '--enable-mysql' '--enable-nflog'
> > > '--enable-l2' '--enable-traffic-bins' '--enable-bgp-bins'
> > > '--enable-bmp-bins' '--enable-st-bins'
> > > INFO ( default/core ): Reading configuration file
> > > '/root/pmacct/uacctd2.conf'.
> > > INFO ( print_wan0_in/print ): plugin_pipe_size=4096000 bytes
> > > plugin_buffer_size=280 bytes
> > > INFO ( print_wan0_in/print ): ctrl channel: obtained=212992 bytes
> > > target=117024 bytes
> > > INFO ( print_wan0_out/print ): plugin_pipe_size=4096000 bytes
> > > plugin_buffer_size=280 bytes
> > > INFO ( print_wan0_out/print ): ctrl channel: obtained=212992 bytes
> > > target=117024 bytes
> > > INFO ( print_wan0_in/print ): cache entries=16411 base cache
> > > memory=54878384 bytes
> > > INFO ( default/core ): [pretag2.map] (re)loading map.
> > > INFO ( print_wan0_out/print ): cache entries=16411 base cache
> > > memory=54878384 bytes
> > > INFO ( default/core ): [pretag2.map] map successfully (re)loaded.
> > > INFO ( default/core ): [pretag2.map] (re)loading map.
> > > INFO ( default/core ): [pretag2.map] map successfully (re)loaded.
> > > INFO ( default/core ): [pretag2.map] (re)loading map.
> > > INFO ( default/core ): [pretag2.map] map successfully (re)loaded.
> > > INFO ( default/core ): Successfully connected Netlink NFLOG socket
> > >
> > > It doesn't seem to have any issues loading the maps, though it is not
> > > collecting anything. When capturing with tcpdump I see packets going
> > > through:
> > >
> > > tcpdump -n -vv -i nflog:1
> > > 09:16:05.831131 IP (tos 0x0, ttl 64, id 36511, offset 0, flags [DF],
> > proto
> > > ICMP (1), length 84)
> > > 192.168.28.11 > 8.8.8.8: ICMP echo request, id 17353, seq 1, length
> > 64
> > > 09:16:05.831362 IP (tos 0x0, ttl 49, id 0, offset 0, flags [none], proto
> > > ICMP (1), length 84)
> > > 8.8.8.8 > 192.168.28.11: ICMP echo reply, id 17353, seq 1, length 64
> > > 09:16:05.831392 IP (tos 0x0, ttl 64, id 36682, offset 0, flags [DF],
> > proto
> > > ICMP (1), length 84)
> > > 192.168.28.11 > 8.8.8.8: ICMP echo request, id 17353, seq 2, length
> > 64
> > > 09:16:06.855200 IP (tos 0x0, ttl 49, id 0, offset 0, flags [none], proto
> > > ICMP (1), length 84)
> > > 8.8.8.8 > 192.168.28.11: ICMP echo reply, id 17353, seq 2, length 64
> > >
> &

Re: [pmacct-discussion] Pmacct configuration with direction of traffic

2020-02-25 Thread Paolo Lucente


Hi Alex,

Thanks for your feedback. I see you did run "tcpdump -n -vv -i nflog:1"
which is equivalent to run uacctd without any filters; as you may know,
you can append a BPF-style filter to the tcpdump command-line, precisely
as you express it in pre_tag_map. Can you give that a try and see if you
get any luck?

My expectation is: if something does not work with pre_tag_map, it
should also not work with tcpdump; if you work out a filter that works
against tcpdump, it should work in pre_tag_map as well. Any disconnect
between the two may bring the scent of a bug.

Paolo
 
On Tue, Feb 25, 2020 at 11:20:21AM +0200, Alex K wrote:
> Here is the output when running in debug mode:
> 
> INFO ( default/core ): Linux NetFilter NFLOG Accounting Daemon, uacctd
> (20200222-01)
> INFO ( default/core ):  '--prefix=/usr' '--enable-mysql' '--enable-nflog'
> '--enable-l2' '--enable-traffic-bins' '--enable-bgp-bins'
> '--enable-bmp-bins' '--enable-st-bins'
> INFO ( default/core ): Reading configuration file
> '/root/pmacct/uacctd2.conf'.
> INFO ( print_wan0_in/print ): plugin_pipe_size=4096000 bytes
> plugin_buffer_size=280 bytes
> INFO ( print_wan0_in/print ): ctrl channel: obtained=212992 bytes
> target=117024 bytes
> INFO ( print_wan0_out/print ): plugin_pipe_size=4096000 bytes
> plugin_buffer_size=280 bytes
> INFO ( print_wan0_out/print ): ctrl channel: obtained=212992 bytes
> target=117024 bytes
> INFO ( print_wan0_in/print ): cache entries=16411 base cache
> memory=54878384 bytes
> INFO ( default/core ): [pretag2.map] (re)loading map.
> INFO ( print_wan0_out/print ): cache entries=16411 base cache
> memory=54878384 bytes
> INFO ( default/core ): [pretag2.map] map successfully (re)loaded.
> INFO ( default/core ): [pretag2.map] (re)loading map.
> INFO ( default/core ): [pretag2.map] map successfully (re)loaded.
> INFO ( default/core ): [pretag2.map] (re)loading map.
> INFO ( default/core ): [pretag2.map] map successfully (re)loaded.
> INFO ( default/core ): Successfully connected Netlink NFLOG socket
> 
> It doesn't seem to have any issues loading the maps, though it is not
> collecting anything. When capturing with tcpdump I see packets going
> through:
> 
> tcpdump -n -vv -i nflog:1
> 09:16:05.831131 IP (tos 0x0, ttl 64, id 36511, offset 0, flags [DF], proto
> ICMP (1), length 84)
> 192.168.28.11 > 8.8.8.8: ICMP echo request, id 17353, seq 1, length 64
> 09:16:05.831362 IP (tos 0x0, ttl 49, id 0, offset 0, flags [none], proto
> ICMP (1), length 84)
> 8.8.8.8 > 192.168.28.11: ICMP echo reply, id 17353, seq 1, length 64
> 09:16:05.831392 IP (tos 0x0, ttl 64, id 36682, offset 0, flags [DF], proto
> ICMP (1), length 84)
> 192.168.28.11 > 8.8.8.8: ICMP echo request, id 17353, seq 2, length 64
> 09:16:06.855200 IP (tos 0x0, ttl 49, id 0, offset 0, flags [none], proto
> ICMP (1), length 84)
> 8.8.8.8 > 192.168.28.11: ICMP echo reply, id 17353, seq 2, length 64
> 
> The pmacct  version I am running is latest master.
> Thank you for your assistance.
> 
> Alex
> 
> 
> On Mon, Feb 24, 2020 at 6:20 PM Alex K  wrote:
> 
> > Hi Paolo,
> >
> > On Sat, Feb 22, 2020 at 4:18 PM Paolo Lucente  wrote:
> >
> >>
> >> Hi Alex,
> >>
> >> Is it possible with the new setup - the one where pre_tag_map does not
> >> match anything - the traffic is VLAN-tagged (or MPLS-labelled)? If so,
> >> you should adjust filters accordingly and add 'vlan and', ie. "vlan and
> >> src net 192.168.28.0/24 or vlan and src net 192.168.100.0/24".
> >>
> > The traffic is not VLAN or MPLS. It is simple one. I confirm I can collect
> > traffic when removing the pretag directives. Also when stopping uacctd, I
> > can capture traffic at nflog:1 interface.
> > I simplified the configuration as below:
> >
> > !
> > daemonize: true
> > promisc:   false
> > uacctd_group: 1
> > !
> > pre_tag_map: pretag2.map
> > pre_tag_filter[print_wan0_in]: 1
> > pre_tag_filter[print_wan0_out]: 2
> > !
> > !-
> > plugins: print[print_wan0_in], print[print_wan0_out]
> > print_refresh_time: 10
> > print_history: 15m
> > print_output_file_append: true
> > !
> > print_output[print_wan0_in]: csv
> > print_output[print_wan0_out]: csv
> > print_output_file[print_wan0_in]: traffic-wan0-in.csv
> > print_output_file[print_wan0_out]: traffic-wan0-out.csv
> > !
> > aggregate[print_wan0_in]: tag, src_host, dst_host, src_port, dst_port,
> > proto
> > aggregate[print_wan0_out]: tag, src_host, dst_host, src_port, dst_port,
> > proto
> > !
> >
> > with pret

Re: [pmacct-discussion] Pmacct configuration with direction of traffic

2020-02-22 Thread Paolo Lucente


Hi Alex,

Is it possible with the new setup - the one where pre_tag_map does not
match anything - the traffic is VLAN-tagged (or MPLS-labelled)? If so,
you should adjust filters accordingly and add 'vlan and', ie. "vlan and
src net 192.168.28.0/24 or vlan and src net 192.168.100.0/24".

Paolo
 
On Fri, Feb 21, 2020 at 01:04:25PM +0200, Alex K wrote:
> Working further on this, it seems that for pmacct it is sufficient to filter
> traffic using only the pre_tag_filter, thus there is no need for the aggregation
> filters.
> The issue with this setup though is that I lose the information of the
> pre_nat source IP address when monitoring at the WAN interfaces. Due to
> this I am switching to uacctd as follows:
> 
> !
> daemonize: true
> promisc:   false
> uacctd_group: 1
> !networks_file: networks.lst
> !ports_file: ports.lst
> !
> pre_tag_map: pretag2.map
> pre_tag_filter[print_wan0_in]: 1
> pre_tag_filter[print_wan0_out]: 2
> pre_tag_filter[wan0_in]: 1
> pre_tag_filter[wan0_out]: 2
> !
> plugins: print[print_wan0_in], print[print_wan0_out], mysql[wan0_in],
> mysql[wan0_out]
> plugin_pipe_size[wan0_in]: 1024000
> plugin_pipe_size[wan0_out]: 1024000
> print_refresh_time: 10
> print_history: 15m
> print_output_file_append: true
> !
> print_output[print_wan0_in]: csv
> print_output_file[print_wan0_in]: in_traffic.csv
> print_output[print_wan0_out]: csv
> print_output_file[print_wan0_out]: out_traffic.csv
> !
> aggregate[print_wan0_in]: dst_host, src_port, dst_port, proto
> aggregate[print_wan0_out]: src_host, src_port, dst_port, proto
> !
> sql_table[wan0_in]: traffic_wan0_in_%Y%m%d_%H%M
> sql_table[wan0_out]: traffic_wan0_out_%Y%m%d_%H%M
> !
> sql_table_schema[wan0_in]: traffic_wan0_in.schema
> sql_table_schema[wan0_out]: traffic_wan0_out.schema
> !
> sql_host: localhost
> sql_db : uacct
> sql_user : uacct
> sql_passwd: uacct
> sql_refresh_time: 30
> sql_optimize_clauses: true
> sql_history : 24h
> sql_history_roundoff: mhd
> !
> aggregate[wan0_in]: dst_host, src_port, dst_port, proto
> aggregate[wan0_out]: src_host, src_port, dst_port, proto
> 
> Where pretag2.map:
> set_tag=1 filter='src net 192.168.28.0/24 or src net 192.168.100.0/24'
> set_tag=2 filter='dst net 192.168.28.0/24 or dst net 192.168.100.0/24'
> 
> The issue I have with the above config is that no traffic is being
> collected at all. I confirm that when removing the pre_tag filters, traffic
> is collected, though it is not sorted per direction as I would like to
> have.
> Can I use pre_tag_map and pre_tag_filter with uacctd? I don't see any
> examples for uacctd at
> https://github.com/pmacct/pmacct/blob/master/examples/pretag.map.example.
> 
> Thanx,
> Alex
> 
> On Thu, Feb 20, 2020 at 6:33 PM Alex K  wrote:
> 
> > Hi all,
> >
> > I have a router with multiple interfaces and will need to account traffic
> > at its several WAN interfaces. My purpose is to account the traffic with the
> > tuple details and the direction.
> >
> > As a test I have compiled the following simple configuration for pmacctd:
> >
> > !
> > daemonize: true
> > plugins: print[wan0_in], print[wan0_out]
> > print_refresh_time: 10
> > print_history: 15m
> > !
> > print_output[wan0_in]: csv
> > print_output_file[wan0_in]: in_traffic.csv
> > print_output[wan0_out]: csv
> > print_output_file[wan0_out]: out_traffic.csv
> > !
> > aggregate[wan0_in]: src_host, dst_host, src_port, dst_port, tag
> > aggregate[wan0_out]: src_host, dst_host, src_port, dst_port, tag
> > !
> > pre_tag_filter[wan0_in]:1
> > pre_tag_filter[wan0_out]:2
> > !
> > pcap_interface: eth0
> > pre_tag_map: pretag.map
> > networks_file: networks.lst
> > ports_file: ports.lst
> > !
> >
> > where pretag.map is:
> > set_tag=1 filter='ether dst 52:54:00:69:a6:0b'
> > set_tag=2 filter='ether src 52:54:00:69:a6:0b'
> >
> > and networks.lst is:
> > 10.100.100.0/24
> >
> > It seems that the details output at the CSV are correctly filtered
> > according to the tag, thus recording the direction also, based on the MAC
> > address of the WAN0 interface.
> >
> > Is this the correct approach to achieve this or is there any other
> > recommended way? Do I need to use aggregate_filters?
> >
> > Also, although I have set a network filter to capture only 10.100.100.0/24,
> > I observe several networks in/out being collected, indicating that the
> > network_file directive is ignored or I have misunderstood its purpose. My
> > purpose is to collect traffic only generated from subnets that belong to
> > configured interfaces of the router.
> >
> > Thanx for your feedback!
> > Alex
> >
> >
> >

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Using large hex values for "label" in pre_tag_map results in strange SQL

2020-02-13 Thread Paolo Lucente


Hey Tim,

It looks like the issue you described in the previous email and this
one are connected. And i believe i have pin-pointed a common root
cause, addressed by this commit:

https://github.com/pmacct/pmacct/commit/4e648cc96aae99ee5f4b1c9e135a1afa73b864b3

Which, in turn, is connected to some recent work that was done on
labels. Can you give this a try (latest code or apply the patch to the
code you are running) and let me know if this works for you?

Paolo

On Wed, Feb 12, 2020 at 05:56:35PM -0600, Tim Jackson wrote:
> I'm using some large hex values as the set_label in our pre_tag_map and
> getting some weird behavior..
> 
> example map:
> 
> https://paste.somuch.fail/?bafc96e84fe95322#j6T+54l/gxN90POeMi3yuhBT9XOPMmEqt3IF5cvHOJk=
> 
> When using this w/ pgsql as the output plugin, I see some errors randomly
> from postgres (I have dont_try_update+use_copy on):
> 
> 2020-02-12 23:49:41.148 UTC [11632] postgres@sfacctd ERROR:  invalid byte
> sequence for encoding "UTF8": 0xc0 0x2c
> 2020-02-12 23:49:41.148 UTC [11632] postgres@sfacctd CONTEXT:  COPY
> acct_v9, line 1
> 2020-02-12 23:49:41.148 UTC [11632] postgres@sfacctd STATEMENT:  COPY
> acct_v9 (mac_src, mac_dst, vlan, ip_src, ip_dst, as_src, iface_in,
> iface_out, as_dst, comms, peer_ip_src, port_src, port_dst, tos, ip_proto,
> sampling_rate, timestamp_arrival, tag, tag2, label, packets, bytes) FROM
> STDIN DELIMITER ','
> 
> I'm also seeing when doing some debugging some strange rows being generated
> where the label field is longer than any label I have set in the
> pre_tag_map file itself:
> 
> DEBUG ( all/pgsql ):
> ff:ff:fb:f9:b6:c4,ff:ff:ff:a1:8c:2b,0,1.1.1.1,2.2.2.2,0,547,607,30419,,1.1.1.1,23836,2,6,1024,2020-02-12
> 23:50:21,0,1,c7515ed894354725bc60160ee48775ce0e3b3924fb730,1,307
> 
> Any ideas where that could be coming from?
> 
> --
> Tim

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Realistic Scaling of pre_tag_map?

2020-02-13 Thread Paolo Lucente


Hey Tim,

Since it may take a few seconds to reload a large map, i would maybe
discourage - dunno - sub-one-minute reloading or such.

The label, yes, can be as long as you like since it's malloc()'d - of
course the longer you make it, the more space you burn and the more
expensive it is to aggregate upon (since it's a key field).  

In theory the only char that you can't use is the default separator,
the ',' sign. But i see you stumbled into some issues (both in this email
i'm replying to and in the other): let me try to reproduce them at my
end and come back to you.

Paolo
 
On Wed, Feb 12, 2020 at 08:23:30AM -0600, Tim Jackson wrote:
> That's good news, since everything I've tested so far the maps_index has
> worked with. Any worries about reloading the map often/quickly?
> 
> Also is there a limit to how large the label can be in the pre_tag_map and
> any characters that aren't supported? Seems as if '-' in any set_label
> operation means the whole string gets ignored..
> 
> The use-case is just mapping ip+ifIndex -> downstream devices with a label,
> but I've got a lot of interfaces to match there..
> 
> --
> Tim
> 
> 
> On Wed, Feb 12, 2020, 12:47 AM Paolo Lucente  wrote:
> 
> >
> > Hey Tim,
> >
> > It really depends whether you can leverage maps_index (*) or not. If yes
> > then computations are O(1) and hence you can scale it as much as you
> > like and i can confirm you there is people building maps of the same
> > magnitude as you have in mind. If not then it's not going to work but
> > then again i'd be interested in your use-case, how the map would look
> > like, etc.
> >
> > Paolo
> >
> > (*) https://github.com/pmacct/pmacct/blob/1.7.4/CONFIG-KEYS#L1878-#L1891
> >
> > On Tue, Feb 11, 2020 at 05:54:27PM -0600, Tim Jackson wrote:
> > > Just curious, what's the realistic scaling of pre_tag_map?
> > >
> > > I'm looking to maybe put 50k+ entries in it and reload it every few
> > > minutes..
> > >
> > > Any real gotchas w/ that approach?
> > >
> > > --
> > > Tim
> >
> > > ___
> > > pmacct-discussion mailing list
> > > http://www.pmacct.net/#mailinglists
> >
> >
> > ___
> > pmacct-discussion mailing list
> > http://www.pmacct.net/#mailinglists
> >

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Realistic Scaling of pre_tag_map?

2020-02-11 Thread Paolo Lucente


Hey Tim,

It really depends whether you can leverage maps_index (*) or not. If yes,
then computations are O(1) and hence you can scale it as much as you
like, and i can confirm there are people building maps of the same
magnitude as you have in mind. If not, then it's not going to work, but
then again i'd be interested in your use-case, what the map would look
like, etc.
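
To give a feel for it, a rough sketch of an indexed map (hypothetical
values; 'ip' is the exporter address, 'in' the input ifIndex):

set_label=cust-a-uplink ip=192.0.2.1 in=101
set_label=cust-b-uplink ip=192.0.2.1 in=102

together with, in the daemon config:

pre_tag_map: /path/to/pretag.map
maps_index: true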

Paolo

(*) https://github.com/pmacct/pmacct/blob/1.7.4/CONFIG-KEYS#L1878-#L1891
 
On Tue, Feb 11, 2020 at 05:54:27PM -0600, Tim Jackson wrote:
> Just curious, what's the realistic scaling of pre_tag_map?
> 
> I'm looking to maybe put 50k+ entries in it and reload it every few
> minutes..
> 
> Any real gotchas w/ that approach?
> 
> --
> Tim

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


[pmacct-discussion] pmacct 1.7.4p1 released !

2020-02-09 Thread Paolo Lucente


VERSION.
1.7.4p1


DESCRIPTION.
pmacct is a small set of multi-purpose passive network monitoring tools. It
can account, classify, aggregate, replicate and export forwarding-plane data,
ie. IPv4 and IPv6 traffic; collect and correlate control-plane data via BGP
and BMP; collect and correlate RPKI data; collect infrastructure data via
Streaming Telemetry. Each component works both as a standalone daemon and
as a thread of execution for correlation purposes (ie. enrich NetFlow with
BGP data).

A pluggable architecture allows storing collected forwarding-plane data in
memory tables, RDBMS (MySQL, PostgreSQL, SQLite), noSQL databases (MongoDB,
BerkeleyDB), AMQP (RabbitMQ) and Kafka message exchanges and flat-files.
pmacct offers customizable historical data breakdown, data enrichments like
BGP and IGP correlation and GeoIP lookups, filtering, tagging and triggers.
Libpcap, Linux Netlink/NFLOG, sFlow v2/v4/v5, NetFlow v5/v8/v9 and IPFIX are
all supported as inputs for forwarding-plane data. Replication of incoming
NetFlow, IPFIX and sFlow datagrams is also available. Statistics can be
easily exported to time-series databases like ElasticSearch and InfluxDB and
to traditional tools like Cacti, RRDtool, MRTG, Net-SNMP, GNUPlot, etc.

Control-plane and infrastructure data, collected via BGP, BMP and Streaming
Telemetry, can be all logged real-time or dumped at regular time intervals
to AMQP (RabbitMQ) and Kafka message exchanges and flat-files.


HOMEPAGE.
http://www.pmacct.net/


DOWNLOAD.
http://www.pmacct.net/pmacct-1.7.4p1.tar.gz


CHANGELOG.
! fix, pre_tag_map: a memory leak in pretag_entry_process() has been
  introduced in 1.7.4. Thanks to Fabien Vincent and Olivier Benghozi
  for their support resolving the issue.


NOTES.
See UPGRADE file.


Cheers,
Paolo

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] [PATCH 2/2] * nfprobe: per-interface flows

2020-01-28 Thread Paolo Lucente


Hi Mikhail,

Thanks very much also for this contribution. I have applied the patch, it
does make full sense. Also in this case i removed the config knob part:
interfaces, if populated, _must_ be taken into account when
comparing flows; if not populated, which is the default case, it will
just be comparing a few zeroes. Here is the commit log, again, with kudos
to you:

https://github.com/pmacct/pmacct/commit/977faeee8e794e24b85beaf5b33e1c4be9f3fb6f

Paolo
 
On Fri, Jan 17, 2020 at 01:01:53PM +0100, Mikhail Sennikovsky wrote:
> nfprobe flow tree does not take interface index into consideration
> when searching/aggregating the flow data.
> This means that for the case multiple pcap interfaces are being
> monitored and same src/dst ip/port traffic pattern is being handled
> over several of those interfaces, this will all land in the same
> FLOW entry.
> This leads to the issues that flows being handled by one network
> interface are actually reported via NetFlow (via Flow InputInt
> and OutputInt fields) as being handled by another network interface
> (held by the FLOW entry originally created for matching the given
> src/dst ip/port traffic pattern).
> 
> Introduce a new nfprobe_per_interface_flows config variable to
> allow taking flow interface indexes into consideration when
> matching/searching for the FLOW entries in the flow cache tree.
> 
> Signed-off-by: Mikhail Sennikovsky 
> ---
>  src/cfg.c   |  1 +
>  src/cfg.h   |  1 +
>  src/cfg_handlers.c  | 22 ++
>  src/cfg_handlers.h  |  1 +
>  src/nfprobe_plugin/nfprobe_plugin.c |  8 
>  5 files changed, 33 insertions(+)
> 
> diff --git a/src/cfg.c b/src/cfg.c
> index ddad54c..2b102bd 100644
> --- a/src/cfg.c
> +++ b/src/cfg.c
> @@ -414,6 +414,7 @@ static const struct _dictionary_line dictionary[] = {
>{"sfprobe_ifindex", cfg_key_nfprobe_ifindex},
>{"sfprobe_ifspeed", cfg_key_sfprobe_ifspeed},
>{"sfprobe_ifindex_override", cfg_key_nfprobe_ifindex_override},
> +  {"nfprobe_per_interface_flows", cfg_key_nfprobe_per_interface_flows},
>{"tee_receivers", cfg_key_tee_receivers},
>{"tee_source_ip", cfg_key_nfprobe_source_ip},
>{"tee_transparent", cfg_key_tee_transparent},
> diff --git a/src/cfg.h b/src/cfg.h
> index 631b19b..d652a59 100644
> --- a/src/cfg.h
> +++ b/src/cfg.h
> @@ -550,6 +550,7 @@ struct configuration {
>int nfprobe_ifindex_type;
>int nfprobe_dont_cache;
>int nfprobe_tstamp_usec;
> +  int nfprobe_per_interface_flows;
>char *sfprobe_receiver;
>char *sfprobe_agentip;
>int sfprobe_agentsubid;
> diff --git a/src/cfg_handlers.c b/src/cfg_handlers.c
> index eac176c..3fa6ed5 100644
> --- a/src/cfg_handlers.c
> +++ b/src/cfg_handlers.c
> @@ -5859,6 +5859,28 @@ int cfg_key_nfprobe_dont_cache(char *filename, char 
> *name, char *value_ptr)
>return changes;
>  }
>  
> +int cfg_key_nfprobe_per_interface_flows(char *filename, char *name, char 
> *value_ptr)
> +{
> +  struct plugins_list_entry *list = plugins_list;
> +  int value, changes = 0;
> +
> +  value = parse_truefalse(value_ptr);
> +  if (value < 0) return ERR;
> +
> +  if (!name) for (; list; list = list->next, changes++) 
> list->cfg.nfprobe_per_interface_flows = value;
> +  else {
> +for (; list; list = list->next) {
> +  if (!strcmp(name, list->name)) {
> +list->cfg.nfprobe_per_interface_flows = value;
> +changes++;
> +break;
> +  }
> +}
> +  }
> +
> +  return changes;
> +}
> +
>  int cfg_key_sfprobe_receiver(char *filename, char *name, char *value_ptr)
>  {
>struct plugins_list_entry *list = plugins_list;
> diff --git a/src/cfg_handlers.h b/src/cfg_handlers.h
> index 5ab0585..5d90a9c 100644
> --- a/src/cfg_handlers.h
> +++ b/src/cfg_handlers.h
> @@ -288,6 +288,7 @@ extern int cfg_key_nfprobe_ifindex(char *, char *, char 
> *);
>  extern int cfg_key_nfprobe_ifindex_override(char *, char *, char *);
>  extern int cfg_key_nfprobe_tstamp_usec(char *, char *, char *);
>  extern int cfg_key_nfprobe_dont_cache(char *, char *, char *);
> +extern int cfg_key_nfprobe_per_interface_flows(char *, char *, char *);
>  extern int cfg_key_sfprobe_receiver(char *, char *, char *);
>  extern int cfg_key_sfprobe_agentip(char *, char *, char *);
>  extern int cfg_key_sfprobe_agentsubid(char *, char *, char *);
> diff --git a/src/nfprobe_plugin/nfprobe_plugin.c 
> b/src/nfprobe_plugin/nfprobe_plugin.c
> index f7b0dc6..46a8166 100644
> --- a/src/nfprobe_plugin/nfprobe_plugin.c
> +++ b/src/nfprobe_plugin/nfprobe_plugin.c
> @@ -150,6 +150,14 @@ flow_compare(struct FLOW *a, struct FLOW *b)
>   if (a->port[1] != b->port[1])
>   return (ntohs(a->port[1]) > ntohs(b->port[1]) ? 1 : -1);
>  
> + if (config.nfprobe_per_interface_flows) {
> + if (a->ifindex[0] != b->ifindex[0])
> + return (a->ifindex[0] > b->ifindex[0] ? 1 : -1);
> +
> + if (a->ifindex[1] != 

Re: [pmacct-discussion] Periodically printing flowtrack structure values

2020-01-27 Thread Paolo Lucente


Hi,

Unfortunately this is not possible.

Paolo

On Mon, Jan 27, 2020 at 01:51:03PM +0530, HEMA CHANDRA YEDDULA wrote:
> 
> Hi Paolo,
> 
> Thanks for previous replies.
> 
> Is it possible to log FLOWTRACK structure components like flows_exported,
> packets_exported etc. to some flat files every 5m, refreshing them
> every 1m?
> 
> Thanks and Regards,
> Hema Chandra 
> 
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] pmacct 1.7.4 released !

2020-01-27 Thread Paolo Lucente


Hi Olivier,

Thanks for reporting this. We could not yet make progress with Fabien as
he is busy; but i have his config. Any chance you can send me privately
your config so i may start looking for a common denominator between the two? 

Paolo

On Mon, Jan 27, 2020 at 10:42:51PM +0100, Olivier Benghozi wrote:
> Hi !
> 
> oom-killer just killed my instance, so «same here»...
> 
> Some infos:
> 
> 
> # src/nfacctd -V
> NetFlow Accounting Daemon, nfacctd 1.7.4-git (20191126-01+c5)
> 
> Arguments:
>  '--enable-jansson' '--enable-64bit' '--enable-zmq' '--enable-pgsql' 
> '--enable-l2' '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins' 
> '--enable-st-bins'
> 
> Libs:
> libpcap version 1.8.1
> PostgreSQL 90615
> jansson 2.9
> ZeroMQ 4.2.1
> 
> System:
> Linux 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1+deb9u2 (2019-05-13) x86_64
> 
> Compiler:
> gcc 6.3.0
> 
> 
> > Le 7 janv. 2020 à 09:06, Fabien VINCENT  a écrit :
> > 
> > Hi Paolo,
> > Thanks for this release and enhancements !
> > Since upgrade, I see a huge memory leak without any reason.
> > https://github.com/pmacct/pmacct/issues/356
> > The only changes is I dist-upgrade the machine itself as installed from 
> > source 1.7.4 release.
> > I use print plugin on my side on nfacctd processes.
> > Please let me know how I can troubleshoot this, I will rollback to 1.7.3 
> > temporarly
> > Regards,
> > 
> > Le 31-12-2019 17:48, Paolo Lucente a écrit :
> >> VERSION.
> >> 1.7.4
> 

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] [PATCH 1/2] * pmacctd: allow configuring pcap_setdirection

2020-01-24 Thread Paolo Lucente


Hi Mikhail,

Many thanks for your contribution. I have slightly revised your patch so
that configure.ac tests libpcap for pcap_setdirection() (as doing it the
way you did would fail to compile against older libpcap versions). See the
commit log here (of course with kudos to you):

https://github.com/pmacct/pmacct/commit/0780a48136f0f8bf9ad1e796253cfa100f64a90f
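The gist of such a check, as an illustrative sketch only (the macro and
message names are invented here; the commit above is authoritative), is an
AC_CHECK_LIB probe in configure.ac:

```
dnl illustrative sketch: probe libpcap for pcap_setdirection() so that
dnl builds against older libpcap versions keep working
AC_CHECK_LIB([pcap], [pcap_setdirection],
  [AC_DEFINE([PCAP_SET_DIRECTION], [1], [libpcap has pcap_setdirection()])],
  [AC_MSG_NOTICE([pcap_setdirection() not available in this libpcap])])
```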

I will review your other patch soon.

Paolo

On Fri, Jan 17, 2020 at 01:01:52PM +0100, Mikhail Sennikovsky wrote:
> The pcap direction configuration was explicitly disabled
> in 81fe649917036b9ef1ed4b3ea521befcaf36496b,
> however even before that commit it apparently did not work,
> because the pcap_setdirection must be called after pcap_activate,
> not before it.
> 
> Introduce a new config variable, pcap_set_direction
> to allow pmacctd do pcap_setdirection.
> 
> Signed-off-by: Mikhail Sennikovsky 
> ---
>  src/cfg.c  |  1 +
>  src/cfg.h  |  1 +
>  src/cfg_handlers.c | 14 ++
>  src/cfg_handlers.h |  1 +
>  src/pmacctd.c  | 15 +++
>  5 files changed, 24 insertions(+), 8 deletions(-)
> 
> diff --git a/src/cfg.c b/src/cfg.c
> index 0dcf061..ddad54c 100644
> --- a/src/cfg.c
> +++ b/src/cfg.c
> @@ -47,6 +47,7 @@ static const struct _dictionary_line dictionary[] = {
>{"pcap_interface_wait", cfg_key_pcap_interface_wait},
>{"pcap_direction", cfg_key_pcap_direction},
>{"pcap_ifindex", cfg_key_pcap_ifindex},
> +  {"pcap_set_direction", cfg_key_pcap_set_direction},
>{"pcap_interfaces_map", cfg_key_pcap_interfaces_map},
>{"core_proc_name", cfg_key_proc_name},
>{"proc_priority", cfg_key_proc_priority},
> diff --git a/src/cfg.h b/src/cfg.h
> index 3641935..631b19b 100644
> --- a/src/cfg.h
> +++ b/src/cfg.h
> @@ -474,6 +474,7 @@ struct configuration {
>char *pcap_savefile;
>int pcap_direction;
>int pcap_ifindex;
> +  int pcap_set_direction;
>char *pcap_interfaces_map;
>char *pcap_if;
>int pcap_if_wait;
> diff --git a/src/cfg_handlers.c b/src/cfg_handlers.c
> index 0818b12..eac176c 100644
> --- a/src/cfg_handlers.c
> +++ b/src/cfg_handlers.c
> @@ -547,6 +547,20 @@ int cfg_key_pcap_ifindex(char *filename, char *name, 
> char *value_ptr)
>return changes;
>  }
>  
> +int cfg_key_pcap_set_direction(char *filename, char *name, char *value_ptr)
> +{
> +  struct plugins_list_entry *list = plugins_list;
> +  int value, changes = 0;
> +
> +  value = parse_truefalse(value_ptr);
> +  if (value < 0) return ERR;
> +
> +  for (; list; list = list->next, changes++) list->cfg.pcap_set_direction = 
> value;
> +  if (name) Log(LOG_WARNING, "WARN: [%s] plugin name not supported for key 
> 'pcap_set_direction'. Globalized.\n", filename);
> +
> +  return changes;
> +}
> +
>  int cfg_key_pcap_interfaces_map(char *filename, char *name, char *value_ptr)
>  {
>struct plugins_list_entry *list = plugins_list;
> diff --git a/src/cfg_handlers.h b/src/cfg_handlers.h
> index 3fdd103..5ab0585 100644
> --- a/src/cfg_handlers.h
> +++ b/src/cfg_handlers.h
> @@ -48,6 +48,7 @@ extern int cfg_key_pcap_savefile_delay(char *, char *, char 
> *);
>  extern int cfg_key_pcap_savefile_replay(char *, char *, char *);
>  extern int cfg_key_pcap_direction(char *, char *, char *);
>  extern int cfg_key_pcap_ifindex(char *, char *, char *);
> +extern int cfg_key_pcap_set_direction(char *, char *, char *);
>  extern int cfg_key_pcap_interfaces_map(char *, char *, char *);
>  extern int cfg_key_use_ip_next_hop(char *, char *, char *);
>  extern int cfg_key_decode_arista_trailer(char *, char *, char *);
> diff --git a/src/pmacctd.c b/src/pmacctd.c
> index 88fc367..1376a13 100644
> --- a/src/pmacctd.c
> +++ b/src/pmacctd.c
> @@ -152,18 +152,17 @@ pcap_t *pm_pcap_open(const char *dev_ptr, int snaplen, 
> int promisc,
>if (protocol)
>  Log(LOG_WARNING, "WARN ( %s/core ): pcap_protocol specified but linked 
> against a version of libpcap that does not support pcap_set_protocol().\n", 
> config.name);
>  #endif
> -
> -  /* XXX: rely on external filtering for now */
> -/* 
> -  ret = pcap_setdirection(p, direction);
> -  if (ret < 0 && direction != PCAP_D_INOUT)
> -Log(LOG_WARNING, "INFO ( %s/core ): direction specified but linked 
> against a version of libpcap that does not support pcap_setdirection().\n", 
> config.name);
> -*/
> -
> + 
>ret = pcap_activate(p);
>if (ret < 0)
>  goto err;
>  
> +  if (config.pcap_set_direction) {
> +ret = pcap_setdirection(p, direction);
> +if (ret < 0 && direction != PCAP_D_INOUT)
> +  Log(LOG_WARNING, "INFO ( %s/core ): direction specified but linked 
> against a version of libpcap that does not support pcap_setdirection()\n", 
> config.name);
> +  }
> +
>return p;
>  
>  err:
> -- 
> 2.7.4
> 
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] sfacctd and nfacctd using the same db tables

2020-01-22 Thread Paolo Lucente


Hi Jordan,

Yes, that is a valid scenario. To minimize the effects of locking (whose
impact is only an increased memory footprint, due to the writers sitting
there waiting for the lock), you could play with sql_startup_delay so
that the two daemons fire up writers with some time offset; but, if the
two daemons touch distinct sets of tuples and you successfully enforce
an INSERT-only kind of behaviour, then you may try disabling
locking, ie. 'sql_locking_style: none'.
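A minimal sketch of the idea (delay values are placeholders to be tuned;
each snippet goes into the respective daemon's config):

```
! sfacctd side: writers start 30s into the interval, no table locking
sql_startup_delay: 30
sql_locking_style: none
!
! nfacctd side: writers start right away, no table locking
sql_startup_delay: 0
sql_locking_style: none
```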

Paolo

On Wed, Jan 22, 2020 at 11:39:19AM +0200, Jordan Grigorov (Neterra NMT) wrote:
> Hello,
> 
> We're using a mixed network environment with equipment that supports either
> sflow or netflow.
> 
> Currently we're using sfacctd only and mysql plugin which stores data into
> MariaDB CS database.
> 
> Is there any option to use both sfacctd and nfacctd that are using the same
> DB and tables?
> 
> 
> Thank you in advance.
> 
> Kind Regards,
> 
> 
> -- 
> ---
> 
> 
>Jordan Grigorov
> 
> 
>Network Engineer IP Services
> 
> 
> 

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] uninitialized req passed to plugin_requests and load_id_file in pm_pcap_cb??

2020-01-21 Thread Paolo Lucente


Hi Mikhail,

I see, yes. The reason nothing happens (not out of coincidence :-)) is
that all the reads check whether we are in the context of a tee
plugin (which does not apply to pmacctd / uacctd). But i agree with you:
that is a recipe for potential disaster, so i committed a memset() there:
https://github.com/pmacct/pmacct/commit/ff77d4ba58c5e11205577dff32fe1696bfca360d
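The gist of the change is along these lines (illustrative; the commit
above is the authoritative diff):

```
  struct plugin_requests req;

  /* zero the structure right after declaration so that later readers,
     ie. exec_plugins() and load_id_file(), never see stale data */
  memset(&req, 0, sizeof(req));
```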

Thanks very much for your input!

Paolo

On Tue, Jan 21, 2020 at 11:03:03AM +0100, Mikhail Sennikovsky wrote:
> Hi Paolo,
> 
> The pm_pcap_cb has however its own instance of struct plugin_requests
> req : 
> https://github.com/pmacct/pmacct/blob/d72440dc9a7d0d0a7ed9502f1dd31b90105b1d95/src/nl.c#L51
> ,
> and noone zeroes it up before using it seems.
> 
> Mikhail
> 
> On Tue, 21 Jan 2020 at 02:25, Paolo Lucente  wrote:
> >
> >
> > Hi Mikhail,
> >
> > If you see all the daemons that make use of the 'req' structure have a
> > memset() for 'req' shortly after its declaration. For example here in
> > pmacctd: https://github.com/pmacct/pmacct/blob/master/src/pmacctd.c#L360
> >
> > Paolo
> >
> > On Fri, Jan 17, 2020 at 07:10:13PM +0100, Mikhail Sennikovsky wrote:
> > > Hi all,
> > >
> > > I was running through the pm_pcap_cb code, and it looks like the "req"
> > > passed to exec_plugins(, ); at
> > > https://github.com/pmacct/pmacct/blob/d72440dc9a7d0d0a7ed9502f1dd31b90105b1d95/src/nl.c#L167
> > > and to load_id_file at
> > > https://github.com/pmacct/pmacct/blob/d72440dc9a7d0d0a7ed9502f1dd31b90105b1d95/src/nl.c#L179
> > > and below
> > > is actually uninitialized. (See struct plugin_requests req;  at
> > > https://github.com/pmacct/pmacct/blob/d72440dc9a7d0d0a7ed9502f1dd31b90105b1d95/src/nl.c#L51
> > > )
> > > Note that the exec_plugins and load_id_file actually read from req
> > > rather than write to it.
> > > If I'm getting this right, that code might be working just by coincidence.
> > >
> > > Thanks,
> > > Mikhail
> > >
> > > ___
> > > pmacct-discussion mailing list
> > > http://www.pmacct.net/#mailinglists
> >
> > ___
> > pmacct-discussion mailing list
> > http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] uninitialized req passed to plugin_requests and load_id_file in pm_pcap_cb??

2020-01-20 Thread Paolo Lucente


Hi Mikhail,

If you look, all the daemons that make use of the 'req' structure have a
memset() for 'req' shortly after its declaration. For example, here in
pmacctd: https://github.com/pmacct/pmacct/blob/master/src/pmacctd.c#L360

Paolo

On Fri, Jan 17, 2020 at 07:10:13PM +0100, Mikhail Sennikovsky wrote:
> Hi all,
> 
> I was running through the pm_pcap_cb code, and it looks like the "req"
> passed to exec_plugins(, ); at
> https://github.com/pmacct/pmacct/blob/d72440dc9a7d0d0a7ed9502f1dd31b90105b1d95/src/nl.c#L167
> and to load_id_file at
> https://github.com/pmacct/pmacct/blob/d72440dc9a7d0d0a7ed9502f1dd31b90105b1d95/src/nl.c#L179
> and below
> is actually uninitialized. (See struct plugin_requests req;  at
> https://github.com/pmacct/pmacct/blob/d72440dc9a7d0d0a7ed9502f1dd31b90105b1d95/src/nl.c#L51
> )
> Note that the exec_plugins and load_id_file actually read from req
> rather than write to it.
> If I'm getting this right, that code might be working just by coincidence.
> 
> Thanks,
> Mikhail
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Only meaningful custom primitives should pick the value

2020-01-20 Thread Paolo Lucente


Hi,

You could put the port filter in a pcap_filter and have two pmacctd
instances, one reading http traffic and one reading dns traffic, each
configured to pick up and export only the relevant primitives. Would that
work for you?
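A rough sketch of the idea, assuming your custom primitives are already
defined via an aggregate_primitives map (interface name and filters are
placeholders):

```
! pmacctd instance #1: http only
pcap_interface: eth0
pcap_filter: tcp port 80
aggregate: src_host, dst_host, httpRequestHost, httpStatusCode
!
! pmacctd instance #2: dns only
pcap_interface: eth0
pcap_filter: udp port 53
aggregate: src_host, dst_host, dnsType, dnsDomainName
```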

Paolo

On Fri, Jan 17, 2020 at 11:45:26AM +0530, HEMA CHANDRA YEDDULA wrote:
> Hi,
> 
> Thanks for the prompt reply. I think there is some misinterpretation of the 
> scenario.I'm
> trying to explain it litte more explicitly
> 
>  
> We want to add some primitives defined by us with our PEN value. And the 
> primitives are
> of different protocols like http and dns in same template. In our case 
> httpStatusCode,
> httpRequestHost, dnsType and dnsDomainName are the primitives of interest in 
> our case.
> We have regex based payload processing and payload offset defined for each 
> primitive.
> For suppose if the flow has http request packets then only httpRequestHost 
> should be
> processed and pick the value and the rest of them should be blank. But what 
> happens here
> is httpRequestHost will get desired value and the rest of them picking some 
> junk from
> the offset ptr of size defined through primitive length.
> 
> So, What we were thinking is to perform a check on port number like if dest 
> port is 80
> then only httprequesthost should pick the value and rest should be blank.
> 
> Is there any way to perform this type of check on port number.
> 
> Thanks and Regards,
> Hema Chandra Yeddula
> 
> 
> 
> 
> 
> 
> On Thu, 16 Jan 2020 23:48:04 +, Paolo Lucente wrote
> Hi,
> 
> If you define certain primitives, those not present in the parsed flow
> entry should be indeed left blank. If that is not the case, then it's a
> bug and i'd like to ask you for a way to reproduce the issue (so your
> config along with a brief capture (template + data packets) of your
> data.
> 
> Paolo
> 
> On Wed, Jan 15, 2020 at 04:07:17PM +0530, HEMA CHANDRA YEDDULA wrote:
> > 
> > Hi,
> > 
> > I have a scenario where we are planning to add custom primitives that 
> > includes fields 
> > across different protocols like http_request_host, http_response_code, 
> > sip_request_uri 
> > and sip_status_code. In the existing version, if they are defined to be 
> > picked up from
> > payload, then all four of them pick some value depending on the length 
> > defined. Is there 
> > any way to sense the protocol and only meaningful custom primitives will 
> > pick the value
> > and the rest should be blank. 
> > 
> > Thanks and regards,
> > Hema Chandra
> > 
> > ---
> > ::Disclaimer::
> > ---
> > 
> > The contents of this email and any attachment(s) are confidential and 
> > intended
> > for the named recipient(s) only. It shall not attach any liability on C-DOT.
> > Any views or opinions presented in this email are solely those of the author
> > and  may  not  necessarily  reflect  the  opinions  of  C-DOT.  Any  form of
> > reproduction, dissemination, copying, disclosure, modification, distribution
> > and / or publication of this message without the prior written consent of 
> > the
> > author of this e-mail is strictly prohibited. If you have received this 
> > email
> > in error please delete it and notify the sender immediately.
> > 
> > ---
> > 
> > 
> > ___
> > pmacct-discussion mailing list
> > http://www.pmacct.net/#mailinglists
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists
> 
> 

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Only meaningful custom primitives should pick the value

2020-01-16 Thread Paolo Lucente


Hi,

If you define certain primitives, those not present in the parsed flow
entry should indeed be left blank. If that is not the case, then it's a
bug and i'd like to ask you for a way to reproduce the issue (so your
config along with a brief capture (template + data packets) of your
data). 

Paolo

On Wed, Jan 15, 2020 at 04:07:17PM +0530, HEMA CHANDRA YEDDULA wrote:
> 
> Hi,
> 
> I have a scenario where we are planning to add custom primitives that 
> includes fields 
> across different protocols like http_request_host, http_response_code, 
> sip_request_uri 
> and sip_status_code. In the existing version, if they are defined to be 
> picked up from
> payload, then all four of them pick some value depending on the length 
> defined. Is there 
> any way to sense the protocol and only meaningful custom primitives will pick 
> the value
> and the rest should be blank. 
> 
> Thanks and regards,
> Hema Chandra
> 
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Processing payload data

2020-01-10 Thread Paolo Lucente


Hi,

Matching on a regex seems indeed a good use-case for this feature - did
you test whether this would work for you in theory? That is, given the
full payload, is there a regex that can extract what you are looking
for? This said, unfortunately this is not implemented today but it would
be a good candidate feature for the future (and it would be killer if you
could code it yourself and share it back). 
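As a quick sanity check of the 'in theory' part, you could run a candidate
regex against a captured payload by hand, something like this (GNU grep
with PCRE support assumed; illustrative only, no such knob exists in
pmacct today):

```
# can a regex isolate the Host header out of a raw HTTP request payload?
printf 'GET / HTTP/1.1\r\nHost: www.example.com\r\n\r\n' | grep -oP '(?<=Host: )[^\r]+'
```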

Paolo
 
On Thu, Jan 09, 2020 at 10:25:04AM +0530, HEMA CHANDRA YEDDULA wrote:
> 
> Hi,
> 
> I want to add 'httpRequestHost' information element 460 custom_primitive as 
> aggregate key
> but the size of this field s not fixed. If length is declared as "vlen" then 
> it is
> extracting complete payload. Is there any way to extract the host based on 
> some regex
> matching.
> 
> Thanks & Regards
> Hema Chandra
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] effort to relicense pmacct from GPL to a BSD-style license

2020-01-09 Thread Paolo Lucente


Hi Karl and Lennert, Community,

Thanks very much for your comments and for the opportunity you are
giving me to address them. 

I'd like to comment publicly - so that we can cast this in stone and
have it archived for time to tell - that there are no hidden reasons,
secret plans or favouritism behind this initiative and the motivations
expressed. The reasons stated are the real reasons.

In the very early days of starting the project i picked a license semi
randomly, that is, after having been recommended to do so. Many years
passed and i focused solely on core development of pmacct. Am i a believer
in restrictive licenses? No, i've never been. I personally believe in
innovating and looking forward rather than making sure nobody is
'copying' (pass me the simplistic term) pmacct - i think by doing
open-source that is simply a given. So it happened one day that we had
this kind of conversation with Job and decided to attempt to re-align
licensing.

Related to the above, I also believe that people so far have contributed
to pmacct not out of desire to comply with the license terms but because
they thought that sharing their code would benefit all parties involved.

The other point that Job raised, let's call it the mortality one, i
truly believe is self-explanatory. I may perhaps add that, over the
course of 17 years, the intellectual thinking of a person may evolve.

One more reason that perhaps did not emerge is - again on a purely
intellectual level - the wish to maximize who uses pmacct, including
inside their products. Licensing has several times been raised as a
concern for integrating pmacct into 3rd-party products, so I hope we can
now put that to rest.

Finally, I too consent to relicense my pmacct work under the proposed
license. I'm excited to see this change happen! :-)

Paolo
 
On Wed, Jan 08, 2020 at 05:05:48PM +0200, Lennert Buytenhek wrote:
> On Wed, Jan 08, 2020 at 08:53:57AM -0600, Karl O. Pinc wrote:
> 
> > > Summary: The pmacct project is looking to relicense its code from the
> > > current GPL license to a more liberal BSD-style license.
> > > 
> > > 1) Faced with our own mortality, it became clear that succession
> > >planning is of paramount importance for this project's continued
> > >success. We contemplated what happens in context of intellectual
> > >property rights should one of pmacct's contributors pass away, and
> > >realized potential heirs won't necessarily desire involvement in this
> > >open source project, potentially hampering changes to intellectual
> > >property policies in the project's future. 
> > > 
> > > 2) We suspect there are entities who violate the terms of pmacct's
> > >current GPL license, but at the same time we don't wish to litigate.
> > >Instead of getting infringers to change their behavior, relicensing
> > >the project could be another way to resolve the potential for
> > >conflict: we see benefits to removing rules we don't plan on
> > >enforcing anyway.
> > 
> > On Wed, 8 Jan 2020 16:02:35 +0200
> > Lennert Buytenhek  wrote:
> > 
> > > Although the stated reasoning
> > > for the relicensing effort feels somewhat specious to me, 
> > 
> > I agree.  (Disclaimer: I'm not a contributor.)
> > 
> > 1) If we wanted to change the licensing, we couldn't, after we're
> > dead.
> > 
> > 2) People are violating our rights and we don't want to do
> > anything about it.
> > 
> > I don't want to argue one way or another but it would be nice
> > to have a real reason.  There are good reasons available,
> > all the way down to "I wrote most of it and I changed my mind."
> > 
> > Even if there's some commercial entity that wants to sell
> > pmacct in their product and won't because of the licensing,
> > it would be nice to know this.  Especially knowing who.
> > (E.g. We know that Amazon uses PostgreSQL as the basis
> > of their RDS database product, and does not contribute
> > back as far as I can tell.)  It'd be nice to know who 
> > the contributors are helping.
> 
> FWIW, I fully agree with this analysis.  I chose to agree with the
> relicensing anyway as I don't think this is an important enough
> battle to fight.   (If someone were to ask me to relicense my Linux
> kernel contributions under a closed-source-able license, I would be
> a lot more upset.)
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


[pmacct-discussion] pmacct 1.7.4 released !

2019-12-31 Thread Paolo Lucente


VERSION.
1.7.4


DESCRIPTION.
pmacct is a small set of multi-purpose passive network monitoring tools. It
can account, classify, aggregate, replicate and export forwarding-plane data,
ie. IPv4 and IPv6 traffic; collect and correlate control-plane data via BGP
and BMP; collect and correlate RPKI data; collect infrastructure data via
Streaming Telemetry. Each component works both as a standalone daemon and
as a thread of execution for correlation purposes (ie. enrich NetFlow with
BGP data).

A pluggable architecture allows storing collected forwarding-plane data into
memory tables, RDBMS (MySQL, PostgreSQL, SQLite), noSQL databases (MongoDB,
BerkeleyDB), AMQP (RabbitMQ) and Kafka message exchanges and flat-files.
pmacct offers customizable historical data breakdown, data enrichments like
BGP and IGP correlation and GeoIP lookups, filtering, tagging and triggers.
Libpcap, Linux Netlink/NFLOG, sFlow v2/v4/v5, NetFlow v5/v8/v9 and IPFIX are
all supported as inputs for forwarding-plane data. Replication of incoming
NetFlow, IPFIX and sFlow datagrams is also available. Statistics can be
easily exported to time-series databases like ElasticSearch and InfluxDB and
to traditional tools like Cacti, RRDtool, MRTG, Net-SNMP, GNUPlot, etc.

Control-plane and infrastructure data, collected via BGP, BMP and Streaming
Telemetry, can be all logged real-time or dumped at regular time intervals
to AMQP (RabbitMQ) and Kafka message exchanges and flat-files.


HOMEPAGE.
http://www.pmacct.net/


DOWNLOAD.
http://www.pmacct.net/pmacct-1.7.4.tar.gz


CHANGELOG.
+ Released pmgrpcd.py v3: a Streaming Telemetry collector and decoder
  for multi-vendor environments written in Python3. It supports gRPC
  transport along with Protobuf encoding as input and can output to
  Kafka with Avro encoding. Output to files and JSON encoding is
  currently supported sending data via ZMQ to pmtelemetryd first. It
  was tested working with data input from Cisco and Huawei routers
  and v3 replaces v2. Thanks to the Streaming Telemetry core team:
  Matthias Arnold ( @tbearma1 ), Camilo Cardona ( @jccardonar ),
  Thomas Graf ( @graf3 , @graf3net ), Paolo Lucente ( @paololucente ).
+ Introduced support for the 'vxlan' VXLAN/VNI primitive in all traffic
  daemons (NetFlow/IPFIX, sFlow and libpcap/ULOG). Existing inner tunnel
  primitives (ie. tunnel_src_host, tunnel_dst_host, tunnel_proto, etc.)
  have been wired to the VXLAN decoding and new ones (tunnel_src_mac, 
  tunnel_dst_mac, tunnel_src_port, tunnel_dst_port) were defined.
+ BMP daemon: added support for Peer Up message namespace for TLVs
  (draft-ietf-grow-bmp-peer-up) and also support for Route Monitor
  and Peer Down TLVs (draft-ietf-grow-bmp-tlv).
+ BGP, BMP daemons: in addition to existing JSON export, data can now
  be exported in Apache Avro format. There is also support for the
  Confluent Schema Registry.
+ Introduced support for JSON-encoded Apache Avro encoding. While the
  binary-encoded Apache Avro is always recommended for any production
  scenarios (also to optionally leverage Confluent Schema Registry
  support), JSON-encoded is powerful for testing and troubleshooting
  scenarios.
+ sfprobe plugin: added support for IPv6 transport for sFlow export.
  sfprobe_agentip is an IP address put in the header of the sFlow
  packet. If underlying transport is IPv6, this must be configured to
  an IPv6 address.
+ zmq_common.[ch]: Improved modularity of the ZMQ internal API and
  decoupled bind/connect from push/pull and pub/sub; also improved
  support for inproc sockets. All to increase the amount of use-cases
  covered by the API.
+ bgp_peer_src_as_map: added 'filter' key to cover pmacctd/uacctd use
  cases.
+ nfprobe, sfprobe plugins: introduced [sn]fprobe_index_override to
  override ifindexes dynamically determined (ie. by NFLOG) with values
  computed by [sn]fprobe_ifindex.
+ MySQL, PostgreSQL plugins: added support for SSL/TLS connections by
  specifying a CA certificate (sql_conn_ca_file).
+ Kafka, AMQP plugins: amqp_markers and kafka_markers have now been
  properly re-implemented when output encoding is Avro using an own
  Avro schema (instead of squatting pieces of JSON in the data stream
  for the very purpose).
+ print plugin: introduced print_write_empty_file config knob (true,
  false) to create an empty output file when there are no cache entries
  to purge. Such behaviour was present in versions up to 0.14 and may
  be preferred by some to the new >= 1.5 versions behaviour. Thanks to
  Lee Yongjae ( @setup74 ) for the contribution.
! fix, signals.c: signals handling has been restructured in order to
  block certain signals during critical sections of data processing.
  Thanks to Vaibhav Phatarpekar ( @vphatarp ) for the contribution.
! fix, signals.c: slimmed reload() signal handler code and moved it to
  a synchronous section. The handler is to reset logging output to
  files or syslog. Thanks to Jared Mauch ( @jaredmauch ) for his
  support resolving this.
! fix, pmb

Re: [pmacct-discussion] only log_type update in BMP messages

2019-12-10 Thread Paolo Lucente


Hi Rasto,

Could you try the master code on GitHub or the 1.7.4 branch (currently on
freeze and due to be released later in the month)? In the last few
months there has been plenty of work on the BMP-related code.

Should that still not work, it would help if you could generate a
brief capture and send it via unicast email. Here is how to produce it:

https://github.com/pmacct/pmacct/blob/1.7.4/QUICKSTART#L2863-#L2874
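In short it boils down to something like this (interface and port are
placeholders; use the port your router exports BMP to, ie. whatever
bmp_daemon_port is set to):

```
# full-size packets of the BMP session, written to a file you can then send over
tcpdump -i eth0 -s 0 -w bmp-capture.pcap tcp port 1790
```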

Paolo

On Tue, Dec 10, 2019 at 06:15:21PM +0100, Rasto Rickardt wrote:
> Hello,
> 
> i am running 1.7.3 pmbmpd built with json support as a collector for IOS XR
> router.
> 
> Everything is working as expected, except that i only see log_type update
> messages, no withdraws when the prefix from routing table disappeares and
> routers sends it towards collector.
> 
> Config is really simple:
> 
> bmp_daemon: true
> bmp_daemon_msglog_file: /tmp/bmp-bb
> bmp_daemon_max_peers: 10
> 
> is there something i am missing in my setup?
> 
> With openbmpd i can see withdraws correctly.
> 
> Kind Regards
> 
> Rasto Rickardt
> (null)
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Incorporating GTP fields in IPFIX data

2019-12-06 Thread Paolo Lucente


Hi,

Do you have NetFlow/IPFIX data containing such info already and you want
to collect it with pmacct? If so, it should be possible to define some
custom primitives for that: ping me via unicast email sending a sample
trace of such data.

If the question is instead to read some GTP traffic and create NetFlow/
IPFIX out of it, then it's a matter of coding the capture / decoding part
(that is to say, it's not a quick win; if you can code it yourself and
contribute it back, that would be great).

Paolo

On Thu, Dec 05, 2019 at 04:40:12PM +0530, HEMA CHANDRA YEDDULA wrote:
> 
> Hi paolo,
> 
> This is Hema Chandra from CDOT india. I have some queries regarding pmacct 
> which are as follows - Is there any scope to incorporate GTP fields like 
> Mobile number, session_duration, cell ID etc into netflow/ipfix data? If so 
> can u please share some insights about this.
> 
> Thanks & Regards,
> Hema Chandra Yeddula,
> Research Engineer,
> Cert team,
> CDOT-Delhi
> 
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] pmbgpd looking glass don't start

2019-10-25 Thread Paolo Lucente


Hi Alex,

The Looking Glass feature depends on ZeroMQ, so you will need to compile
pmacct with --enable-zmq. I will produce a small patch asap to output an
error if bgp_daemon_lg is set to true but pmacct is not compiled against
ZeroMQ: this dependency is mentioned in both CONFIG-KEYS and
QUICKSTART, but the code should nevertheless throw an error.
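In practice that means re-running configure with the extra switch and
rebuilding, roughly (sketched from the switches visible in your -V output
below):

```
./configure --prefix=/opt --enable-mysql --enable-jansson --enable-geoipv2 \
            --enable-l2 --enable-64bit --enable-zmq
make && make install
```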

Paolo

On Thu, Oct 24, 2019 at 03:47:55PM +0300, Alex Ivanov wrote:
> Hi all,
> i trying to set up looking glass feature, but no success.
> pmbgpd starts, but bgp_daemon_lg_* options dont' work, socket with
> specified address/port not shown.
> Please help, what i doing wrong.
> ]# /opt/sbin/pmbgpd -f /opt/etc/pmbgpd.conf -d -g
> DEBUG: [/opt/etc/pmbgpd.conf] plugin name/type: 'default'/'core'.
> DEBUG: [/opt/etc/pmbgpd.conf] daemonize:false
> DEBUG: [/opt/etc/pmbgpd.conf] debug:true
> DEBUG: [/opt/etc/pmbgpd.conf] bgp_daemon_lg:true
> DEBUG: [/opt/etc/pmbgpd.conf] bgp_daemon_lg_ip:192.168.31.10
> DEBUG: [/opt/etc/pmbgpd.conf] bgp_daemon_lg_port:17900
> DEBUG: [/opt/etc/pmbgpd.conf] bgp_daemon_lg_user:pmacct
> DEBUG: [/opt/etc/pmbgpd.conf] bgp_daemon_lg_passwd:arealsmartpwd
> DEBUG: [/opt/etc/pmbgpd.conf] bgp_daemon_as:57629
> DEBUG: [/opt/etc/pmbgpd.conf] bgp_daemon_ip:127.0.0.250
> DEBUG: [/opt/etc/pmbgpd.conf] bgp_daemon_port:1791
> DEBUG: [/opt/etc/pmbgpd.conf] debug:true
> DEBUG: [/opt/etc/pmbgpd.conf] bgp_daemon_lg:true
> INFO ( default/core ): pmacct BGP Collector Daemon, pmbgpd 1.7.4-git
> (20190618-01)
> INFO ( default/core ):  '--prefix=/opt' '--enable-mysql' '--enable-jansson'
> '--enable-geoipv2' '--enable-l2' '--enable-64bit' '--enable-traffic-bins'
> '--enable-bgp-bins' '--enable-bmp-bins' '--enable-st-bins'
> INFO ( default/core ): Reading configuration file '/opt/etc/pmbgpd.conf'.
> INFO ( default/core ): maximum BGP peers allowed: 4
> INFO ( default/core ): waiting for BGP data on 127.0.0.250:1791
> INFO ( default/core ): [127.0.0.30] BGP peers usage: 1/4
> INFO ( default/core ): [192.168.31.10] Capability: MultiProtocol [1] AFI
> [1] SAFI [1]
> INFO ( default/core ): [192.168.31.10] Capability: 4-bytes AS [41] ASN
> [57629]
> INFO ( default/core ): [192.168.31.10] BGP_OPEN: Local AS: 57629 Remote AS:
> 57629 HoldTime: 240
> DEBUG ( default/core ): [192.168.31.10] BGP_KEEPALIVE received
> DEBUG ( default/core ): [192.168.31.10] BGP_KEEPALIVE sent
> 
> # ss -tnlp | grep 1791
> LISTEN 0  1  127.0.0.250:1791 *:*
> users:(("pmbgpd",pid=11974,fd=3))
> 
> # ss -tnlp | grep 17900
> #
> 
> Full config:
> # cat /opt/etc/pmbgpd.conf
> daemonize: false
> !pidfile: /var/run/pmbgpd.pid
> !promisc: false
> debug: true
> 
> bgp_daemon_lg: true
> bgp_daemon_lg_ip: 192.168.31.10
> bgp_daemon_lg_port: 17900
> bgp_daemon_lg_user: pmacct
> bgp_daemon_lg_passwd: arealsmartpwd
> 
> bgp_daemon_as: 57629
> bgp_daemon_ip: 127.0.0.250
> bgp_daemon_port: 1791

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] BGP AS values are 0

2019-10-20 Thread Paolo Lucente


He he he, 'fallback' is the legacy keyword for 'longest'. You should use
'longest', yes. High moments for a developer :-) 
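That is, in your config it becomes:

```
pmacctd_as: longest
pmacctd_net: longest
```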

Paolo

On Sun, Oct 20, 2019 at 10:37:51AM -0400, Brooks Swinnerton wrote:
> I tried switching over the iBGP session to eBGP but it oddly started
> putting my AS as the `dst_as` for quite a few flows. I suspect this may
> have been because of my BGP configuration now that it's not wired up as a
> route reflector. I'll investigate this more, but option two intrigues me.
> Looking at the documentation
> <https://github.com/pmacct/pmacct/blob/93e414f5f34a380281328df58069cc521c33a3c5/CONFIG-KEYS#L1743>
> it
> appears that `fallback` is not a valid option for `pmacctd_as` and
> `pmacctd_net`. Is that right?
> 
> On Sun, Oct 20, 2019 at 9:43 AM Paolo Lucente  wrote:
> 
> >
> > Hi Brooks,
> >
> > There would be a few ways to achieve that:
> >
> > 1) change the iBGP session into an eBGP session;
> >
> > 2) set pmacctd_as and pmacctd_net to 'fallback' and add a networks_file
> >where you list (some of your) prefixes and associated ASN. While the
> >map can be refreshed at runtime - no need to restart the daemon - it
> >may involve a manual step. Unless you can generate it automatically
> >and/or the set of prefixes is quite static (again, you want to list
> >there just your own prefixes and perhaps they don't change that much).
> >
> > 3) Use bgp_stdcomm_pattern_to_asn or bgp_lrgcomm_pattern_to_asn: you tag
> >prefixes of interest with certain BGP communities that indicate the
> >ASN to associate the prefix with. While more automatic than #2, it
> >would require messing with actual BGP.
> >
> > Paolo
> >
> >
> > On Sun, Oct 20, 2019 at 08:48:18AM -0400, Brooks Swinnerton wrote:
> > > Hi Paolo,
> > >
> > > One quick follow up question regarding:
> > >
> > > > +1 to Felix's answer. Also maybe two obvious pointsa: 1) with an iBGP
> > > peering setup, AS0 can mean unknown or your own ASN (being a number
> > rather
> > > than a string, null is not an option) and 2) until routes are received,
> > > source/destination IP prefixes can get associated to AS0.
> > >
> > > Is there a way to distinguish between AS0 being my own AS and an unknown
> > > one?
> > >
> > > On Sun, Oct 13, 2019 at 3:39 PM Paolo Lucente  wrote:
> > >
> > > >
> > > > Wonderful. Thank you Brooks for sharing your finding. I will add a note
> > > > to documentation, it seems very relevant.
> > > >
> > > > Paolo
> > > >
> > > > On Sun, Oct 13, 2019 at 12:50:43PM -0400, Brooks Swinnerton wrote:
> > > > > Got it! I think for some reason BIRD didn't like that both BGP
> > instances
> > > > > were sharing the same address. Here is the new configuration on both
> > > > sides
> > > > > which works:
> > > > >
> > > > > ```
> > > > > !
> > > > > ! pmacctd configuration example
> > > > > !
> > > > > ! Did you know CONFIG-KEYS contains the detailed list of all
> > > > configuration
> > > > > keys
> > > > > ! supported by 'nfacctd' and 'pmacctd' ?
> > > > > !
> > > > > ! debug: true
> > > > > daemonize: false
> > > > > pcap_interface: ens3
> > > > > pmacctd_as: bgp
> > > > > pmacctd_net: bgp
> > > > > sampling_rate: 10
> > > > > !
> > > > > bgp_daemon: true
> > > > > bgp_daemon_ip: 127.0.0.2
> > > > > bgp_daemon_port: 180
> > > > > bgp_daemon_max_peers: 10
> > > > > bgp_agent_map: /etc/pmacct/peering_agent.map
> > > > > bgp_table_dump_file: /tmp/bgp-$peer_src_ip-%H%M.log
> > > > > bgp_table_dump_refresh_time: 120
> > > > > !
> > > > > aggregate: src_host, dst_host, src_port, dst_port, src_as, dst_as,
> > proto
> > > > > !
> > > > > plugins: kafka
> > > > > kafka_output: json
> > > > > kafka_broker_host: kafka.fqdn.com
> > > > > kafka_topic: pmacct.acct
> > > > > kafka_refresh_time: 10
> > > > > kafka_history: 5m
> > > > > kafka_history_roundoff: m
> > > > > ```
> > > > >
> > > > > And in BIRD:
> > > > >
> > > > > ```
> > > > > protocol bgp AS00v4c1 from 

Re: [pmacct-discussion] BGP AS values are 0

2019-10-20 Thread Paolo Lucente


Hi Brooks,

There would be a few ways to achieve that:

1) change the iBGP session into an eBGP session;

2) set pmacctd_as and pmacctd_net to 'fallback' and add a networks_file
   where you list (some of your) prefixes and associated ASN. While the
   map can be refreshed at runtime - no need to restart the daemon - it
   may involve a manual step. Unless you can generate it automatically
   and/or the set of prefixes is quite static (again, you want to list
   there just your own prefixes and perhaps they don't change that much).
   A sketch of such a file follows after this list.

3) Use bgp_stdcomm_pattern_to_asn or bgp_lrgcomm_pattern_to_asn: you tag
   prefixes of interest with certain BGP communities that indicate the
   ASN to associate the prefix with. While more automatic than #2, it
   would require messing with actual BGP.
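As for point 2), a minimal sketch of what it could look like (ASN and
prefixes are placeholders; the one-entry-per-line 'ASN,prefix' layout is
an assumption of mine, check the networks_file documentation for the
exact field layout):

```
pmacctd_as: longest
pmacctd_net: longest
networks_file: /etc/pmacct/networks.lst
```

with /etc/pmacct/networks.lst along the lines of:

```
65001,192.0.2.0/24
65001,2001:db8:42::/48
```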

Paolo 


On Sun, Oct 20, 2019 at 08:48:18AM -0400, Brooks Swinnerton wrote:
> Hi Paolo,
> 
> One quick follow up question regarding:
> 
> > +1 to Felix's answer. Also maybe two obvious pointsa: 1) with an iBGP
> peering setup, AS0 can mean unknown or your own ASN (being a number rather
> than a string, null is not an option) and 2) until routes are received,
> source/destination IP prefixes can get associated to AS0.
> 
> Is there a way to distinguish between AS0 being my own AS and an unknown
> one?
> 
> On Sun, Oct 13, 2019 at 3:39 PM Paolo Lucente  wrote:
> 
> >
> > Wonderful. Thank you Brooks for sharing your finding. I will add a note
> > to documentation, it seems very relevant.
> >
> > Paolo
> >
> > On Sun, Oct 13, 2019 at 12:50:43PM -0400, Brooks Swinnerton wrote:
> > > Got it! I think for some reason BIRD didn't like that both BGP instances
> > > were sharing the same address. Here is the new configuration on both
> > sides
> > > which works:
> > >
> > > ```
> > > !
> > > ! pmacctd configuration example
> > > !
> > > ! Did you know CONFIG-KEYS contains the detailed list of all
> > configuration
> > > keys
> > > ! supported by 'nfacctd' and 'pmacctd' ?
> > > !
> > > ! debug: true
> > > daemonize: false
> > > pcap_interface: ens3
> > > pmacctd_as: bgp
> > > pmacctd_net: bgp
> > > sampling_rate: 10
> > > !
> > > bgp_daemon: true
> > > bgp_daemon_ip: 127.0.0.2
> > > bgp_daemon_port: 180
> > > bgp_daemon_max_peers: 10
> > > bgp_agent_map: /etc/pmacct/peering_agent.map
> > > bgp_table_dump_file: /tmp/bgp-$peer_src_ip-%H%M.log
> > > bgp_table_dump_refresh_time: 120
> > > !
> > > aggregate: src_host, dst_host, src_port, dst_port, src_as, dst_as, proto
> > > !
> > > plugins: kafka
> > > kafka_output: json
> > > kafka_broker_host: kafka.fqdn.com
> > > kafka_topic: pmacct.acct
> > > kafka_refresh_time: 10
> > > kafka_history: 5m
> > > kafka_history_roundoff: m
> > > ```
> > >
> > > And in BIRD:
> > >
> > > ```
> > > protocol bgp AS00v4c1 from monitor46 {
> > >   description "pmacctd";
> > >   local 127.0.0.1 as 00;
> > >   neighbor 127.0.0.2 port 180 as 00;
> > >   rr client;
> > > }
> > > ```
> > >
> > > Thank you so much for the tip about 127.0.0.2, Paolo!
> > >
> > > On Sun, Oct 13, 2019 at 11:35 AM Paolo Lucente  wrote:
> > >
> > > >
> > > > So the session comes up and gets established: this would rule out
> > firewall
> > > > filters, TCP MD5 or session mis-configurations (AS numbers,
> > capabilities,
> > > > etc.). This should also mean that the BGP OPEN process is successful
> > (this
> > > > is also confirmed by pmacct log you sent earlier on).
> > > >
> > > > Now, from the tcpdump output you sent, looking at the tiny packet sizes
> > > > i would almost say those are BGP keepalives; if not timestamps reveal
> > they
> > > > do take place too frequently (so they are not BGP keepalives). They
> > could
> > > > still be BGP UPDATEs and it would take longer to transfer 150k
> > prefixes at
> > > > that pace but, yeah, weird. It would be great to confirm if those
> > packets
> > > > are BGP UPDATEs: perhaps tcpdump sees port 180/tcp and does not apply
> > > > the BGP decoder (and hence you can't see the expected BGP cleartext in
> > the
> > > > tcpdump output); you could save it and open and decode with Wireshark
> > (or
> > > > setup a 127.0.0.2 and do a 127.0.0.1:179 <-> 127.0.0.2:179 pee

Re: [pmacct-discussion] BGP map for dual stack IPv4 & IPv6

2019-10-20 Thread Paolo Lucente

Hi Brooks,

We can certainly take this off list. The next step is to confirm 100%
that the IPv6 prefixes are landing in pmacct. The fact that a BGP dump does
not reveal IPv6 prefixes means this is not a mapping issue but either a
decoding one (super weird, plus you would find traces of it in the log)
or the IPv6 prefixes are not really being sent to pmacct by BIRD (also
weird, but we already kind of ran into such a situation once, so ..).

For this i propose again to look at the wire traffic with tcpdump/wireshark,
perhaps make a trace with tcpdump so that it can then be analysed in
more comfort in the wireshark UI.
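Something along these lines should do, given the session runs over
loopback on port 180 in your setup:

```
# capture the BIRD <-> pmacct BGP session in full, then open the file in Wireshark
tcpdump -i lo -s 0 -w pmacct-bgp.pcap tcp port 180
```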

Paolo
 
On Sat, Oct 19, 2019 at 11:49:28PM -0400, Brooks Swinnerton wrote:
> Thank you for the suggestion, Paolo. I went ahead and dumped the BGP table
> but don't see any IPv6 routes in there (though it's quite large as it has
> the V4 table from an IX). I can share this off list if it would be helpful.
> 
> To recap, my `/etc/pmacct/peering_agent.map` file is:
> 
> ```
> bgp_ip=1.1.1.1 ip=0.0.0.0/0
> ```
> 
> (where `1.1.1.1` is the router ID of the BGP [bird] server pmacctd is
> peering with).
> 
> And my configuration file is:
> 
> ```
> !
> ! pmacctd configuration example
> !
> ! Did you know CONFIG-KEYS contains the detailed list of all configuration
> keys
> ! supported by 'nfacctd' and 'pmacctd' ?
> !
> ! debug: true
> daemonize: false
> pcap_interfaces_map: /etc/pmacct/interfaces.map
> pre_tag_map: /etc/pmacct/pretag.map
> pmacctd_as: bgp
> pmacctd_net: bgp
> sampling_rate: 1
> !
> bgp_daemon: true
> bgp_daemon_ip: 127.0.0.2
> bgp_daemon_port: 180
> bgp_daemon_max_peers: 10
> bgp_agent_map: /etc/pmacct/peering_agent.map
> !
> aggregate: src_host, dst_host, src_port, dst_port, src_as, dst_as, label,
> proto
> !
> plugins: kafka
> kafka_output: json
> kafka_broker_host: kafka.fqdn.com
> kafka_topic: pmacct.acct
> kafka_refresh_time: 5
> kafka_history: 5m
> kafka_history_roundoff: m
> ```
> 
> And BIRD does appear to be announcing the V6 routes to the pmacctd daemon:
> 
> ```
> bird> show protocols all pmacctd46
> Name   Proto  Table  State  Since Info
> pmacctd46  BGP---up 2019-10-20 03:39:08  Established
>   Description:pmacctd
>   Message:pmacct received SIGINT - shutting down
>   BGP state:  Established
> Neighbor address: 127.0.0.2
> Neighbor AS:  00
> Local AS: 00
> Neighbor ID:  127.0.0.2
> Local capabilities
>   Multiprotocol
> AF announced: ipv4 ipv6
>   Route refresh
>   Graceful restart
> Restart time: 120
> AF supported: ipv4 ipv6
> AF preserved:
>   4-octet AS numbers
>   Enhanced refresh
>   Long-lived graceful restart
> Neighbor capabilities
>   Multiprotocol
> AF announced: ipv4 ipv6
>   4-octet AS numbers
> Session:  internal multihop route-reflector AS4
> Source address:   127.0.0.1
> Hold timer:   69.116/90
> Keepalive timer:  13.114/30
>   Channel ipv4
> State:  UP
> Table:  master4
> Preference: 100
> Input filter:   (unnamed)
> Output filter:  (unnamed)
> Routes: 0 imported, 185041 exported, 0 preferred
> Route change stats: received   rejected   filteredignored
> accepted
>   Import updates:  0  0  0  0
>0
>   Import withdraws:0  0---  0
>0
>   Export updates: 185248  0  0---
> 185248
>   Export withdraws:   42---------
>   42
> BGP Next hop:   127.0.0.1
> IGP IPv4 table: master4
>   Channel ipv6
> State:  UP
> Table:  master6
> Preference: 100
> Input filter:   (unnamed)
> Output filter:  (unnamed)
> Routes: 0 imported, 74840 exported, 0 preferred
> Route change stats: received   rejected   filteredignored
> accepted
>   Import updates:  0  0  0  0
>0
>   Import withdraws:0  0---  0
>0
>   Export updates:  75161  0  0---
>  75161
>   Export withdraws:   81---------
>   81
> BGP Next hop:   ::
> IGP IPv6 table: master6
> ```
> 
> On Mon, Oct 14, 2019 at 2:47 AM Paolo Lucente  wrote:
> 
> >
> > Could we repeat the same troubleshooting as for the other issue: let's
> > enable dumping of BGP data to a file just to make sure da

Re: [pmacct-discussion] peer_src_ip empty

2019-10-19 Thread Paolo Lucente


Hi Brooks,

peer_src_ip is definitely the primitive you are looking for. From the
previous thread i have a suspicion: you may be using the wrong daemon.
What daemon are you running? Is it possible you want to collect NetFlow/
IPFIX or sFlow but you are running pmacctd? That would explain it. Just in
case this is the right path, please see here the list of daemons and
what they do:

https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L30-#L106
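If NetFlow/IPFIX from a router is what you are after, a minimal sketch of
the collector side would be (listening address and port are placeholders;
peer_src_ip then becomes the address of the exporting device):

```
! collect NetFlow/IPFIX with nfacctd rather than capturing with pmacctd
nfacctd_ip: 0.0.0.0
nfacctd_port: 2100
aggregate: peer_src_ip, src_host, dst_host, src_port, dst_port, proto
plugins: kafka
```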

Paolo
 
On Sat, Oct 19, 2019 at 01:31:23PM -0400, Brooks Swinnerton wrote:
> Hello again,
> 
> I'm trying to determine the host that is sending the flows and it sounds
> like peer_src_ip is what I want, but for some reason it's always empty.
> 
> Here is my config:
> 
> ```
> !
> ! pmacctd configuration example
> !
> ! Did you know CONFIG-KEYS contains the detailed list of all configuration
> keys
> ! supported by 'nfacctd' and 'pmacctd' ?
> !
> ! debug: true
> daemonize: false
> pcap_interfaces_map: /etc/pmacct/interfaces.map
> pmacctd_as: bgp
> pmacctd_net: bgp
> sampling_rate: 1
> !
> bgp_daemon: true
> bgp_daemon_ip: 127.0.0.2
> bgp_daemon_port: 180
> bgp_daemon_max_peers: 10
> bgp_agent_map: /etc/pmacct/peering_agent.map
> !
> aggregate: src_host, dst_host, src_port, dst_port, src_as, dst_as,
> peer_src_ip, proto
> !
> plugins: kafka
> kafka_output: json
> kafka_broker_host: kafka.fqdn.com
> kafka_topic: pmacct.acct
> kafka_refresh_time: 5
> kafka_history: 5m
> kafka_history_roundoff: m
> ```
> 
> And here is `/etc/pmacct/interfaces.map`:
> 
> ```
> ifindex=100  ifname=ens3
> ifindex=200  ifname=ens4
> ```
> 
> And here is `/etc/pmacct/peering_agent.map`:
> 
> ```
> bgp_ip= ip=0.0.0.0/0
> ```
> 
> This is what I see in the Kafka JSON:
> 
> ```
> {"event_type": "purge", "as_src": 12876, "as_dst": 0, "peer_ip_src": "",
> "ip_src": "51.15.81.148", "ip_dst": "23.157.160.138", "port_src": 46330,
> "port_dst":
> 9050, "ip_proto": "tcp", "stamp_inserted": "2019-10-19 17:30:00",
> "stamp_updated": "2019-10-19 17:30:56", "packets": 1, "bytes": 588,
> "writer_id": "default_ka
> fka/2969"}
> ```

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] BGP map for dual stack IPv4 & IPv6

2019-10-14 Thread Paolo Lucente

Could we repeat the same troubleshooting as for the other issue: let's
enable dumping of BGP data to a file just to make sure the data is making
it over, even if just to check that the route is among those 74479 of
132180 routes exported.
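Same recipe as before, ie. something like this in the config (path and
interval are placeholders):

```
bgp_table_dump_file: /tmp/bgp-$peer_src_ip-%H%M.log
bgp_table_dump_refresh_time: 120
```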

Paolo
 
On Sun, Oct 13, 2019 at 07:19:09PM -0400, Brooks Swinnerton wrote:
> Hmph, no dice. It looks like BIRD is exporting IPv6 routes:
> 
> ```
> bird> show route export pmacctd46 count
> 173376 of 337458 routes for 173376 networks in table master4
> 74479 of 132180 routes for 74479 networks in table master6
> Total: 247855 of 469638 routes for 247855 networks in 2 tables
> ```
> 
> But the flows in Kafka still appear to have 0 as the AS for both src and
> dst:
> 
> ```
> {"event_type": "purge", "as_src": 0, "as_dst": 0, "ip_src":
> "2607:f8b0:4006:814::200e", "ip_dst": "2602:fe2e:42:8:8489:bdf6:1bbe:cc60",
> "port_src": 443, "port_dst": 51609, "ip_proto": "tcp", "stamp_inserted":
> "2019-10-13 23:10:00", "stamp_updated": "2019-10-13 23:13:51", "packets":
> 1, "bytes": 352, "writer_id": "default_kafka/25373"}
> ```
> 
> The destination AS being zero makes sense, as that's my own:
> 
> ```
> $ sudo birdc show route for 2602:fe2e:42:8:8489:bdf6:1bbe:cc60 all
> BIRD 2.0.6 ready.
> Table master6:
> 2602:fe2e:42::/48unicast [static4 2019-10-10] * (200)
> via 2602:fe2e:1::135 on ens5
> Type: static univ
> ```
> 
> But there should be an AS present for the source:
> 
> ```
> $ sudo birdc show route for 2607:f8b0:4006:814::200e all | grep as_path
> BGP.as_path: 15169
> BGP.as_path: 6939 15169
> ```
> 
> On Sun, Oct 13, 2019 at 4:28 PM Paolo Lucente  wrote:
> 
> >
> > Super cool. It would remain the one-liner you have got at the moment:
> >
> > bgp_ip=1.1.1.1 ip=0.0.0.0/0
> >
> > Keep me posted.
> >
> > Paolo
> >
> > On Sun, Oct 13, 2019 at 04:21:21PM -0400, Brooks Swinnerton wrote:
> > > I’m actually already doing option 1 : ), what would the map look like for
> > > that?
> > >
> > > On Sun, Oct 13, 2019 at 3:47 PM Paolo Lucente  wrote:
> > >
> > > >
> > > > Hi Brooks,
> > > >
> > > > You are in an unsupported use-case, ie. same BGP Agent ID maped onto
> > two
> > > > different entries. You can get out of it in three different ways: 1) my
> > > > top recommendation: travel both addrress families as part of the same
> > BGP
> > > > session; 2) use two different BGP Agent ID for ipv4 and for ipv6; 3)
> > use
> > > > session IP addesses (that is, not BGP Agent ID) for the mapping
> > (although
> > > > in your case i am afaid this won't work since it's all taking place
> > over
> > > > loopback interfaces). Let me know if any of this can work for you.
> > > >
> > > > Paolo
> > > >
> > > > On Sun, Oct 13, 2019 at 01:45:36PM -0400, Brooks Swinnerton wrote:
> > > > > Hello again!
> > > > >
> > > > > I'm using pmacct with Kafka to stream flows. This is paired with the
> > BGP
> > > > > functionality to add the `src_as` and `dst_as`. This all works great
> > for
> > > > > IPv4, but I'm struggling to figure out how to do this for IPv6 as
> > well.
> > > > >
> > > > > Here is the current configuration:
> > > > >
> > > > > ```
> > > > > !
> > > > > ! pmacctd configuration example
> > > > > !
> > > > > ! Did you know CONFIG-KEYS contains the detailed list of all
> > > > configuration
> > > > > keys
> > > > > ! supported by 'nfacctd' and 'pmacctd' ?
> > > > > !
> > > > > ! debug: true
> > > > > daemonize: false
> > > > > pcap_interface: ens3
> > > > > pmacctd_as: bgp
> > > > > pmacctd_net: bgp
> > > > > sampling_rate: 10
> > > > > !
> > > > > bgp_daemon: true
> > > > > bgp_daemon_ip: 127.0.0.2
> > > > > bgp_daemon_port: 180
> > > > > bgp_daemon_max_peers: 10
> > > > > bgp_agent_map: /etc/pmacct/peering_agent.map
> > > > > !
> > > > > aggregate: src_host, dst_host, src_port, dst_port, src_as, dst_as,
> > proto
> > > > > !
> > > > > plugins

Re: [pmacct-discussion] BGP map for dual stack IPv4 & IPv6

2019-10-13 Thread Paolo Lucente

Super cool. It would remain the one-liner you have got at the moment:

bgp_ip=1.1.1.1 ip=0.0.0.0/0

Keep me posted.

Paolo

On Sun, Oct 13, 2019 at 04:21:21PM -0400, Brooks Swinnerton wrote:
> I’m actually already doing option 1 : ), what would the map look like for
> that?
> 
> On Sun, Oct 13, 2019 at 3:47 PM Paolo Lucente  wrote:
> 
> >
> > Hi Brooks,
> >
> > You are in an unsupported use-case, ie. same BGP Agent ID maped onto two
> > different entries. You can get out of it in three different ways: 1) my
> > top recommendation: travel both addrress families as part of the same BGP
> > session; 2) use two different BGP Agent ID for ipv4 and for ipv6; 3) use
> > session IP addesses (that is, not BGP Agent ID) for the mapping (although
> > in your case i am afaid this won't work since it's all taking place over
> > loopback interfaces). Let me know if any of this can work for you.
> >
> > Paolo
> >
> > On Sun, Oct 13, 2019 at 01:45:36PM -0400, Brooks Swinnerton wrote:
> > > Hello again!
> > >
> > > I'm using pmacct with Kafka to stream flows. This is paired with the BGP
> > > functionality to add the `src_as` and `dst_as`. This all works great for
> > > IPv4, but I'm struggling to figure out how to do this for IPv6 as well.
> > >
> > > Here is the current configuration:
> > >
> > > ```
> > > !
> > > ! pmacctd configuration example
> > > !
> > > ! Did you know CONFIG-KEYS contains the detailed list of all
> > configuration
> > > keys
> > > ! supported by 'nfacctd' and 'pmacctd' ?
> > > !
> > > ! debug: true
> > > daemonize: false
> > > pcap_interface: ens3
> > > pmacctd_as: bgp
> > > pmacctd_net: bgp
> > > sampling_rate: 10
> > > !
> > > bgp_daemon: true
> > > bgp_daemon_ip: 127.0.0.2
> > > bgp_daemon_port: 180
> > > bgp_daemon_max_peers: 10
> > > bgp_agent_map: /etc/pmacct/peering_agent.map
> > > !
> > > aggregate: src_host, dst_host, src_port, dst_port, src_as, dst_as, proto
> > > !
> > > plugins: kafka
> > > kafka_output: json
> > > kafka_broker_host: kafka.fqdn.com
> > > kafka_topic: pmacct.acct
> > > kafka_refresh_time: 10
> > > kafka_history: 5m
> > > kafka_history_roundoff: m
> > > ```
> > >
> > > Where `/etc/pmacct/peering_agent.map` is defined as:
> > >
> > > ```
> > > bgp_ip=1.1.1.1 ip=0.0.0.0/0filter='ip'
> > > bgp_ip=1.1.1.1 ip=::/0 filter='ip6'
> > > ```
> > >
> > > (1.1.1.1 is the router ID on the other side of the BGP session)
> > >
> > > This works well for IPv4 traffic, resulting in the following Kafka
> > events:
> > >
> > > ```
> > > {"event_type": "purge", "as_src": 0, "as_dst": 396507, "ip_src":
> > > "23.157.160.138", "ip_dst": "23.129.64.208", "port_src": 37649,
> > "port_dst":
> > > 443, "ip_proto": "tcp", "stamp_inserted": "2019-10-13 17:40:00",
> > > "stamp_updated": "2019-10-13 17:43:11", "packets": 3, "bytes": 156,
> > > "writer_id": "default_kafka/15635"}
> > > {"event_type": "purge", "as_src": 0, "as_dst": 0, "ip_src":
> > > "2607:f8b0:400d:c01::bc", "ip_dst": "2602:fe2e:42:2::2", "port_src":
> > 5228,
> > > "port_dst": 63746, "ip_proto": "tcp", "stamp_inserted": "2019-10-13
> > > 17:40:00", "stamp_updated": "2019-10-13 17:43:11", "packets": 1, "bytes":
> > > 72, "writer_id": "default_kafka/15635"}
> > > ```
> > >
> > > But for IPv6 traffic, neither the `as_src` nor the `as_dst` comes through.
> >
> > > ___
> > > pmacct-discussion mailing list
> > > http://www.pmacct.net/#mailinglists
> >
> >
> > ___
> > pmacct-discussion mailing list
> > http://www.pmacct.net/#mailinglists
> >

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

Re: [pmacct-discussion] BGP map for dual stack IPv4 & IPv6

2019-10-13 Thread Paolo Lucente


Hi Brooks,

You are in an unsupported use-case, ie. the same BGP Agent ID mapped onto two
different entries. You can get out of it in three different ways: 1) my
top recommendation: carry both address families as part of the same BGP
session; 2) use two different BGP Agent IDs for ipv4 and for ipv6; 3) use
session IP addresses (that is, not the BGP Agent ID) for the mapping (although
in your case i am afraid this won't work since it's all taking place over
loopback interfaces). Let me know if any of this can work for you.
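
For illustration, option 2 would keep the map a two-liner but with a distinct
Agent ID per address family; a minimal sketch, assuming a second Router-ID
(1.1.1.2 below is purely hypothetical) is used for the IPv6 session:

bgp_ip=1.1.1.1 ip=0.0.0.0/0 filter='ip'
bgp_ip=1.1.1.2 ip=0.0.0.0/0 filter='ip6'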

Paolo

On Sun, Oct 13, 2019 at 01:45:36PM -0400, Brooks Swinnerton wrote:
> Hello again!
> 
> I'm using pmacct with Kafka to stream flows. This is paired with the BGP
> functionality to add the `src_as` and `dst_as`. This all works great for
> IPv4, but I'm struggling to figure out how to do this for IPv6 as well.
> 
> Here is the current configuration:
> 
> ```
> !
> ! pmacctd configuration example
> !
> ! Did you know CONFIG-KEYS contains the detailed list of all configuration
> keys
> ! supported by 'nfacctd' and 'pmacctd' ?
> !
> ! debug: true
> daemonize: false
> pcap_interface: ens3
> pmacctd_as: bgp
> pmacctd_net: bgp
> sampling_rate: 10
> !
> bgp_daemon: true
> bgp_daemon_ip: 127.0.0.2
> bgp_daemon_port: 180
> bgp_daemon_max_peers: 10
> bgp_agent_map: /etc/pmacct/peering_agent.map
> !
> aggregate: src_host, dst_host, src_port, dst_port, src_as, dst_as, proto
> !
> plugins: kafka
> kafka_output: json
> kafka_broker_host: kafka.fqdn.com
> kafka_topic: pmacct.acct
> kafka_refresh_time: 10
> kafka_history: 5m
> kafka_history_roundoff: m
> ```
> 
> Where `/etc/pmacct/peering_agent.map` is defined as:
> 
> ```
> bgp_ip=1.1.1.1 ip=0.0.0.0/0 filter='ip'
> bgp_ip=1.1.1.1 ip=::/0 filter='ip6'
> ```
> 
> (1.1.1.1 is the router ID on the other side of the BGP session)
> 
> This works well for IPv4 traffic, resulting in the following Kafka events:
> 
> ```
> {"event_type": "purge", "as_src": 0, "as_dst": 396507, "ip_src":
> "23.157.160.138", "ip_dst": "23.129.64.208", "port_src": 37649, "port_dst":
> 443, "ip_proto": "tcp", "stamp_inserted": "2019-10-13 17:40:00",
> "stamp_updated": "2019-10-13 17:43:11", "packets": 3, "bytes": 156,
> "writer_id": "default_kafka/15635"}
> {"event_type": "purge", "as_src": 0, "as_dst": 0, "ip_src":
> "2607:f8b0:400d:c01::bc", "ip_dst": "2602:fe2e:42:2::2", "port_src": 5228,
> "port_dst": 63746, "ip_proto": "tcp", "stamp_inserted": "2019-10-13
> 17:40:00", "stamp_updated": "2019-10-13 17:43:11", "packets": 1, "bytes":
> 72, "writer_id": "default_kafka/15635"}
> ```
> 
> But for IPv6 traffic, neither the `as_src` nor the `as_dst` comes through.

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] BGP AS values are 0

2019-10-13 Thread Paolo Lucente


Wonderful. Thank you Brooks for sharing your finding. I will add a note
to documentation, it seems very relevant.

Paolo

On Sun, Oct 13, 2019 at 12:50:43PM -0400, Brooks Swinnerton wrote:
> Got it! I think for some reason BIRD didn't like that both BGP instances
> were sharing the same address. Here is the new configuration on both sides
> which works:
> 
> ```
> !
> ! pmacctd configuration example
> !
> ! Did you know CONFIG-KEYS contains the detailed list of all configuration
> keys
> ! supported by 'nfacctd' and 'pmacctd' ?
> !
> ! debug: true
> daemonize: false
> pcap_interface: ens3
> pmacctd_as: bgp
> pmacctd_net: bgp
> sampling_rate: 10
> !
> bgp_daemon: true
> bgp_daemon_ip: 127.0.0.2
> bgp_daemon_port: 180
> bgp_daemon_max_peers: 10
> bgp_agent_map: /etc/pmacct/peering_agent.map
> bgp_table_dump_file: /tmp/bgp-$peer_src_ip-%H%M.log
> bgp_table_dump_refresh_time: 120
> !
> aggregate: src_host, dst_host, src_port, dst_port, src_as, dst_as, proto
> !
> plugins: kafka
> kafka_output: json
> kafka_broker_host: kafka.fqdn.com
> kafka_topic: pmacct.acct
> kafka_refresh_time: 10
> kafka_history: 5m
> kafka_history_roundoff: m
> ```
> 
> And in BIRD:
> 
> ```
> protocol bgp AS00v4c1 from monitor46 {
>   description "pmacctd";
>   local 127.0.0.1 as 00;
>   neighbor 127.0.0.2 port 180 as 00;
>   rr client;
> }
> ```
> 
> Thank you so much for the tip about 127.0.0.2, Paolo!
> 
> On Sun, Oct 13, 2019 at 11:35 AM Paolo Lucente  wrote:
> 
> >
> > So the session comes up and gets established: this would rule out firewall
> > filters, TCP MD5 or session mis-configurations (AS numbers, capabilities,
> > etc.). This should also mean that the BGP OPEN process is successful (this
> > is also confirmed by pmacct log you sent earlier on).
> >
> > Now, from the tcpdump output you sent, looking at the tiny packet sizes
> > i would almost say those are BGP keepalives; except that the timestamps
> > reveal they take place too frequently (so they are not BGP keepalives). They
> > could still be BGP UPDATEs, although it would take longer to transfer 150k
> > prefixes at that pace but, yeah, weird. It would be great to confirm whether
> > those packets are BGP UPDATEs: perhaps tcpdump sees port 180/tcp and does not
> > apply the BGP decoder (and hence you can't see the expected BGP cleartext in
> > the tcpdump output); you could save the capture and open and decode it with
> > Wireshark (or set up a 127.0.0.2 and do a 127.0.0.1:179 <-> 127.0.0.2:179
> > peering).
> >
> > Really not sure what is going on :-? Also, if you prefer, we could continue
> > the troubleshooting via unicast email and summarize findings on list later.
> >
> > Paolo
> >
> > On Sun, Oct 13, 2019 at 10:55:55AM -0400, Brooks Swinnerton wrote:
> > > Oops, sorry I mismatched the tcpdump and bgp table dump values. They were
> > > both indeed using 55881 at the time, but here is another capture that
> > will
> > > make more sense:
> > >
> > > ```
> > > {"timestamp": "2019-10-13 14:48:00", "peer_ip_src": "127.0.0.1",
> > > "peer_tcp_port": 36143, "event_type": "dump_close", "entries": 0,
> > "tables":
> > > 1, "seq": 4}
> > > {"timestamp": "2019-10-13 14:50:00", "peer_ip_src": "127.0.0.1",
> > > "peer_tcp_port": 36143, "event_type": "dump_init", "dump_period": 120,
> > > "seq": 5}
> > > {"timestamp": "2019-10-13 14:50:00", "peer_ip_src": "127.0.0.1",
> > > "peer_tcp_port": 36143, "event_type": "dump_close", "entries": 0,
> > "tables":
> > > 1, "seq": 5}
> > > {"timestamp": "2019-10-13 14:52:00", "peer_ip_src": "127.0.0.1",
> > > "peer_tcp_port": 36143, "event_type": "dump_init", "dump_period": 120,
> > > "seq": 6}
> > > {"timestamp": "2019-10-13 14:52:00", "peer_ip_src": "127.0.0.1",
> > > "peer_tcp_port": 36143, "event_type": "dump_close", "entries": 0,
> > "tables":
> > > 1, "seq": 6}
> > > {"timestamp": "2019-10-13 14:54:00", "peer_ip_src": "127.0.0.1",
> > > "peer_tcp_port": 36143, "event_type": "dump_init", &quo

Re: [pmacct-discussion] BGP AS values are 0

2019-10-13 Thread Paolo Lucente
 127.0.0.1.180 > 127.0.0.1.36143: Flags [.], cksum 0xfe28 (incorrect ->
> 0x36bb), ack 97, win 342, options [nop,nop,TS val 1973199743 ecr
> 1973199742], length 0
> ```
> 
> On Sun, Oct 13, 2019 at 10:53 AM Brooks Swinnerton 
> wrote:
> 
> > > 1) as super extra check, can you capture stuff with Wireshark and see
> > what is going on 'on the wire'? Do you see the routes being sent and
> > landing onto pmacct, etc.?
> >
> > Gosh, I'm stumped. I do see traffic on port 180 and the port that is
> > referenced in the table dump, but it's not the cleartext BGP traffic I was
> > expecting:
> >
> > ```
> > 127.0.0.1.36143 > 127.0.0.1.180: Flags [P.], cksum 0xfe4b (incorrect
> > -> 0xfc0d), seq 9200:9235, ack 234, win 342, options [nop,nop,TS val
> > 1972706066 ecr 1972699157], length 35
> > 14:46:51.746910 IP (tos 0x0, ttl 64, id 34830, offset 0, flags [DF], proto
> > TCP (6), length 52)
> > 127.0.0.1.180 > 127.0.0.1.36143: Flags [.], cksum 0xfe28 (incorrect ->
> > 0x9b98), ack 9235, win 342, options [nop,nop,TS val 1972706066 ecr
> > 1972706066], length 0
> > 14:46:51.988455 IP (tos 0xc0, ttl 64, id 2700, offset 0, flags [DF], proto
> > TCP (6), length 91)
> > 127.0.0.1.36143 > 127.0.0.1.180: Flags [P.], cksum 0xfe4f (incorrect
> > -> 0x1b06), seq 9235:9274, ack 234, win 342, options [nop,nop,TS val
> > 1972706308 ecr 1972706066], length 39
> > 14:46:51.988488 IP (tos 0x0, ttl 64, id 34831, offset 0, flags [DF], proto
> > TCP (6), length 52)
> > 127.0.0.1.180 > 127.0.0.1.36143: Flags [.], cksum 0xfe28 (incorrect ->
> > 0x998d), ack 9274, win 342, options [nop,nop,TS val 1972706308 ecr
> > 1972706308], length 0
> > ```
> >
> > Which aligns with:
> >
> > ```
> > {"timestamp": "2019-10-13 14:40:00", "peer_ip_src": "127.0.0.1",
> > "peer_tcp_port": 55881, "event_type": "dump_init", "dump_period": 120,
> > "seq": 0}
> > {"timestamp": "2019-10-13 14:40:00", "peer_ip_src": "127.0.0.1",
> > "peer_tcp_port": 55881, "event_type": "dump_close", "entries": 0, "tables":
> > 1, "seq": 0}
> > {"timestamp": "2019-10-13 14:42:00", "peer_ip_src": "127.0.0.1",
> > "peer_tcp_port": 55881, "event_type": "dump_init", "dump_period": 120,
> > "seq": 1}
> > {"timestamp": "2019-10-13 14:42:00", "peer_ip_src": "127.0.0.1",
> > "peer_tcp_port": 55881, "event_type": "dump_close", "entries": 0, "tables":
> > 1, "seq": 1}
> > ```
> >
> > Is it normal that `peer_tcp_port` is a random port and not 179? I know
> > pmacctd's BGP port is listening on 180, but the peer port (BIRD) is 179:
> >
> > ```
> > bgp_daemon: true
> > bgp_daemon_ip: 127.0.0.1
> > bgp_daemon_port: 180
> > ```
> >
> > I've also gone ahead and removed any filters that were previously in place
> > in BIRD so it should be sending all prefixes.
> >
> > On Sun, Oct 13, 2019 at 10:08 AM Paolo Lucente  wrote:
> >
> >>
> >> Hi Brooks,
> >>
> >> Wow, interesting yes. Your decoding is right: BGP table is empty. May i
> >> ask you two things: 1) as super extra check, can you capture stuff with
> >> Wireshark and see what is going on 'on the wire'? Do you see the routes
> >> being sent and landing onto pmacct, etc.? 2) Should that be the case,
> >> ie. all looks good, could you try master code on GitHub? Should that
> >> one also not work, we should find a way for me to reproduce this (as i
> >> tested the scenario and it appears to work for me against an ExaBGP) or,
> >> let me just mention it, troubleshoot stuff on your box.
> >>
> >> Paolo
> >>
> >> On Sun, Oct 13, 2019 at 08:46:47AM -0400, Brooks Swinnerton wrote:
> >> > Thank you Paolo,
> >> >
> >> > Interesting, it looks like the pmacctd end of the BGP session isn't
> >> picking
> >> > up the routes if I'm reading the `bgp_table_dump_file` correctly:
> >> >
> >> > ```
> >> > {"timestamp": "2019-10-13 12:42:00", "peer_ip_src": "127.0.0.1",
> >> > "peer_tcp_port": 39587, "event_type": "dump_init", "dump_period": 120,

Re: [pmacct-discussion] BGP AS values are 0

2019-10-13 Thread Paolo Lucente


Hi Brooks,

Wow, interesting yes. Your decoding is right: BGP table is empty. May i
ask you two things: 1) as super extra check, can you capture stuff with
Wireshark and see what is going on 'on the wire'? Do you see the routes
being sent and landing onto pmacct, etc.? 2) Should that be the case,
ie. all looks good, could you try master code on GitHub? Should that
one also not work, we should find a way for me to reproduce this (as i
tested the scenario and it appears to work for me against an ExaBGP) or,
let me just mention it, troubleshoot stuff on your box.
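
For example, a minimal sketch of the capture part, assuming the session runs
over loopback as in your setup; since the port is 180 and not 179, Wireshark
will need a manual "Decode As..." to BGP on the saved file:

  sudo tcpdump -i lo -s 0 -w /tmp/bgp-180.pcap 'tcp port 180'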

Paolo

On Sun, Oct 13, 2019 at 08:46:47AM -0400, Brooks Swinnerton wrote:
> Thank you Paolo,
> 
> Interesting, it looks like the pmacctd end of the BGP session isn't picking
> up the routes if I'm reading the `bgp_table_dump_file` correctly:
> 
> ```
> {"timestamp": "2019-10-13 12:42:00", "peer_ip_src": "127.0.0.1",
> "peer_tcp_port": 39587, "event_type": "dump_init", "dump_period": 120,
> "seq": 0}
> {"timestamp": "2019-10-13 12:42:00", "peer_ip_src": "127.0.0.1",
> "peer_tcp_port": 39587, "event_type": "dump_close", "entries": 0, "tables":
> 1, "seq": 0}
> ```
> 
> But looking at the BIRD side of things, I can see the routes are indeed
> being exported:
> 
> ```
> bird> show route export AS00v4 count
> 172973 of 336950 routes for 173059 networks in table master4
> ```
> 
> On Sun, Oct 13, 2019 at 8:30 AM Paolo Lucente  wrote:
> 
> >
> > Hi Brooks,
> >
> > +1 to Felix's answer. Also maybe two obvious points: 1) with an iBGP
> > peering setup, AS0 can mean unknown or your own ASN (being a number
> > rather than a string, null is not an option) and 2) until routes are
> > received, source/destination IP prefixes can get associated to AS0.
> >
> > Config looks good as well as the log extract you posted. For more debug
> > info you can perhaps dump routes received via BGP just to make extra
> > sure all is well on that side of things too, ie.:
> >
> > bgp_table_dump_file: /path/to/spool/bgp-$peer_src_ip-%H%M.log
> > bgp_table_dump_refresh_time: 120
> >
> > Let us know how it goes.
> >
> > Paolo
> >
> > On Sat, Oct 12, 2019 at 11:36:12PM -0400, Brooks Swinnerton wrote:
> > > Hello there!
> > >
> > > I have pmacctd working with the Kafka addon and am attempting to include
> > > `src_as` and `dst_as` information based on the BGP sessions running on
> > the
> > > same machine using the [BIRD router](https://bird.network.cz).
> > >
> > > I was able to successfully get the BGP session stood up using a loopback
> > > address, but in both the Kafka consumer and `pmacct -s`, I do not see the
> > > AS values:
> > >
> > > ```
> > > {"event_type": "purge", "as_src": 0, "as_dst": 0, "ip_src": "1.1.1.138",
> > > "ip_dst": "5.9.43.211", "port_src": 443, "port_dst": 48268, "ip_proto":
> > > "tcp", "stamp_inserted": "2019-10-13 02:50:00", "stamp_updated":
> > > "2019-10-13 02:53:31", "packets": 1, "bytes": 52, "writer_id":
> > > "default_kafka/3725"}
> > > ```
> > >
> > > The pmacct log seems good:
> > >
> > > ```
> > > Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
> > Promiscuous
> > > Mode Accounting Daemon, pmacctd 1.7.3-git (20190418-00+c4)
> > > Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
> > >  '--enable-kafka' '--enable-jansson' '--enable-l2' '--enable-64bit'
> > > '--enable-traffic-bins' '-
> > > Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default/core ): Reading
> > > configuration file '/etc/pmacct/pmacctd.peering.conf'.
> > > Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ):
> > > cache entries=16411 base cache memory=54878384 bytes
> > > Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core ): [ens3,0]
> > > link type is: 1
> > > Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
> > > [/etc/pmacct/peering_agent.map] (re)loading map.
> > > Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
> > > [/etc/pmacct/peering_agent.map] map successfully (re)loaded.
> > > Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ):
> &g

Re: [pmacct-discussion] BGP AS values are 0

2019-10-13 Thread Paolo Lucente


Hi Brooks,

+1 to Felix's answer. Also maybe two obvious points: 1) with an iBGP
peering setup, AS0 can mean unknown or your own ASN (being a number
rather than a string, null is not an option) and 2) until routes are
received, source/destination IP prefixes can get associated to AS0.

Config looks good as well as the log extract you posted. For more debug
info you can perhaps dump routes received via BGP just to make extra
sure all is well on that side of things too, ie.:

bgp_table_dump_file: /path/to/spool/bgp-$peer_src_ip-%H%M.log
bgp_table_dump_refresh_time: 120 

Let us know how it goes.

Paolo

On Sat, Oct 12, 2019 at 11:36:12PM -0400, Brooks Swinnerton wrote:
> Hello there!
> 
> I have pmacctd working with the Kafka addon and am attempting to include
> `src_as` and `dst_as` information based on the BGP sessions running on the
> same machine using the [BIRD router](https://bird.network.cz).
> 
> I was able to successfully get the BGP session stood up using a loopback
> address, but in both the Kafka consumer and `pmacct -s`, I do not see the
> AS values:
> 
> ```
> {"event_type": "purge", "as_src": 0, "as_dst": 0, "ip_src": "1.1.1.138",
> "ip_dst": "5.9.43.211", "port_src": 443, "port_dst": 48268, "ip_proto":
> "tcp", "stamp_inserted": "2019-10-13 02:50:00", "stamp_updated":
> "2019-10-13 02:53:31", "packets": 1, "bytes": 52, "writer_id":
> "default_kafka/3725"}
> ```
> 
> The pmacct log seems good:
> 
> ```
> Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default/core ): Promiscuous
> Mode Accounting Daemon, pmacctd 1.7.3-git (20190418-00+c4)
> Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
>  '--enable-kafka' '--enable-jansson' '--enable-l2' '--enable-64bit'
> '--enable-traffic-bins' '-
> Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default/core ): Reading
> configuration file '/etc/pmacct/pmacctd.peering.conf'.
> Oct 13 02:51:37 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ):
> cache entries=16411 base cache memory=54878384 bytes
> Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core ): [ens3,0]
> link type is: 1
> Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
> [/etc/pmacct/peering_agent.map] (re)loading map.
> Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core ):
> [/etc/pmacct/peering_agent.map] map successfully (re)loaded.
> Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ):
> JSON: setting object handlers.
> Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ): maximum
> BGP peers allowed: 2
> Oct 13 02:51:38 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ): waiting
> for BGP data on 127.0.0.1:180
> Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ): ***
> Purging cache - START (PID: 3673) ***
> Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ): ***
> Purging cache - END (PID: 3673, QN: 0/0, ET: 0) ***
> Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> [127.0.0.1] BGP peers usage: 1/2
> Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> [1.1.1.1] Capability: MultiProtocol [1] AFI [1] SAFI [1]
> Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> [1.1.1.1] Capability: 4-bytes AS [41] ASN [30]
> Oct 13 02:51:41 bdr-nyiix pmacctd[3666]: INFO ( default/core/BGP ):
> [1.1.1.1] BGP_OPEN: Local AS: 30 Remote AS: 397143 HoldTime: 90
> Oct 13 02:51:51 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ): ***
> Purging cache - START (PID: 3678) ***
> Oct 13 02:51:53 bdr-nyiix pmacctd[3666]: INFO ( default_kafka/kafka ): ***
> Purging cache - END (PID: 3678, QN: 679/679, ET: 0) ***
> ```
> 
> And the configuration is as follows:
> 
> ```
> !
> ! pmacctd configuration example
> !
> ! Did you know CONFIG-KEYS contains the detailed list of all configuration
> keys
> ! supported by 'nfacctd' and 'pmacctd' ?
> !
> ! debug: true
> daemonize: false
> pcap_interface: ens3
> aggregate: src_host, dst_host, src_port, dst_port, src_as, dst_as, proto
> sampling_rate: 10
> !
> plugins: kafka
> kafka_output: json
> kafka_broker_host: kafka-broker.fqdn.com
> kafka_topic: pmacct.acct
> kafka_refresh_time: 10
> kafka_history: 5m
> kafka_history_roundoff: m
> !
> bgp_daemon: true
> bgp_daemon_ip: 127.0.0.1
> bgp_daemon_port: 180
> bgp_daemon_max_peers: 1
> bgp_agent_map: /etc/pmacct/peering_agent.map
> pmacctd_as: bgp
> ```
> 
> With the /etc/pmacct/peering_agent.map as:
> 
> ```
> bgp_ip=1.1.1.1 ip=0.0.0.0/0
> ```
> 
> And the other end of the BGP configuration (in BIRD) being:
> 
> ```
> protocol bgp AS30v4c1 from transit_customer4 {
>   description "pmacctd";
>   local 127.0.0.1 port 179 as 30;
>   neighbor 127.0.0.1 port 180 as 30;
>   rr client;
> }
> ```
> 
> And it has exported ~150k routes.
> 
> Is there anything obvious that I'm doing wrong or perhaps a way that I can
> turn on more debugging to lead me on the right trail?

> ___
> 

Re: [pmacct-discussion] getting IPv6 traffic per /64 subnet

2019-10-11 Thread Paolo Lucente
st_port=0 AND ip_proto='ip' AND
> mac_src='0:0:0:0:0:0' AND mac_dst='0:0:0:0:0:0' AND ip_src='0.0.0.0' AND
> ip_dst='0.0.0.0'
> 2019-10-11T01:10:01.110120Z18 Query INSERT INTO `acct6_in`
> (ip_dst, dst_port, ip_proto, mac_src, mac_dst, ip_src, ip_dst, packets,
> bytes) VALUES ('::', 0, 'ip', '0:0:0:0:0:0', '0:0:0:0:0:0', '0.0.0.0',
> '0.0.0.0', 294880, 412303624)
> 2019-10-11T01:10:01.146967Z20 Query INSERT INTO `acct6_out`
> (ip_src, ip_src, src_port, dst_port, ip_proto, mac_src, mac_dst, ip_src,
> ip_dst, packets, bytes) VALUES ('', '::', 0, 0, 'ip', '0:0:0:0:0:0',
> '0:0:0:0:0:0', '0.0.0.0', '0.0.0.0', 186580, 48111642)
> 2019-10-11T01:11:01.016057Z23 Query UPDATE `acct6_out` SET
> packets=packets+167580, bytes=bytes+44638648 WHERE ip_src='' AND
> ip_src='::' AND src_port=0 AND dst_port=0 AND ip_proto='ip' AND
> mac_src='0:0:0:0:0:0' AND mac_dst='0:0:0:0:0:0' AND ip_src='0.0.0.0' AND
> ip_dst='0.0.0.0'
> 2019-10-11T01:11:01.016832Z22 Query UPDATE `acct6_in` SET
> packets=packets+261858, bytes=bytes+363933056 WHERE ip_dst='::' AND
> dst_port=0 AND ip_proto='ip' AND mac_src='0:0:0:0:0:0' AND
> mac_dst='0:0:0:0:0:0' AND ip_src='0.0.0.0' AND ip_dst='0.0.0.0'
> 2019-10-11T01:11:01.107406Z22 Query INSERT INTO `acct6_in`
> (ip_dst, dst_port, ip_proto, mac_src, mac_dst, ip_src, ip_dst, packets,
> bytes) VALUES ('::', 0, 'ip', '0:0:0:0:0:0', '0:0:0:0:0:0', '0.0.0.0',
> '0.0.0.0', 261858, 363933056)
> 2019-10-11T01:11:01.133611Z23 Query INSERT INTO `acct6_out`
> (ip_src, ip_src, src_port, dst_port, ip_proto, mac_src, mac_dst, ip_src,
> ip_dst, packets, bytes) VALUES ('', '::', 0, 0, 'ip', '0:0:0:0:0:0',
> '0:0:0:0:0:0', '0.0.0.0', '0.0.0.0', 167580, 44638648)
> 2019-10-11T01:12:01.014452Z30 Query UPDATE `acct6_in` SET
> packets=packets+242934, bytes=bytes+335519993 WHERE ip_dst='::' AND
> dst_port=0 AND ip_proto='ip' AND mac_src='0:0:0:0:0:0' AND
> mac_dst='0:0:0:0:0:0' AND ip_src='0.0.0.0' AND ip_dst='0.0.0.0'
> 2019-10-11T01:12:01.015199Z29 Query UPDATE `acct6_out` SET
> packets=packets+162488, bytes=bytes+44937541 WHERE ip_src='' AND
> ip_src='::' AND src_port=0 AND dst_port=0 AND ip_proto='ip' AND
> mac_src='0:0:0:0:0:0' AND mac_dst='0:0:0:0:0:0' AND ip_src='0.0.0.0' AND
> ip_dst='0.0.0.0'
> 2019-10-11T01:12:01.132584Z29 Query INSERT INTO `acct6_out`
> (ip_src, ip_src, src_port, dst_port, ip_proto, mac_src, mac_dst, ip_src,
> ip_dst, packets, bytes) VALUES ('', '::', 0, 0, 'ip', '0:0:0:0:0:0',
> '0:0:0:0:0:0', '0.0.0.0', '0.0.0.0', 162488, 44937541)
> 2019-10-11T01:12:01.141778Z30 Query INSERT INTO `acct6_in`
> (ip_dst, dst_port, ip_proto, mac_src, mac_dst, ip_src, ip_dst, packets,
> bytes) VALUES ('::', 0, 'ip', '0:0:0:0:0:0', '0:0:0:0:0:0', '0.0.0.0',
> '0.0.0.0', 242934, 335519993)
> 
> 
> do you know why it behaves like this
> the version i have is
> pmacct -V
> pmacct, pmacct client 1.5.2 (20150907-00)
> 
> which is the one by default in ubuntu 18 repo
> 
> 
> Thanks
> 
> On Thu, Oct 10, 2019 at 7:29 PM Paolo Lucente  wrote:
> 
> >
> > Hi,
> >
> > Thank you for reporting this. Can you show the full error message you
> > get back from MySQL? It may give relevant additional info; feel free to
> > anonymize any confidential data it may contain (ie. IP addresses).
> >
> > Paolo
> >
> >
> > On Thu, Oct 10, 2019 at 12:34:12PM -0400, moftah moftah wrote:
> > > Hi All,
> > > I have issue in making pmacct aggregate traffic for all ipv6 to be per
> > /64
> > > not individual ip
> > >
> > > I am logging ipv4 and ipv6 and i made special plugin for ipv6 so i can
> > use
> > > network_mask but it does not work
> > >
> > > attached is my config
> > >
> > > with this config the error i get is
> > > column ip_src specified twice
> > > column ip_dst specified twice
> > >
> > > what i am doing wrong here
> > >
> > > the table is v1 table with added 2 fields
> > > net_dst
> > > net_src
> > >
> > > can anyone understand why i get column specified twice error
> > >
> > >
> > > Thanks
> > >
> > > my config
> > > aggregate[in]: dst_host
> > > aggregate[out]: src_host
> > > aggregate[in6]: dst_net
> > > aggregate[out6]: src_net
> > > aggregate_filter[in]: dst net xxx.xxx.xxx.0/22 or dst net
> > ttt.ttt.ttt.0/22
> > > aggregate_filter[out]: src net xxx.xxx.xxx.0/22  or src net
> > ttt.ttt.ttt.0/22
> > > aggregate_filter[in6]: dst net :::/36
> > > aggreg

Re: [pmacct-discussion] getting IPv6 traffic per /64 subnet

2019-10-10 Thread Paolo Lucente


Hi,

Thank you for reporting this. Can you show the full error message you
get back from MySQL? It may give relevant additional info; feel free to
anonymize any confidential data it may contain (ie. IP addresses).

Paolo 


On Thu, Oct 10, 2019 at 12:34:12PM -0400, moftah moftah wrote:
> Hi All,
> I have issue in making pmacct aggregate traffic for all ipv6 to be per /64
> not individual ip
> 
> I am logging ipv4 and ipv6 and i made special plugin for ipv6 so i can use
> network_mask but it does not work
> 
> attached is my config
> 
> with this config the error i get is
> column ip_src specified twice
> column ip_dst specified twice
> 
> what i am doing wrong here
> 
> the table is v1 table with added 2 fields
> net_dst
> net_src
> 
> can anyone understand why i get column specified twice error
> 
> 
> Thanks
> 
> my config
> aggregate[in]: dst_host
> aggregate[out]: src_host
> aggregate[in6]: dst_net
> aggregate[out6]: src_net
> aggregate_filter[in]: dst net xxx.xxx.xxx.0/22 or dst net ttt.ttt.ttt.0/22
> aggregate_filter[out]: src net xxx.xxx.xxx.0/22  or src net ttt.ttt.ttt.0/22
> aggregate_filter[in6]: dst net :::/36
> aggregate_filter[out6]: src net :::/36
> networks_mask[in6]: 64
> networks_mask[out6]: 64
> interface: ens1f0
> plugins: , mysql[in], mysql[out] , mysql[in6], mysql[out6]
> sql_multi_values: 10240
> sql_history_roundoff: h
> sql_refresh_time: 60
> sql_table[in]: acct_in
> sql_table[out]: acct_out
> sql_table[in6]: acct6_in
> sql_table[out6]: acct6_out
> !sql_host: localhost
> sql_passwd:a
> sql_user:a
> plugin_buffer_size: 10240
> plugin_pipe_size: 2048000
> !sql_table_version: 9

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] nfacctd monthly accounting problem

2019-09-10 Thread Paolo Lucente


Hi Terry,

I have tried to reproduce the issue by firing up a VM with Debian (can't do
Ubuntu) but unfortunately i could not manage to reproduce it. If, as you
say, nfacctd crashes, it should then be easy for you guys to collect
useful info for troubleshooting at the next crash. The impact would be
to restart nfacctd after having done a 'ulimit -c unlimited' so that the
coredump file is saved on disk (inspecting that file with gdb will then
say where the program is crashing). You may want to review this section
of the docs for more info:

https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L2644-#L2659
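
A minimal sketch of that procedure; the binary and core file paths below are
assumptions and may differ on your install:

  ulimit -c unlimited
  nfacctd -f /path/to/nfacctd.conf
  # ... after the next crash ...
  gdb $(which nfacctd) /path/to/core
  # then, at the gdb prompt:
  (gdb) bt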

This all said, i am trading off impact against urgency. Should the issue be
more urgent than catching it on Oct 1st, we can indeed take a different
route (especially if you can spin up a VM equal to the one where nfacctd is
crashing). Let me know and, this being about troubleshooting, we may
freely continue by 1:1 emails if you wish and summarize findings here at
a later point.

Paolo

On Mon, Sep 09, 2019 at 01:51:41AM +, Terry Duchcherer wrote:
> My techs have just been restarting the server at the beginning of each month. 
> If memory serves me correctly, the nfacctd service dies.
> 
> Terry
> 
> 
> -Original Message-
> From: pmacct-discussion  On Behalf Of 
> Paolo Lucente
> Sent: Thursday, September 5, 2019 7:16 AM
> To: pmacct-discussion@pmacct.net
> Subject: Re: [pmacct-discussion] nfacctd monthly accounting problem
> 
> 
> Hi Terry,
> 
> Thanks for reporting this issue.
> 
> Can you elaborate a bit more on the 'nfacctd does not start accounting for 
> the new month'? It just stops accounting or it keeps accounting on the old 
> month (ie. it seems like it does not flip the month)? Or some different 
> behaviour? The more details you can share, the better.
> 
> Thanks,
> Paolo
>  
> On Wed, Sep 04, 2019 at 04:25:38PM +, Terry Duchcherer wrote:
> > We are using nfacct to do monthly data accounting for our customers, all 
> > works fine except at the beginning of a new month nfacctd does not start 
> > accounting for the new month. Restarting the service corrects the problem.
> > 
> > nfacctd version 1.7.1-git
> > Ubuntu version 16.04
> > 
> > Here is our config:
> > 
> > nfacctd_port: 2055
> > plugins: mysql[inbound], mysql[outbound]
> > sql_db: dbname
> > sql_host: localhost
> > sql_user: dbuser
> > sql_passwd: dbpass
> > sql_table[inbound]: acct_in
> > sql_table[outbound]: acct_out
> > aggregate[inbound]: dst_host
> > aggregate[outbound]: src_host
> > aggregate_filter[inbound]: dst net (nnn.nnn.nnn.nnn/22 or 
> > nnn.nnn.nnn.nnn /22 or nnn.nnn.nnn.nnn /21)
> > aggregate_filter[outbound]: src net (nnn.nnn.nnn.nnn /22 or 
> > nnn.nnn.nnn.nnn /22 or nnn.nnn.nnn.nnn /21)
> > sql_refresh_time: 300
> > sql_history: 1M
> > sql_history_roundoff: h
> > 
> > What am I missing?
> > 
> > Thanks in Advance;
> > Terry
> > NETAGO
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> 
> > ___
> > pmacct-discussion mailing list
> > http://www.pmacct.net/#mailinglists
> 
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] nfacctd monthly accounting problem

2019-09-05 Thread Paolo Lucente


Hi Terry,

Thanks for reporting this issue.

Can you elaborate a bit more on the 'nfacctd does not start accounting
for the new month'? It just stops accounting or it keeps accounting on
the old month (ie. it seems like it does not flip the month)? Or some
different behaviour? The more details you can share, the better.

Thanks,
Paolo
 
On Wed, Sep 04, 2019 at 04:25:38PM +, Terry Duchcherer wrote:
> We are using nfacct to do monthly data accounting for our customers, all 
> works fine except at the beginning of a new month nfacctd does not start 
> accounting for the new month. Restarting the service corrects the problem.
> 
> nfacctd version 1.7.1-git
> Ubuntu version 16.04
> 
> Here is our config:
> 
> nfacctd_port: 2055
> plugins: mysql[inbound], mysql[outbound]
> sql_db: dbname
> sql_host: localhost
> sql_user: dbuser
> sql_passwd: dbpass
> sql_table[inbound]: acct_in
> sql_table[outbound]: acct_out
> aggregate[inbound]: dst_host
> aggregate[outbound]: src_host
> aggregate_filter[inbound]: dst net (nnn.nnn.nnn.nnn/22 or nnn.nnn.nnn.nnn /22 
> or nnn.nnn.nnn.nnn /21)
> aggregate_filter[outbound]: src net (nnn.nnn.nnn.nnn /22 or nnn.nnn.nnn.nnn 
> /22 or nnn.nnn.nnn.nnn /21)
> sql_refresh_time: 300
> sql_history: 1M
> sql_history_roundoff: h
> 
> What am I missing?
> 
> Thanks in Advance;
> Terry
> NETAGO
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Use nfacctd renormalize with tee plugin

2019-09-04 Thread Paolo Lucente


Hi Alexandre,

Renormalization is a feature available only with collection (not teeing,
since teeing does no or very minimal parsing). I am curious, are you using
some sort of variable sampling rate or is it a constant? Perhaps you see
where i am going with this - if constant, you could perhaps factor it in at
some point in your post-processing.
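
For comparison, a minimal sketch of a setup where renormalization does apply,
ie. when nfacctd itself collects and aggregates (plugin name and aggregation
keys below are just placeholders):

nfacctd_port: 2100
nfacctd_renormalize: true
nfacctd_ext_sampling_rate: 1024
plugins: print[test]
aggregate[test]: src_host, dst_host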

Paolo
  
On Wed, Sep 04, 2019 at 04:41:15PM +0200, alexandre wrote:
> Hello,
> 
> We are currently exporting NetFlow data to multiple NetFlow
> collectors using the tee plugin along with nfacctd. The problem is that
> we are sampling our flows, and some collectors don't offer the
> possibility to correct the values.
> 
> I saw that nfacctd had some parameters that could correct the data (like
> renormalize) and I would like to use it to correct my sampled Netflow
> before exporting it with the tee plugin, but I still get the sampled
> values.
> 
> We are using Netflow v9 and the sampling value is not sent by the router.
> 
> Here is an example of the .conf file I use for nfacctd :
> 
> nfacctd_port: 2100
> nfacctd_ip: 0.0.0.0
> nfacctd_renormalize: true
> nfacctd_ext_sampling_rate: 1024
> !
> plugins: tee[br01]
> tee_receivers[br01]: /var/pmacct/tee_br01.lst
> !
> !pre_tag_map: /path/to/pretag.map
> !
> plugin_buffer_size: 10240
> plugin_pipe_size: 1024000
> nfacctd_pipe_size: 1024000
> 
> Do you have any suggestion about it ?
> 
> Many thanks
> 
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] MySQL SSL/TLS support

2019-08-20 Thread Paolo Lucente

Hi Scott,

To confirm, SSL/TLS connections to MySQL are not currently supported.
While from a coding perspective it does not appear to be a big deal
(a matter of adding a mysql_ssl_set() call before mysql_real_connect()), i
have no infrastructure to test this working properly. Can you help with
this? If so we can follow-up by unicast email as the dev part will be of
little general interest:

1) please send over the output of a pmacctd -V

2) Based on the version you are running and the compile options, i will say
   where to insert the mysql_ssl_set() call.

3) As preparation, you should have ready the following inputs required
   by mysql_ssl_set():

   * The path name of the client private key file.
   * The path name of the client public key certificate file. 
   * The path name of the Certificate Authority (CA) certificate file
 (apparently optional).
   * The path name of the directory that contains trusted SSL CA
 certificate files.

For a first round we'll hard-code all this info to prove it works;
then, once happy, we can move all of that to config directives.
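
For reference, a hedged sketch of what the call could look like in isolation;
all paths and credentials are placeholders and the exact insertion point in
the pmacct code is what point 2) above is about (newer client libraries may
also prefer mysql_options() over mysql_ssl_set()):

```
#include <mysql.h>
#include <stdio.h>

int main(void) {
  MYSQL *db = mysql_init(NULL);

  /* client private key, client cert, CA cert, trusted CA dir, cipher;
   * all paths below are placeholders */
  mysql_ssl_set(db,
                "/etc/pmacct/ssl/client-key.pem",
                "/etc/pmacct/ssl/client-cert.pem",
                "/etc/pmacct/ssl/ca-cert.pem",
                "/etc/pmacct/ssl",
                NULL);

  /* the SSL parameters above are picked up by the connect call */
  if (!mysql_real_connect(db, "localhost", "pmacct", "secret", "pmacct",
                          0, NULL, 0)) {
    fprintf(stderr, "connect failed: %s\n", mysql_error(db));
    return 1;
  }

  mysql_close(db);
  return 0;
}
```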

Paolo

On Mon, Aug 19, 2019 at 05:05:57AM +, Scott Pettit wrote:
> Hello,
> 
> I can't find a configuration key to enable SSL/TLS when using MySQL with 
> pmacct. Is this possible?
> 
> -Scott
> --
> 
> 
> Scott Pettit | Director
> ☎+64 9 950 | 
> ✉spet...@vorco.net
> 
> Vorco | ☎+64 9 222
> 205/100 Parnell Road, Parnell, Auckland 1052, New Zealand
> http://www.vorco.net
> 
> The content of this message and any attachments may be privileged, 
> confidential or sensitive and is intended only for the use of the intended 
> recipient(s). Any unauthorised use is prohibited. Views expressed in this 
> message are those of the individual sender, except where stated otherwise 
> with appropriate authority. All pricing provided is valid at the time of 
> writing only and may change without notice. Sales are made subject to our 
> Terms & Conditions, available on our website or on request. Errors & 
> Omissions Excepted.
> 
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

Re: [pmacct-discussion] pmbmpd and IPv6

2019-07-15 Thread Paolo Lucente


Hi Fabien,

Just to confirm that IPv6 should be supported just fine in pmbmpd: the
code for parsing BGP Update PDUs is shared with BGP and, actually, even
the compiler flags that allowed disabling IPv6 support are gone.
It smells like IPv6 data may not be included in the export from the
router - can you inspect raw BMP data (say, with Wireshark) to be extra
sure? Of course just let me know if you need help with that.
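
For example, a minimal capture sketch, using the BMP port from the config
below; the output file path is just a placeholder:

  sudo tcpdump -i any -s 0 -w /tmp/bmp-a9k.pcap 'tcp port 1790'

Opening the file in Wireshark should show whether any IPv6 NLRI is carried
in the BMP Route Monitoring messages at all.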
 
Paolo

On Mon, Jul 15, 2019 at 01:45:55PM +0200, Fabien VINCENT wrote:
> Dear, 
> 
> I am trying to set up a PoC using pmbmpd 
> 
> I have a strange behavior with A9K/IOS-XR sending BMP data to bmp-server
> 
> 
> I have the sample config 
> 
> !
> bmp_daemon: true
> bmp_daemon_ip: 10.x.y.z
> bmp_daemon_port: 1790
> !
> ! default to 10
> !bmp_daemon_max_peers
> !
> bmp_daemon_msglog_file: /var/log/pmbmpd/bmp-$peer_src_ip.log
> !
> 
> bmp_daemon_allow_file: /opt/pmacct/conf/bmp.allowed
> bmp_dump_file: /tmp/pmbmpd/$bmp_router-%s.dump
> bmp_dump_output: json
> bmp_dump_refresh_time: 120 
> 
> And when dump files are written, nothing related to IPv6 BMP export. Is
> it supported on pmacct 1.7.3 ? 
> 
> # /usr/local/sbin/pmbmpd -V
> pmacct BMP Collector Daemon, pmbmpd 1.7.3-git (20190418-00+c4)
> 
> Arguments:
>  '--enable-pgsql' '--enable-rabbitmq' '--enable-kafka' '--enable-geoip'
> '--enable-jansson' '--enable-l2' '--enable-64bit'
> '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
> '--enable-st-bins'
> 
> Libs:
> libpcap version 1.8.1
> PostgreSQL 19
> rabbimq-c 0.8.0
> rdkafka 0.11.3
> jansson 2.11
> 
> System:
> Linux 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019
> x86_64
> 
> Compiler:
> gcc 7.4.0
> 
> For suggestions, critics, bugs, contact me: Paolo Lucente
> . 
> 
> Anything related to pmbmpd or IOS-XR ? 
> 
> -- 
> FABIEN VINCENT
> ---
> @beufanet
> ---

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] sfprobe not sending exports

2019-06-23 Thread Paolo Lucente


Hi Yang,

Apparently you are exporting flows to [127.0.0.1]:6343 but you are
listening with tcpdump on interface 'eno1'? 
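
As a quick check, a minimal sketch: capture on loopback and on the UDP port
the exports actually go to (6343, per the debug output below; note also that
the tcpdump in the original post filters on port 6463 rather than 6343). The
sfprobe_receiver directive can point the exports at a different collector if
needed:

  sudo tcpdump -i lo -nn 'udp port 6343'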

Paolo
 
On Sun, Jun 23, 2019 at 12:53:59AM -0700, Yang Yu wrote:
> I tried to use pmacct to sample a local interface and create sFlow
> exports. From debug it looks like packets are being sampled but no
> sFlow export was ever received. I tried sampling rate 1 or 100, still
> no exports. What am I missing?
> 
> 
> Yang
> 
> 
> >
> cat pmacctd_sflow.conf
> plugins: sfprobe
> sampling_rate:1
> pcap_interface: eno1
> 
> 
> >
> $ sudo pmacctd -f pmacctd_sflow.conf  -d
> DEBUG: [pmacctd_sflow.conf] plugin name/type: 'default'/'core'.
> DEBUG: [pmacctd_sflow.conf] plugin name/type: 'default_sfprobe'/'sfprobe'.
> DEBUG: [pmacctd_sflow.conf] sampling_rate:100
> DEBUG: [pmacctd_sflow.conf] pcap_interface:eno1
> DEBUG: [pmacctd_sflow.conf] debug:true
> INFO ( default/core ): Promiscuous Mode Accounting Daemon, pmacctd
> 1.7.3-git (20190418-00+c4)
> INFO ( default/core ):  '--enable-jansson' '--enable-mysql'
> '--enable-pgsql' '--enable-sqlite3' '--enable-ndpi'
> 'PKG_CONFIG_PATH=/usr/local/libdata/pkgconfig' '--enable-l2'
> '--enable-64bit' '--enable-traffic-bins' '--enable-bgp-bins'
> '--enable-bmp-bins' '--enable-st-bins'
> INFO ( default/core ): Reading configuration file
> '/home/user/tmp/pmacct/pmacctd_sflow.conf'.
> INFO ( default_sfprobe/sfprobe ): plugin_pipe_size=4096000 bytes
> plugin_buffer_size=424 bytes
> INFO ( default_sfprobe/sfprobe ): ctrl channel: obtained=212992 bytes
> target=77280 bytes
> DEBUG ( default_sfprobe/sfprobe ): Creating sFlow agent.
> INFO ( default_sfprobe/sfprobe ): Exporting flows to [127.0.0.1]:6343
> INFO ( default_sfprobe/sfprobe ): Sampling at: 1/100
> INFO ( default/core ): [eno1,0] link type is: 1
> DEBUG ( default_sfprobe/sfprobe ): 74d435e8fa15 -> 384f49cda533 (len =
> 549, captured = 128)
> DEBUG ( default_sfprobe/sfprobe ): 384f49cda533 -> 74d435e8fa15 (len =
> 1514, captured = 128)
> DEBUG ( default_sfprobe/sfprobe ): 384f49cda533 -> 74d435e8fa15 (len =
> 2942, captured = 128)
> DEBUG ( default_sfprobe/sfprobe ): 74d435e8fa15 -> 384f49cda533 (len =
> 86, captured = 86)
> DEBUG ( default_sfprobe/sfprobe ): 74d435e8fa15 -> 384f49cda533 (len =
> 86, captured = 86)
> 
> >
> $ sudo tcpdump port 6463 -
> tcpdump: listening on eno1, link-type EN10MB (Ethernet), capture size
> 262144 bytes
> ^C
> 0 packets captured
> 0 packets received by filter
> 0 packets dropped by kernel
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] bgp_peer_src_as_map and pmacctd

2019-06-18 Thread Paolo Lucente

Ciao Simone,

The config and maps all look good and, to be frank, it should all work.
I admit this may be a config better tested with nfacctd/sfacctd (where it
should just work) than with pmacctd/uacctd. If you have interest in trying to
make it work, i'd be more than happy to support you and investigate the
issue.

Should you be up for that: the setup is a bit involved and by far the
easiest would be if i could troubleshoot on your own setup/testbed. If
that is not possible, i can simulate a setup in my own testbed (it will
take longer). Let me know what is possible (here or by unicast email).

Paolo

On Tue, Jun 18, 2019 at 10:58:50AM +0200, Simone Ricci wrote:
> Good Morning,
> 
> I’m facing a problem with pmacctd trying to use bgp_peer_src_as_map directive 
> to populate accordingly the peer_src_as field. Our setup is quite simple:
> 
> - The collector (running pmacctd) sees traffic subject to analysis on two 
> interfaces
> - Every link lives in its own vlan
> - One link has multiple peers in it (it’s an IXP)
> 
> This is the current configuration:
> 
> ## pmacct.conf ##
> daemonize: false
> pcap_interfaces_map: /opt/pmacct/etc/pcap_interfaces.map
> pcap_ifindex: map
> plugins: memory[in]
> aggregate[in]: src_as, peer_src_as
> imt_buckets: 65537
> imt_mem_pools_size: 65535
> imt_mem_pools_number: 1048576
> plugin_buffer_size: 1048576
> plugin_pipe_size: 134217728
> bgp_daemon: true
> pmacctd_as: bgp
> bgp_agent_map: /opt/pmacct/etc/bgp_agent.map
> bgp_peer_src_as_map: /opt/pmacct/etc/bgp_peers.map
> bgp_peer_src_as_type: map
> 
> 
> ## pcap_interfaces.map ##
> ifname=enp1s0f0 ifindex=100
> ifname=enp1s0f1 ifindex=200
> 
> ## bgp_agent.map ##
> bgp_ip=W.X.Y.Z ip=0.0.0.0/0 ! W.X.Y.Z is peer’s router id
> 
> ## bgp_peers.map ##
> id=X ip=0.0.0.0/0 src_mac=xx:xx:xx:xx:xx:xx
> id=Y ip=0.0.0.0/0 src_mac=yy:yy:yy:yy:yy:yy
> id=Z ip=0.0.0.0/0 src_mac=zz:zz:zz:zz:zz:zz
> 
> Obviously macs and asns are hidden to protect the innocents (!)
> 
> When I start the daemon, it comes up correctly without giving any 
> warning/error, but peer_src_as gets always populated with the first entry on 
> the relevant map (in this case, X).
> Now I’m wondering, is this configuration supported ? Or maybe src_mac is 
> supposed to be used only with nfacctd and sfacctd ?
> 
> To overcome the problem I can easily spawn multiple pmacctd daemons, each one 
> with the relevant pcap_filter directive, then collect data separately (which 
> is not an issue since the memory plugin is just for debugging purposes, the 
> plan is of course is to send everything to influx and/or elasticsearch for 
> further analysis)…but this seems rather hackish to me.
> 
> Thanks!
> 
> 
> -- 
> Simone Ricci
> 
> 
> 
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

Re: [pmacct-discussion] pmacct on ppp interface

2019-05-29 Thread Paolo Lucente


Hi Alex,

First things first: 1.6.1 is a release of almost 3 years ago, i can't
support that - please upgrade to 1.7.3 or master code. That said i can
confirm pmacctd/uacctd should support PPP-encapsulated traffic. Also, you
may send me a trace of the NFLOG traffic (as captured by tcpdump) via
unicast email for some troubleshooting.
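
A minimal capture sketch, assuming a libpcap build with NFLOG support and
using the netlink group from the iptables rules below; the file path is a
placeholder:

  sudo tcpdump -i nflog:1 -s 0 -w /tmp/nflog-group1.pcap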

Paolo

On Wed, May 29, 2019 at 12:37:40PM +0300, Alex K wrote:
> Hi All,
> 
> I am facing the following issue:
> 
> I have configured iptables to log packets coming through a ppp interface
> (named sim0) using NFLOG target. These packets are forwarded to uacctd to
> the respective uacctd group, as below, which are printed in a CSV file
> using the print plugin:
> 
> 
> iptables (mangle table):
> -A INPUT -i sim0 -j NFLOG --nflog-group 1 --nflog-size 40 --nflog-threshold
> 10 --nflog-prefix sim0in
> -A FORWARD -i sim0 -j NFLOG --nflog-group 1 --nflog-size 40
> --nflog-threshold 10 --nflog-prefix sim0in
> -A POSTROUTING -o sim0 -j NFLOG --nflog-group 1 --nflog-size 40
> --nflog-threshold 10 --nflog-prefix sim0out
> 
> 
> uacctd config:
> ! Collect traffic on sim0
> daemonize: true
> debug:  true
> promisc:   false
> pidfile:   /var/run/uacctd_sim0.pid
> imt_path:  /tmp/uacctd_sim0.pipe
> !syslog: daemon
> logfile: /var/log/uacct/uacct_sim0.log
> uacctd_group: 1
> plugins: print[in_out_sim0]
> aggregate[in_out_sim0]:src_host,dst_host,src_port,dst_port,proto
> print_output[in_out_sim0]: csv
> print_output_file[in_out_sim0]: /var/lib/uacctd-sim0-%Y%m%d.csv
> print_output_file_append[in_out_sim0]: true
> print_refresh_time: 10
> print_history: 24h
> 
> Outgoing traffic is received normally and logged to the CSV file.
> Using tcpdump I can see all the in/out traffic and iptables counters are
> rising at the respective chains. The sim0 interface is dynamically brought
> up from a ppp connection.
> 
> Do you have any idea why uacctd is not getting those incoming packets
> (INPUT and FORWARD chains), or how to troubleshoot this? I am using
> pmacct 1.6.1-1.
> 
> Thank you!
> Alex

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] nfacctd crash when using pre_tag_map

2019-05-28 Thread Paolo Lucente

Hi Felix,

Thanks for getting in touch. Can you please get more data about the
crash by following this section fo the QUICKSTART (i'd need an output of
GDB 'bt'):

https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L2606-#L2635

You can follow up 1:1 so that we don't disturb everybody with the
back/forth that will be needed by the troubleshooting process. We can
then summarize resolution on the list.

Paolo
  
On Mon, May 27, 2019 at 03:55:04PM +, Felix Stolba wrote:
>  Hi
> 
> I’m trying to use a pre_tag_map with less than 5000 entries with the purpose 
> of adding the ingress and egress interface names to the flow records as 
> labels. When using this map, nfacctd reproducibly crashes, tested using 1.7.1 
> and 1.7.3. I would appreciate if someone (Paolo? :) ) could help isolate the 
> problem. Debug logs can be found attached. I will be happy to provide any 
> additional info that will be needed.
> 
> When crashing, nfacctd emits this log message:
> realloc(): invalid next size
> Aborted (core dumped)
> 
> Few tests I've already done:
> * Use a smaller map: works - did a PoC using a map of about 200 lines, this 
> worked great.
> * Delete everything below OUTTABLE (see below): works - having only the top 
> part of the map keeps pmacct running
> * Delete some lines below OUTTABLE - produced a different error message: 
> "corrupted size vs. prev_size"
> 
> The pre_tag_map essentially looks like the ones in the JEQ examples [1]:
> 
> set_label=INTERFACE_NAME ip=ROUTER_IP in=IFINDEX jeq=OUTTABLE
> ... 2000 lines of similar mappings ...
> set_label=INTERFACE_NAME ip=ROUTER_IP out=IFINDEX label=OUTTABLE  
> ... 2000 lines of similar mappings ...
> 
> 
> Best regards
> Felix
> 
> [1] https://github.com/pmacct/pmacct/blob/master/examples/pretag.map.example
> 
> 
> 

> flow01:~/pmacct-to-elasticsearch# nfacctd -f /etc/pmacct/pmacctd.conf -d
> DEBUG: [/etc/pmacct/pmacctd.conf] plugin name/type: 'default'/'core'.
> DEBUG: [/etc/pmacct/pmacctd.conf] plugin name/type: 
> 'elasticsearch_print'/'print'.
> DEBUG: [/etc/pmacct/pmacctd.conf] debug_internal_msg:true
> DEBUG: [/etc/pmacct/pmacctd.conf] nfacctd_time_new:true
> DEBUG: [/etc/pmacct/pmacctd.conf] nfacctd_account_options:true
> DEBUG: [/etc/pmacct/pmacctd.conf] nfacctd_ip:0.0.0.0
> DEBUG: [/etc/pmacct/pmacctd.conf] nfacctd_port:4739
> DEBUG: [/etc/pmacct/pmacctd.conf] nfacctd_disable_opt_scope_check:true
> DEBUG: [/etc/pmacct/pmacctd.conf] 
> nfacctd_templates_file:/etc/pmacct/nf_templates_cache
> DEBUG: [/etc/pmacct/pmacctd.conf] nfacctd_net:bmp
> DEBUG: [/etc/pmacct/pmacctd.conf] nfacctd_as:bmp
> DEBUG: [/etc/pmacct/pmacctd.conf] pmacctd_as:false
> DEBUG: [/etc/pmacct/pmacctd.conf] pmacctd_net:false
> WARN: [/etc/pmacct/pmacctd.conf] Invalid network aggregation value 'false'
> WARN: [/etc/pmacct/pmacctd.conf:18] Invalid value. Ignored.
> DEBUG: [/etc/pmacct/pmacctd.conf] nfacctd_ext_sampling_rate:1024
> DEBUG: [/etc/pmacct/pmacctd.conf] nfacctd_renormalize:true
> DEBUG: [/etc/pmacct/pmacctd.conf] bmp_daemon:true
> DEBUG: [/etc/pmacct/pmacctd.conf] bmp_daemon_ip:0.0.0.0
> DEBUG: [/etc/pmacct/pmacctd.conf] bmp_daemon_max_peers:100
> DEBUG: [/etc/pmacct/pmacctd.conf] logfile:/var/log/pmacct/pmacctd.log
> DEBUG: [/etc/pmacct/pmacctd.conf] 
> print_output_file[elasticsearch_print]:/elasticsearch_print.json
> DEBUG: [/etc/pmacct/pmacctd.conf] print_output[elasticsearch_print]:json
> DEBUG: [/etc/pmacct/pmacctd.conf] 
> print_trigger_exec[elasticsearch_print]:/etc/pmacct/p2es/triggers/elasticsearch_print
> DEBUG: [/etc/pmacct/pmacctd.conf] print_refresh_time[elasticsearch_print]:15
> DEBUG: [/etc/pmacct/pmacctd.conf] aggregate[elasticsearch_print]:src_host, 
> dst_host,  in_iface, out_iface, timestamp_start, timestamp_end, src_port, 
> dst_port, proto, tos, src_mask, dst_mask, tcpflags, etype, src_host_country, 
> dst_host_country, vlan, sampling_rate, tag, tag2, label, src_as, dst_as, 
> as_path, std_comm, ext_comm, lrg_comm, local_pref, med, src_as_path, 
> src_std_comm, src_ext_comm, src_lrg_comm, src_local_pref, src_med, 
> mpls_vpn_rd, peer_src_as, peer_dst_as, peer_dst_ip, peer_src_ip, src_roa, 
> dst_roa, src_net, dst_net
> DEBUG: [/etc/pmacct/pmacctd.conf] geoipv2_file:/etc/pmacct/GeoLite2-City.mmdb
> DEBUG: [/etc/pmacct/pmacctd.conf] pre_tag_map:/etc/pmacct/ifindex.map
> DEBUG: [/etc/pmacct/pmacctd.conf] maps_refresh:true
> DEBUG: [/etc/pmacct/pmacctd.conf] maps_entries:64000
> DEBUG: [/etc/pmacct/pmacctd.conf] maps_index:true
> DEBUG: [/etc/pmacct/pmacctd.conf] rpki_rtr_cache:rpki01:8282
> DEBUG: [/etc/pmacct/pmacctd.conf] rpki_rtr_cache_version:0
> DEBUG: [/etc/pmacct/pmacctd.conf] debug:true
> realloc(): invalid next size
> Aborted (core dumped)

> 2019-05-27T06:59:50Z INFO ( default/core/BMP ): waiting for BMP data on 
> 0.0.0.0:1790
> 2019-05-27T06:59:55Z INFO ( elasticsearch_print/print ): 
> plugin_pipe_size=4096000 bytes plugin_buffer_size=1548 bytes
> 2019-05-27T06:59:55Z INFO ( elasticsearch_print/print ): ctrl channel: 
> 

Re: [pmacct-discussion] Tolerating small packet flooding with pmacctd/nfacctd

2019-05-16 Thread Paolo Lucente


Hi Mikhail,

For the export (pmacctd) part let me point you to Q7 of the FAQS doc:

https://github.com/pmacct/pmacct/blob/master/FAQS#L71-#L101

Specifically PF_RING and ZeroMQ-based internal buffering (for this last
part grep 'ZeroMQ' in the QUICKSTART document). 
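
As a sketch of the ZeroMQ-based buffering just mentioned, assuming pmacctd is
built with --enable-zmq and exports NetFlow via the nfprobe plugin (plugin
name and receiver address below are placeholders):

plugins: nfprobe[export]
nfprobe_receiver[export]: 192.0.2.10:2055
plugin_pipe_zmq[export]: true
plugin_pipe_zmq_profile[export]: large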

For the collection (nfacctd) part: if, for your project, you can use
NetFlow v5 or sFlow exports (which are both not template-based) then you
could rely on SO_REUSEPORT. Although described in the context of BGP
collection, you can re-use  the idea for flow collection:

https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L1757-#L1767

If, instead, you are using NetFlow v9/IPFIX (which are both template-based),
then we may want to resort to a finer idea, ie. a replicator (nfacctd with
the 'tee' plugin) set up as a balancer in front of the actual nfacctd
collectors. We can follow-up if this is the case; i'd also like to better
understand the export part, ie. sharing your config would help.
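
Should that be the direction, a rough sketch of the replicator idea; the
addresses, ports and receiver list are purely illustrative:

! front-end replicator
nfacctd_port: 2055
plugins: tee[balance]
tee_receivers[balance]: /etc/pmacct/tee_receivers.lst
tee_transparent: true

with /etc/pmacct/tee_receivers.lst along the lines of:

id=1 ip=127.0.0.1:2056,127.0.0.1:2057 balance-alg=rr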

Paolo

On Thu, May 16, 2019 at 11:54:17AM +0200, Mikhail Sennikovsky wrote:
> Hi all,
> 
> We've been experimenting with pmacctd/nfacctd-based IP traffic
> accounting recently, and have faced some issues with handling small
> packet floods by pmacctd/nfacctd in our setup.
> 
> Would be great if someone here could suggest how we could overcome them.
> 
> Our goal was actually to precisely account the amount of traffic being
> sent to and from each IP used by a set of "client" hosts sitting
> behind the "router" host, which routes traffic to/from them.
> In our test setup the pmacctd was running on that "router" host,
> sniffing on its outbound interface, and then sending the netflow data
> to the nfacctd running on a "collector" host.
> 
> So we've experienced two main problems when some "client" host started
> to flood some small, e.g. tcp syn flood (this does not have to be
> exactly tcp syn flood however, e.g. flooding small udp packets each
> using different source port would work as well):
> 
> 1. top reported ~50% cpu utilization of pmacctd processes, and started
> reporting packet drops (dropped_packets value reported by SIGUSR1
> handler)
> 
> 2. pmacctd started producing significant amount of netflow traffic,
> which was eventually dropped by the nfacctd on the "collector" host
> (netstat -su reporting the increasing number of udp receive buffer
> errors, while increasing the nfacctd_pipe_size to 2097152 made the
> situation better, but still did not make the drops go away
> completely).
> 
> Both of the above (apparently) resulted in a decrease in the precision of
> our traffic measurements.
> 
> Had someone else here experienced similar issues, and/or could perhaps
> suggest some ways of overcoming them?
> Perhaps given that we do not need the information on each and every
> "flow", but rather just the precise info on overall packets/bytes
> being sent to/from a specific IP, it might be possible to adjust our
> setup to tolerate such flooding?
> 
> Thanks,
> Mikhail
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


[pmacct-discussion] pmacct 1.7.3 released !

2019-05-16 Thread Paolo Lucente


VERSION.
1.7.3


DESCRIPTION.
pmacct is a small set of multi-purpose passive network monitoring tools. It
can account, classify, aggregate, replicate and export forwarding-plane data,
ie. IPv4 and IPv6 traffic; collect and correlate control-plane data via BGP
and BMP; collect and correlate RPKI data; collect infrastructure data via
Streaming Telemetry. Each component works both as a standalone daemon and
as a thread of execution for correlation purposes (ie. enrich NetFlow with
BGP data).

A pluggable architecture allows to store collected forwarding-plane data into
memory tables, RDBMS (MySQL, PostgreSQL, SQLite), noSQL databases (MongoDB,
BerkeleyDB), AMQP (RabbitMQ) and Kafka message exchanges and flat-files.
pmacct offers customizable historical data breakdown, data enrichments like
BGP and IGP correlation and GeoIP lookups, filtering, tagging and triggers.
Libpcap, Linux Netlink/NFLOG, sFlow v2/v4/v5, NetFlow v5/v8/v9 and IPFIX are
all supported as inputs for forwarding-plane data. Replication of incoming
NetFlow, IPFIX and sFlow datagrams is also available. Statistics can be
easily exported to time-series databases like ElasticSearch and InfluxDB and
traditional tools like Cacti, RRDtool, MRTG, Net-SNMP, GNUPlot, etc.

Control-plane and infrastructure data, collected via BGP, BMP and Streaming
Telemetry, can be all logged real-time or dumped at regular time intervals
to AMQP (RabbitMQ) and Kafka message exchanges and flat-files.


HOMEPAGE.
http://www.pmacct.net/


DOWNLOAD.
http://www.pmacct.net/pmacct-1.7.3.tar.gz


CHANGELOG.
+ Introduced the RPKI daemon to build a ROA database and check prefixes
  validation status and coverages. Resource Public Key Infrastructure
  (RPKI) is a specialized public key infrastructure (PKI) framework
  designed to secure Internet routing. RPKI uses certificates to
  allow Local Internet Registries (LIRs) to list the Internet number
  resources they hold. These attestations are called Route Origination
  Authorizations (ROAs). ROA information can be acquired in one of the
  two following ways: 1) importing it using the rpki_roas_file config
  directive from a file in the RIPE Validator format or 2) connecting
  to a RPKI RTR Cache for live ROA updates; the cache IP address/port
  being defined by the rpki_rtr_cache config directive (and a few more
  optional rpki_rtr_* directives are available and can be reviewed in
  the CONFIG-KEYS doc). The ROA fields will be populated with one of
  these five values: 'u' Unknown, 'v' Valid, 'i' Invalid no overlaps,
  'V' Invalid with a covering Valid prefix, 'U' Invalid with a covering
  Unknown prefix. Thanks to Job Snijders ( @job ) for his support and
  vision.
+ Introducing pmgrpcd.py, written in Python, a daemon to handle gRPC-
  based Streaming Telemetry sessions and unmarshall GPB data. Code
  was mostly courtesy of Matthias Arnold ( @tbearma1 ). This is in
  addition (or feeding into) pmtelemetryd, written in C, a daemon to
  handle TCP/UDP-based Streaming Telemetry sessions with JSON-encoded
  data. Thanks to Matthias Arnold ( @tbearma1 ) and Thomas Graf for
  their support and contributing code.  
+ pmacctd, uacctd: added support for CFP (Cisco FabricPath) and Cisco
  Virtual Network Tag protocols. Both patches were courtesy of Stephen
  Clark ( @sclark46 ). 
+ print plugin: added 'custom' to print_output. This is to cover two
  main use-cases: 1) use JSON or Avro encodings but fix the format of
  the messages in a custom way and 2) use a different encoding than
  JSON or Avro. See also example in examples/custom and new directives
  print_output_custom_lib and print_output_custom_cfg_file. The patch
  was courtesy of Edge Intelligence ( @edge-intelligence ).
+ Introducing mpls_pw_id aggregation primitive and mpls_pw_id key in
  pre_tag_map to filter on signalled L2 MPLS VPN Pseudowire IDs.
+ BGP daemon: added bgp_disable_router_id knob to enable/disable BGP
  Router-ID check, both at BGP OPEN time and BGP lookup. Useful, for
  example, in scenarios with split BGP v4/v6 AFs over v4/v6 transports.
+ BGP, BMP daemons: translate origin attribute numeric value into IGP
  (i), EGP (e) and Incomplete (u) strings.
+ plugins: added new plugin_exit_any feature to make the daemon bail
  out if any (not all, which is the default behaviour) of the plugins
  exits.
+ maps_index: improved selection of buckets for index hash structure
  by picking the closest prime number to the double of the entries of
  the map to be indexed in order to achieve better element dispersion
  and hence better performance.
+ nfacctd: added support for IPFIX templateId-scoped (IE 145) sampling
  information.
+ pmacctd, uacctd, sfacctd, nfacctd: added a -M command-line option to
  set *_markers (ie. print_markers) to true and fixed -A command-line
  option to set print_output_file_append to align to true/false.
! fix, BGP, BMP, Streaming Telemetry daemons: improved sequencing of
  dump events by assigning a single sequence number per event (ie. for
  

Re: [pmacct-discussion] IPFIX Periodic Template

2019-04-24 Thread Paolo Lucente


Hi Rajesh,

Since templates are sent out periodically (every 18th packet .. is a
period :)), do you mean whether templates can be sent out on a time-
based interval rather than a packet-based one? If so, currently this is
not possible.

The choice of every 18th packet was made originally in the softflowd
code - probably a good small-enough number like anything else, i would
not attach a magical sense to the number 18. The default can't be
changed at the moment but exposing it via a config option would be
trivial. Would a different packet-based interval work for you?

My sense is that if the collector is multi-threaded and a template can
land in one thread and data packets in a different one and templates are
not distributed among the threads then the collector architecture is
flawed (and engineering timeouts at the exporter is the wrong place to
look at). But i speak without knowledge of the specific collector code,
i'm just basing myself on your description. 

Your timeouts look good to me, i tend to recommend to set them short for
better accuracy of stats - and you set them to 30 secs, which is short
enough. 

Paolo

 
On Tue, Apr 23, 2019 at 09:38:59PM +0530, RAJESH KUMAR S.R wrote:
> Hi,
> 
> I just need few clarifications,suggestions regarding IPFIX templates.
> 
> 1.
> Currently, just before sending the first flow packet, pmacctd seems to send
> out the template packet and subsequently for every 18 packets.
> We are facing the following issue:
> we are using logstash(as part of ELK stack) as collector
> (nfprobe_receiver), and it seems to run in multithreaded environment.
> Since, the first template and flow packet is sent at the same time, they
> are processed by different threads and flow packet is dropped as it doesn't
> know the template.
> Is there any configuration for sending template packets periodically.
> 2.
> Is there any particular reason for not sending template packets
> periodically and sent for every 18th packet.
> 
> 3.
> Are the following timeouts a good choice for production environment.
> nfprobe_timeouts:
> general=30:maxlife=30:expint=30:udp=30:tcp=30:tcp.rst=30:tcp.fin=30:icmp=30
> 
> 
> 
> Thanks,
> Rajesh kumar S R

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] AMQP exporter error

2019-04-10 Thread Paolo Lucente


Hi Grimur,

The issue you did hit was solved last December and will be part of
upcoming release 1.7.3:

https://github.com/pmacct/pmacct/commit/56c498e60043c868131d64404f3c3a8f338ea406

Until the release is available, you could use 1.7.3-rc2 or master
GitHub code. It is currently left to you to implement a recovery
mechanism in case of crashes - spawning pmacct daemons off of a systemd
service would do plenty of the work for you in this sense, ie. checking
(sub-)process liveliness and restarting the daemon if it dies.
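
For illustration only (binary path, config path and unit options are
assumptions, not something stated in this thread), a minimal systemd unit
along those lines could look like:

[Unit]
Description=pmacct NetFlow/IPFIX collector (nfacctd)
After=network.target

[Service]
# assumes 'daemonize: false' in nfacctd.conf so systemd supervises the process directly
Type=simple
ExecStart=/usr/local/sbin/nfacctd -f /etc/pmacct/nfacctd.conf
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

With Restart=on-failure, systemd takes care of respawning the daemon after
a crash like the assertion reported below.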

Paolo

On Wed, Apr 10, 2019 at 09:28:50AM +, Grímur Daníelsson wrote:
> Hello everyone
> 
> I had this error last night which caused the amqp plugin to crash.
> 
> nfacctd: plugin_common.c:442: P_cache_insert: Assertion `!cache_ptr->stitch' 
> failed.
> WARN ( default/core ): connection lost to 'ingress-amqp'; closing connection.
> 
> After this error occurred the service kept on collecting as if nothing 
> happened but it didn't export anything and so we
> lost all ingress traffic for the night.
> 
> We have another amqp exporter that is for egress traffic which kept trucking 
> along just fine.
> 
> Is there no recovery mechanism in place if an exporter crashes or loses 
> connection?
> Isn't it better to crash the program with a fatal error in these cases?
> 
> Thanks for the help.
> 
> Regards, Grimur

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] nfacctd Dropping UDP. Buffer Receive Size?

2019-03-09 Thread Paolo Lucente

Hi Brian,

If i understand correctly the issue is with the fact that templates are
sent out very frequently and hence the daemon keeps writing 'real time'
to disk. You mention a configurable interval but, recalling the original
purpose of the feature (letting a restarting collector immediately
collect new data without waiting for templates to arrive), i wonder if
it makes sense to write all to disk only upon clean exit of the daemon.
By the way thanks very much for the feedback on nailing this down.

On the SO_REUSEPORT point: sure and, please, if you could contribute
some code that would be fantastic. For now i am advocating for a simpler
architecture: 1) use a replicator (tee plugin) to balance flows onto
multiple collectors (different UDP ports) and 2) use SO_REUSEPORT with
an ACL, ie. bgp_daemon_allow_file, that should match tee config for BGP
(i don't say also BMP or TCP-based Streaming Telemetry because for
those, contrary to BGP, receiver port is typically configurable at the
exporter).
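
Purely as a sketch of option 1) - addresses, ports and the balancing
algorithm value here are illustrative; check QUICKSTART and
examples/tee_receivers.lst.example for the exact syntax:

! replicator instance
nfacctd_port: 2100
plugins: tee[t1]
tee_receivers[t1]: /etc/pmacct/tee_receivers.lst
tee_transparent: true

! /etc/pmacct/tee_receivers.lst - round-robin across two collectors on different ports
id=1 ip=192.0.2.11:2101,192.0.2.12:2102 balance-alg=rr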

Paolo

On Fri, Mar 08, 2019 at 09:28:31PM +, Brian Solar wrote:
> 
> The culprit was actually the template file.  It appears to block while 
> writing and it's really slow.  When I remove the configuration option one 
> process could do what I could not accomplish using 80 processes with each 
> using a template file.
> 
> Any consideration on a different implementation?
> 
> Writing it out on a configurable interval would be a simple improvement.
> 
> When load-balancing, particularly with SO_REUSEPORT, it would be nice to 
> allow them to communicate the template set to each other.  Perhaps another 
> use for zeromq?
> 
> Brian
> 
> 
> 
> ‐‐‐ Original Message ‐‐‐
> On Sunday, February 24, 2019 5:02 PM, Paolo Lucente  wrote:
> 
> >
> >
> > Hi Brian,
> >
> > You are most probably looking for this:
> >
> > https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L2644-#L2659
> >
> > Should that not work, ie. too many input flows for the available
> > resources, you have a couple load-balancing strategies possible:
> > one is to configure a replicator (tee plugin, see in QUICKSTART).
> >
> > Paolo
> >
> > On Sun, Feb 24, 2019 at 05:31:55PM +, Brian Solar wrote:
> >
> > > Is there a way to adjust the UDP buffer receive size ?
> > > Are there any other indications of nfacctd not keeping up?
> > > cat /proc/net/udp |egrep drops\|0835
> > > sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt 
> > > uid timeout inode ref pointer drops
> > > 52366: :0835 : 07 :00034B80 00: 
> > >  0 0 20175528 2 89993febd940 7495601
> > > 7495601 drops w/ a buffer of 0x0034B80 or 214528
> > > sysctl -a |fgrep mem
> > > net.core.optmem_max = 20480
> > > net.core.rmem_default = 212992
> > > net.core.rmem_max = 2147483647
> > > net.core.wmem_default = 212992
> > > net.core.wmem_max = 212992
> > > net.ipv4.igmp_max_memberships = 20
> > > net.ipv4.tcp_mem = 9249771 12333028 18499542
> > > net.ipv4.tcp_rmem = 4096 87380 6291456
> > > net.ipv4.tcp_wmem = 4096 16384 4194304
> > > net.ipv4.udp_mem = 9252429 12336573 18504858
> > > net.ipv4.udp_rmem_min = 4096
> > > net.ipv4.udp_wmem_min = 4096
> > > vm.lowmem_reserve_ratio = 256 256 32
> > > vm.memory_failure_early_kill = 0
> > > vm.memory_failure_recovery = 1
> > > vm.nr_hugepages_mempolicy = 0
> > > vm.overcommit_memory = 0
> >
> > > pmacct-discussion mailing list
> > > http://www.pmacct.net/#mailinglists
> 
> 

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

Re: [pmacct-discussion] Making an RPM out of source code

2019-03-07 Thread Paolo Lucente


Hi Edvinas,

For a comprehensive list of files to install (consider some have
conditionals, depending of configure time CL switches) you can follow:

Makefile.am:all __DATA variables
src/Makefile.am:sbin_PROGRAMS and bin_PROGRAMS (*)
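
As an illustration of one way to put that together (prefix, package name
and version are placeholders), one could stage the install tree with
DESTDIR and hand it to FPM:

./configure --prefix=/usr
make
make install DESTDIR=/tmp/pmacct-root
fpm -s dir -t rpm -n pmacct -v 1.7.2 -C /tmp/pmacct-root usr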

Paolo

(*) I have just committed removing a confusing / unused EXTRABIN
variable from bin_PROGRAMS:

https://github.com/pmacct/pmacct/commit/ef187e8772a19b4fd3a0a47253065be4b3a6fc37

On Thu, Mar 07, 2019 at 01:12:55PM +0200, Edvinas Kairys wrote:
> Hello,
> 
> I'm trying to make an RPM file from latest version of PMACCT. Now i came to
> problem to gather all required files to pack them in RPM (using FPM
> software.)
> 
> Looking at 'make install' where're lots of randomly located files. Maybe
> there're any list of them to make sure all them will be packed using RPM ?
> 
> Thanks

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] nfacctd Dropping UDP. Buffer Receive Size?

2019-03-06 Thread Paolo Lucente

Hi Brian,

No, it's not currently possible to send exporter system time / uptime to
Kafka (also because doing it per-flow would be lots of wasted space).  

Also, there are minimal protections, yes, for example for the case of
flows stopping before they start. But not for time format errors, ie. we
do not make a call on how old is too old; rather, fix (or have fixed) your
exporting device.

Wrt Fortigate: by "this way" do you mean very old timestamps, or stop time
earlier than start time? If the former, any chance they export with
timestamps in secs instead of msecs (in which case there is a knob
for this, 'timestamps_secs: true')?

Paolo 
 
On Tue, Mar 05, 2019 at 12:46:35AM +, Brian Solar wrote:
> Is there a way to send the devices decoded system time and uptime to Kafka?
> 
> Are there protections for flows stopping before they start? Or other time 
> format errors?
> 
> I have yet to track down actual packets, but I've seen usual timestamps and
> very old time stamps in Kafka. This makes nfacctd generate millions of 
> entries at times.
> 
> Fortigate devices seem to report this way at times. I haven't noticed it on 
> other devices as of yet.
> 
>  Original Message ----
> On Feb 25, 2019, 9:28 AM, Paolo Lucente wrote:
> 
> Hi Brian,
> 
> Thanks very much for the nginx config, definitely something to add to
> docs as a possible option. QN reads 'Queries Number' (inherited from the
> SQL plugins, hence the queries wording); the first number is now many
> are sent to the backend, the second is how many should be sent as part
> of the purge event.
> 
> They should normally be aligned. In case of NetFlow/IPFIX, among the
> different possibilities, it may reveal time sync issues among exporters
> and the collector; easiest to resolve / experiment is to consider as
> timestamp in pmacct the arrival time at the collector (versus the start
> time of flows) by setting nfacctd_time_new to true.
> 
> Paolo
> 
> On Mon, Feb 25, 2019 at 03:23:42AM +, Brian Solar wrote:
> >
> > Thanks for the response Paolo. I am using nginx to stream load balance (see 
> > config below).
> >
> > Another quick question on the Kafka plugin. What Does the QN portion of the 
> > purging cache end line indicate/mean?
> >
> >
> > 2019-02-25T03:05:04Z INFO ( kafka2/kafka ): *** Purging cache - END (PID: 
> > 387033, QN: 12786/13291, ET: 1) ***
> >
> > 2019-02-25T03:16:22Z INFO ( kafka2/kafka ): *** Purging cache - END (PID: 
> > 150221, QN: 426663/426663, ET: 19) ***
> >
> > # Load balance UDP-based FLOW traffic across two servers
> > stream {
> >
> > log_format combined '$remote_addr - - [$time_local] $protocol $status 
> > $bytes_sent $bytes_received $session_time "$upstream_addr"';
> >
> > access_log /var/log/nginx/stream-access.log combined;
> >
> > upstream flow_upstreams {
> > #hash $remote_addr consistent;
> > server 10.20.25.11:2100;
> > #
> > server 10.20.25.12:2100;
> >
> > }
> >
> > server {
> > listen 2201 udp;
> > proxy_pass flow_upstreams;
> > #proxy_timeout 1s;
> > proxy_responses 0;
> > # must have user: root in main config
> > proxy_bind $remote_addr transparent;
> > error_log /var/log/nginx/stream-flow-err.log;
> > }
> > }
> >
> >
> >
> >
> > ‐‐‐ Original Message ‐‐‐
> > On Sunday, February 24, 2019 5:02 PM, Paolo Lucente  wrote:
> >
> > >
> > >
> > > Hi Brian,
> > >
> > > You are most probably looking for this:
> > >
> > > https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L2644-#L2659
> > >
> > > Should that not work, ie. too many input flows for the available
> > > resources, you have a couple load-balancing strategies possible:
> > > one is to configure a replicator (tee plugin, see in QUICKSTART).
> > >
> > > Paolo
> > >
> > > On Sun, Feb 24, 2019 at 05:31:55PM +, Brian Solar wrote:
> > >
> > > > Is there a way to adjust the UDP buffer receive size ?
> > > > Are there any other indications of nfacctd not keeping up?
> > > > cat /proc/net/udp |egrep drops\|0835
> > > > sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt 
> > > > uid timeout inode ref pointer drops
> > > > 52366: :0835 : 07 :00034B80 00: 
> > > >  0 0 20175528 2 89993febd940 7495601
> > > > 7495601 drops w/ a buffer of 0x0034B80 or 214528
> > > > sysctl -a |fgrep mem
> > >

Re: [pmacct-discussion] nfacctd Dropping UDP. Buffer Receive Size?

2019-02-25 Thread Paolo Lucente

Hi Brian,

Thanks very much for the nginx config, definitely something to add to
docs as a possible option. QN reads 'Queries Number' (inherited from the
SQL plugins, hence the queries wording); the first number is how many
are sent to the backend, the second is how many should be sent as part
of the purge event. 

They should normally be aligned. In case of NetFlow/IPFIX, among the
different possibilities, it may reveal time sync issues among exporters
and the collector; easiest to resolve / experiment is to consider as
timestamp in pmacct the arrival time at the collector (versus the start
time of flows) by setting nfacctd_time_new to true.
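
A minimal sketch of that experiment - only the directive below is the
point, the rest of the collector config stays as it is:

! stamp data with the arrival time at the collector rather than the flow start time
nfacctd_time_new: true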

Paolo
 
On Mon, Feb 25, 2019 at 03:23:42AM +, Brian Solar wrote:
> 
> Thanks for the response Paolo.  I am using nginx to stream load balance (see 
> config below).
> 
> Another quick question on the Kafka plugin. What Does the QN portion of the 
> purging cache end line indicate/mean?
> 
> 
> 2019-02-25T03:05:04Z INFO ( kafka2/kafka ): *** Purging cache - END (PID: 
> 387033, QN: 12786/13291, ET: 1) ***
> 
> 2019-02-25T03:16:22Z INFO ( kafka2/kafka ): *** Purging cache - END (PID: 
> 150221, QN: 426663/426663, ET: 19) ***
> 
> # Load balance UDP-based FLOW traffic across two servers
> stream {
> 
>log_format combined '$remote_addr - - [$time_local] $protocol $status 
> $bytes_sent $bytes_received $session_time "$upstream_addr"';
> 
>access_log /var/log/nginx/stream-access.log combined;
> 
> upstream flow_upstreams {
> #hash $remote_addr consistent;
> server 10.20.25.11:2100;
> #
> server 10.20.25.12:2100;
> 
> }
> 
> server {
> listen 2201 udp;
> proxy_pass flow_upstreams;
> #proxy_timeout 1s;
> proxy_responses 0;
> # must have user: root in main config
> proxy_bind $remote_addr transparent;
> error_log /var/log/nginx/stream-flow-err.log;
> }
> }
> 
> 
> 
> 
> ‐‐‐ Original Message ‐‐‐
> On Sunday, February 24, 2019 5:02 PM, Paolo Lucente  wrote:
> 
> >
> >
> > Hi Brian,
> >
> > You are most probably looking for this:
> >
> > https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L2644-#L2659
> >
> > Should that not work, ie. too many input flows for the available
> > resources, you have a couple load-balancing strategies possible:
> > one is to configure a replicator (tee plugin, see in QUICKSTART).
> >
> > Paolo
> >
> > On Sun, Feb 24, 2019 at 05:31:55PM +, Brian Solar wrote:
> >
> > > Is there a way to adjust the UDP buffer receive size ?
> > > Are there any other indications of nfacctd not keeping up?
> > > cat /proc/net/udp |egrep drops\|0835
> > > sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt 
> > > uid timeout inode ref pointer drops
> > > 52366: :0835 : 07 :00034B80 00: 
> > >  0 0 20175528 2 89993febd940 7495601
> > > 7495601 drops w/ a buffer of 0x0034B80 or 214528
> > > sysctl -a |fgrep mem
> > > net.core.optmem_max = 20480
> > > net.core.rmem_default = 212992
> > > net.core.rmem_max = 2147483647
> > > net.core.wmem_default = 212992
> > > net.core.wmem_max = 212992
> > > net.ipv4.igmp_max_memberships = 20
> > > net.ipv4.tcp_mem = 9249771 12333028 18499542
> > > net.ipv4.tcp_rmem = 4096 87380 6291456
> > > net.ipv4.tcp_wmem = 4096 16384 4194304
> > > net.ipv4.udp_mem = 9252429 12336573 18504858
> > > net.ipv4.udp_rmem_min = 4096
> > > net.ipv4.udp_wmem_min = 4096
> > > vm.lowmem_reserve_ratio = 256 256 32
> > > vm.memory_failure_early_kill = 0
> > > vm.memory_failure_recovery = 1
> > > vm.nr_hugepages_mempolicy = 0
> > > vm.overcommit_memory = 0
> >
> > > pmacct-discussion mailing list
> > > http://www.pmacct.net/#mailinglists
> 

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

Re: [pmacct-discussion] nfacctd Dropping UDP. Buffer Receive Size?

2019-02-24 Thread Paolo Lucente


Hi Brian,

You are most probably looking for this:

https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L2644-#L2659

Should that not work, ie. too many input flows for the available
resources, you have a couple load-balancing strategies possible:
one is to configure a replicator (tee plugin, see in QUICKSTART).

Paolo

On Sun, Feb 24, 2019 at 05:31:55PM +, Brian Solar wrote:
> Is there a way to adjust the UDP buffer receive size ?
> 
> Are there any other indications of nfacctd not keeping up?
> 
> cat /proc/net/udp |egrep drops\|0835
> 
>   sl  local_address rem_address   st tx_queue rx_queue tr tm->when retrnsmt   
> uid  timeout inode ref pointer drops
> 
> 52366: :0835 : 07 :00034B80 00:   
>00 20175528 2 89993febd940 7495601
> 
> 7495601 drops w/ a buffer of 0x0034B80 or 214528
> 
> sysctl -a |fgrep mem
> 
> net.core.optmem_max = 20480
> 
> net.core.rmem_default = 212992
> 
> net.core.rmem_max = 2147483647
> 
> net.core.wmem_default = 212992
> 
> net.core.wmem_max = 212992
> 
> net.ipv4.igmp_max_memberships = 20
> 
> net.ipv4.tcp_mem = 9249771  12333028  18499542
> 
> net.ipv4.tcp_rmem = 4096  87380  6291456
> 
> net.ipv4.tcp_wmem = 4096  16384  4194304
> 
> net.ipv4.udp_mem = 9252429  12336573  18504858
> 
> net.ipv4.udp_rmem_min = 4096
> 
> net.ipv4.udp_wmem_min = 4096
> 
> vm.lowmem_reserve_ratio = 256   256 32
> 
> vm.memory_failure_early_kill = 0
> 
> vm.memory_failure_recovery = 1
> 
> vm.nr_hugepages_mempolicy = 0
> 
> vm.overcommit_memory = 0

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] dst_as always 0

2019-01-29 Thread Paolo Lucente


Hi Grimur,

Any chance you could try this again with some more current code than
1.7.0? Like 1.7.2 or, better, master code in GitHub? Just to make sure
you are not hitting something which may have potentially been solved
meanwhile (although it does not ring a bell).

Also, can you please allow me to identify the issue better with an
example? When ASNs are zero, are the IP addresses belonging to your own
IP address space? Or it is ore a symptom that BGP correlation is not
taking place? And when you use bgp_daemon_as, you configure an ASN
different from the router so to form an eBGP session, true? 

Paolo

On Tue, Jan 29, 2019 at 03:55:47PM +, Grímur Daníelsson wrote:
> Hi
> 
> I'm having problems where dst_as is always 0 and when src_ip is from the same 
> ASN as the dst_ip that also gets set to 0. I'm using the BGP daemon and 
> peer_as_src and src_as_path get set correctly as far as i can tell.
> 
> I've tried to set and unset bgp_daemon_as without success. There are no bgp 
> errors in the log and it connects to the correct bgp agents without any 
> problems that I can see.
> 
> This is using pmacct 1.7.0
> 
> Any idea what I'm doing wrong here?
> 
> Nfacctd Config (the relevant parts):
> --
> nfacctd_ip: 
> nfacctd_port: 2100
> syslog: daemon
> 
> nfacctd_net: bgp
> nfacctd_as: bgp
> 
> bgp_daemon:true
> bgp_daemon_ip: 
> bgp_daemon_id: 
> bgp_daemon_port: 179
> bgp_daemon_max_peers: 10
> ! bgp_daemon_as: 
> 
> bgp_src_as_path_type: bgp
> bgp_peer_src_as_type: bgp
> bgp_follow_default: 5
> bgp_agent_map: bgp_agents.map
> 
> plugins: amqp[ingress], amqp[egress]
> aggregate[ingress]: peer_src_ip, src_host, dst_host, src_port, dst_port, 
> proto, tos, tcpflags, in_iface, out_iface, etype, vlan, flows, 
> export_proto_version, dst_as, src_as, src_as_path, peer_src_as, peer_dst_as
> 
> bgp_agents.map:
> 
> bgp_ip= ip=0.0.0.0/0
> bgp_ip= ip=0.0.0.0/0
> 
> -
> Regards, Grimur

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Aggregation and filtering

2019-01-28 Thread Paolo Lucente


Hi Christian,

Thanks for the kind words, much appreciated, and the very interesting
email. In line of principle i'd not be shy to say that you may want to
use a different tool for that; but at the very same time i'd like to
explore with you (and anybody interested in the conversation) whether
there is an opportunity for improvement. 

Reading your goal i see two challenges: 1) any field, before being
printed as part of a JSON, needs a semantics (ie. string, hex, int,
MAC address, IP address, etc.) to be attached to and 2) of course
such a list will never be comprehensive, especially when we look at
IPFIX and PENs. Issue #1 can be countered with a giant map that does
attach a semantic to fields, for example, using something like
https://www.iana.org/assignments/ipfix/ipfix.xhtml as reference: it
would be some work but the basic infrastructure is already there. But #2
means you will always hit a point where you have to integrate existing
knowledge with custom primitives (aggregate_primitives framework) which,
a bit, kills your original point (meaning you will never have a 100%
guarantee you are decoding everything - but you indeed can get a
warning that you are not). How does all of this sound, would this be
good enough? Any better proposals, ideas?
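
To make the aggregate_primitives part concrete, a sketch of a map entry
for a standard IANA element (IE 234, ingressVRFID; the element chosen here
is just an example, not something from this thread):

! /etc/pmacct/primitives.map
name=ingressVRFID  field_type=234  len=4  semantics=u_int

! nfacctd.conf
aggregate_primitives: /etc/pmacct/primitives.map
aggregate: src_host, dst_host, ingressVRFID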

Paolo

On Sun, Jan 27, 2019 at 12:37:27AM +0100, Christian Meutes wrote:
> Hi!
> 
> I'm using pmacct since a long time now .. so the first thing (not
> quite..) to say and to express is really my gratitude to Paolo for
> creating these cool tools!
> 
> So the reason for my first mail to the list is the following: I wonder
> if there is a way to export flow data, received by nfacctd and is then
> written to a json-formatted file, but this without nfacctd having
> worked on any further aggregation (defineable via "aggregate:") and
> also without losing any types.
> 
> Basically it should export all flows via the print plugin, but
> *lossless* (take as received and serialize into JSON).
> 
> I suppose that manually declaring all possible types via primitives
> for making them assignable in the aggregate-directive is not really
> the way to go (don't get me wrong this explicit design is a necessary
> minimum to have, otoh losing data because some new type/value was
> received but yet not declared and configured, this seems to me as a
> basic use case as well).
> 
> Am I mistaken? Ideas?
> 
> Thanks!
> -- 
> Christian
> 
> e-mail/xmpp: christ...@errxtx.net
> PGP Fingerprint: B458 E4D6 7173 A8C4 9C75315B 709C 295B FA53 2318
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] sending one netflow stream to different NF_PROBE receivers

2019-01-15 Thread Paolo Lucente


Hi Edvinas,

You can only specify a single receiver in nfprobe. But you can transparently
replicate this feed to several destinations with a simple tee instance.
You can get started on how to configure a replicator by reading here:

https://github.com/pmacct/pmacct/blob/1.7.2/QUICKSTART#L1751-#L1786

As you can see, if you keep reading from there on, you can complicate
things at will in order to cover more advanced scenarios.

Paolo

On Tue, Jan 15, 2019 at 10:18:08AM +0200, Edvinas K wrote:
> Hello,
> 
> Is't possible to send one netflow stream to different NF_PROBE receivers or
> i need to run separate instances ?
> 
> Thanks

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] pretag map line length limits

2019-01-10 Thread Paolo Lucente


Hi Inge,

Always great to read from you. 

You are looking for the maps_row_len knob, by default 256 chars. Along
with maps_entries it allows to specify the two key dimensions to alloc
memory to be allocated for the map.
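
A sketch of how that looks in the config - the values are purely
illustrative, size them to your map:

pre_tag_map: /etc/pmacct/pretag.map
maps_entries: 131072
maps_row_len: 4096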

Paolo

On Thu, Jan 10, 2019 at 02:54:09PM +, Inge Bjørnvall Arnesen wrote:
> Hi,
> 
> I have been running nfacct for many years and it has served me well, but as 
> my network gets ever more complex and new transit lines are added, I've come 
> across an issue with how I've been configuring the program. My goal is still 
> to maintain a MySQL DB with  minute Internet traffic entries (both 
> directions) per public IP at my site. My routers report ingress traffic only, 
> so Netflow must be enabled on all edge interfaces, rather than just the 
> designated uplinks and transits.  This means that Netflow reports all traffic 
> that goes via our edge routers and that I have to filter Internet traffic out 
> from other, internal traffic that crosses edge.
> 
> My approach so far has been to use pretag map filters for this. The basic 
> structure for these filters are:
> 
> !  Incoming
> id=1 ip= filter='not ( src net   or src net  prefix n>) and dst net '
> ...
> id=1 ip= filter='not ( src net   or src net  prefix n>) and dst net '
> 
> 
> ! Outgoing
> id=2 ip= filter='not ( dst net   or dst net  prefix n>) and src net '
> ...
> id=2 ip= filter='not ( dst net   or dst net  prefix n>) and src net '
> 
> 
> With RFC1918 prefixes taking up some space to begin with and the number of 
> public prefixes increasing, I'm running into an issue where the pretag 
> map line length is exceeded and nfacct fails to start.  Are there ways to 
> increase the maximum line length or other ways of organizing this filtering 
> process that will keep me within the maximum pretag map line length?
> 
> Regards,
> 
> 
>   *   Inge Arnesen
> 
> 
> 
> 

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] pmact to netflow collector

2019-01-06 Thread Paolo Lucente


Hi Edvinas,

At this point the only way that comes to mind to make some sensible
progress is to offer you to have a look at this myself. If shell access
to your environment is possible (no screen sharing) please send me an
email off-list.

Paolo
 
On Fri, Jan 04, 2019 at 05:50:12PM +0200, Edvinas K wrote:
> sorry, for so much reply's.
> 
> Since I don't fully understand how to proceed the first step (to know if
> the pmacct manages provide correct info localy)
> 
> I tested with another collector (NTOP) and the results are the same. NTOP
> collector also shows ~10x times lower traffic tan it's really is:
> 
> PMACCT:
> 
> [image: image.png]
> 
> Cisco:
> 
> [image: image.png]
> 
> On Fri, Jan 4, 2019 at 1:59 PM Edvinas K  wrote:
> 
> > Btw the log   "INFO ( default/core ): short IPv4 packet read
> > (36/38/frags). Snaplen issue ?"  is quite occasional.
> >
> > is there any one-liner to grab the total traffic count in a particular
> > timeline ?
> >
> > i'm using something like this:
> >
> > pmacctd -P print -O formatted -r 10 -i ens1f0.432 -c
> > src_host,dst_host,src_port,dst_port,proto . So my goal now is to see all
> > traffic inside the Box (before sending to another analyzer)
> >
> > Thanks
> >
> >
> > On Fri, Jan 4, 2019 at 1:50 PM Edvinas K  wrote:
> >
> >> Thanks i will try.
> >>
> >> Maybe is there any quick start guide for first step ?
> >>
> >> Also - i tried to send all data to other analyzer (Solar Winds) and it
> >> errored because of packets which comes with INTERFACE s index 0 (zero)
> >>
> >> [image: image.png]
> >>
> >> Maybe this could be the case ?
> >>
> >>
> >>
> >> On Fri, Jan 4, 2019 at 12:37 PM Paolo Lucente  wrote:
> >>
> >>>
> >>> Are logs full of this kind of message? Or is this occasional? If
> >>> occasional, meaning even once every few secs, it cannot cause a 1/10th
> >>> traffic ratio to reality.
> >>>
> >>> I kind of still suggest you should make this measurable with a traffic
> >>> flow that you control and know exactly how many packets and bytes were
> >>> generated (the flow can be mixed to normal traffic no problem). You
> >>> should then make a three way kind of verification: 1) collect with
> >>> pmacctd and show it via memory/print plugin (something easy to setup);
> >>> if all looks good, 2) export via nfprobe and collect with nfacctd and,
> >>> again, show with memory/print plugin; if all looks good, 3) collect
> >>> with nfdump/nfsen. Depending where the issue is, ie. #1, #2 or #3, we
> >>> can troubleshoot in different ways; considering if the issue is #3
> >>> then the problem is not on the pmacct side of the things.
> >>>
> >>> Paolo
> >>>
> >>> On Thu, Jan 03, 2019 at 05:13:39PM +0200, Edvinas K wrote:
> >>> > Also I see these logs:
> >>> >
> >>> >  INFO ( default/core ): short IPv4 packet read (36/38/frags). Snaplen
> >>> issue
> >>> > ?
> >>> >
> >>> > Could it help to identify the cause ?
> >>> >
> >>> >
> >>> > On Thu, Jan 3, 2019 at 5:11 PM Edvinas K 
> >>> wrote:
> >>> >
> >>> > > Hello,
> >>> > >
> >>> > > Seems, I was wrong and misleading myself and you guys:
> >>> > >
> >>> > > 1)  seems there're no discards at all. I always 'generated' discards
> >>> by
> >>> > > myself while exiting from PMACCT with CTRL+C
> >>> > >
> >>> > > Only now i managed to see the statistics with Kill SIGUSR1 and I see
> >>> that
> >>> > > no dropped packets occurs.
> >>> > >
> >>> > > But the problem exists. Still i see almost 10x lower traffic in
> >>> > > NFSEN/NFDUMP analyzer than it's really is. What could be the case ?
> >>> > >
> >>> > > Thanks
> >>> > >
> >>> > >
> >>> > >
> >>> > >
> >>> > >
> >>> > >
> >>> > >
> >>> > >
> >>> > >
> >>> > > On Thu, Jan 3, 2019 at 4:37 PM Paolo Lucente 
> >>> wrote:
> >>> > >
> >>> > >>
> >>> > >> Hi Edvinas,
>

Re: [pmacct-discussion] Custom primitives with netflow

2019-01-06 Thread Paolo Lucente


Hi Rajesh,

Nice, labels worked for you.

Clarify one thing for me: the output you showed, with zeroed peer_src_ip
(and exporteripv4address, engineid, enginetype), is the one from
pmacctd, right? Not nfacctd. In that case the output is expected. 
In fact in nfacctd it should not be possible to get a null peer_src_ip
(which is nothing else than the address returned by a recv() on the
socket); setting nfprobe_source_ip is needed only in cases in which
multiple interfaces could be selected for output or for setting field
type #130 for, for example, NAT traversal scenarios.

Paolo

On Sat, Jan 05, 2019 at 12:29:35AM +0530, RAJESH KUMAR S.R wrote:
> Hi Paolo,
> 
> I was able to set labels and export as strings for different pmacct
> instances that was listening on different interfaces. Thanks for the
> suggestion.
> 
> I need a help regarding exporting Exporter's IP as part of flow records.
> Based on old pmacct discussions, I'm using "peer_src_ip and
> exporterIPv4Address" as primitives but both they seem to come as 0 and
> 0.0.0.0 in flows.
> I tried setting the "nfprobe_source_ip: 172.30.130.99", but it goes in
> separate flow as ExporterAddress: 172.30.130.99, but I need the
> "exporterIpv4Address" to be set to correct value in all flow messages, will
> pmacct collect the interface ip and populate in "exporterIpv4Address"
> field. Also, I'm not sure of how to get the engineid working, that also
> seem to go as 0 in flows.
> 
> pmacctd Output:
> 
> *SRC_MAC   DST_MAC PEER_SRC_IP*
> 50:01:d9:a3:41:f1  f8:59:71:73:94:4d  ::
> * SRC_IP   DST_IPSRC_PORT  DST_PORT  PROTOCOL   *
> 52.229.174.94  192.168.1.9 443  43238  tcp
> 
> *exporteripv4addressengineid  enginetype  PACKETS BYTES*
>  0.0.0.0 0 0
> 2 629
> 
> 
> I have the following pmacct conf file
> 
> *primitives*
> 
> name=engineType field_type=0:38 len=1 semantics=u_int
> name=engineId field_type=0:39 len=1 semantics=u_int
> name=exporterIPv4Address field_type=130 len=4 semantics=ip
> 
> *pmacct.conf*
> "
> debug: true
>daemonize: false
>pre_tag_map: ipfix_pretag.map
> 
>nfprobe_engine: 100
>nfprobe_version: 10
>nfprobe_source_ip: 172.30.130.99
>aggregate_primitives: ipfix_primitives.lst
>plugins: nfprobe, print
>interface: wlp1s0
>aggregate: src_mac, dst_mac, src_host, dst_host, src_port, dst_port,
> proto, peer_src_ip, exporterIPv4Address, engineId, engineType
>nfprobe_receiver: 10.40.6.6:17058
> "
> 
> 
> 
> 
> On Wed, Dec 26, 2018 at 12:44 PM RAJESH KUMAR S.R 
> wrote:
> 
> > Hi Paolo,
> >
> > Thanks for the fix and suggestion. I'll try tag and label primitives and
> > see if they match my requirements.
> >
> > On Tue, Dec 25, 2018 at 10:49 PM Paolo Lucente  wrote:
> >
> >>
> >> Hi Rajesh,
> >>
> >> You are right, there was a bug in the serialize_bin() func that was
> >> making it work good only for the first byte. This is now resolved:
> >>
> >>
> >> https://github.com/pmacct/pmacct/commit/1076ff3529f439133357176e4c1260cfcdcef56e
> >>
> >> I've read your question about metadata and was wondering: would tags
> >> (tag, tag2 primitive) or labels (label primitive) defined via a
> >> pre_tag_map be a solution for you? You could do a proof-of-concept
> >> locally, ie. like you were doing already with the print plugin, and if
> >> meeting your requirements we can move onto the nfprobe part; i expect
> >> tags to work no problem; labels should work but may require a bit more
> >> testing.
> >>
> >> Paolo
> >>
> >> On Mon, Dec 24, 2018 at 02:21:18PM +0530, RAJESH KUMAR S.R wrote:
> >> > Hi Paolo,
> >> >
> >> > Thanks for the fix. I tested with pmacctd and nfacctd and I see that
> >> when I
> >> > read 1 byte of raw data, it prints correct on both sides
> >> > but by while reading more bytes, the first byte is alone correct on
> >> nfacct
> >> > side. Not sure I'm testing correctly, but thanks for the fix.
> >> >
> >> > pmacctd side
> >> > dummy_byte  PACKETS   BYTES
> >> > *08-00-45*535   124114
> >> > *86-DD-60*10861
> >> >
> >> > On nfacctd side, I'm getting only the first byte correct
> >> > *08-00-00*535 124114
> >> > *86-00-00*10   861
> &

Re: [pmacct-discussion] pmact to netflow collector

2019-01-04 Thread Paolo Lucente


Are logs full of this kind of message? Or is this occasional? If
occasional, meaning even once every few secs, it cannot cause a 1/10th
traffic ratio to reality.

I kind of still suggest you should make this measurable with a traffic
flow that you control and know exactly how many packets and bytes were
generated (the flow can be mixed to normal traffic no problem). You
should then make a three way kind of verification: 1) collect with
pmacctd and show it via memory/print plugin (something easy to setup);
if all looks good, 2) export via nfprobe and collect with nfacctd and,
again, show with memory/print plugin; if all looks good, 3) collect
with nfdump/nfsen. Depending where the issue is, ie. #1, #2 or #3, we
can troubleshoot in different ways; considering if the issue is #3
then the problem is not on the pmacct side of the things. 
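
As a sketch of step #1 (interface and aggregation below are placeholders),
a throw-away pmacctd config using the memory plugin:

! pmacctd.conf
daemonize: false
interface: eth0
plugins: memory
aggregate: src_host, dst_host, src_port, dst_port, proto

The in-memory table can then be queried on the same host with the pmacct
client, ie. 'pmacct -s', to compare counters against the known test flow.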

Paolo

On Thu, Jan 03, 2019 at 05:13:39PM +0200, Edvinas K wrote:
> Also I see these logs:
> 
>  INFO ( default/core ): short IPv4 packet read (36/38/frags). Snaplen issue
> ?
> 
> Could it help to identify the cause ?
> 
> 
> On Thu, Jan 3, 2019 at 5:11 PM Edvinas K  wrote:
> 
> > Hello,
> >
> > Seems, I was wrong and misleading myself and you guys:
> >
> > 1)  seems there're no discards at all. I always 'generated' discards by
> > myself while exiting from PMACCT with CTRL+C
> >
> > Only now i managed to see the statistics with Kill SIGUSR1 and I see that
> > no dropped packets occurs.
> >
> > But the problem exists. Still i see almost 10x lower traffic in
> > NFSEN/NFDUMP analyzer than it's really is. What could be the case ?
> >
> > Thanks
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > On Thu, Jan 3, 2019 at 4:37 PM Paolo Lucente  wrote:
> >
> >>
> >> Hi Edvinas,
> >>
> >> 'pmacctd -V' returns all the libs it is linked against, including
> >> version. There you *should* find an indication the PF_RING-enabled
> >> libpcap is being used.
> >>
> >> Paolo
> >>
> >> On Thu, Jan 03, 2019 at 10:46:55AM +0200, Edvinas K wrote:
> >> > Hello,
> >> >
> >> > How to check if the PF_RING is in action and active ? some forwarded
> >> > packets counts, or etc ?
> >> >
> >> > Thanks
> >> >
> >> > On Thu, Dec 27, 2018 at 3:00 PM Edvinas K 
> >> wrote:
> >> >
> >> > > thank you,
> >> > >
> >> > > seems all easy things didin't help.
> >> > >
> >> > > I tried to set up the buffer size in kernel:
> >> > >
> >> > > prod [root@netvpn001prpjay pmacct-1.7.2]# cat
> >> > > /proc/sys/net/core/[rw]mem_max
> >> > > 20
> >> > > 20
> >> > >
> >> > > and then
> >> > >
> >> > > prod [root@netvpn001prpjay pmacct-1.7.2]# cat flowexport.cfg
> >> > >!
> >> > >daemonize: no
> >> > >aggregate: src_host, dst_host, src_port, dst_port, proto, tos
> >> > >plugins: nfprobe
> >> > >nfprobe_receiver: 10.3.14.101:2101
> >> > >nfprobe_version: 9
> >> > >
> >> > >pmacctd_pipe_size: 20
> >> > >plugin_pipe_size: 100
> >> > >plugin_buffer_size: 1
> >> > >
> >> > >! nfprobe_engine: 1:1
> >> > >! nfprobe_timeouts: tcp=120:maxlife=3600
> >> > >!
> >> > >! networks_file: /path/to/networks.lst
> >> > >!...
> >> > >
> >> > > maybe after putting plugin_pipe_size and  plugin_buffer_size drops got
> >> > > little bit lower, but still a lot.
> >> > > also noticed strange log message: "INFO ( default/core ): short IPv4
> >> > > packet read (36/38/frags). Snaplen issue ?"
> >> > >
> >> > > I going to try that PF_RING stuff.
> >> > >
> >> > > On Thu, Dec 20, 2018 at 10:08 PM Paolo Lucente 
> >> wrote:
> >> > >
> >> > >>
> >> > >> Hi Edvinas,
> >> > >>
> >> > >> I wanted to confirm that when you changed pmacctd_pipe_size to 2GB
> >> you
> >> > >> ALSO changed /proc/sys/net/core/[rw]mem_max to 2GB and ALSO restarted
> >> > >> pmacctd after having done so.
> >> > >>
> >> > >> Wrt PF_RING: i can't voice since i don't use

Re: [pmacct-discussion] pmact to netflow collector

2019-01-03 Thread Paolo Lucente


Hi Edvinas,

'pmacctd -V' returns all the libs it is linked against, including
version. There you *should* find an indication the PF_RING-enabled
libpcap is being used.

Paolo
 
On Thu, Jan 03, 2019 at 10:46:55AM +0200, Edvinas K wrote:
> Hello,
> 
> How to check if the PF_RING is in action and active ? some forwarded
> packets counts, or etc ?
> 
> Thanks
> 
> On Thu, Dec 27, 2018 at 3:00 PM Edvinas K  wrote:
> 
> > thank you,
> >
> > seems all easy things didin't help.
> >
> > I tried to set up the buffer size in kernel:
> >
> > prod [root@netvpn001prpjay pmacct-1.7.2]# cat
> > /proc/sys/net/core/[rw]mem_max
> > 20
> > 20
> >
> > and then
> >
> > prod [root@netvpn001prpjay pmacct-1.7.2]# cat flowexport.cfg
> >!
> >daemonize: no
> >aggregate: src_host, dst_host, src_port, dst_port, proto, tos
> >plugins: nfprobe
> >nfprobe_receiver: 10.3.14.101:2101
> >nfprobe_version: 9
> >
> >pmacctd_pipe_size: 20
> >plugin_pipe_size: 100
> >plugin_buffer_size: 1
> >
> >! nfprobe_engine: 1:1
> >! nfprobe_timeouts: tcp=120:maxlife=3600
> >!
> >! networks_file: /path/to/networks.lst
> >!...
> >
> > maybe after putting plugin_pipe_size and  plugin_buffer_size drops got
> > little bit lower, but still a lot.
> > also noticed strange log message: "INFO ( default/core ): short IPv4
> > packet read (36/38/frags). Snaplen issue ?"
> >
> > I going to try that PF_RING stuff.
> >
> > On Thu, Dec 20, 2018 at 10:08 PM Paolo Lucente  wrote:
> >
> >>
> >> Hi Edvinas,
> >>
> >> I wanted to confirm that when you changed pmacctd_pipe_size to 2GB you
> >> ALSO changed /proc/sys/net/core/[rw]mem_max to 2GB and ALSO restarted
> >> pmacctd after having done so.
> >>
> >> Wrt PF_RING: i can't voice since i don't use it myself. While i never
> >> heard any horror story with it (thumbs up!), i think doing a proof of
> >> concept first is always a good idea; this is also to answer your second
> >> question: it will improve things for sure but how much you have to test.
> >>
> >> Another thing you may do is also to increase buffering internal to
> >> pmacct (it may help reduce CPU cycles by the core process and hence help
> >> it process more data), i see that in your config you have NO buffering
> >> enabled. For a quick test you could set:
> >>
> >> plugin_pipe_size: 100
> >> plugin_buffer_size: 1
> >>
> >> And depending if you see any benefits/improvement and if you have memory
> >> you could ramp these values up. Or alternatively you could introduce
> >> ZeroMQ. Again, this is internal queueuing (whereas in my previous email
> >> i was tackling the queueing between kernel and pmacct):
> >>
> >> https://github.com/pmacct/pmacct/blob/master/CONFIG-KEYS#L234-#L292
> >>
> >> Paolo
> >>
> >> On Wed, Dec 19, 2018 at 06:40:14PM +0200, Edvinas K wrote:
> >> > Hello,
> >> >
> >> > How would you recommend to test PF_RING:
> >> >
> >> > Some questions:
> >> >
> >> > Is't safe to install it on production server ?
> >> > Is't possible to hope, that this PF_RING will solve all the discards ?
> >> >
> >> > Thanks
> >> >
> >> > On Tue, Dec 18, 2018 at 5:59 PM Edvinas K 
> >> wrote:
> >> >
> >> > > thanks,
> >> > >
> >> > > I tried to change the pipe size. As i noticed my OS (centos) default
> >> and
> >> > > max size are the same:
> >> > >
> >> > > prod [root@netvpn001prpjay pmacct-1.7.2]# cat
> >> > > /proc/sys/net/core/[rw]mem_default
> >> > > 212992
> >> > > 212992
> >> > >
> >> > > prod [root@netvpn001prpjay pmacct-1.7.2]# cat
> >> > > /proc/sys/net/core/[rw]mem_max
> >> > > 212992
> >> > > 212992
> >> > >
> >> > > I tried to set the pmacctd_pipe_size: to 20  and later to
> >> 212992.
> >> > > Seems the drops is still occuring.
> >> > > Tomorrow i will try to look at that PF_RING thing.
> >> > >
> >> > > Thanks
> >> > >
> >> > >
> >> > >
> >> > >
> >> > >
> >&

Re: [pmacct-discussion] memory limits - set up question

2018-12-25 Thread Paolo Lucente

Hi Sophie,

Let me start with the bad news to conclude with the good ones.

Unfortunately there is not a good way to size memory pools given a
traffic figure and/or the amount of IP addresses monitored. It really
depends on the traffic mix (that is, how big it is the  matrix produced by your traffic?). So unless you are in a
constrained / embedded environment my recommendation would be to set
imt_mem_pools_number to zero so to let the memory table grow without
boundaries; then, once you have some estimate, if you wish you may
define some boundaries to go in a more production phase.
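
A sketch of that unbounded setup, reusing your aggregation and leaving
everything else to defaults:

plugins: memory
aggregate: etype, proto, src_host, src_port
! 0 = let the in-memory table allocate pools as needed, without a preset ceiling
imt_mem_pools_number: 0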

Alternatively you could resort to a push model, like the print plugin
(and the Kafka, RabbitMQ and SQL ones). In those, data is pushed at regular
time intervals and, should the cache get full (a symptom you may want to
configure a bigger cache), entries are dumped before the next scheduled write. 

Paolo
 
On Mon, Dec 24, 2018 at 01:50:32PM +0100, Sophie Loewenthal wrote:
> Hi,
> 
> I just installed pmacct and would like to assign the memory 16Mb.
> 
> I read these pages,
> 
> https://github.com/pmacct/pmacct/blob/master/docs/INTERNALS
> 
> and know that this should be adjusted : 
> imt_mem_pools_number 
> imt_mem_pools_size
> 
> I’m storing,
> aggregate: etype, proto, src_host, src_port
> 
> 
> The config file has this,
> daemonize: true
> pidfile: /var/run/pmacctd.pid
> syslog: daemon
> aggregate: etype, proto, src_host, src_port
> interface: ens3
> daemonize: true
> aggregate: sum_host
> 
> 
> What should the values be for imt_mem_pools_size? 
> How many pools should I have?
> 
> The server has incoming 6 Mbytes/s a second during peak periods from some 
> 3000 IP addresses.
> 
> 
> Regards,
> Sophie 
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

Re: [pmacct-discussion] pmact to netflow collector

2018-12-20 Thread Paolo Lucente


Hi Edvinas,

I wanted to confirm that when you changed pmacctd_pipe_size to 2GB you
ALSO changed /proc/sys/net/core/[rw]mem_max to 2GB and ALSO restarted
pmacctd after having done so.

Wrt PF_RING: i can't voice since i don't use it myself. While i never
heard any horror story with it (thumbs up!), i think doing a proof of
concept first is always a good idea; this is also to answer your second
question: it will improve things for sure but how much you have to test.

Another thing you may do is also to increase buffering internal to
pmacct (it may help reduce CPU cycles by the core process and hence help
it process more data), i see that in your config you have NO buffering
enabled. For a quick test you could set:

plugin_pipe_size: 100
plugin_buffer_size: 1

And depending if you see any benefits/improvement and if you have memory
you could ramp these values up. Or alternatively you could introduce
ZeroMQ. Again, this is internal queueuing (whereas in my previous email
i was tackling the queueing between kernel and pmacct):

https://github.com/pmacct/pmacct/blob/master/CONFIG-KEYS#L234-#L292
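
A sketch of the ZeroMQ variant - just the extra directive on top of the
existing nfprobe config (see CONFIG-KEYS for the related plugin_pipe_zmq_*
tuning knobs):

plugins: nfprobe
nfprobe_receiver: 10.3.14.101:2101
nfprobe_version: 9
! queue data between the core process and the plugin over ZeroMQ
plugin_pipe_zmq: true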

Paolo 

On Wed, Dec 19, 2018 at 06:40:14PM +0200, Edvinas K wrote:
> Hello,
> 
> How would you recommend to test PF_RING:
> 
> Some questions:
> 
> Is't safe to install it on production server ?
> Is't possible to hope, that this PF_RING will solve all the discards ?
> 
> Thanks
> 
> On Tue, Dec 18, 2018 at 5:59 PM Edvinas K  wrote:
> 
> > thanks,
> >
> > I tried to change the pipe size. As i noticed my OS (centos) default and
> > max size are the same:
> >
> > prod [root@netvpn001prpjay pmacct-1.7.2]# cat
> > /proc/sys/net/core/[rw]mem_default
> > 212992
> > 212992
> >
> > prod [root@netvpn001prpjay pmacct-1.7.2]# cat
> > /proc/sys/net/core/[rw]mem_max
> > 212992
> > 212992
> >
> > I tried to set the pmacctd_pipe_size: to 20  and later to 212992.
> > Seems the drops is still occuring.
> > Tomorrow i will try to look at that PF_RING thing.
> >
> > Thanks
> >
> >
> >
> >
> >
> > On Tue, Dec 18, 2018 at 5:32 PM Paolo Lucente  wrote:
> >
> >>
> >> Hi Edvinas,
> >>
> >> Easier thing first, i recommend to inject some test traffic and see that
> >> one how it looks like.
> >>
> >> The dropped packets highlight a buffering issue. You could take an
> >> intermediate step and see if enlarging buffers helps. Configure
> >> pmacctd_pipe_size to 20 and follow instructions here for the
> >> /proc files to touch:
> >>
> >> https://github.com/pmacct/pmacct/blob/1.7.2/CONFIG-KEYS#L203-#L216
> >>
> >> If it helps, good. If not: you should really look into one of the
> >> frameworks i was pointing you to in my previous email. PF_RING, for
> >> example, can do sampling and/or balancing. Sampling should not be done
> >> inside pmacct because the dropped packets are between the kernel and the
> >> application.
> >>
> >> Paolo
> >>
> >> On Mon, Dec 17, 2018 at 02:52:48PM +0200, Edvinas K wrote:
> >> > Seems there're lots of dropped packets:
> >> >
> >> > prod [root@netvpn001prpjay pmacct-1.7.2]# pmacctd -i ens1f0.432 -f
> >> > flowexport.cfg
> >> > WARN: [flowexport.cfg:2] Invalid value. Ignored.
> >> > INFO ( default/core ): Promiscuous Mode Accounting Daemon, pmacctd
> >> > 1.7.2-git (20181018-00+c3)
> >> > INFO ( default/core ):  '--enable-l2' '--enable-ipv6' '--enable-64bit'
> >> > '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
> >> > '--enable-st-bins'
> >> > INFO ( default/core ): Reading configuration file
> >> > '/opt/pmacct-1.7.2/flowexport.cfg'.
> >> > INFO ( default_nfprobe/nfprobe ): NetFlow probe plugin is originally
> >> based
> >> > on softflowd 0.9.7 software, Copyright 2002 Damien Miller <
> >> d...@mindrot.org>
> >> > All rights reserved.
> >> > INFO ( default_nfprobe/nfprobe ):   TCP timeout: 3600s
> >> > INFO ( default_nfprobe/nfprobe ):  TCP post-RST timeout: 120s
> >> > INFO ( default_nfprobe/nfprobe ):  TCP post-FIN timeout: 300s
> >> > INFO ( default_nfprobe/nfprobe ):   UDP timeout: 300s
> >> > INFO ( default_nfprobe/nfprobe ):  ICMP timeout: 300s
> >> > INFO ( default_nfprobe/nfprobe ):   General timeout: 3600s
> >> > INFO ( default_nfprobe/nfprobe ):  Maximum lifetime: 604800s
> >> > INFO ( default_nfprobe/nfprobe ):   Expiry interval: 60s
> >&

Re: [pmacct-discussion] Custom primitives with netflow

2018-12-19 Thread Paolo Lucente
rint ): cache entries=16411 base cache
> memory=54878384 bytes
> WARN ( default_print/print ): no print_output_file and no
> print_output_lock_file defined.
> INFO ( default_print/print ): *** Purging cache - START (PID: 4356) ***
> INFO ( default_print/print ): *** Purging cache - END (PID: 4356, QN: 0/0,
> ET: X) ***
> INFO ( default_print/print ): *** Purging cache - START (PID: 4379) ***
> INFO ( default_print/print ): *** Purging cache - END (PID: 4379, QN: 0/0,
> ET: X) ***
> INFO ( default_print/print ): *** Purging cache - START (PID: 4410) ***
> INFO ( default_print/print ): *** Purging cache - END (PID: 4410, QN: 0/0,
> ET: X) ***
> INFO ( default_print/print ): *** Purging cache - START (PID: 4443) ***
> SRC_IP  DST_IP SRC_PORT  DST_PORT
> PROTOCOLTOS*dummy_byte*  PACKETS   BYTES
> 172.24.1.197  239.255.255.25056940
> 1900  udp 0*30-38*
> 4 800
> 
> 
> 
> 
> 
> On Mon, Dec 17, 2018 at 6:47 AM Paolo Lucente  wrote:
> 
> >
> > Hi Rajesh,
> >
> > Thanks for pointing this out. I've committed some code to unlock
> > field_type also for uacctd/pmacctd daemons precisely for the use case
> > you mentioned. Here the details:
> >
> >
> > https://github.com/pmacct/pmacct/commit/87ebf3a9f907c331f752c96a76ea247e77f99107
> >
> > You can back port this patch to latest stable release or use master
> > code. Keep me posted if it works for you - it did work for me in lab
> > using your config as a base.
> >
> > One recommendation: use IPFIX instead of NetFlow v9 if possible. IPFIX
> > allows to define the field type as <pen>:<field type>, where the pmacct PEN
> > is documented here:
> >
> > https://github.com/pmacct/pmacct/blob/master/docs/IPFIX
> >
> > So you could use, say, 43874:100 as field type instead of squatting the
> > public code points.
> >
> > Paolo
> >
> > On Sat, Dec 15, 2018 at 12:04:54AM +0530, RAJESH KUMAR S.R wrote:
> > > Hi,
> > >
> > > I need some understanding in exporting the custom defined primitives in
> > > netflow v9 messages, if that is possible, as I want to define custom
> > fields
> > > and send out to netflow collector and visualize using graphs (if the
> > > collector supports custom templates)
> > >
> > > As a first step, I am trying to use the custom aggregate primitive  used
> > in
> > > examples/primitives.lst.example.
> > >
> > > " Defines a primitive called 'udp_len': base pointer is set to the UDP
> > > header
> > >  (l4:17) plus 4 bytes offset, reads for 2 byte and will present it as
> > > unsigned
> > >  int.
> > >
> > > name=udp_lenpacket_ptr=l4:17+4  len=2   semantics=u_int
> > > "
> > >
> > > I used to classify flows after defining "udp_len" as mentioned above.
> > > My conf file for pmacctd is
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > *"   daemonize:false   interface: wlp1s0   aggregate_primitives:
> > > primitives.lst   aggregate: etype, proto, src_host, dst_host, src_port,
> > > dst_port, udp_len   plugins: nfprobe, print   nfprobe_receiver:
> > > 172.24.1.123:9996 <http://172.24.1.123:9996>   nfprobe_version: 9*
> > > *"*
> > > My primitives.lst file defines custom primitive as follows
> > >
> > > *"name=udp_lenpacket_ptr=l4:17+4  len=2   semantics=u_int"*
> > >
> > > When I run the pmacct "sudo pmacctd -f pmacct.conf", I'm able to see the
> > > flows that has udp_len column displayed in the console using print
> > plugin.
> > >
> > > Output of
> > > "sudo pmacctd -f pmacct.conf"
> > >
> > > INFO ( default/core ): Promiscuous Mode Accounting Daemon, pmacctd
> > > 1.7.2-git (20180701-01)
> > > INFO ( default/core ):  '--enable-l2' '--enable-ipv6' '--enable-64bit'
> > > '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
> > > '--enable-st-bins'
> > > INFO ( default/core ): Reading configuration file
> > > '/home/certes-rajesh/pmacct/pmacct/pmacct.conf'.
> > > INFO ( default/core ): [primitives.lst] (re)loading map.
> > > INFO ( default/core ): [primitives.lst] map successfully (re)loaded.
> > > INFO ( default_nfprobe/nfprobe ): NetFlow probe plugin is originally
> > based
> > > on softflowd 0.9.7 software, Copyright 2002

Re: [pmacct-discussion] pmact to netflow collector

2018-12-18 Thread Paolo Lucente


Hi Edvinas,

Easier thing first, i recommend to inject some test traffic and see that
one how it looks like.

The dropped packets highlight a buffering issue. You could take an
intermediate step and see if enlarging buffers helps. Configure
pmacctd_pipe_size to 20 and follow instructions here for the
/proc files to touch:

https://github.com/pmacct/pmacct/blob/1.7.2/CONFIG-KEYS#L203-#L216
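
Purely as an illustration of the /proc side of it (the value below is an
example only and should match what you configure in pmacctd_pipe_size):

# as root
sysctl -w net.core.rmem_max=67108864
sysctl -w net.core.rmem_default=67108864

Then set pmacctd_pipe_size to the same figure and restart pmacctd.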

If it helps, good. If not: you should really look into one of the
frameworks i was pointing you to in my previous email. PF_RING, for
example, can do sampling and/or balancing. Sampling should not be done
inside pmacct because the dropped packets are between the kernel and the
application.

Paolo 

On Mon, Dec 17, 2018 at 02:52:48PM +0200, Edvinas K wrote:
> Seems there're lots of dropped packets:
> 
> prod [root@netvpn001prpjay pmacct-1.7.2]# pmacctd -i ens1f0.432 -f
> flowexport.cfg
> WARN: [flowexport.cfg:2] Invalid value. Ignored.
> INFO ( default/core ): Promiscuous Mode Accounting Daemon, pmacctd
> 1.7.2-git (20181018-00+c3)
> INFO ( default/core ):  '--enable-l2' '--enable-ipv6' '--enable-64bit'
> '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
> '--enable-st-bins'
> INFO ( default/core ): Reading configuration file
> '/opt/pmacct-1.7.2/flowexport.cfg'.
> INFO ( default_nfprobe/nfprobe ): NetFlow probe plugin is originally based
> on softflowd 0.9.7 software, Copyright 2002 Damien Miller 
> All rights reserved.
> INFO ( default_nfprobe/nfprobe ):   TCP timeout: 3600s
> INFO ( default_nfprobe/nfprobe ):  TCP post-RST timeout: 120s
> INFO ( default_nfprobe/nfprobe ):  TCP post-FIN timeout: 300s
> INFO ( default_nfprobe/nfprobe ):   UDP timeout: 300s
> INFO ( default_nfprobe/nfprobe ):  ICMP timeout: 300s
> INFO ( default_nfprobe/nfprobe ):   General timeout: 3600s
> INFO ( default_nfprobe/nfprobe ):  Maximum lifetime: 604800s
> INFO ( default_nfprobe/nfprobe ):   Expiry interval: 60s
> INFO ( default_nfprobe/nfprobe ): Exporting flows to
> [10.3.14.101]:rtcm-sc104
> INFO ( default/core ): [ens1f0.432,0] link type is: 1
> ^C^C^C^C^C^C^C^C
> 
> after 1 minute:
> 
> WARN ( default_nfprobe/nfprobe ): Shutting down on user request.
> INFO ( default/core ): OK, Exiting ...
> NOTICE ( default/core ): +++
> NOTICE ( default/core ): [ens1f0.432,0] received_packets=3441854
> dropped_packets=2365166
> 
> About 1GB of traffic is passing through the router where i'm capturing the
> packets. Isn't it too much traffic for nfprobe to process? The CPUs don't
> seem to be at 100% usage. We're using Intel Xeon E5-2620 0 @ 2.00GHz x 24.
> 
> prod [root@netvpn001prpjay ~]# ps -aux | grep pmacct
> root 41840 30.9  0.0  18964  7760 ?  Rs  Dec14 1309:50 pmacctd: Core Process [default]
> root 41841 68.4  0.0  22932  9756 ?  R   Dec14 2898:29 pmacctd: Netflow Probe Plugin [default_nfprobe]
> root 41869 32.5  0.0  19360  8128 ?  Ss  Dec14 1378:29 pmacctd: Core Process [default]
> root 41870 67.6  0.0  22928  9760 ?  R   Dec14 2865:35 pmacctd: Netflow Probe Plugin [default_nfprobe]
> 
> Before starting with the 'steroid' things you mentioned, i would like to ask:
> is it really worth going down to those kernel "things", or should i start
> with techniques like sampling, or, as Nikola recommended, try to fiddle with
> nfprobe_engine settings?
> 
> Thanks
> 
> On Sun, Dec 16, 2018 at 6:25 PM Paolo Lucente  wrote:
> 
> >
> > Hi Edvinas,
> >
> > You may want to check whether libpcap is dropping packets on input to
> > pmacctd. You can achieve that by sending a SIGUSR1 and checking the output
> > in the logfile/syslog/console. You will get something a-la:
> >
> > https://github.com/pmacct/pmacct/blob/master/docs/SIGNALS#L16-#L34
> >
> > Should the amount of dropped packets be non-zero and visibly increasing, then
> > you may want to put your libpcap on steroids:
> >
> > https://github.com/pmacct/pmacct/blob/master/FAQS#L71-#L101
> >
> > Should, instead, that not be the case, i am unsure and would need
> > further investigation. You could try to produce a controlled stream of
> > data and sniff nfprobe output. Or collect with a different software for
> > a quick counter-test (nfacctd itself or another of your choice).
> >
> > Paolo
> >
> > On Fri, Dec 14, 2018 at 03:02:35PM +0200, Edvinas K wrote:
> > > Thanks, i really appreciate your help.
> > >
> > > Everything seems working OK, on NFSEN (NFDUMP) graphs 

Re: [pmacct-discussion] Custom primitives with netflow

2018-12-16 Thread Paolo Lucente


Hi Rajesh,

Thanks for pointing this out. I've committed some code to unlock
field_type also for uacctd/pmacctd daemons precisely for the use case
you mentioned. Here are the details:

https://github.com/pmacct/pmacct/commit/87ebf3a9f907c331f752c96a76ea247e77f99107

You can backport this patch to the latest stable release or use master
code. Keep me posted if it works for you - it did work for me in the lab
using your config as a base.

One recommendation: use IPFIX instead of NetFlow v9 if possible. IPFIX
allows defining the field type as <PEN>:<field number>, where the pmacct PEN
is documented here:

https://github.com/pmacct/pmacct/blob/master/docs/IPFIX

So you could use, say, 43874:100 as field type instead of squatting the
public code points.
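
As an illustration only (combining the udp_len definition from this thread
with the 43874:100 value above; exact syntax may vary by version), the
primitives map entry and the IPFIX switch on the probe could look like:

    ! primitives.lst - carry udp_len under the pmacct PEN, field 100
    name=udp_len  field_type=43874:100  packet_ptr=l4:17+4  len=2  semantics=u_int

    ! pmacct.conf - export IPFIX instead of NetFlow v9
    nfprobe_version: 10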

Paolo 

On Sat, Dec 15, 2018 at 12:04:54AM +0530, RAJESH KUMAR S.R wrote:
> Hi,
> 
> I need some understanding of exporting custom-defined primitives in
> NetFlow v9 messages, if that is possible, as I want to define custom fields,
> send them out to a netflow collector and visualize them with graphs (if the
> collector supports custom templates).
> 
> As a first step, I am trying to use the custom aggregate primitive  used in
> examples/primitives.lst.example.
> 
> " Defines a primitive called 'udp_len': base pointer is set to the UDP
> header
>  (l4:17) plus 4 bytes offset, reads for 2 byte and will present it as
> unsigned
>  int.
> 
> name=udp_lenpacket_ptr=l4:17+4  len=2   semantics=u_int
> "
> 
> I used it to classify flows after defining "udp_len" as mentioned above.
> My conf file for pmacctd is
> 
> "
>   daemonize: false
>   interface: wlp1s0
>   aggregate_primitives: primitives.lst
>   aggregate: etype, proto, src_host, dst_host, src_port, dst_port, udp_len
>   plugins: nfprobe, print
>   nfprobe_receiver: 172.24.1.123:9996
>   nfprobe_version: 9
> "
> My primitives.lst file defines custom primitive as follows
> 
> *"name=udp_lenpacket_ptr=l4:17+4  len=2   semantics=u_int"*
> 
> When I run pmacct with "sudo pmacctd -f pmacct.conf", I'm able to see the
> flows with the udp_len column displayed in the console using the print plugin.
> 
> Output of
> "sudo pmacctd -f pmacct.conf"
> 
> INFO ( default/core ): Promiscuous Mode Accounting Daemon, pmacctd
> 1.7.2-git (20180701-01)
> INFO ( default/core ):  '--enable-l2' '--enable-ipv6' '--enable-64bit'
> '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
> '--enable-st-bins'
> INFO ( default/core ): Reading configuration file
> '/home/certes-rajesh/pmacct/pmacct/pmacct.conf'.
> INFO ( default/core ): [primitives.lst] (re)loading map.
> INFO ( default/core ): [primitives.lst] map successfully (re)loaded.
> INFO ( default_nfprobe/nfprobe ): NetFlow probe plugin is originally based
> on softflowd 0.9.7 software, Copyright 2002 Damien Miller 
> All rights reserved.
> INFO ( default_nfprobe/nfprobe ):   TCP timeout: 3600s
> INFO ( default_nfprobe/nfprobe ):  TCP post-RST timeout: 120s
> INFO ( default_nfprobe/nfprobe ):  TCP post-FIN timeout: 300s
> INFO ( default_nfprobe/nfprobe ):   UDP timeout: 300s
> INFO ( default_nfprobe/nfprobe ):  ICMP timeout: 300s
> INFO ( default_nfprobe/nfprobe ):   General timeout: 3600s
> INFO ( default_nfprobe/nfprobe ):  Maximum lifetime: 604800s
> INFO ( default_nfprobe/nfprobe ):   Expiry interval: 60s
> INFO ( default_nfprobe/nfprobe ): Exporting flows to [192.168.122.1]:9996
> ERROR ( default_nfprobe/nfprobe ): custom primitive 'udp_len' has null
> field_type
> INFO ( default_print/print ): cache entries=16411 base cache
> memory=54878384 bytes
> WARN ( default_print/print ): no print_output_file and no
> print_output_lock_file defined.
> INFO ( default/core ): [wlp1s0,0] link type is: 1
> WARN ( default/core ): connection lost to 'default_nfprobe-nfprobe';
> closing connection.
> INFO ( default_print/print ): *** Purging cache - START (PID: 2837) ***
> ETYPE  SRC_IP                                 DST_IP    SRC_PORT  DST_PORT  PROTOCOL  udp_len  PACKETS  BYTES
> 86dd   fd50:1d9:a341:f100:8ae:86f3:123d:3654  ff02::fb  5353      5353      udp       41       3        243
> ...
> 
> When I try to give a dummy field type, it throws
> "WARN ( default/core ): [primitives.lst] field_type is only supported in
> nfacctd.".
> 
> I need help in figuring out whether I'm doing the right thing for exporting
> custom fields as part of netflow messages, as I will need to send out more
> custom fields that are read from the packet.



Re: [pmacct-discussion] pmact to netflow collector

2018-12-16 Thread Paolo Lucente


Hi Edvinas,

You may want to check whether libpcap is dropping packets on input to
pmacctd. You can achieve that by sending a SIGUSR1 and checking the output
in the logfile/syslog/console. You will get something a-la:

https://github.com/pmacct/pmacct/blob/master/docs/SIGNALS#L16-#L34
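
For example (the PID lookup below is just one way to do it, matching the
process titles pmacctd sets; log location depends on your setup), something
along these lines dumps and inspects the counters:

    # ask the pmacctd Core Process to log its packet statistics
    kill -USR1 $(pgrep -f 'pmacctd: Core Process' | head -n 1)

    # then look for the per-device counters in the logfile/syslog
    grep -E 'received_packets|dropped_packets' /var/log/syslog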

Should the amount of dropped packets be non-zero and visibly increasing, then
you may want to put your libpcap on steroids:

https://github.com/pmacct/pmacct/blob/master/FAQS#L71-#L101

Should, instead, that not be the case, i am unsure and would need
further investigation. You could try to produce a controlled stream of
data and sniff nfprobe output. Or collect with a different software for
a quick counter-test (nfacctd itself or another of your choice).
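As a sketch of such a counter-test (IP, port and aggregation are purely
illustrative, matching where the probe points in this thread), a minimal
nfacctd config could be:

    ! nfacctd-countertest.cfg
    daemonize: false
    nfacctd_ip: 10.3.14.101
    nfacctd_port: 2101
    plugins: print
    aggregate: src_host, dst_host, src_port, dst_port, proto
    print_refresh_time: 60
    print_output: formatted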

Paolo

On Fri, Dec 14, 2018 at 03:02:35PM +0200, Edvinas K wrote:
> Thanks, i really appreciate your help.
> 
> Everything seems to be working OK; on NFSEN (NFDUMP) the graphs of flow
> statistics look good, but the traffic rate in Mb/s (45 Mb/s) is somehow 10x
> lower than it really is. Maybe some tips to troubleshoot that?
> 
> [image: image.png]
> 
> Is there any hidden things to check about ?
> 
> My config:
> 
> pmacctd -i ens1f0.432 -f flowexport.cfg
> pmacctd -i ens1f1.433 -f flowexport.cfg
> 
> cat flowexport.cfg
> !
> daemonize: true
> aggregate: src_host, dst_host, src_port, dst_port, proto, tos
> plugins: nfprobe
> nfprobe_receiver: 10.3.14.101:2101
> nfprobe_version: 9
> ! nfprobe_engine: 1:1
> ! nfprobe_timeouts: tcp=120:maxlife=3600
> !
> ! networks_file: /path/to/networks.lst
> 
> On Thu, Dec 13, 2018 at 4:32 AM Paolo Lucente  wrote:
> 
> >
> > Hi Nikola,
> >
> > I see, makes sense. Thanks very much for clarifying.
> >
> > Paolo
> >
> > On Wed, Dec 12, 2018 at 06:20:58PM -0800, Nikola Kolev wrote:
> > > Hi Paollo,
> > >
> > > Sorry for being cryptic - what I meant was that I wasn't able to
> > > launch pmacctd/uacctd in a way that it deals with dynamic interfaces as
> > > ppp. Basically I failed to find any reference in the docs on how to make
> > > it run in such a way, that it collects info from ppp* (a-la the ppp+
> > > syntax of iptables), without launching a separate pmacctd instance for
> > > each interface, hence the complicated setup with
> > > iptables-nflog-uacctd-nfdump.
> > >
> > > On Thu, 13 Dec 2018 01:35:00 +
> > > Paolo Lucente  wrote:
> > >
> > > >
> > > > Hi Nikola,
> > > >
> > > > Can you please elaborate a bit more? The cryptic part for me is "as
> > > > nfacctd is not supporting wildcard addresses to be bound to".
> > > >
> > > > Thanks,
> > > > Paolo
> > > >
> > > > On Wed, Dec 12, 2018 at 04:50:33PM -0800, Nikola Kolev wrote:
> > > > > Hey,
> > > > >
> > > > > If I may add to that:
> > > > >
> > > > > I'm doing something similar, but in a slightly different manner:
> > > > >
> > > > > as nfacctd is not supporting wildcard addresses to be bound to, I'm
> > > > > using iptables' rules to export via nflog to uacctd, which then can
> > > > > send to nfdump. Just food for thought...
> > > > >
> > > > > On 2018-12-12 14:58, Paolo Lucente wrote:
> > > > > >Hi Edvinas,
> > > > > >
> > > > > >You are looking for the nfprobe plugin. You can follow the relevant
> > > > > >section in the QUICKSTART to get going:
> > > > > >
> > > > > >https://github.com/pmacct/pmacct/blob/1.7.2/QUICKSTART#L1167-#L1302
> > > > > >
> > > > > >Paolo
> > > > > >
> > > > > >On Wed, Dec 12, 2018 at 03:12:39PM +0200, Edvinas K wrote:
> > > > > >>Hello,
> > > > > >>
> > > > > >>I managed to run basic pmacct to capture linux router (FRR) flows
> > > > > >>from libpcap:
> > > > > >>"pmacctd -P print -O formatted -r 10 -i bond0.2170 -c
> > > > > >>src_host,dst_host,src_port,dst_port,proto"
> > > > > >>
> > > > > >>now I need to push all the flows in netflow format to the
> > > > > >>netflow collector (nfdump). Could you give me some advice on how to
> > > > > >>configure that?
> > > > > >>Thank you
> > > > > >
> > > > > >>___
> > > > > >>pmacct-discussion mailing list
> > > > > >>http://www.pmacct.net/#mailinglists
> > > > > >
> > > > > >
> > > > > >___
> > > > > >pmacct-discussion mailing list
> > > > > >http://www.pmacct.net/#mailinglists
> > > > >
> > > > > --
> > > > > Nikola
> > >
> > >
> > > --
> > > Nikola
> >
> > ___
> > pmacct-discussion mailing list
> > http://www.pmacct.net/#mailinglists
> >
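
For readers following Nikola's iptables/NFLOG/uacctd approach quoted above, a
minimal sketch of that wiring (NFLOG group, receiver address and the ppp+
interface pattern are all illustrative) would be:

    # mirror traffic of all ppp* interfaces into NFLOG group 1
    iptables -A FORWARD -i ppp+ -j NFLOG --nflog-group 1
    iptables -A FORWARD -o ppp+ -j NFLOG --nflog-group 1

    ! uacctd.cfg - read from the same NFLOG group and export NetFlow v9
    uacctd_group: 1
    plugins: nfprobe
    nfprobe_receiver: 10.3.14.101:2101
    nfprobe_version: 9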




