Re: [pmacct-discussion] pmacctd with nfprobe and pretag.map

2024-04-24 Thread Paolo Lucente


Hi Bruno,

What version of pmacct are you using? Also, are you familiar with the 
tcpdump / wireshark tools? They would be the best way to confirm that the 
extra info (label) is making it into the output NetFlow records; let me know 
if you need help with that, here or by unicast email.
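
For illustration (interface and UDP port here are placeholders; use whatever 
nfprobe_receiver actually points at), something like this on the exporter host 
would capture the exported datagrams for inspection in Wireshark:

tcpdump -n -i eth0 -w nfprobe-export.pcap udp and dst port 2100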


Paolo


On 22/4/24 22:36, Bruno Agostinho (He Him) wrote:

Hi Paolo,

Thanks for the quick response. I've tried the way you said, adding a 
pre-tag map and label aggregation. I've tried the pre-tag map with 
label=hostname, and again with set_label=hostname label=hostname (this 
had been my first attempt). Still, I can't get a netflow message with the 
hostname information. I don't want to jump to the nfacctd configuration 
without /seeing/ the label in the netflow messages. Does that make sense? 
Please review what I might be doing wrong and tell me where exactly in the 
netflow messages we should expect to see that label.


Regards,

Bruno

On Fri, Apr 19, 2024 at 5:43 PM Paolo Lucente <pa...@pmacct.net> wrote:



Hi Bruno,

Yes, you can use labels for that & I have just proven that working
successfully end-to-end (exporter and collector sides) in my local
environment.

In the nfprobe config:

pre_tag_map: /path/to/pretag.map
aggregate: label, < all other usual suspects here >

Then in pretag.map, super simple:
label=

You can craft one single label packing all you want, and separating the
info the way you like, ie. _, or be fancy and have
multiple labels that pmacct will then concatenate for you -- although I
don't really see a point in complicating stuff like that.

Hope this helps.

Paolo


On 18/4/24 18:23, Bruno Agostinho (He Him) wrote:
 > Hello All,
 >
 > I'm using pmacctd to export netflow messages to further
processing in
 > centralized nfacctd's. I'd like to enrich netflow messages with the
 > hostname (and possibly instance type). I've deduced from
documentation
 > and other discussions that netflow protocol isn't designed to
support
 > it. Hence nfprobe usage of tags and labels is restrained to
manipulate
 > direction and ifindex fields. And to achieve what I want I'd need
to use
 > a protocol other than netflow, like kafka. Please confirm my
 > understanding or, if I'm wrong, how this can be achieved (using
tags and
 > labels or any other method).
 >
 > Thanks a lot in advance for any insight you can provide!
 >
 > Bruno
 >
 > ___
 > pmacct-discussion mailing list
 > http://www.pmacct.net/#mailinglists



___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] pmacctd with nfprobe and pretag.map

2024-04-19 Thread Paolo Lucente


Hi Bruno,

Yes, you can use labels for that & I have just proven that working 
successfully end-to-end (exporter and collector sides) in my local 
environment.


In the nfprobe config:

pre_tag_map: /path/to/pretag.map
aggregate: label, < all other usual suspects here >

Then in pretag.map, super simple:
label=

You can craft one single label packing all you want, and separating the 
info the way you like, ie. _, or be fancy and have 
multiple labels that pmacct will then concatenate for you -- although I 
don't really see a point in complicating stuff like that.
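
As a minimal sketch, with a placeholder hostname and a catch-all filter 
(exact keys best double-checked against examples/pretag.map.example), such 
an entry could look like:

set_label=myhostname filter='net 0.0.0.0/0'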


Hope this helps.

Paolo


On 18/4/24 18:23, Bruno Agostinho (He Him) wrote:

Hello All,

I'm using pmacctd to export netflow messages for further processing in 
centralized nfacctd's. I'd like to enrich the netflow messages with the 
hostname (and possibly the instance type). I've deduced from the documentation 
and other discussions that the netflow protocol isn't designed to support 
it; hence nfprobe's usage of tags and labels is restricted to manipulating 
the direction and ifindex fields, and to achieve what I want I'd need to use 
a protocol other than netflow, like kafka. Please confirm my 
understanding or, if I'm wrong, explain how this can be achieved (using tags 
and labels or any other method).


Thanks a lot in advance for any insight you can provide!

Bruno

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] nfacctd pretag.map with mpls_vpn_rd

2024-03-20 Thread Paolo Lucente


Hi Andy,

Amazing, great to have you rolling with this. And I will also close 
Issue 770, thanks for confirming.


Paolo


On 21/3/24 06:01, Andrew Lake wrote:

Hi Paolo,

Thanks for the explanation and for talking through this with me. Indeed 
flow_to_rd.map was the missing piece I needed. I have things working the 
way we wanted by generating a flow_to_rd.map that matches on the 
router_ip and mpls_vpn_id. Your SNMP suggestion would also work for us 
since we have all that info readily available, but going the 
mpls_vpn_id route keeps the number of entries smaller and they should change 
less. I had previously played around with flow_to_rd.map but for 
whatever reason convinced myself it wasn't affecting the path lookups. 
It is very clearly working now though, so many thanks for your guidance.


 From my perspective issue 770 can also be closed.

Thanks,
Andy

On March 19, 2024 at 4:33:08 AM, Paolo Lucente (pa...@pmacct.net) wrote:




Hi Andy,

Amazing! So probably, after all, we may bash Issue 770 on GitHub -- 
coolio.


Let me recap the situation so that you can correct me where I am wrong and
fill any gaps. You have flows, and these (all or some) have a VRF ID; you
have BGP feeds and there, obviously, you have the MPLS VPN RD. The linking
pin is the flow_to_rd_map (see examples/flow_to_rd.map.example and
CONFIG-KEYS).

The map allows you to work in a few modes: 'id' is the output, the RD as
you want to match it in the BGP feed; you can output the RD in a few
different modes: based on router / interface, for example; or on the VRF ID;
or on the MPLS label stack. Probably you want the VRF ID scenario.

A problem that people often come across is .. what is the VRF ID anyway,
and where do I source that info? Of course, some vendor-specific SNMP polling;
so I have seen some people prefer to go with the router / interface
scenario, because if you have some sort of inventory / source of truth
that is easy to compose.

Probably the least popular scenario is composing the map with MPLS
labels, but now, in the software-defined world, potentially with
controllers, that sporadically happens too.

Paolo


On 19/3/24 05:57, Andrew Lake wrote:
> Hi Paolo,
>  
> Ok maybe I have gotten headed down the wrong path. It sounds like you’re  
> saying nfacctd in normal collector mode should be able to take into  
> account the VRF when asking the BGP daemon to lookup the AS path for a  
> flow? This would be my ideal situation, and the replication setup I had  
> was just trying to get around the fact that I didn’t think it did based  
> on some preliminary testing. I might have given up too soon when it  
> didn’t appear to be working though.
>  
> Are there any extra options in nfacctd.conf or similar I need to set to  
> make it take the VRF into consideration for the path lookup?
>  
> What fields does it look at in the BGP message and IPFIX messages to  
> make the decision? rd and mpls_vpn_rd respectively? Mainly just asking  
> so I can debug if something was not getting set properly by our routers  
> in either of the message sets when I was testing.
>  
>  
> Thanks again,

> Andy
>  
> On March 16, 2024 at 1:06:48 AM, Paolo Lucente (pa...@pmacct.net) wrote:
>  
>>

>> Hi Andy,
>>
>> Thanks for opening the issue on GitHub and the kind words.
>>
>> Thing is all you want to achieve is supported in pmacct when working in
>> collector mode where the proper inspection of each flow is performed.
>>
>> Why don't you leave the tee part barebone and implement these features
>> in the collector? Just an idea as a perfectly valid answer could be that
>> performing this enrichment in the replicator then also 3rd party tools
>> could benefit from this info. Let me know.
>>
>> Paolo
>>
>>
>> On 16/3/24 01:51, Andrew Lake wrote:
>> > Hi Paolo,
>> >   
>> > Thanks for reply! I have created the issue as requested:   
>> > https://github.com/pmacct/pmacct/issues/770. Apologies for missing the

>> > docs about tee/replication mode and what you described make sense.
>> >   
>> > Your comment about the matching RD in the BGP messages I think is   
>> > somewhat related to my ultimate goal, which is to enrich the IPFIX   
>> > records with the AS Path using the BGP table for the matching 

Re: [pmacct-discussion] nfacctd pretag.map with mpls_vpn_rd

2024-03-19 Thread Paolo Lucente


Hi Andy,

Amazing! So probably, after all, we may bash Issue 770 on GitHub -- coolio.

Let me recap the situation so that you can correct me where I am wrong and 
fill any gaps. You have flows, and these (all or some) have a VRF ID; you 
have BGP feeds and there, obviously, you have the MPLS VPN RD. The linking 
pin is the flow_to_rd_map (see examples/flow_to_rd.map.example and 
CONFIG-KEYS).


The map allows you to work in a few modes: 'id' is the output, the RD as 
you want to match it in the BGP feed; you can output the RD in a few 
different modes: based on router / interface, for example; or on the VRF ID; 
or on the MPLS label stack. Probably you want the VRF ID scenario.
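
For illustration only, a flow_to_rd.map sketch along those lines might look 
like the following (RDs, addresses and interface / VRF values are placeholders; 
the exact keys are best checked against examples/flow_to_rd.map.example):

! exporter + input interface -> RD
id=0:65512:1 ip=192.0.2.1 in=100
! exporter + VRF ID carried in the flow record -> RD
id=0:65512:2 ip=192.0.2.1 mpls_vpn_id=10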


A problem that people often come across is .. what is the VRF ID anyway, 
and where do I source that info? Of course, some vendor-specific SNMP polling; 
so I have seen some people prefer to go with the router / interface 
scenario, because if you have some sort of inventory / source of truth 
that is easy to compose.


Probably the least popular scenario is composing the map with MPLS 
labels, but now, in the software-defined world, potentially with 
controllers, that sporadically happens too.


Paolo


On 19/3/24 05:57, Andrew Lake wrote:

Hi Paolo,

Ok, maybe I have gotten headed down the wrong path. It sounds like you’re 
saying nfacctd in normal collector mode should be able to take into 
account the VRF when asking the BGP daemon to look up the AS path for a 
flow? This would be my ideal situation, and the replication setup I had 
was just trying to get around the fact that I didn’t think it did, based 
on some preliminary testing. I might have given up too soon when it 
didn’t appear to be working though.


Are there any extra options in nfacctd.conf or similar I need to set to 
make it take the VRF into consideration for the path lookup?


What fields does it look at in the BGP message and IPFIX messages to 
make the decision? rd and mpls_vpn_rd respectively? Mainly just asking 
so I can debug if something was not getting set properly by our routers 
in either of the message sets when I was testing.



Thanks again,
Andy

On March 16, 2024 at 1:06:48 AM, Paolo Lucente (pa...@pmacct.net) wrote:




Hi Andy,

Thanks for opening the issue on GitHub and the kind words.

The thing is, all you want to achieve is supported in pmacct when working in
collector mode, where the proper inspection of each flow is performed.

Why don't you leave the tee part barebone and implement these features
in the collector? Just an idea, as a perfectly valid answer could be that
by performing this enrichment in the replicator, 3rd party tools
could also benefit from this info. Let me know.

Paolo


On 16/3/24 01:51, Andrew Lake wrote:
> Hi Paolo,
>  
> Thanks for reply! I have created the issue as requested:  
> https://github.com/pmacct/pmacct/issues/770. Apologies for missing the

> docs about tee/replication mode and what you described make sense.
>  
> Your comment about the matching RD in the BGP messages I think is  
> somewhat related to my ultimate goal, which is to enrich the IPFIX  
> records with the AS Path using the BGP table for the matching VRF.  
> Balancing this with the fact that our routers have limitations in the  
> number of addresses to which they will forward IPFIX, my plan was to  
> have pmacct in tee mode forward the IPFIX records to a pmacct instance  
> dedicated to peering with only one VRF where it could do the BGP lookup.  
> Alternative would be to try to lookup up the path by mapping the IPFIX  
> VRF ID to the BGP RD and then basing the lookup on that value in  
> addition to the prefix, but this seems like non-hanging fruit as you  
> said. if you are able to get the VRF ID matching against IPFIX working  
> so we can tee it from there, that will be fantastic.
>  
> Thanks again for all your help…and also just in general building an  
> awesome product :)
>  
> Andy
>  
> On March 15, 2024 at 2:18:01 AM, Paolo Lucente (pa...@pmacct.net) wrote:
>  
>>

>> Hi Andy,
>>
>> mpls_vpn_rd is supported in pre_tag_map however it is not supported when
>> in tee / replication mode (this is documented).
>>
>> For your specific use-case, since you are interested in matching the VRF
>> ID, which in turn is self-consistent as part of an IPFIX record, this is
>> something that can be achieved. However this may then open the door to
>> somebody wanting to match the RD for a prefix as coming from a BGP / BMP
>> feed, and for this a few (non-hanging-fruit) steps would be needed; then
>> again the limitation to the self-containe

Re: [pmacct-discussion] nfacctd pretag.map with mpls_vpn_rd

2024-03-15 Thread Paolo Lucente


Hi Andy,

Thanks for opening the issue on GitHub and the kind words.

The thing is, all you want to achieve is supported in pmacct when working in 
collector mode, where the proper inspection of each flow is performed.


Why don't you leave the tee part barebone and implement these features 
in the collector? Just an idea, as a perfectly valid answer could be that 
by performing this enrichment in the replicator, 3rd party tools 
could also benefit from this info. Let me know.


Paolo


On 16/3/24 01:51, Andrew Lake wrote:

Hi Paolo,

Thanks for the reply! I have created the issue as requested: 
https://github.com/pmacct/pmacct/issues/770. Apologies for missing the 
docs about tee/replication mode; what you described makes sense.


Your comment about the matching RD in the BGP messages I think is 
somewhat related to my ultimate goal, which is to enrich the IPFIX 
records with the AS Path using the BGP table for the matching VRF. 
Balancing this with the fact that our routers have limitations in the 
number of addresses to which they will forward IPFIX, my plan was to 
have pmacct in tee mode forward the IPFIX records to a pmacct instance 
dedicated to peering with only one VRF where it could do the BGP lookup. 
An alternative would be to try to look up the path by mapping the IPFIX 
VRF ID to the BGP RD and then basing the lookup on that value in 
addition to the prefix, but this seems like non-hanging fruit as you 
said. If you are able to get the VRF ID matching against IPFIX working 
so we can tee it from there, that will be fantastic.


Thanks again for all your help…and also just in general building an 
awesome product :)


Andy

On March 15, 2024 at 2:18:01 AM, Paolo Lucente (pa...@pmacct.net) wrote:




Hi Andy,

mpls_vpn_rd is supported in pre_tag_map; however, it is not supported when
in tee / replication mode (this is documented).

For your specific use-case, since you are interested in matching the VRF
ID, which in turn is self-consistent as part of an IPFIX record, this is
something that can be achieved. However this may then open the door to
somebody wanting to match the RD for a prefix as coming from a BGP / BMP
feed, and for this a few (non-hanging-fruit) steps would be needed; then
again the limitation to the self-contained VRF ID scenario can be
documented.

May I ask you to open an Issue on GitHub? I'll flag it as Enhancement
right away and will be able to make progress pretty soon for the VRF ID
case.

Paolo


On 13/3/24 03:11, Andrew Lake wrote:
> Hi,
>  
> I recently tried creating a setup where I have an instance of nfacctd  
> running that has a pretag.map that I want to look at the value of  
> mpls_vpn_rd as determined from the IPFIX record and then set a tag. The  
> plan is to then use the tee plugin to send the IPFIX traffic to a  
> different nfacctd instance based on the tag set. I can get more into why  
> we’re doing this if interested, but I ran into a snag that I can’t seem  
> to figure out on my own. nfacctd doesn’t seems to like when I add  
> mpls_vpn_rd to the pretag.map. I get messages like:
>  
> [/etc/pmacct/pretag.map:3] unknown key 'mpls_vpn_rd'. Ignored.
>  
> The pretag.map is pretty vanilla. Just lines like:
>  
> set_tag=1 mpls_vpn_rd=
>  
> My nfacctd.conf:
>  
> ! Port where nfacctd listens

> nfacctd_port: 9996
>  
> ! Adds debugging output to logs. Disable in production.

> debug: true
>  
> ! tag flow values to determine where tee sends them next

> pre_tag_map: /etc/pmacct/pretag.map
>  
> plugins: tee

> tee_receivers: /etc/pmacct/tee_receivers.lst
> tee_transparent: true
>  
>  
>  
> I went so far as to copy the exact mpls_vpn_rd line from the example in  
> the git repo just to see if it would accept it and it still complained.  
>  From looking at the documentation looks like mpls_vpn_rd should be  
> allowed in nfacctd (I think?). I tried following the code path in the  
> source but was having trouble telling which “Ignored” message was  
> getting triggered, and figured maybe I was better off just asking before  
> I got too far down the rabbit hole.
>  
> Is a pretag.map with mpls_vpn_rd supported by nfacctd? If yes, any ideas  
> where to look next? Or any other info I should send?
>  
> Thanks,

> Andy
>  
>  
>  
>  
> ___

> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] nfacctd pretag.map with mpls_vpn_rd

2024-03-15 Thread Paolo Lucente


Hi Andy,

mpls_vpn_rd is supported in pre_tag_map; however, it is not supported when 
in tee / replication mode (this is documented).
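
As a minimal sketch of the supported (non-tee) case, with a placeholder tag 
value and RD to be adapted to what is actually seen in the flows:

set_tag=100 mpls_vpn_rd=0:65512:1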


For your specific use-case, since you are interested in matching the VRF 
ID, which in turn is self-consistent as part of an IPFIX record, this is 
something that can be achieved. However this may then open the door to 
somebody wanting to match the RD for a prefix as coming from a BGP / BMP 
feed, and for this a few (non-hanging-fruit) steps would be needed; then 
again the limitation to the self-contained VRF ID scenario can be 
documented.


May I ask you to open an Issue on GitHub? I'll flag it as Enhancement 
right away and will be able to make progress pretty soon for the VRF ID 
case.


Paolo


On 13/3/24 03:11, Andrew Lake wrote:

Hi,

I recently tried creating a setup where I have an instance of nfacctd 
running with a pretag.map in which I want to look at the value of 
mpls_vpn_rd, as determined from the IPFIX record, and then set a tag. The 
plan is to then use the tee plugin to send the IPFIX traffic to a 
different nfacctd instance based on the tag set. I can get more into why 
we’re doing this if interested, but I ran into a snag that I can’t seem 
to figure out on my own. nfacctd doesn’t seem to like it when I add 
mpls_vpn_rd to the pretag.map. I get messages like:


[/etc/pmacct/pretag.map:3] unknown key 'mpls_vpn_rd'. Ignored.

The pretag.map is pretty vanilla. Just lines like:

set_tag=1 mpls_vpn_rd=

My nfacctd.conf:

! Port where nfacctd listens
nfacctd_port: 9996

! Adds debugging output to logs. Disable in production.
debug: true

! tag flow values to determine where tee sends them next
pre_tag_map: /etc/pmacct/pretag.map

plugins: tee
tee_receivers: /etc/pmacct/tee_receivers.lst
tee_transparent: true



I went so far as to copy the exact mpls_vpn_rd line from the example in 
the git repo just to see if it would accept it and it still complained. 
From looking at the documentation, it looks like mpls_vpn_rd should be 
allowed in nfacctd (I think?). I tried following the code path in the 
source but was having trouble telling which “Ignored” message was 
getting triggered, and figured maybe I was better off just asking before 
I got too far down the rabbit hole.


Is a pretag.map with mpls_vpn_rd supported by nfacctd? If yes, any ideas 
where to look next? Or any other info I should send?


Thanks,
Andy




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] pmacct with nfprobe_direction / nfprobe_ifindex and pretag.map

2024-01-14 Thread Paolo Lucente



Hi Klaus,

Can you confirm what version of pmacct you are using? A 'pmacctd -V' 
would do.


I would like essentially to confirm that, for the first issue you are 
hitting, you are running either 1.7.8 or recent code that includes 
this patch from Dec 15th: 
https://github.com/pmacct/pmacct/commit/547e24171b0da2775ad35aeb2997d586003cb674 
.


For the second issue you mention, ie. setting both the input and the output 
interface given a direction, let me confirm that the current mechanism 
does not support that -- the use case has so far been to use the src/dst IP 
address/prefix or the src/dst MAC address to determine the direction and, 
given that, to set the input OR the output interface but not both.
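
To illustrate the documented mechanism (prefixes, tag values and ifindexes 
below are placeholders, and this assumes set_tag / set_tag2 can be combined 
in a single entry): with nfprobe_direction set to 'tag' and nfprobe_ifindex 
set to 'tag2', a pretag.map could look like:

! direction: 1 = ingress, 2 = egress; tag2 carries the ifindex to report
set_tag=1 set_tag2=10 filter='dst net 192.0.2.0/24'
set_tag=2 set_tag2=20 filter='src net 192.0.2.0/24'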


You could use ULOG / uacctd, which should already return both 
interfaces -- just an idea if you are running Linux, since it seems the system 
you are monitoring is passing traffic through. Otherwise, to do this with the 
tagging mechanism, some dev would be required.


Paolo


On 11/1/24 11:11, Klaus Conrad wrote:

Hello everybody,

I'm currently struggling with properly setting up pmacct for the following
scenario:

I need InputInt and OutputInt as well as Direction to be set in the
generated Netflow.

By default, InputInt/OutputInt are set to 0.

The traffic I'm capturing is VLAN tagged.

Now I want to set InputInt and OutputInt and Direction depending on the
VLAN tag of the captured traffic.

My pretag.map looks like this:

set_tag=2 vlan=10 jeq=eval_ifindexes
set_tag=1 vlan=11 jeq=eval_ifindexes
set_tag=2 vlan=20 jeq=eval_ifindexes
set_tag=1 vlan=21 jeq=eval_ifindexes
...
set_tag=999 filter='net 0.0.0.0/0'


set_tag2=62 vlan=10 label=eval_ifindexes
set_tag2=62 vlan=11
set_tag2=60 vlan=20
set_tag2=60 vlan=21
...
set_tag2=52 filter='net 0.0.0.0/0'



My pmacct.conf looks like this:

...
aggregate: src_host,dst_host,src_port,dst_port,proto,sampling_rate,vlan
nfprobe_ifindex_override[prod]: true
nfprobe_direction[prod]: tag
nfprobe_ifindex[prod]: tag2
pre_tag_map: /etc/pmacct/pretag.map


The problem I'm facing is as follows:

It appears that the first set_tag and set_tag2 rules always apply. So
all flows are tagged as "egress" and OutputInt is always set to 62,
regardless of the vlan tag of the captured traffic.


Also I do not understand how I could set both InputInt and OutputInt to
a non-zero value.

Thanks a lot in advance for any insight you can provide!

Klaus



___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Traffic Analysis Tool

2023-12-14 Thread Paolo Lucente

Hi Faizan,

I guess the underlying question is whether you can build a 
prefix-to-prefix traffic matrix using sFlow data (and probably attribute 
it to ASNs automatically using a BGP feed). This is indeed all possible 
in concept. A proof of concept would be needed in one of your 
environments for testing.
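
As a rough sketch of that concept (all values are placeholders and the keys 
should be double-checked against CONFIG-KEYS), an sfacctd instance could 
aggregate on prefixes and ASNs resolved via a BGP feed from the route-server:

sfacctd_port: 6343
plugins: print
aggregate: src_net, dst_net, src_as, dst_as
sfacctd_as: bgp
sfacctd_net: bgp
bgp_daemon: true
bgp_daemon_ip: 192.0.2.10
bgp_daemon_max_peers: 10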


Paolo


On 14/12/23 05:15, Faizan Barmare wrote:

Hello Paolo,

I am currently serving as a consultant for several IXPs in India.

Our standard practice involves implementing an IX setup with a shared 
Layer 2 Ethernet fabric platform. At our IXPs, peers establish BGP 
sessions with our route-server, powered by BIRD v2. To effectively 
monitor and manage our network, we rely on "IXP Manager," utilizing SNMP 
for Interface graphs and SFLOW for Peer-to-Peer graphs.


We are facing demand from multiple IXPs for a more comprehensive 
traffic analysis tool that offers more in-depth traffic analysis and 
insights than the current capabilities of "IXP Manager." Specifically, 
we are seeking a tool that can seamlessly complement "IXP Manager" and 
provide detailed analysis of the traffic patterns associated with individual 
peer IP-prefixes, including information on the volume of traffic each 
IP-prefix is sending and receiving.


We believe that PMACCT could be the ideal tool to meet our requirements. 
We would appreciate it if you could confirm whether our understanding is 
accurate and elaborate on how PMACCT can contribute to enhancing our 
IXP network. If PMACCT proves to be valuable, I am considering 
implementing it across multiple IXP environments.


Looking forward to your insights.

Best regards,

Faizan Barmare.


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] pretag.map not working when running nfacctd in a container

2023-12-12 Thread Paolo Lucente

Hi Rich,

I was wondering if you had any log available from nfacctd; for example,
is it possible that the file 'pretag.map', with no path supplied, is not
found, causing the issue?
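
For instance (path purely illustrative), an absolute reference would remove 
any working-directory ambiguity inside the container:

pre_tag_map: /etc/pmacct/pretag.map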

Paolo

On Thu, Dec 07, 2023 at 11:34:56PM +, Compton, Rich A wrote:
> Hi, hoping that someone can help me with this issue.  I am trying to run 
> nfacctd in a container and I’m using a pretag.map file to filter only certain 
> netflow records. When I remove the “pre_tag_map:” line  and 
> “pre_tag_label_filter” from the config file, I am able to export the netflow 
> records to the mysql database.  When I add the same config back in, I get no 
> netflow records in my database.
> The same config with the pre_tag_map config seems to work when running 
> nfacctd natively on the host OS.
> Anybody have any ideas what the issue is?
> Here’s a sample of my template config file:
> 
> daemonize: false
> nfacctd_port: 2055
> nfacctd_time_new: true
> pre_tag_map: pretag.map
> maps_index: true
> maps_entries: 1
> plugins: mysql[dns], mysql[ntp], mysql[ssdp], mysql[snmp], mysql[chargen], 
> mysql[ldap], mysql[portmap]
> aggregate: src_host, src_port, dst_host, dst_port, proto, src_as, dst_as, 
> in_iface, out_iface, peer_src_ip
> pre_tag_label_filter[dns]: dns
> aggregate_filter[dns]: dst port 53
> pre_tag_label_filter[ntp]: ntp
> aggregate_filter[ntp]: dst port 123
> pre_tag_label_filter[ssdp]: ssdp
> aggregate_filter[ssdp]: dst port 1900
> pre_tag_label_filter[snmp]: snmp
> aggregate_filter[snmp]: dst port 161
> pre_tag_label_filter[chargen]: chargen
> aggregate_filter[chargen]: dst port 19
> pre_tag_label_filter[ldap]: ldap
> aggregate_filter[ldap]: dst port 389
> pre_tag_label_filter[portmap]: portmap
> aggregate_filter[portmap]: dst port 111
> 
> sql_db[dns]: honeypot_feed
> sql_optimize_clauses[dns]: true
> sql_table[dns]: netflow
> sql_host[dns]: ${SQL_HOST}
> sql_passwd[dns]: ${SQL_PASSWORD}
> sql_user[dns]: ${SQL_USER}
> sql_refresh_time[dns]: 10
> sql_history[dns]: 1m
> sql_history_roundoff[dns]: mh
> 
> sql_db[ntp]: honeypot_feed
> sql_optimize_clauses[ntp]: true
> sql_table[ntp]: netflow
> sql_host[ntp]: ${SQL_HOST}
> sql_passwd[ntp]: ${SQL_PASSWORD}
> sql_user[ntp]: ${SQL_USER}
> sql_refresh_time[ntp]: 10
> sql_history[ntp]: 1m
> sql_history_roundoff[ntp]: mh
> 
> sql_db[snmp]: ${SQL_DATABASE}
> sql_optimize_clauses[snmp]: true
> sql_table[snmp]: netflow
> sql_host[snmp]: ${SQL_HOST}
> sql_passwd[snmp]: ${SQL_PASSWORD}
> sql_user[snmp]: ${SQL_USER}
> sql_refresh_time[snmp]: 10
> sql_history[snmp]: 1m
> sql_history_roundoff[snmp]: mh
> 
> sql_db[ssdp]: ${SQL_DATABASE}
> sql_optimize_clauses[ssdp]: true
> sql_table[ssdp]: netflow
> sql_host[ssdp]: ${SQL_HOST}
> sql_passwd[ssdp]: ${SQL_PASSWORD}
> sql_user[ssdp]: ${SQL_USER}
> sql_refresh_time[ssdp]: 10
> sql_history[ssdp]: 1m
> sql_history_roundoff[ssdp]: mh
> 
> sql_db[ldap]: ${SQL_DATABASE}
> sql_optimize_clauses[ldap]: true
> sql_table[ldap]: netflow
> sql_host[ldap]: ${SQL_HOST}
> sql_passwd[ldap]: ${SQL_PASSWORD}
> sql_user[ldap]: ${SQL_USER}
> sql_refresh_time[ldap]: 10
> sql_history[ldap]: 1m
> sql_history_roundoff[ldap]: mh
> 
> sql_db[chargen]: ${SQL_DATABASE}
> sql_optimize_clauses[chargen]: true
> sql_table[chargen]: netflow
> sql_host[chargen]: ${SQL_HOST}
> sql_passwd[chargen]: ${SQL_PASSWORD}
> sql_user[chargen]: ${SQL_USER}
> sql_refresh_time[chargen]: 10
> sql_history[chargen]: 1m
> sql_history_roundoff[chargen]: mh
> 
> sql_db[portmap]: ${SQL_DATABASE}
> sql_optimize_clauses[portmap]: true
> sql_table[portmap]: netflow
> sql_host[portmap]: ${SQL_HOST}
> sql_passwd[portmap]: ${SQL_PASSWORD}
> sql_user[portmap]: ${SQL_USER}
> sql_refresh_time[portmap]: 10
> sql_history[portmap]: 1m
> sql_history_roundoff[portmap]: mh
> 
> 
> ---cut-
> Example of pretag.map file:
> set_label=dns src_net=1.2.3.0/24
> set_label=ntp src_net=1.2.3.0/24
> set_label=snmp src_net=1.2.3.0/24
> set_label=ssdp src_net=1.2.3.0/24
> set_label=chargen src_net=1.2.3.0/24
> set_label=portmap src_net=1.2.3.0/24
> set_label=ldap src_net=1.2.3.0/24
> 
> 
> 
> 
> 
> Rich Compton| Principal Eng |314.596.2828
> 8560 Upland Drive,   Suite B  |  Englewood, CO 80112



> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] peer_src_as vs src_as

2023-11-24 Thread Paolo Lucente

Hi Benedikt,

Yes, the fields are directly populated with what is in the NetFlow packet. 
It is super strange that the Cisco is putting the Source AS in PeerSrcAS 
(confirmed also with tcpdump); maybe a bug?


You could probably get around it by defining a custom primitive but it 
would be very dirty. I would make the Cisco device export the right 
information instead.
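
For completeness, the 'dirty' route would go through aggregate_primitives. A 
hypothetical sketch only -- it assumes the router really exports the wanted 
value in NetFlow v9 / IPFIX element 129 (the PeerSrcAS seen in the tcpdump), 
which must be verified against the capture:

! primitives.lst (the name is arbitrary; field_type / len are assumptions)
name=exported_src_as field_type=129 len=4 semantics=u_int

and then aggregate_primitives: /path/to/primitives.lst in the config, plus 
adding exported_src_as to the aggregate directive.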


Paolo


On 22/11/23 10:49, Benedikt Sveinsson wrote:

Hi (hope this is not a duplicate email)

I’m running a new build of nfacct  - version below.

Exporting into Kafka

Collecting from two platforms Cisco and Huawei

I’m doing this nfacct -> kafka -> flowexporter -> Prometheus -> Grafana 
thing and initially had this in containers but now fresh setup natively 
on the Ubuntu host.


I straight up hit a snag – using the same config files for nfacct as in the 
old setup – I now get as_src always 0.


When looking at the kafka entries I noticed I have two as src fields – 
peer_as_src and as_src


{"event_type":"purge","label":"dublin","as_src":0,"as_dst":12969,"peer_as_src":32934,"peer_as_dst":0,"ip_src":"x.x.x.x","ip_dst":"x.x.x.x","port_src":443,"port_dst":59073,"stamp_inserted":"2023-11-09
 11:50:00","stamp_updated":"2023-11-09 12:32:36","packets":100,"bytes":5200,"writer_id":"default_kafka/592569"}

Our AS is 12969 – I have a networks file for our own networks etc.

I’m seeing peer_as_src being populated with the source 
AS, but as_src is always 0.


Now to my confusion when I added the Huawei router to the collector :

{"event_type":"purge","label":"arbaer","as_src":24940,"as_dst":12969,"peer_as_src":0,"peer_as_dst":0,"ip_src":"x.x.x.x","ip_dst":"x.x.x.x","port_src":50196,"port_dst":443,"stamp_inserted":"1995-01-29
 09:45:00","stamp_updated":"2023-11-09 12:36:36","packets":200,"bytes":12000,"writer_id":"default_kafka/592828"}

I get as_src and as_dst correct  - this is an issue as I modified the 
flow-exporter code to pick up peer_as_src


Now, looking at the tcpdump of the Netflow packet from the Cisco router – 
it uses the field name PeerSrcAS (I have not been able to decode the Huawei 
packets for some reason).


Can someone help me understand the Kafka fields – and where they 
are populated from? Is this directly related to what is in the actual 
netflow packet from the device – or something config-related in nfacct? 
Sorry if I'm missing something from the documentation; I am scrambling a bit 
to get this running.


  * Benedikt 


root@netflow:/etc/pmacct# nfacctd -V

NetFlow Accounting Daemon, nfacctd 1.7.9-git [20231101-0 (a091a85e)]

Arguments:

'--enable-kafka' '--enable-jansson' '--enable-l2' 
'--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins' 
'--enable-st-bins'


Libs:

cdada 0.5.0

libpcap version 1.10.1 (with TPACKET_V3)

rdkafka 1.8.0

jansson 2.13.1

Plugins:

memory

print

nfprobe

sfprobe

tee

kafka

System:

Linux 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64

Compiler:

gcc 11.4.0

config of nfacct:

!daemonize: true

!syslog: daemon

pre_tag_map: /etc/pmacct/pretag.map

nfacctd_as: longest

nfacctd_net: longest

networks_file: /etc/pmacct/networks.lst

networks_file_no_lpm: true

aggregate: peer_src_as,peer_dst_as,src_host, dst_host, src_port, 
dst_port, src_as, dst_as, label


snaplen: 700

!sampling_rate: 100

!

bgp_daemon: true

bgp_daemon_ip: 10.131.24.11

bgp_daemon_port: 179

bgp_daemon_max_peers: 10

bgp_agent_map: /etc/pmacct/peering_agent.map

!

plugins: kafka

!bgp_table_dump_kafka_topic: pmacct.bgp

!bgp_table_dump_refresh_time: 300

kafka_cache_entries: 1

kafka_topic: netflow

kafka_max_writers: 10

kafka_output: json

kafka_broker_host: localhost

kafka_refresh_time: 5

kafka_history: 5m

kafka_history_roundoff: m

!print_refresh_time: 300

!print_history: 300

!print_history_roundoff: m

!print_output_file_append: true

!print_output_file: /var/netflow/flow_%s

!print_output: csv

nfacctd_ext_sampling_rate: 100

nfacctd_renormalize: true

nfacctd_port: 

nfacctd_time_secs: true

nfacctd_time_new: true


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] pmacct+clickhouse

2023-11-21 Thread Paolo Lucente


Hi Sergey,

Googling around, I could find a couple of documents around the topic,
like for example: https://github.com/kvitex/pmacct-kafka-clickhouse .
Not being a user of Clickhouse myself, I can't say if it's complete and
current, but maybe it's a starting point & any issues you find you can maybe
report to the author of the document to improve it?

Paolo

On Fri, Nov 17, 2023 at 10:09:27PM +0200, Sergey Gorshkov wrote:
> Hi Paolo!
> 
> Do you have any best practices for installing pmacct+clickhouse?
> 
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] filtering in tee mode

2023-08-10 Thread Paolo Lucente


Hi Evgeniy,

For a starter, did you have a look at section XVa of the QUICKSTART guide:

https://github.com/pmacct/pmacct/blob/0bd518b6fbee4ba286832f07fbf8debf0c3fa925/QUICKSTART#L2198C10-L2198C10

The examples are based on src_mac, dst_mac; but you could give it a try with 
src_net, dst_net. This is the full list of keys supported in nfacctd 
when in tee mode:


https://github.com/pmacct/pmacct/blob/0bd518b6fbee4ba286832f07fbf8debf0c3fa925/examples/pretag.map.example#L25
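
As a minimal sketch in that direction (all prefixes and tag values are 
placeholders; one entry per customer / IX prefix pair would be needed), the 
customer-to-IX traffic could be tagged explicitly and everything else caught 
by a catch-all entry, with tee_receivers forwarding only the catch-all tag:

! pretag.map: 192.0.2.0/24 stands for a customer net, 198.51.100.0/24 for an IX net
set_tag=100 src_net=192.0.2.0/24 dst_net=198.51.100.0/24
set_tag=100 src_net=198.51.100.0/24 dst_net=192.0.2.0/24
set_tag=200 src_net=0.0.0.0/0

! tee_receivers.lst: only tag 200, ie. "the rest", goes to the remote collector
id=1 ip=203.0.113.10:2100 tag=200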

Paolo


On 9/8/23 14:12, Evgeniy Kozhuhovskiy wrote:

Good day for everyone

I'm trying to solve the following task with the nfacctd tee plugin, but 
I'm a little bit stuck. I would appreciate your help.


I have a list of "our networks" (list of our customer nets), and I have 
a list of "IX networks".


I need to filter out incoming netflow stream traffic between our 
networks and IX, and tee the rest to the remote collector.
In other words, I have to exclude traffic between customers and IX from 
billing.


How can I do such tagging with pretag.map? (i.e. tag "the rest", that is 
not tagged already)


--
With best regards, Evgeniy Kozhuhovskiy

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Cisco NCS - IPFIX 315 - sampling_rate and outer qtag not detected

2023-07-29 Thread Paolo Lucente

Hi Tiago,

Great to read from you. About your issues:

1) can you send me a pcap with a data packet and the templates, both
data and sampling option? Being able to replay it will give me a chance
to understand what may be wrong.

2) vlan_out refers to the vlan after, say, some re-tagging took place.
It does not refer to outer vs inner vlan. What you are looking for is
cvlan. The problem being, cvlan is not currently supported as an aggregation
primitive but only as a filter in the pre_tag_map. Implementing this
would not be a biggie & I can squeeze it into the dev cycles pretty easily;
just as above, I'd just ask you if you can send me some sample data so
as not to perform the coding blindly.
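
(As a filter, that would be a pre_tag_map entry along the lines of
'set_tag=10 cvlan=100' -- tag and vlan values being placeholders.)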

Paolo

 
On Thu, Jul 27, 2023 at 08:41:17PM +, Tiago Felipe Gonçalves wrote:
> Hi,
> 
> I’m using sfacctd, and nfacctd to collect/digest flows, but I’m having two 
> issues with IPFIX 315 being exported by Cisco NCSs on my lab environment.
> 
> ===
> 1. The router is sending sampling rate template, but nfacctd is unable to 
> detect it:
> Cisco NetFlow/IPFIX
> Version: 10
> Length: 140
> Timestamp: Jul 27, 2023 21:23:32.0 CEST
> ExportTime: 1690485812
> FlowSequence: 4603756
> Observation Domain Id: 4096
> Set 1 [id=257] (1 flows)
> FlowSet Id: (Data) (257)
> FlowSet Length: 124
> [Template Frame: 3]
> Flow 1
> Selector Id: 1
> Sampling Packet Interval: 32000
> Selector Algorithm: Random n-out-of-N Sampling (3)
> Sampling Size: 1
> Sampling Population: 32000
> SamplerName: ipfix_sm
> Selector Name: ipfix_sm
> String_len_short: 8
> Padding: 00
> 
> It seems that nfacctd understands the template:
> 
> DEBUG ( default/core ): Received NetFlow/IPFIX packet from 
> [192.168.245.145:21660] version [10] seqno [4621414]
> DEBUG ( default/core ): Processing NetFlow/IPFIX flowset [3] from 
> [192.168.245.145:21660] seqno [4621414]
> DEBUG ( default/core ): NfV10 agent : 192.168.245.145:4096
> DEBUG ( default/core ): NfV10 template type : options
> DEBUG ( default/core ): NfV10 template ID   : 338
> DEBUG ( default/core ): 
> -
> DEBUG ( default/core ): |pen | field type | offset |  
> size  |
> DEBUG ( default/core ): | 0  | 149[149  ] |  0 |  
> 4 |
> DEBUG ( default/core ): | 0  | 160[160  ] |  4 |  
> 8 |
> DEBUG ( default/core ): 
> -
> DEBUG ( default/core ): Netflow V9/IPFIX record size : 12
> DEBUG ( default/core ):
> DEBUG ( default/core ): Received NetFlow/IPFIX packet from 
> [192.168.245.145:21660] version [10] seqno [4621414]
> DEBUG ( default/core ): Processing NetFlow/IPFIX flowset [338] from 
> [192.168.245.145:21660] seqno [4621414]
> DEBUG ( default/core ): Received NetFlow/IPFIX packet from 
> [192.168.245.145:21660] version [10] seqno [4621415]
> DEBUG ( default/core ): Processing NetFlow/IPFIX flowset [3] from 
> [192.168.245.145:21660] seqno [4621415]
> DEBUG ( default/core ): NfV10 agent : 192.168.245.145:4096
> DEBUG ( default/core ): NfV10 template type : options
> DEBUG ( default/core ): NfV10 template ID   : 257
> DEBUG ( default/core ): 
> -
> DEBUG ( default/core ): |pen | field type | offset |  
> size  |
> DEBUG ( default/core ): | 0  | 302[302  ] |  0 |  
> 4 |
> DEBUG ( default/core ): | 0  | 305[305  ] |  4 |  
> 4 |
> DEBUG ( default/core ): | 0  | 304[304  ] |  8 |  
> 2 |
> DEBUG ( default/core ): | 0  | 309[309  ] | 10 |  
> 4 |
> DEBUG ( default/core ): | 0  | 310[310  ] | 14 |  
> 4 |
> DEBUG ( default/core ): | 0  | sampler name   [84   ] | 18 |  
>90 |
> DEBUG ( default/core ): | 0  | 335[335  ] |108 |  
> 65535 |
> DEBUG ( default/core ): 
> -
> DEBUG ( default/core ): Netflow V9/IPFIX record size : 107
> DEBUG ( default/core ):
> DEBUG ( default/core ): Received NetFlow/IPFIX packet from 
> [192.168.245.145:21660] version [10] seqno [4621415]
> DEBUG ( default/core ): Processing NetFlow/IPFIX flowset [257] from 
> [192.168.245.145:21660] seqno [4621415]
> DEBUG ( default/core ): Received NetFlow/IPFIX packet from 
> [172.31.31.162:63625] version [10] seqno [2092073163]
> DEBUG ( default/core ): Processing NetFlow/IPFIX flowset [335] from 
> [172.31.31.162:63625] seqno [2092073163]
> 
> But when printing the data, seems that sampling_rate 

Re: [pmacct-discussion] nfprobe/nfacctd communication over TCP

2023-04-29 Thread Paolo Lucente



Hi Eric,

Thanks for getting in touch & let me confirm that there are currently no 
plans.


This said, the IPFIX RFC does contemplate TCP and it should not be a biggie 
to implement. Let me put it on my todo list, unless this is a dev that 
you may take on your side; if this is to be on me, can you confirm the 
urgency?


Paolo


On 24/4/23 21:37, Eric Lopez wrote:
Is there any planned support to enable TCP communication between nfprobe 
and nfacctd?
There are configs for enabling DTLS, so I was wondering how difficult it 
would be to enable TCP vs. UDP?


Working in a large deployment on a major cloud provider where we suspect 
UDP traffic is being dropped.




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] IPFix Bi-Flows & docker ARM

2023-03-01 Thread Paolo Lucente


Hi Dain,

Thanks very much for your work on the arm docker image.

With regards to your question, I guess you are looking into collecting 
traffic with pmacctd and then exporting bi-flows with the nfprobe plugin; 
bi-flows are currently only supported by nfacctd on collection.

Paolo


On 24/2/23 03:55, Dain Perkins wrote:

Hi all,

does anyone have a config that will aggregate based on something along 
the lines of flow_id or community_id into a bidirectional ipfix record?  
Is that even possible?


I could have sworn I had it working, but gave up ages ago while fighting 
with docker/ubiquiti/arm.


BTW I posted on github too, but if anyone wants a working, if very 
basic, arm docker image I can make it public


thanks
-d

Dain Perkins

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Does pmacct looking glass suppoorts multipath?

2023-02-06 Thread Paolo Lucente


Hi Alexander,

Thanks for reporting this.

You are hitting a (known) limitation of the LG (server / lookup); 
in fact, in bgp_lg_daemon_ip_lookup() there is this note in the code: 
https://github.com/pmacct/pmacct/blob/4a70a5b41195afc904d77efa61987bcb80023512/src/bgp/bgp_lookup.c#L843


The enhancement belongs to the C part of the code. Would you have some 
spare cycles to code this? If so, I'd be happy to support you. If not, 
this would be best tracked as an Issue on GitHub so that we don't lose 
track of it.


Paolo


On 30/1/23 12:10, Alexander Brusilov wrote:

Hi everyone, Paolo,

I am trying to set up a Looking Glass server; everything works well except 
BGP multipath. Here is part of the logs:


nfacctd[13959]: INFO ( default/core ):  '--prefix=/opt/pmacct-1.7.8' 
'--enable-geoipv2' '--enable-jansson' '--enable-zmq' '--enable-pgsql' 
'PKG_CONFIG_PATH=/usr/pgsql-14/lib/pkgconfig' '--enable-l2' 
'--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins' 
'--enable-st-bins'
nfacctd[13959]: INFO ( default/core ): Reading configuration file 
'/opt/pmacct-1.7.8/etc/nfacctd.conf'.
nfacctd[13959]: INFO ( default/core ): [/opt/pmacct/etc/sampling.map] 
(re)loading map.
nfacctd[13959]: INFO ( default/core ): [/opt/pmacct/etc/sampling.map] 
map successfully (re)loaded.
nfacctd[13959]: INFO ( default/core ): 
[/opt/pmacct/etc/agent_to_peer.map] (re)loading map.
nfacctd[13959]: INFO ( default/core ): 
[/opt/pmacct/etc/agent_to_peer.map] map successfully (re)loaded.
nfacctd[13959]: INFO ( default/core/lg ): Looking Glass listening on 
192.168.X.X:1791

nfacctd[13959]: INFO ( default/core/BGP ): maximum BGP peers allowed: 3
nfacctd[13959]: INFO ( default/core/BGP ): bgp_daemon_pipe_size: 
obtained=33554432 target=16777216.
nfacctd[13959]: INFO ( default/core/BGP ): waiting for BGP data on 
192.168.X.X:179

nfacctd[13959]: INFO ( default/core/BGP ): [10.X.X.X1] BGP peers usage: 3/3
nfacctd[13959]: INFO ( default/core/BGP ): [10.X.X.X1] Capability: 
MultiProtocol [1] AFI [1] SAFI [1]
nfacctd[13959]: INFO ( default/core/BGP ): [10.X.X.X1] Capability: 
MultiProtocol [1] AFI [2] SAFI [1]
nfacctd[13959]: INFO ( default/core/BGP ): [10.X.X.X1] Capability: 
4-bytes AS [65] ASN [XX]
nfacctd[13959]: INFO ( default/core/BGP ): [10.X.X.X1] Capability: 
ADD-PATHs [69] AFI [1] SAFI [1] SEND_RECEIVE [2]
nfacctd[13959]: INFO ( default/core/BGP ): [10.X.X.X1] Capability: 
ADD-PATHs [69] AFI [2] SAFI [1] SEND_RECEIVE [2]
nfacctd[13959]: INFO ( default/core/BGP ): [10.X.X.X1] BGP_OPEN: Local 
AS: XX Remote AS: XX HoldTime: 90
nfacctd[13959]: INFO ( nfacct_bgp_v4/pgsql ): cache entries=524288 base 
cache memory=214005504 bytes
nfacctd[13959]: INFO ( default/core ): [/opt/pmacct/etc/pretag.map] 
(re)loading map.
nfacctd[13959]: INFO ( nfacct_bgp_v6/pgsql ): cache entries=524288 base 
cache memory=214005504 bytes
nfacctd[13959]: INFO ( default/core ): [/opt/pmacct/etc/pretag.map] map 
successfully (re)loaded.
nfacctd[13959]: INFO ( default/core ): [/opt/pmacct/etc/pretag.map] 
(re)loading map.
nfacctd[13959]: INFO ( default/core ): [/opt/pmacct/etc/pretag.map] map 
successfully (re)loaded.
nfacctd[13959]: INFO ( default/core ): [/opt/pmacct/etc/pretag.map] 
(re)loading map.
nfacctd[13959]: INFO ( default/core ): [/opt/pmacct/etc/pretag.map] map 
successfully (re)loaded.
nfacctd[13959]: INFO ( default/core ): waiting for NetFlow/IPFIX data on 
:::9995
nfacctd[13959]: INFO ( default/core/BGP ): *** Dumping BGP tables - 
START (PID: 13991 RID: 1) ***
nfacctd[13959]: INFO ( default/core/BGP ): *** Dumping BGP tables - END 
(PID: 13991 RID: 1 TABLES: 3 ENTRIES: 3345336 ET: 53) ***


ADD-PATHs successfully negotiated and present in dump file:
$ sudo grep 'X.X.X.0/24' 
/opt/pmacct/var/nfacct-bgp-20230130-131675075080-10_X_X_X.json
{"seq": 0, "timestamp": "1675075080", "peer_ip_src": "10.X.X.X", 
"peer_tcp_port": 50573, "event_type": "dump", "afi": 1, "safi": 1, 
"ip_prefix": "X.X.X.0/24", "as_path_id": 1, "bgp_nexthop": "", 
"as_path": "", "comms": "", "origin": "i", "local_pref": 100, 
"med": 20}
{"seq": 0, "timestamp": "1675075080", "peer_ip_src": "10.X.X.X", 
"peer_tcp_port": 50573, "event_type": "dump", "afi": 1, "safi": 1, 
"ip_prefix": "X.X.X.0/24", "as_path_id": 3, "bgp_nexthop": "", 
"as_path": "", "comms": "", "origin": "i", "local_pref": 100, 
"med": 20}
{"seq": 0, "timestamp": "1675075080", "peer_ip_src": "10.X.X.X", 
"peer_tcp_port": 50573, "event_type": "dump", "afi": 1, "safi": 1, 
"ip_prefix": "X.X.X.0/24", "as_path_id": 5, "bgp_nexthop": "", 
"as_path": "", "comms": "", "origin": "i", "local_pref": 100, 
"med": 20}
{"seq": 0, "timestamp": "1675075080", "peer_ip_src": "10.X.X.X", 
"peer_tcp_port": 50573, "event_type": "dump", "afi": 1, "safi": 1, 
"ip_prefix": "X.X.X.0/24", "as_path_id": 4, "bgp_nexthop": "", 
"as_path": "", "comms": "", "origin": "i", "local_pref": 100, 
"med": 20}
{"seq": 0, "timestamp": "1675075080", "peer_ip_src": "10.X.X.X", 
"peer_tcp_port": 50573, "event_type": "dump", "afi": 1, "safi": 1, 

Re: [pmacct-discussion] I need help with pre_tag_map and aggregate_filter

2023-01-23 Thread Paolo Lucente


Hi Federico,

Sure, please let's switch to unicast email as I'd need more info and/or 
some example(s).


First and foremost, in summary, what appears not to be working 
well: the labelling of flows or the filtering of labels? From your last 
email I seem to understand that it's the latter case; are we in sync?


Paolo


On 17/1/23 11:47, Federico Urtizberea wrote:

Hi Paolo, thanks for your answer.
I followed your suggestions and the results are:

# Disable pre_tag_label_encode_as_map
I had to change the different pretag.map files to tag the flows 
correctly and none were captured. I then disabled the 
pre_tag_label_filter and with a kafka consumer, I filtered the labeled 
flows and I was able to see the properly labeled flows.


# enable pre_tag_label_encode_as_map
By disabling pre_tag_label_filter and using a kafka consumer, I filtered 
the labeled flows and I was able to see the correctly labeled flows.


The pretag.map files were changing over the days, to really only mark 
the searched traffic and have a clearer configuration.
If you need more accurate data, and a flow sample, I can send them by 
unicast email.

Regards,

Federico


On 16/1/23 16:27, Paolo Lucente wrote:


Hi Federico,

I see the combo pre_tag_label_filter / pre_tag_label_encode_as_map, 
can you please temporarily disable the latter 
(pre_tag_label_encode_as_map) and see if the filtering does work as 
expected? Should it not, can you also disable the filtering and check 
what you see? Are labels applied correctly?


Paolo


On 12/1/23 11:21, Federico Urtizberea wrote:
Hi everyone, after looking the previous configuration, I changed it a 
bit, but so far I still can't seeing the unknown traffic.


The actual configuration, is cleaner than previous one.

# /etc/pmacct/network.lst
192.168.0.0/24
192.168.1.0/24
172.16.0.0/23
172.16.2.0/24
172.16.250.0./24


# /etc/pmacct/pretag_in.map

set_label=client%wknwnnet1 dst_net=172.16.0.0/23  jeq=eval_type
set_label=client%wknwnnet1   dst_net=172.16.2.0/24 jeq=eval_type

set_label=client%wknwnnet2   dst_net=172.16.250.0/24 jeq=eval_type

set_label=type%mynet1   src_net=192.168.0.0/23 label=eval_type
set_label=type%mynet2   src_net=192.168.2.0/24 label=eval_type
set_label=type%tip  src_net=0.0.0.0/0   label=eval_type


# /etc/pmacct/pretag_out.map

set_label=client%wknwnnet1 src_net=172.16.0.0/23  jeq=eval_type
set_label=client%wknwnnet1   src_net=172.16.2.0/24 jeq=eval_type

set_label=client%wknwnnet2   src_net=172.16.250.0/24 jeq=eval_type

set_label=type%mynet1   dst_net=192.168.0.0/23 label=eval_type
set_label=type%mynet2   dst_net=192.168.2.0/24 label=eval_type
set_label=type%tip  dst_net=0.0.0.0/0   label=eval_type


# /etc/pmacct/pretag_unknown.map

dst_net=172.16.0.0/23
dst_net=172.16.2.0/24
dst_net=172.16.250.0/24

src_net=172.16.0.0/23
src_net=172.16.2.0/24
src_net=172.16.250.0/24
set_label=client%unknown    src_net=0.0.0.0/0   jeq=eval_type

set_label=type%mynet1 dst_net=192.168.0.0/23   label=eval_type
set_label=type%mynet2   dst_net=192.168.2.0/24 label=eval_type
set_label=type%unknown  dst_net=0.0.0.0/0  label=eval_type


#/etc/pmacct/sfacctd.conf

daemonize: false
debug: true
networks_file: /etc/pmacct/networks.lst
sfacctd_net: file
sfacctd_port: 8152
sfacctd_renormalize: true
sfacctd_time_new: true
plugin_buffer_size: 1024000
plugin_pipe_size: 1024
propagate_signals: true
timestamps_secs: true
pre_tag_label_encode_as_map: true

plugins: kafka[in],kafka[out],kafka[unknown]

kafka_topic[in]: input_traffic
kafka_output[in]: json
kafka_broker_host[in]: 10.0.0.1
kafka_broker_port[in]: 5094
kafka_refresh_time[in]: 180
kafka_history[in]: 3m
kafka_history_roundoff[in]: m
pre_tag_map[in]: /etc/pmacct/pretag_in.map
aggregate_filter[in]: vlan and (dst net 172.16.0.0/23 or dst net 
172.16.2.0/24 or dst net 172.16.250.0/24)

aggregate[in]: etype,label

kafka_topic[out]: output_traffic
kafka_output[out]: json
kafka_broker_host[out]: 10.0.0.1
kafka_broker_port[out]: 5094
kafka_refresh_time[out]: 180
kafka_history[out]: 3m
kafka_history_roundoff[out]: m
pre_tag_map[out]: /etc/pmacct/pretag_out.map
aggregate_filter[out]: vlan and (src net 172.16.0.0/23 or src net 
172.16.2.0/24 or src net 172.16.250.0/24)

aggregate[out]: etype,label

kafka_topic[unknown]: unknown_traffic
kafka_output[unknown]: json
kafka_broker_host[unknown]: 10.0.0.1
kafka_broker_port[unknown]: 5094
kafka_refresh_time[unknown]: 180
kafka_history[unknown]: 3m
kafka_history_roundoff[unknown]: m
pre_tag_map[unknown]: /etc/pmacct/pretag_unknown.map
pre_tag_label_filter[unknown]: -null
aggregate[unknown]: 
src_host,src_port,src_net,src_mask,dst_host,dst_port,dst_net,dst_mask,proto,etype,vlan,in_iface,out_iface,peer_src_ip,label


  Any advice?

Regards,


Federico


On 10/1/23 21:46, Federico Urtizberea wrote:
An errata, in the copy and paste process I made a mistake. My 
pretag.map file is:


/etc/pmacct/pretag.map

set_label=client%wknwnnet1   src_net=172.16.0.0/23 jeq=eval_out_type
set_label=client%wknwnnet1   src_net=172.16.2.0/24

Re: [pmacct-discussion] I need help with pre_tag_map and aggregate_filter

2023-01-16 Thread Paolo Lucente


Hi Federico,

I see the combo pre_tag_label_filter / pre_tag_label_encode_as_map, can 
you please temporarily disable the latter (pre_tag_label_encode_as_map) 
and see if the filtering does work as expected? Should it not, can you 
also disable the filtering and check what you see? Are labels applied 
correctly?


Paolo


On 12/1/23 11:21, Federico Urtizberea wrote:
Hi everyone, after looking the previous configuration, I changed it a 
bit, but so far I still can't seeing the unknown traffic.


The actual configuration, is cleaner than previous one.

# /etc/pmacct/network.lst
192.168.0.0/24
192.168.1.0/24
172.16.0.0/23
172.16.2.0/24
172.16.250.0./24


# /etc/pmacct/pretag_in.map

set_label=client%wknwnnet1 dst_net=172.16.0.0/23  jeq=eval_type
set_label=client%wknwnnet1   dst_net=172.16.2.0/24 jeq=eval_type

set_label=client%wknwnnet2   dst_net=172.16.250.0/24 jeq=eval_type

set_label=type%mynet1   src_net=192.168.0.0/23 label=eval_type
set_label=type%mynet2   src_net=192.168.2.0/24 label=eval_type
set_label=type%tip  src_net=0.0.0.0/0   label=eval_type


# /etc/pmacct/pretag_out.map

set_label=client%wknwnnet1 src_net=172.16.0.0/23  jeq=eval_type
set_label=client%wknwnnet1   src_net=172.16.2.0/24 jeq=eval_type

set_label=client%wknwnnet2   src_net=172.16.250.0/24 jeq=eval_type

set_label=type%mynet1   dst_net=192.168.0.0/23 label=eval_type
set_label=type%mynet2   dst_net=192.168.2.0/24 label=eval_type
set_label=type%tip  dst_net=0.0.0.0/0   label=eval_type


# /etc/pmacct/pretag_unknown.map

dst_net=172.16.0.0/23
dst_net=172.16.2.0/24
dst_net=172.16.250.0/24

src_net=172.16.0.0/23
src_net=172.16.2.0/24
src_net=172.16.250.0/24
set_label=client%unknown    src_net=0.0.0.0/0   jeq=eval_type

set_label=type%mynet1 dst_net=192.168.0.0/23   label=eval_type
set_label=type%mynet2   dst_net=192.168.2.0/24 label=eval_type
set_label=type%unknown  dst_net=0.0.0.0/0  label=eval_type


#/etc/pmacct/sfacctd.conf

daemonize: false
debug: true
networks_file: /etc/pmacct/networks.lst
sfacctd_net: file
sfacctd_port: 8152
sfacctd_renormalize: true
sfacctd_time_new: true
plugin_buffer_size: 1024000
plugin_pipe_size: 1024
propagate_signals: true
timestamps_secs: true
pre_tag_label_encode_as_map: true

plugins: kafka[in],kafka[out],kafka[unknown]

kafka_topic[in]: input_traffic
kafka_output[in]: json
kafka_broker_host[in]: 10.0.0.1
kafka_broker_port[in]: 5094
kafka_refresh_time[in]: 180
kafka_history[in]: 3m
kafka_history_roundoff[in]: m
pre_tag_map[in]: /etc/pmacct/pretag_in.map
aggregate_filter[in]: vlan and (dst net 172.16.0.0/23 or dst net 
172.16.2.0/24 or dst net 172.16.250.0/24)

aggregate[in]: etype,label

kafka_topic[out]: output_traffic
kafka_output[out]: json
kafka_broker_host[out]: 10.0.0.1
kafka_broker_port[out]: 5094
kafka_refresh_time[out]: 180
kafka_history[out]: 3m
kafka_history_roundoff[out]: m
pre_tag_map[out]: /etc/pmacct/pretag_out.map
aggregate_filter[out]: vlan and (src net 172.16.0.0/23 or src net 
172.16.2.0/24 or src net 172.16.250.0/24)

aggregate[out]: etype,label

kafka_topic[unknown]: unknown_traffic
kafka_output[unknown]: json
kafka_broker_host[unknown]: 10.0.0.1
kafka_broker_port[unknown]: 5094
kafka_refresh_time[unknown]: 180
kafka_history[unknown]: 3m
kafka_history_roundoff[unknown]: m
pre_tag_map[unknown]: /etc/pmacct/pretag_unknown.map
pre_tag_label_filter[unknown]: -null
aggregate[unknown]: 
src_host,src_port,src_net,src_mask,dst_host,dst_port,dst_net,dst_mask,proto,etype,vlan,in_iface,out_iface,peer_src_ip,label


  Any advice?

Regards,


Federico


On 10/1/23 21:46, Federico Urtizberea wrote:
An errata, in the copy and paste process I made a mistake. My 
pretag.map file is:


/etc/pmacct/pretag.map

set_label=client%wknwnnet1   src_net=172.16.0.0/23 jeq=eval_out_type
set_label=client%wknwnnet1   src_net=172.16.2.0/24 jeq=eval_out_type
set_label=client%wknwnnet2   src_net=172.16.250.0/24 jeq=eval_out_type
set_label=client%wknwnnet1   dst_net=172.16.0.0/23 jeq=eval_in_type
set_label=client%wknwnnet1   dst_net=172.16.2.0/24 jeq=eval_in_type
set_label=client%wknwnnet2   dst_net=172.16.250.0/24 jeq=eval_in_type

set_label=direction%output,type%mynet1   dst_net=192.168.0.0/23 
label=eval_out_type
set_label=direction%output,type%mynet2   dst_net=192.168.2.0/24 
label=eval_out_type

set_label=direction%output,type%tip dst_net=0.0.0.0/0 label=eval_out_type

set_label=direction%input,type%mynet1   src_net=192.168.0.0/23 
label=eval_in_type
set_label=direction%input,type%mynet2   src_net=192.168.2.0/24 
label=eval_in_type

set_label=direction%input,type%tip src_net=0.0.0.0/0 label=eval_in_type


Regards,


Federico

On 10/1/23 20:55, Federico Urtizberea wrote:

Hi to all, i need some suggestions to resolve this.
I have several well-known networks connected to my network, and i 
provide transit to them. I need to measure the traffic between them 
and my network, the ip transit traffic, and the unknown generated 
traffic. To achieve this, I have configured several SFLOW exporters.
Let's say that 

[pmacct-discussion] pmacct 1.7.8 released !

2022-12-31 Thread Paolo Lucente


VERSION.
1.7.8


DESCRIPTION.
pmacct is a small set of multi-purpose passive network monitoring tools. It
can account, classify, aggregate, replicate and export forwarding-plane data,
ie. IPv4 and IPv6 traffic; collect and correlate control-plane data via BGP
and BMP; collect and correlate RPKI data; collect infrastructure data via
Streaming Telemetry. Each component works both as a standalone daemon and
as a thread of execution for correlation purposes (ie. enrich NetFlow with
BGP data).

A pluggable architecture allows to store collected forwarding-plane data into
memory tables, RDBMS (MySQL, PostgreSQL, SQLite), noSQL databases (MongoDB,
BerkeleyDB), AMQP (RabbitMQ) and Kafka message exchanges and flat-files.
pmacct offers customizable historical data breakdown, data enrichments like
BGP and IGP correlation and GeoIP lookups, filtering, tagging and triggers.
Libpcap, Linux Netlink/NFLOG, sFlow v2/v4/v5, NetFlow v5/v8/v9 and IPFIX are
all supported as inputs for forwarding-plane data. Replication of incoming
NetFlow, IPFIX and sFlow datagrams is also available. Collected data can
be easily exported (ie. via Kafka) to modern databases like ElasticSearch,
Apache Druid and ClickHouse and (ie. via flat-files) to classic tools like
Cacti, RRDtool and MRTG, etc.

Control-plane and infrastructure data, collected via BGP, BMP and Streaming
Telemetry, can be all logged real-time or dumped at regular time intervals
to AMQP (RabbitMQ) and Kafka message exchanges and flat-files.


HOMEPAGE.
http://www.pmacct.net/


DOWNLOAD.
http://www.pmacct.net/pmacct-1.7.8.tar.gz


CHANGELOG.
+ Introduced support for eBPF for all daemons: if SO_REUSEPORT is
  supported by the OS and eBPF support is compiled in, this allows
  to load a custom load-balancer. To load-share, daemons have to
  be part of the same cluster_name and each be configured with a
  distinct cluster_id.
+ Introduced support for listening on VRF interfaces on Linux for
  all daemons. The feature can be enabled via nfacctd_interface,
  bgp_daemon_interface and equivalent knobs. Many thanks to
  Marcel Menzel ( @WRMSRwasTaken ) for this contribution.
+ pre_tag_map: introduced limited tagging / labelling support for
  BGP (pmbgpd), BMP (pmbmpd), Streaming Telemetry (pmtelemetryd)
  daemons. ip, set_tag, set_label keys being currently supported.
+ pre_tag_map: defined a new pre_tag_label_encode_as_map config
  knob to encode the output 'label' value as a map for JSON and
  Apache Avro encodings, ie. in JSON "label": { "key1": "value1",
  "key2": "value2" }. For keys and values to be correctly mapped,
  the '%' delimiter is used when composing a pre_tag_map, ie.
  "set_label=key1%value1,key2%value2 ip=0.0.0.0/0". Thanks to
  Salvatore Cuzzilla ( @scuzzilla ) for this contribution.
+ pre_tag_map: introduced support for IP prefixes for src_net
  and dst_net keys for indexed maps (maps_index set to true).
  Indexing being a hash map, this feature currently tests data
  against all defined IP prefix lengths in the map for a match
  (first defined matching prefix wins).
+ pre_tag_map: introduced two new 'is_nsel', 'is_nel' keys to
  check for the presence of firewallEvent field (233) and
  natEvent field (230) in NetFlow/IPFIX respectively in order
  to infer whether data is NSEL / NEL. If set to 'true' this
  does match NSEL / NEL data, if set to 'false' it does match
  non NSEL / NEL data respectively.
+ Introduced a new mpls_label_stack primitive, encoded as a
  string that includes a comma-separated list of integers (label
  values). Thanks to Salvatore Cuzzilla ( @scuzzilla ) for this
  contribution.
+ Introduced a new fw_event primitive, to support NetFlow v9/
  IPFIX firewallEvent 233 Information Element.
+ Introduced a new tunnel_tcp_flags primitive for pmacctd and
  sfacctd to record TCP flags for the inner layer of a tunneled
  technology (ie. VXLAN). Also tunnel_dst_port decoding was
  fixed for sfacctd. 
+ Introduced in/out VLAN support for sfacctd. Previously,
  'in_vlan' and 'vlan' were muxed onto the same primitive
  depending on the daemon being used. Thanks to Jim Westfall
  ( @jwestfall69 ) for this contribution. 
+ Introduced a new mpls_label_stack_encode_as_array config knob
  to encode the MPLS label stack as an array for JSON and Apache
  Avro encodings, ie. in JSON "mpls_label_stack": [ "0-label0",
  "1-label1", "2-label2", "3-label3", "4-label4", "5-label5" ]
  and in Avro "name": "mpls_label_stack", "type": { "type":
  "array", "items": { "type": "string" } }. Thanks to Salvatore
  Cuzzilla ( @scuzzilla ) for this contribution.
+ Introduced a new tcpflags_encode_as_array config knob to encode
  TCP flags as an array for JSON and Apache Avro, ie. in JSON
  "tcp_flags": [ "URG", "ACK", "PSH", "RST", "SYN", "FIN" ] and
  in Avro "name": "tcp_flags", "type": { "type": "array",
  "items": { "type": "string" } }. Thanks to Salvatore Cuzzilla
  ( @scuzzilla ) for this contribution.
+ Introduced a new fwd_status_encode_as_string config knob to

Re: [pmacct-discussion] Filter destination IP on lists with tens of thousands of entries?

2022-12-16 Thread Paolo Lucente


Hi Rich,

Indexed pre_tag_map could fit the bill (and if going down this route i 
do recommend performing a proof-of-concept using the latest & greatest 
code in master, since it has been a recent area of growth / improvement).


Example of how you could populate the pre_tag_map (IPv4-centric, but 
you can similarly throw IPv6 in the mix):


set_label=dns   dst_net=X.X.X.X/32
set_label=dns   dst_net=Y.Y.Y.Y/32
...
set_label=ntp   dst_net=J.J.J.J/32
set_label=ntp   dst_net=K.K.K.K/32
...
set_label=xyz   dst_net=W.W.W.W/32
set_label=xyz   dst_net=Z.Z.Z.Z/32

And so forth. It's that simple if there is a guarantee of no overlaps, 
ie. no host serving, say, both dns and ntp.


In case of overlaps you could make the workflow more complex (and 
computationally more expensive, as it's not going to be an O(1) hit 
anymore) with JEQs in the map, but i'd rather recommend you determine 
upfront - when composing the map - whether this is the case and make 
entries like, say, 'set_label=dns,ntp dst_net=<..>'. It is a classic 
case of what you prefer to make more complex / what to optimize - and 
the good answer most often should be to optimize what happens at runtime.


So you have a pre_tag_map composed, you can reload it at runtime for 
updates by sending a SIGUSR2 to the daemon, and you have labels issued 
for flows. Now onto the filtering part & putting ports in the mix:


pre_tag_map: /path/to/pretag.map
maps_index: true
maps_entries: 
!
plugins: print[dns], print[ntp], print[xyz]
!
pre_tag_label_filter[dns]: dns
aggregate_filter[dns]: dst port 53
!
pre_tag_label_filter[ntp]: ntp
aggregate_filter[ntp]: dst port 123
!
pre_tag_label_filter[xyz]: xyz
aggregate_filter[xyz]: dst port <..>

Then each plugin can have its own fate: output file / kafka topic, log 
interval, historical time bucketing, etc.
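
As a sketch of how the nightly refresh could be automated - the file 
names, list format and collector pidfile below are assumptions of mine, 
not pmacct specifics - something along these lines would rebuild the map 
from the downloaded lists and then trigger the SIGUSR2 reload:

```
#!/usr/bin/env python3
# Rebuild pretag.map from per-service IP lists, then ask the daemon to
# reload its maps via SIGUSR2. Assumes one IP or prefix per line per list;
# overlapping hosts should be merged upfront (ie. 'set_label=dns,ntp').
import os
import signal

LISTS = {                                   # label -> list file (hypothetical paths)
    "dns": "/var/lib/shadowserver/open-dns.txt",
    "ntp": "/var/lib/shadowserver/open-ntp.txt",
}
PRETAG_MAP = "/path/to/pretag.map"
PIDFILE = "/var/run/nfacctd.pid"            # wherever your collector writes its pid

def main():
    tmp = PRETAG_MAP + ".tmp"
    with open(tmp, "w") as out:
        for label, path in LISTS.items():
            with open(path) as f:
                for line in f:
                    ip = line.strip()
                    if not ip or ip.startswith("#"):
                        continue
                    net = ip if "/" in ip else ip + "/32"   # hosts become /32 entries
                    out.write(f"set_label={label}\tdst_net={net}\n")
    os.replace(tmp, PRETAG_MAP)             # atomic swap, daemon never sees a partial map
    with open(PIDFILE) as f:
        os.kill(int(f.read().strip()), signal.SIGUSR2)      # runtime map reload

if __name__ == "__main__":
    main()
```

Run it from cron right after the lists are downloaded each night.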


This use-case is a classic: i have helped others optimize it and make 
sure that maps with entries in the order of tens / hundreds of 
millions would just go through without problems (of course, in certain 
cases, given the right amount of memory).
by unicast email if you need further help with this!


Paolo


On 15/12/22 16:03, Compton, Rich A wrote:
Hi, I have a few (~20) lists of IPs provided by Shadowserver 
(https://www.shadowserver.org ) on a daily 
basis.  Some lists contain a few hundred IPs and some contain tens of 
thousands of IPs.  I want to have pmacct filter out netflow records that 
do not have a destination IP contained in these lists.


Example logic would be:

If the netflow record is destined to an IP in the open DNS server list 
and on UDP dst port 53


Then store netflow record

Else If the netflow record is destined to an IP in the open NTP server 
list and on UDP dst port 123


Then store netflow record

..additional lists...

Else drop netflow record

Is there a way to do this?  It seems like there would be too many 
entries for BPF.  Also, I want to dynamically update these lists every 
night.


Thanks!


Rich Compton    |     Principal Eng   |    314.596.2828

8560 Upland Drive,   Suite B  |  Englewood, CO 80112


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Ballpark pmacctd performance

2022-12-07 Thread Paolo Lucente


Hi Chander,

Sorry for not having been specific in my previous email, but i totally 
meant to use PF_RING for inherent speed-ups plus also leveraging the 
built-in sampling feature - since sampling is always best done as soon 
as possible in any data pipeline - but you figured that out yourself.


The above all said, one curiosity from my side: what is your use-case 
here? The reason i ask is because you mentioned Gbps speeds with such a 
low sampling rate, 1:20. Is it security, is this the reason for trying 
to detect micro-/pico-flows? 1:1k, for example, would be more typical 
for Gbps and 1:10k for 10 Gbps speeds.


Paolo


On 4/12/22 08:48, Chander Govindarajan wrote:

Hi Paolo,

Have been mucking around with PF_RING (v8.2) based on your pointer and 
this is what I am observing:


1. For the same setup as before (that took ~25% of a core), with 
PF_RING, I observed worse performance (~40%).
2. I found out that PF_RING has its own sampling knobs. There is an api 
to set sampling rate for each ring, but I couldn't access it (since  our 
interface is pmacctd -> modified libpcap -> PF_RING). Instead, I changed 
the sampling rate in the pf_ring kernel module directly (default values 
in the ring_create and ring_alloc_mem functions).
3. Using this, for a sampling rate of 20 in the pf_ring module (and 
setting the pmacctd sampling_rate to 1 to avoid double-sampling), I 
observe ~15% of core - so that is great.
4. Even better - when I go down to sampling rates of 200 and 2k, cpu 
consumption falls to 2% and 1% with this approach. In comparison, using 
the built-in pmacctd sampling (with no pf_ring) only got me to 15% (from 
the initial 25%) where it flattened out.


All in all, using pf_ring with in-built sampling seems to be giving 
amazing results.


1. Does all this sound reasonable, or am I doing something wrong?
2. It would be nice if the sampling rate in pmacctd would internally set 
the pf_ring sampling value. But, due to the libpcap interface (and this 
being a specific 3rd party software), I guess this would be out of scope?


Regards,
Chander

On 12/4/22 07:02, Paolo Lucente wrote:


Hi Chander,

I am unable to confirm your figure but i can say that you can give a 
try to PF_RING-enabled libpcap to see if it brings any advantage to 
the performance you are currently seeing.


Also, as explained here ( 
https://github.com/pmacct/pmacct/blob/50c6545275ba8fdf227a248af9982302f1ef25e0/QUICKSTART#L243-#L255 ) with PF_RING and a supported NIC you can scale performance horizontally by hashing flows over NIC queues and then binding pmacctd to a specific NIC queue.


Paolo


On 1/12/22 05:54, Chander Govindarajan wrote:

Hi,

Wanted to check what the expected performance of pmacctd is for the 
following config. Using pmacctd (v1.7.7 docker image with nfprobe 
plugin on an Ubuntu 22.04, linux kernel 5.15) with the key portion of 
the config as follows:


```
aggregate: src_host, dst_host, in_iface, out_iface, timestamp_start, 
timestamp_end, src_port, dst_port, proto, tos, tcpflags

timestamps_secs: true

plugin_pipe_size: 102400
nfprobe_timeouts: 
tcp=5:tcp.rst=5:tcp.fin=5:udp=5:icmp=5:general=5:maxlife=5


sampling_rate: 20
```

With iperf2 (-l 1440 -P 1 -b 1G) and around 100k packets per second, 
I am seeing CPU util of ~25-30% by pmacctd. Is this within expected 
behaviour?


Regards,
Chander

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Ballpark pmacctd performance

2022-12-03 Thread Paolo Lucente



Hi Chander,

I am unable to confirm your figure but i can say that you can give a try 
to PF_RING-enabled libpcap to see if it brings any advantage to the 
performance you are currently seeing.


Also, as explained here ( 
https://github.com/pmacct/pmacct/blob/50c6545275ba8fdf227a248af9982302f1ef25e0/QUICKSTART#L243-#L255 
) with PF_RING and a supported NIC you can scale performance 
horizontally by hashing flows over NIC queues and then binding pmacctd 
to a specific NIC queue.


Paolo


On 1/12/22 05:54, Chander Govindarajan wrote:

Hi,

Wanted to check what the expected performance of pmacctd is for the 
following config. Using pmacctd (v1.7.7 docker image with nfprobe plugin 
on an Ubuntu 22.04, linux kernel 5.15) with the key portion of the 
config as follows:


```
aggregate: src_host, dst_host, in_iface, out_iface, timestamp_start, 
timestamp_end, src_port, dst_port, proto, tos, tcpflags

timestamps_secs: true

plugin_pipe_size: 102400
nfprobe_timeouts: 
tcp=5:tcp.rst=5:tcp.fin=5:udp=5:icmp=5:general=5:maxlife=5


sampling_rate: 20
```

With iperf2 (-l 1440 -P 1 -b 1G) and around 100k packets per second, I 
am seeing CPU util of ~25-30% by pmacctd. Is this within expected 
behaviour?


Regards,
Chander

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] peer_ip_src 0.0.0.0 on IPv6 netflow

2022-11-13 Thread Paolo Lucente



Hi Federico,

This is indeed very strange since, unless your vendor is trying to 
specify the IP address of the exporter (and this is somehow failing) as 
part of the flows, the IP address is taken directly from the operating 
system socket.


The feature of using IE #130 (exporterIPv4Address) or #131 
(exporterIPv6Address), afaik, is mostly a J feature .. so, given your 
mentioning of your exporting platform, we may have a match there.


Any chance you can send me a small trace (in libpcap format) so to have 
a look in these IPFIX packets and their templates? In case you should 
not be familiar on how to produce one, please see here: 
https://github.com/pmacct/pmacct/blob/d8ea3ec9c7fd6ff679ec4be302324c563e563cd5/QUICKSTART#L3067-L3081


Paolo


On 8/11/22 12:16, Federico Urtizberea wrote:

Hello everyone. Thank you for taking a moment and reading these lines.
I am trying to differentiate the IP of the exporter of the Netflow 
packets; in this case the exporter is an MX104.
From what I understand, the field used by nfacctd for this is 
peer_ip_src. For IPv4 Netflow, peer_ip_src is filled in correctly, but 
for IPv6 only a few packets (i suspect the first ones) have peer_ip_src 
set to the IPv6 address of the exporter; for all other packets it is 
filled with 0.0.0.0.
Launching nfacctd in debug mode you can see that the Netflow packets 
arrive with the IPv6 address of the exporter, and sniffing the incoming 
interface of the nfacctd server searching for 0.0.0.0, nothing is found.

The nfacctd version that I'm using is nfacctd 1.7.7-git.
Thanks in advance,

Federico


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] kafka plugin and number of json messages vs number of netflow record

2022-10-25 Thread Paolo Lucente


Improved link: 
https://github.com/pmacct/pmacct/blob/1.7.8/QUICKSTART#L3065-#L3071


Paolo


On 25/10/22 10:24, Paolo Lucente wrote:


Hi Wilfrid,

Can you please check whether you are dropping any NetFlow packets: 
https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L3065-#L3071 .


Also, as i was saying in the previous email, to be sure no aggregation 
is taking place, we should look at the templates and compare them with 
the aggregation method you defined in pmacct: any chance you can share a 
sample (produced as i was suggesting but also a Wireshark screenshot 
would work OK, if it's easier for you) here or via unicast email?


Paolo


On 24/10/22 06:16, Grassot, Wilfrid wrote:

Hi Paolo,

Thanks for your feedback.
We are now filtering the template messages out of the netflow 
accounting, but the number of flows is still roughly 3 times the number 
of json messages.

To troubleshoot, we focused on one specific router, comparing the number 
of flows received for each (router ; ifindex) pair with the number of 
json messages for the same (router ; ifindex) pair.
And we have this kind of comparison:

exporter IP address  !  ifIndex  !  netflow_count  !  json_count
...                  !  464      !  91144          !  19491
...                  !  820      !  3900           !  919
...                  !  959      !  11219          !  1918
...                  !  756      !  280            !  59
...                  !  757      !  293            !  56

Obviously I am not asking you to troubleshoot here, but I would like 
confirmation again that we should expect the kafka plugin to translate 
each flow record matching a (router ; ifindex) pair into json and send 
it to kafka.

Thanks again

Wilfrid







-Original Message-
From: Paolo Lucente 
Sent: Friday, 21 October 2022 15:37
To: pmacct-discussion@pmacct.net; Grassot, Wilfrid 

Subject: Re: [pmacct-discussion] kafka plugin and number of json 
messages vs

number of netflow record


Hi Wilfrid,

To say whether some aggregation is taking place or not, you should 
look at

the template of the incoming NetFlow records. You can achieve this with
Wireshark / tshark or via pmacct, either running it in debug mode - 
you will
find the templates in the log file - or defining a 
nfacctd_templates_file.


In general, i would expect less JSON records output to Kafka than 
incoming
NetFlow records because of the templates - which are really service 
messages

to make the protocol work and hence do not make it to the database.

Paolo


On 21/10/22 09:13, Grassot, Wilfrid wrote:

Hi Paolo

We are  collecting netflow records of several routers interfaces.

Now we are testing the kafka plugin of nfacctd using json as format 
output.


kafka_topic[l3vpn]: pmacct_netflow

aggregate[l3vpn]: tcpflags, proto, src_host, src_port, dst_host,
dst_port, src_as, dst_as, peer_src_as, peer_dst_as, peer_src_ip,
peer_dst_ip, in_iface, src_net,

dst_net, tos, timestamp_start, timestamp_end

kafka_broker_host[l3vpn]:
kafka-node-1.interstellar.prv:9092,kafka-node-2.interstellar.prv:9092,
kafka-node-3.interstellar.prv:9092

kafka_output[l3vpn]: json

kafka_topic[l3vpn]: pmacct_netflow

Is this setup converting 1 for 1 a netflow record to a json message ?

I am asking because the backend engineers are noticing a lot of
difference between the number of netflow records received and the
number of the json messages kafka is receiving.

Is there a kind of aggregation done by kafka plugin that would reduce
the number of json messages sent to Kafka ?

Thank you in advance.

Wilfrid Grassot

**


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] kafka plugin and number of json messages vs number of netflow record

2022-10-25 Thread Paolo Lucente



Hi Wilfrid,

Can you please check whether you are dropping any NetFlow packets: 
https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L3065-#L3071 .


Also, as i was saying in the previous email, to be sure no aggregation 
is taking place, we should look at the templates and compare them with 
the aggregation method you defined in pmacct: any chance you can share a 
sample (produced as i was suggesting but also a Wireshark screenshot 
would work OK, if it's easier for you) here or via unicast email?


Paolo


On 24/10/22 06:16, Grassot, Wilfrid wrote:

Hi Paolo,

Thanks for your feedback.
We are now filtering the template messages out of the netflow accounting, 
but the number of flows is still roughly 3 times the number of json messages.

To troubleshoot, we focused on one specific router, comparing the number of 
flows received for each (router ; ifindex) pair with the number of json 
messages for the same (router ; ifindex) pair.
And we have this kind of comparison:

exporter IP address  !  ifIndex  !  netflow_count  !  json_count
...                  !  464      !  91144          !  19491
...                  !  820      !  3900           !  919
...                  !  959      !  11219          !  1918
...                  !  756      !  280            !  59
...                  !  757      !  293            !  56

Obviously I am not asking you to troubleshoot here, but I would like 
confirmation again that we should expect the kafka plugin to translate 
each flow record matching a (router ; ifindex) pair into json and send 
it to kafka.

Thanks again

Wilfrid







-Original Message-
From: Paolo Lucente 
Sent: Friday, 21 October 2022 15:37
To: pmacct-discussion@pmacct.net; Grassot, Wilfrid 
Subject: Re: [pmacct-discussion] kafka plugin and number of json messages vs
number of netflow record


Hi Wilfrid,

To say whether some aggregation is taking place or not, you should look at
the template of the incoming NetFlow records. You can achieve this with
Wireshark / tshark or via pmacct, either running it in debug mode - you will
find the templates in the log file - or defining a nfacctd_templates_file.

In general, i would expect less JSON records output to Kafka than incoming
NetFlow records because of the templates - which are really service messages
to make the protocol work and hence do not make it to the database.

Paolo


On 21/10/22 09:13, Grassot, Wilfrid wrote:

Hi Paolo

We are  collecting netflow records of several routers interfaces.

Now we are testing the kafka plugin of nfacctd using json as format output.

kafka_topic[l3vpn]: pmacct_netflow

aggregate[l3vpn]: tcpflags, proto, src_host, src_port, dst_host,
dst_port, src_as, dst_as, peer_src_as, peer_dst_as, peer_src_ip,
peer_dst_ip, in_iface, src_net,

dst_net, tos, timestamp_start, timestamp_end

kafka_broker_host[l3vpn]:
kafka-node-1.interstellar.prv:9092,kafka-node-2.interstellar.prv:9092,
kafka-node-3.interstellar.prv:9092

kafka_output[l3vpn]: json

kafka_topic[l3vpn]: pmacct_netflow

Is this setup converting 1 for 1 a netflow record to a json message ?

I am asking because the backend engineers are noticing a lot of
difference between the number of netflow records received and the
number of the json messages kafka is receiving.

Is there a kind of aggregation done by kafka plugin that would reduce
the number of json messages sent to Kafka ?

Thank you in advance.

Wilfrid Grassot

**


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] kafka plugin and number of json messages vs number of netflow record

2022-10-21 Thread Paolo Lucente


Hi Wilfrid,

To say whether some aggregation is taking place or not, you should look 
at the template of the incoming NetFlow records. You can achieve this 
with Wireshark / tshark or via pmacct, either running it in debug mode - 
you will find the templates in the log file - or defining a 
nfacctd_templates_file.
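
(For reference, a minimal sketch of both options on the collector side - 
the file paths are just placeholders:

debug: true
logfile: /var/log/pmacct/nfacctd.log
! or, to dump received templates to a dedicated file:
nfacctd_templates_file: /var/spool/pmacct/nfacctd_templates

and then compare the template fields against your 'aggregate' line.)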


In general, i would expect less JSON records output to Kafka than 
incoming NetFlow records because of the templates - which are really 
service messages to make the protocol work and hence do not make it to 
the database.


Paolo


On 21/10/22 09:13, Grassot, Wilfrid wrote:

Hi Paolo

We are  collecting netflow records of several routers interfaces.

Now we are testing the kafka plugin of nfacctd using json as format output.

kafka_topic[l3vpn]: pmacct_netflow

aggregate[l3vpn]: tcpflags, proto, src_host, src_port, dst_host, 
dst_port, src_as, dst_as, peer_src_as, peer_dst_as, peer_src_ip, 
peer_dst_ip, in_iface, src_net,


dst_net, tos, timestamp_start, timestamp_end

kafka_broker_host[l3vpn]: 
kafka-node-1.interstellar.prv:9092,kafka-node-2.interstellar.prv:9092,kafka-node-3.interstellar.prv:9092


kafka_output[l3vpn]: json

kafka_topic[l3vpn]: pmacct_netflow

Is this setup converting 1 for 1 a netflow record to a json message ?

I am asking because the backend engineers are noticing a lot of 
difference between the number of netflow records received and the number 
of the json messages kafka is receiving.


Is there a kind of aggregation done by kafka plugin that would reduce 
the number of json messages sent to Kafka ?


Thank you in advance.

Wilfrid Grassot

**


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Doubt about custom sql tables

2022-10-03 Thread Paolo Lucente


Hi Federico,

Thanks for getting in touch and bringing this up. More than a bug, you 
are running into an aspect of SQL tables that is poorly documented (i 
will try to improve that as a follow-up). The only vague mention of 
what you are running into is here:


https://github.com/pmacct/pmacct/blob/master/sql/README.mysql#L50

Essentially, before fixed schema v6 both IP addresses and ASNs were 
written in the same fields, that is ip_src and ip_dst. What the 
intersection of "sql_table_schema", "sql_optimize_clauses: true" and 
"sql_table_version: 9" does is to enable writing to custom schemas 
(sql_table_schema and sql_optimize_clauses) using the v9 style rather 
than the default, v1 (sql_table_version), hence using the as_src / 
as_dst fields for storing ASNs.
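
In other words, a custom-schema setup like the one in this thread should 
carry these knobs together (sketching it here from the config further 
down, nothing new):

sql_table[in]: asn_in_%Y%m%d
sql_table_schema[in]: /etc/pmacct/asn_in.schema
sql_table_version[in]: 9
sql_optimize_clauses[in]: true
aggregate[in]: dst_as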


Paolo


On 3/10/22 19:20, Federico Urtizberea wrote:

Hello everyone, and thanks for reading this.
I work at a small ISP, and am trying to use PMACCT to get some metrics 
from our network and get a better understanding of how our traffic is 
flowing.
Our network is quite simple: we do not provide transit, and all incoming 
and outgoing traffic is generated by our clients and is almost all IPv4 
(or that is what we think - one of the reasons to try to deploy PMACCT). 
So my first attempt was to compare the metrics collected with PMACCT to 
well-known data, such as the metrics collected by SNMP, by comparing the 
incoming and outgoing traffic from our ASN.

The collector is configured like this (nfacctd 1.7.7-git (RELEASE)):

daemonize: false
debug: true
nfacctd_port: 2100
nfacctd_pro_rating: true
nfacctd_renormalize: true
nfacctd_time_new: true
plugin_buffer_size: 102400
plugin_pipe_size: 8519680
propagate_signals: true
timestamps_secs: true

plugins: mysql[in],mysql[out]

aggregate[in]: dst_as
sql_db[in]: pmacct
sql_dont_try_update[in]: true
sql_history[in]: 1m
sql_history_roundoff[in]: m
sql_host[in]: 127.0.0.1
sql_multi_values[in]: 100
sql_optimize_clauses[in]: true
sql_passwd[in]: arealsmartpwd
sql_port[in]: 3306
sql_preprocess[in]: minp=1,adjb=30
sql_refresh_time[in]: 60
sql_table[in]: asn_in_%Y%m%d
sql_table_schema[in]: /etc/pmacct/asn_in.schema
sql_table_version[in]: 9
sql_user[in]: pmacct

aggregate[out]: src_as
sql_db[out]: pmacct
sql_dont_try_update[out]: true
sql_history[out]: 1m
sql_history_roundoff[out]: m
sql_host[out]: 127.0.0.1
sql_multi_values[out]: 100
sql_optimize_clauses[out]: true
sql_passwd[out]: arealsmartpwd
sql_port[out]: 3306
sql_preprocess[out]: minp=1,adjb=30
sql_refresh_time[out]: 60
sql_table[out]: asn_out_%Y%m%d
sql_table_schema[out]: /etc/pmacct/asn_out.schema
sql_table_version[out]: 9
sql_user[out]: pmacct

The custom schema for the sql tables are:

* /etc/pmacct/asn_in.schema

CREATE TABLE asn_in_%Y%m%d (
   `as_dst` int(4) unsigned NOT NULL,
   `packets` int(10) unsigned NOT NULL,
   `bytes` bigint(20) unsigned NOT NULL,
   `stamp_inserted` datetime NOT NULL,
   `stamp_updated` datetime DEFAULT NULL,
   PRIMARY KEY (`stamp_inserted`,`stamp_updated`,`as_dst`),
   INDEX a (as_dst)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

* /etc/pmacct/asn_out.schema

CREATE TABLE asn_out_%Y%m%d (
   `as_src` int(4) unsigned NOT NULL,
   `packets` int(10) unsigned NOT NULL,
   `bytes` bigint(20) unsigned NOT NULL,
   `stamp_inserted` datetime NOT NULL,
   `stamp_updated` datetime DEFAULT NULL,
   PRIMARY KEY (`stamp_inserted`,`stamp_updated`,`as_src`),
   INDEX a (as_src)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

So far in the PMACCT documentation and the threads I read, to use custom 
sql tables, the only settings I understood needed to be set in the 
PMACCT config were 
(https://github.com/pmacct/pmacct/blob/41f7ef4d1e156873361ebd772ccb07ed7efd0238/QUICKSTART#L341):


sql_optimize_clauses: true
sql_table: 
aggregate: 

But if I just do that, and use the sql schemas detailed above, I get the 
following error:


INFO ( in/mysql ): *** Purging cache - START (PID: 84) ***

INFO ( out/mysql ): *** Purging cache - START (PID: 85) ***

DEBUG ( in/mysql ): 5071 VALUES statements sent to the MySQL server.

ERROR ( in/mysql ): Unknown column 'ip_dst' in 'field list'


INFO ( in/mysql ): *** Purging cache - END (PID: 84, QN: 5070/5071, ET: 
0) ***


DEBUG ( out/mysql ): 5199 VALUES statements sent to the MySQL server.

ERROR ( out/mysql ): Unknown column 'ip_src' in 'field list'


INFO ( out/mysql ): *** Purging cache - END (PID: 85, QN: 5198/5199, ET: 
0) ***


Because of that, i need to use one of these two directives to avoid this 
error: sql_table_version with version 9 (the only one I have tested) or 
sql_table_type with table type bgp.


Is it a bug, or have i missed something in the docs and need to configure 
one of these directives to make it work?


Regards,


Federico


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] icmp6 netflow 9 not including type & code sometimes

2022-10-03 Thread Paolo Lucente


Hi,

Best would be for me to be able to reproduce the issue; can you make a 
brief capture in pcap format (ie. with tcpdump) of some of this icmp6 
traffic and send it over via unicast email?


If you could even compose two traces, one for the interface that is 
working, one for the one that is not working that would be awesome.


Paolo


On 27/9/22 01:27, fireballiso wrote:

More information:

pmacctd -V
Promiscuous Mode Accounting Daemon, pmacctd 1.7.9-git [RELEASE]

Arguments:
  '--enable-l2' '--enable-traffic-bins' '--enable-bgp-bins' 
'--enable-bmp-bins' '--enable-st-bins'


Libs:
cdada 0.4.0
libpcap version 1.10.1 (with TPACKET_V3)

Plugins:
memory
print
nfprobe
sfprobe
tee

System:
Linux 5.19.9-200.fc36.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Sep 15 09:49:52 
UTC 2022 x86_64


Compiler:
gcc 12.2.1

===

Config file (sending netflow to IPv6 loopback interface for capture with 
nfcapd):


!
daemonize: true
!
pcap_interface: eth0
aggregate: src_host, dst_host, src_port, dst_port, proto, tcpflags, tos
plugins: nfprobe
nfprobe_receiver: [::1]:9995

nfprobe_version: 9

=

Still, the netflow captured with the config above doesn't have the icmp6 
type and code values set correctly; they are always zero.



On 9/25/2022 10:21 PM, fireballiso wrote:
Hi! I use pmacctd to generate netflow 9 for two interfaces on a 
physical (not virtual) Linux machine. The flows from one interface 
show icmp and icmp6 protocols with the type and code as expected in 
the dst_port, and the other interface only shows icmp type and code 
correctly; the icmp6 type and code are always 0, regardless of the 
true values.


Another machine (a VMWare virtual machine, running on ESXi 7) 
generates netflow 9 for an interface that only has IPv6 addresses; 
this also shows the icmp6 type and code as always 0.


The interfaces on both machines have identical pmacctd configurations 
(except for the interface names), and the pmacctd versions are 
identical (cloned from github).


What would cause the icmp6 type and code to not be set correctly for 
two interfaces, but correctly for another one?


-Indy



___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] nfacct accounts traffic twice

2022-07-08 Thread Paolo Lucente
ully created: 32771 entries.

INFO ( default/core ): waiting for NetFlow/IPFIX data on :::5678

Best,
Michael

Am 04.07.2022 um 21:29 schrieb Paolo Lucente:


Hi Michael,

Welcome back! :-) What version of pmacct are you using? I see you 
daemonize but there is no logfile specified: did you check the log on 
startup to make sure that the filter in 'aggregate_filter' is being 
accepted and loaded?


Your understanding of how 'aggregate_filter' should work, ie. filtering 
out 1.2.3.4 if it's not specified among the networks listed in 
the filter, is right.


Paolo


On 1/7/22 16:59, Muenz, Michael wrote:

Hi,

after over 15 years I'm back using pmacct for an open source 
accounting project.
I'm using OPNsense to ingest Netflow v5 traffic into pmacct with 
MySQL backend.

I'm interested only in specific networks, so I'm doing it like this:

daemonize: true
debug: false

nfacctd_port: 5678
nfacctd_time_new: true
plugins: mysql[inbound],mysql[outbound]

aggregate[inbound]: tag,dst_host
aggregate[outbound]: tag,src_host

aggregate_filter[inbound]: (dst net 46.16.78.247/32 ...)
aggregate_filter[outbound]: (src net 46.16.78.247/32 ...)

The different networks in the aggregate filter are different customers.
Now my idea was to add a pretagging so that when a packet matches 
filter X it gets tag Y:


! 1101 = OPNREPO
id=1101 ip=81.33.44.75 filter='host 46.16.78.247'

Now every flow from 81.33.44.75 with traffic going from/to 
46.16.78.247 gets tag 1101.

After this I can select * from X where 1101 and sum up.

My problem is that aggregate_filter will also aggregate the source 
of the other side.
Let's say I transfer a 1 GB file from 1.2.3.4 to 46.16.78.247; I have 4 
records:


src 0.0.0.0, dst 46.16.78.247

src 0.0.0.0, dst 1.2.3.4

src 46.16.78.247, dst 0.0.0.0

src 1.2.3.4, dst 0.0.0.0

I thought that with aggregate_filter the lines with 1.2.3.4 won't get 
into the db, but maybe I'm wrong?


Any ideas?

Thanks!
Michael


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists





___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] nfacct accounts traffic twice

2022-07-04 Thread Paolo Lucente



Hi Michael,

Welcome back! :-) What version of pmacct are you using? I see you 
daemonize but there is no logfile specified: did you check the log on 
startup to make sure that the filter in 'aggregate_filter' is being 
accepted and loaded?


Your understanding of how 'aggregate_filter' should work, ie. filtering 
out 1.2.3.4 if it's not specified among the networks listed in the 
filter, is right.


Paolo


On 1/7/22 16:59, Muenz, Michael wrote:

Hi,

after over 15 years I'm back using pmacct for an open source accounting 
project.
I'm using OPNsense to ingest Netflow v5 traffic into pmacct with MySQL 
backend.

I'm interested only in specific networks, so I'm doing it like this:

daemonize: true
debug: false

nfacctd_port: 5678
nfacctd_time_new: true
plugins: mysql[inbound],mysql[outbound]

aggregate[inbound]: tag,dst_host
aggregate[outbound]: tag,src_host

aggregate_filter[inbound]: (dst net 46.16.78.247/32 ...)
aggregate_filter[outbound]: (src net 46.16.78.247/32 ...)

The different networks in the aggregate filter are different customers.
Now my idea was to add a pretagging so that when a packet matches 
filter X it gets tag Y:


! 1101 = OPNREPO
id=1101 ip=81.33.44.75 filter='host 46.16.78.247'

Now every flow from 81.33.44.75 with traffic going from/to 46.16.78.247 
gets tag 1101.

After this I can select * from X where 1101 and sum up.

My problem is that aggregate_filter will also aggregate the source of 
the other side.
Let's say I transfer a 1 GB file from 1.2.3.4 to 46.16.78.247; I have 4 
records:


src 0.0.0.0, dst 46.16.78.247

src 0.0.0.0, dst 1.2.3.4

src 46.16.78.247, dst 0.0.0.0

src 1.2.3.4, dst 0.0.0.0

I thought that with aggregate_filter the lines with 1.2.3.4 won't get 
into the db, but maybe I'm wrong?


Any ideas?

Thanks!
Michael


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] [docker-doctors] pmacctd in docker

2022-05-10 Thread Paolo Lucente


Hi Thomas,

I think some confusion may be deriving from the docs (to be improved) and 
the fact that 1.7.6 is old.


Nevertheless, from the interface indexes from your last output (ie. 
1872541466, 3698069186, etc.) i can tell that you did configure 
pcap_ifindex to 'hash' (being honored as you can see) in conjunction 
with pcap_interfaces_map.


One issue in the code is for sure the fact that an ifindex was always 
required to be defined, even if pcap_ifindex is not set to 'map'. Another 
issue was the silent discarding of pcap_interfaces_map without notifying 
you with a warning. Both of these issues have been addressed in this 
commit that i just pushed:


https://github.com/pmacct/pmacct/commit/02080179aef3e87527e4d1158700eee729f1a5c3

Paolo


On 9/5/22 14:31, Thomas Eckert wrote:

Hi Paolo,

Thanks for the hint, I gave it a try. I'm observing the exact same 
behavior between running pmacct in a container & directly on my host in 
all cases. Tested with

* official docker image: 281904b7afd6
* official ubuntu 21.10 package: pmacct/impish,now 1.7.6-2 amd64

I *think* the problem is with the interfaces' ifindex parameter when 
using the pcap_interfaces_map config key - everything works fine 
(capture files are printed) when instead using the pcap_interface key. 
Whenever I do not specify the 'ifindex' in the file specified as value 
for the pcap_interfaces_map config key, I do not observe capture files 
being printed. Vice versa, if I do specify the 'ifindex' parameter, then 
capture files are printed.


In fact, if I do specify 'ifindex' for all interfaces listed when I run 
"netstat -i", then pmacctd throws errors for my br-* & enx interfaces - 
which it does not do when I omit 'ifindex' - almost as if it only then 
realizes that it is supposed to access those interfaces at all. This 
assumption is also based on the fact that I do see log lines such as these
     INFO ( default/core ): Reading configuration file 
'/etc/pmacct/pmacctd.conf'.

     INFO ( default/core ): [/etc/pmacct/pcap-itf.conf] (re)loading map.
     INFO ( default/core ): [/etc/pmacct/pcap-itf.conf] map successfully 
(re)loaded.

     INFO ( default/core ): [docker0,1872541466] link type is: 1      <=
     INFO ( default/core ): [eno2,3698069186] link type is: 1           <=
     INFO ( default/core ): [lo,2529615826] link type is: 1             <=
     INFO ( default/core ): [tun0,3990258693] link type is: 12          <=
when specifying 'ifindex', whereas the marked (<=) lines are missing 
whenever I do not.


Reading through the config key documentation some more, I found the 
config key pcap_ifindex. Interestingly enough, using it does not yield 
any difference in results - neither for value "sys" nor for value "hash" 
- irrespective of all other settings I played around with.


Assuming in pmacctd.conf the config key pcap_interfaces_map is used, 
then this is what I speculate is effectively happening:

* pmacctd ignores config key pcap_ifindex
* instead, it expects 'ifindex' to be set in the interface mapping file 
for each line

* each line where 'ifindex' is not set is ignored
* if 'ifindex' is missing on all lines, this results in a 
"no-interface-being-listened-on" case without any warning/error
Summary: it seems like 'ifindex' is effectively a mandatory parameter in 
the interface mapping file, whereas the documentation says "pmacctd: 
mandatory keys: ifname."
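
For completeness, the mapping file that does produce capture files for me 
looks like the below (ifindex values arbitrary but unique, interface names 
as per my host - so treat it as an illustration, not a reference):

ifindex=100  ifname=eno2
ifindex=200  ifname=tun0
ifindex=300  ifname=docker0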


My understanding of the documentation for above-mentioned config keys is 
that the behavior I'm observing is not as intended (e.g. 'ifindex' 
effectively being required, pcap_ifindex effectively being ignored) . So 
I'm either making a mistake, e.g. in my config files, misunderstanding 
the documentation or I'm encountering a bug - which I find difficult to 
believe given how trivial my setup is.


Any Suggestions ?

Regards & Thanks,
   Thomas

On Sun, May 8, 2022 at 1:43 PM Paolo Lucente <pa...@pmacct.net> wrote:



Hi Thomas,

The simplest thing i may recommend is to check it all working outside a
container - this way you can easily isolate whether the issue is
somehow
related to the container (config or interaction of pmacctd with the
container) or with the pmacct config itself.

Paolo


On 6/5/22 06:05, Thomas Eckert wrote:
 > Hi everyone,
 >
 > pmacct starter here, trying to get pmacctd working inside of a
container
 > to listen to the (container's) host's traffic. I suppose this is
a, if
 > not the, standard use case for pmacctd in a container. So I'm
sure it
 > works in principle but I'm doing something wrong.
 >
 > Command for starting the container:
 >      docker run \
 >          --privileged --network=host \
 >          --name pmacctd \
 >          -v /tmp/pmacctd.conf:/etc/pmacct/pmacctd.conf:ro \
 >          -v /tmp/pcap-itf.conf:/etc/pmacct/pcap-

Re: [pmacct-discussion] pmacct.net

2022-05-08 Thread Paolo Lucente



A quick note to thank you Karl for your always good inputs; let me read 
through and see what actions i can take.


Paolo


On 4/5/22 13:32, Karl O. Pinc wrote:

Hi Paolo,

On Wed, 4 May 2022 01:25:23 -0300
Paolo Lucente  wrote:


Somehow i can't reproduce the problem, both pmacct.net and
www.pmacct.net do actually work for me no problem (http of course,
ie. not https, well no https is advertised out nor does it work).

Can you please qualify the issue better (here or by unicast email).


I'm using Mozilla Firefox 91.8.0esr on Debian bullseye (v11.3).

Some browsers of late (I think firefox, at least
in private windows, and maybe other browsers) use
https by default.   So, it's a https problem.  Just typing
"pmacct.net" resulted in a "can't connect" type of message.

https://blog.mozilla.org/security/2021/08/10/firefox-91-introduces-https-by-default-in-private-browsing/

According to the above, this should not be a problem.
But my ISP sucks; dns resolution is slow.  So there's
probably a race condition.  One that does not affect
very many people.



Should you decide to use HTTPS, here's my certbot command
(for the debian certbot package which uses the free letsencrypt.org
service):

certbot certonly --webroot \
  --webroot-path /var/www/webserver \
  --domains foo.example.com,bar.example.com,... \
  --renew-with-new-domains

I prefer not to let certbot frob the webserver configs.  So you'll
then need to add the cert files found in
/etc/letsencrypt/live// to the TLS configs for your
webserver.  (See /etc/letsencrypt/live/README.)

The debian certbot package comes with a systemd timer to renew
the certs.  (And a systemd service.)  They probably come
enabled out of the box but check with "systemctl status ...".

As an FYI, the way certbot issuance/renewal works is that first a cookie
is obtained and dropped into the http document root.  When the
letsencrypt server verifies the cookie using http, it knows that you
run the website and then issues you a cert.

See also:

https://blog.chromium.org/2021/03/a-safer-default-for-navigation-https.html

FWIW, using HTTPS is supposed to get you a better google ranking.

Regards,

Karl 
Free Software:  "You don't pay back, you pay forward."
  -- Robert A. Heinlein

P.S.  If you want to use the certbot web cert to secure your
SMTP traffic I have a hook I can send you that works with postfix.
You'll have to frob it to get the certs onto your secondary MX.


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] [docker-doctors] pmacctd in docker

2022-05-08 Thread Paolo Lucente


Hi Thomas,

The simplest thing i may recommend is to check it all working outside a 
container - this way you can easily isolate whether the issue is somehow 
related to the container (config or interaction of pmacctd with the 
container) or with the pmacct config itself.


Paolo


On 6/5/22 06:05, Thomas Eckert wrote:

Hi everyone,

pmacct starter here, trying to get pmacctd working inside of a container 
to listen to the (container's) host's traffic. I suppose this is a, if 
not the, standard use case for pmacctd in a container. So I'm sure it 
works in principle but I'm doing something wrong.


Command for starting the container:
     docker run \
         --privileged --network=host \
         --name pmacctd \
         -v /tmp/pmacctd.conf:/etc/pmacct/pmacctd.conf:ro \
         -v /tmp/pcap-itf.conf:/etc/pmacct/pcap-itf.conf:ro \
         -v /tmp//captures:/var/pmacct/captures:rw pmacctd-debug \
         pmacct/pmacctd:latest

Contents of pmacctd.conf:
     daemonize: false
     snaplen: 1000
     pcap_interfaces_map: /etc/pmacct/pcap-itf.conf
     aggregate: src_host, dst_host, src_port, dst_port, proto, class
     plugins: print
     print_output: json
     print_output_file: /var/pmacct/captures/capture-%Y%m%d_%H%M.txt
     print_output_file_append: true
     print_history: 1m
     print_history_roundoff: m
     print_refresh_time: 5

pcap-itf.conf contains all interfaces of the host (as per netstat -i) in 
the form

     ifname=eno2
One line each, no other keys/values other than ifname.
Possibly important note: There's a VPN (openconnect) constantly running 
on the host. The VPN's interface is listed in netstat -i and, as such, 
included in pcap-itf.conf.


Starting the container yields this output:
     INFO ( default/core ): Promiscuous Mode Accounting Daemon, pmacctd 
1.7.7-git (20211107-0 (ef37a415))
     INFO ( default/core ):  '--enable-mysql' '--enable-pgsql' 
'--enable-sqlite3' '--enable-kafka' '--enable-geoipv2' 
'--enable-jansson' '--enable-rabbitmq' '--enable-nflog' '--enable-ndpi' 
'--enable-zmq' '--enable-avro' '--enable-serdes' '--enable-redis' 
'--enable-gnutls' 'AVRO_CFLAGS=-I/usr/local/avro/include' 
'AVRO_LIBS=-L/usr/local/avro/lib -lavro' '--enable-l2' 
'--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins' 
'--enable-st-bins'
     INFO ( default/core ): Reading configuration file 
'/etc/pmacct/pmacctd.conf'.

     INFO ( default/core ): [/etc/pmacct/pcap-itf.conf] (re)loading map.
     INFO ( default/core ): [/etc/pmacct/pcap-itf.conf] map successfully 
(re)loaded.
     INFO ( default_print/print ): cache entries=16411 base cache 
memory=67875896 bytes

     INFO ( default_print/print ): JSON: setting object handlers.
     INFO ( default_print/print ): *** Purging cache - START (PID: 7) ***
     INFO ( default_print/print ): *** Purging cache - END (PID: 7, QN: 
0/0, ET: X) ***


Now, the problem is there are no files showing up in the 'captures' 
directory at all.


I tried these things  (as well as combinations thereof) to try to 
understand what's going on:
* change the time related settings in pmacct.conf: to dump data 
more/less often - also waited (increasingly) long, at times up to 20 minutes
* change 'snaplen' in pmacct.conf up & down - just to make sure I'm not 
running into buffering problems (just guessing, haven't read pmacct/d 
sources)
* change pcap-itf.conf to contain all interfaces or only the (host's) 
LAN + VPN interfaces (removing all others like docker's internal 'docker0')
* check permission settings of the 'captures' directory - this should be 
fine because a simple "touch /var/pmacct/captures/foobar" works and the 
file does exist as observed in the directory on the host itself
* run the container _not_ in host-sniffing mode, so just inside its own 
network-bubble, then cause traffic against it and observe it writing 
data to the 'captures' directory - works!


Because I started to doubt my own sanity I asked one of our Docker/K8S 
experts to check my docker setup and he found no problem looking over 
it, including via "docker inspect pmacct". So I'm fairly sure my mistake 
is somewhere in the configuration of pmacctd but I cannot figure out 
what it is. Would someone please point it out to me?


Regards & Thanks,
   Thomas

PS: It's been almost 10 years since I've posted to a mailing list. 
Please forgive any conventions/best-practices missteps.



___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] TimescaleDB

2022-05-03 Thread Paolo Lucente



Hi John,

Yes, i can confirm that writing directly from pmacct into a TimescaleDB, 
you can do it using the 'pgsql' plugin. Should you run into troubles 
(which you should not!) please let me know.
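
For the avoidance of doubt, a minimal sketch of such a config (connection 
details are placeholders; any TimescaleDB-specific setup, ie. turning the 
table into a hypertable, is done on the Postgres side, not in pmacct):

plugins: pgsql
sql_db: pmacct
sql_host: 127.0.0.1
sql_user: pmacct
sql_passwd: arealsmartpwd
sql_refresh_time: 60
sql_history: 5m
sql_history_roundoff: m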


Paolo



On 3/5/22 17:33, John Jensen wrote:

Hi all,

Has anyone successfully used TimescaleDB as a backend for pmacctd? I 
understand that TimescaleDB is essentially just a Postgres extension - 
does this mean that nothing really "changes" in terms of configuring 
pmacct to insert into a traditional SQL database (Postgres)?


Thanks in advance!

-JJ

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] pmacct.net

2022-05-03 Thread Paolo Lucente



Hi Karl,

Always great to read from you & thanks for your note.

Somehow i can't reproduce the problem: both pmacct.net and 
www.pmacct.net do actually work for me with no problem (http of course, 
ie. not https; no https is advertised out, nor does it work).


Can you please qualify the issue better (here or by unicast email).

Thanks,
Paolo



On 3/5/22 16:28, Karl O. Pinc wrote:

FYI.

I notice that "pmacct.net" in my browser's URL bar does
not redirect to "www.pmacct.net".  I get
"unable to connect".

Regards,

Karl 
Free Software:  "You don't pay back, you pay forward."
  -- Robert A. Heinlein

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Easiest way to ingest nfacctd data into python?

2022-05-03 Thread Paolo Lucente


Hi Rich,

While i don't have actual examples, and while supporting the answers you 
already received, let me propose the following architectural tips:


* Write stuff into files with the 'print' plugin; using 
print_latest_file to point always to the latest finalized file and 
print_trigger_exec to execute a 3rd party (post-processing) script on 
the file just finalized;


* Whatever language the post-processing is written in, you know that 
you are being invoked by print_trigger_exec because print_latest_file is 
now available. So, without much env variable sophistication, you know 
which file to read and process with your Python script;


* Should you want to scale up things further you could use the kafka 
plugin, for example, instead of print / files; with a Python script 
consuming from the topic where pmacct is producing. Such a setup would 
allow you to scale things out easily with kafka topic partitions and 
consumer threads / processes.
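
To make the print + print_trigger_exec option concrete, here is a minimal 
sketch; plugin name, paths and the aggregate list are illustrative, and it 
assumes print_output set to json so the finalized file is newline-delimited 
JSON:

! nfacctd.conf fragment
plugins: print[pyexp]
aggregate[pyexp]: src_host, dst_host, proto
print_output[pyexp]: json
print_refresh_time[pyexp]: 60
print_history[pyexp]: 1m
print_history_roundoff[pyexp]: m
print_output_file[pyexp]: /var/lib/pmacct/flows-%Y%m%d-%H%M.json
print_latest_file[pyexp]: /var/lib/pmacct/flows-latest
print_trigger_exec[pyexp]: /usr/local/bin/process_flows.py

And the (hypothetical) /usr/local/bin/process_flows.py:

#!/usr/bin/env python3
# process_flows.py -- run by print_trigger_exec each time a file is
# finalized; it reads the file pointed to by print_latest_file (one JSON
# record per line) and does something with each record.
import json

LATEST = "/var/lib/pmacct/flows-latest"  # must match print_latest_file above

def main():
    with open(LATEST) as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            rec = json.loads(line)
            # replace this with real processing
            print(rec.get("ip_src"), rec.get("ip_dst"), rec.get("bytes"))

if __name__ == "__main__":
    main()

For the Kafka variant, the same script body would consume from the topic 
with any Kafka client library instead of reading a file.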


Paolo


On 3/5/22 15:19, Compton, Rich A wrote:
Hi, I’m trying to take the netflow records from nfacctd and process them 
with a python script.  Can someone suggest how I can do this with python 
without having nfacctd put them into a database and then have my python 
script read it?  Is using kafka the best way?  The netflow collection 
and python script will be on the same instance.  Any example code would 
be very helpful!


Thanks!

signature_2304850901

Rich Compton    |     Principal Eng   |    314.596.2828

8560 Upland Drive,   Suite B  |  Englewood, CO 80112

PGP Key 



The contents of this e-mail message and
any attachments are intended solely for the
addressee(s) and may contain confidential
and/or legally privileged information. If you
are not the intended recipient of this message
or if this message has been addressed to you
in error, please immediately alert the sender
by reply e-mail and then delete this message
and any attachments. If you are not the
intended recipient, you are notified that
any use, dissemination, distribution, copying,
or storage of this message or any attachment
is strictly prohibited.

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




Re: [pmacct-discussion] Contributing to the project

2022-04-22 Thread Paolo Lucente



Hi Suphannee,

Thanks for this message and for your kind words about the project. The 
best way to contribute back - making sure to trace every line of code 
back to you / your company - is to do a Pull Request on GitHub, one per 
logical feature. I look forward to reviewing your code; thanks in advance 
for sharing your contribution with the rest of the community, much 
appreciated.


Paolo


On 22/4/22 11:47, Suphannee Sivakorn (BLOOMBERG/ BANGKOK 5) wrote:

Hi everyone,
I have been using pmacct for parsing netflow for a few years now. It has 
been very good so far (Thank you!). I have a few patches/features that I 
created to assist with our use cases/findings. To name a few:


- Enrichment with maxmind v2 Geoip ASN
- Fixing some unparsable option templates
- Fixing no end-time timestamp on some IPFIX flows

Wonder what is the best way to contribute these back to the main project 
and if there is someone who could help review them. Thank you.


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




Re: [pmacct-discussion] set_label=enp1s0_in filter='inbound' doesn't work, while it works against ppp0

2022-04-21 Thread Paolo Lucente


Hi Ruben,

Indeed, very strange. And i could easily reproduce the issue on a Linux 
VM. To be frank i was not even aware of the existence of such an inbound 
vs outbound knob, very convenient indeed. I suspect this is something new 
that good old bpf_filter() - which accepts filtering instructions, 
packet pointer, total length and capture length only - can't help with. 
I see, for example, in newer libpcap versions there is a new function 
called bpf_filter_with_aux_data(); i may be wrong, i didn't go super 
deep in the examination, but i would not be surprised if one needed to 
implement that in order to make these inbound / outbound knobs work.


Were you to ask me how i would do it on an ethernet link? Probably i would 
resort to the known MAC address of your enp1s0 interface: what is 
destined to it is inbound, what originates from it is outbound. Old 
school, and probably needing some good thinking in order to deploy it at 
scale, but probably working OK in a home environment.
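
A rough sketch of that old-school approach, staying with plain pcap 
'ether' primitives in the pretag map (the MAC address below is a 
placeholder for the real enp1s0 address; note broadcast/multicast frames 
will not match the 'dst' rule, so it is an approximation):

! pre_tag.enp1s0.map -- aa:bb:cc:dd:ee:ff stands for the MAC of enp1s0
set_label=enp1s0_in  filter='ether dst aa:bb:cc:dd:ee:ff'
set_label=enp1s0_out filter='ether src aa:bb:cc:dd:ee:ff'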


Paolo



On 20/4/22 18:04, Ruben wrote:

Hi,

I'm trying to get pmacctd to perform traffic accounting on my home 
router that's based on a debian machine.


I'm running the following configuration:

debug: false
daemonize: true
pidfile: /var/run/pmacctd.pid
! syslog: daemon
logfile: /var/log/pmacctd.ppp0.log

plugin_pipe_size: 1024
plugin_buffer_size: 10240
plugins: print[print]

pcap_interface: ppp0
pcap_interface_wait: true
pre_tag_map: /etc/pmacct/pre_tag.ppp0.map

networks_file: /etc/pmacct/networks.map
networks_no_mask_if_zero: false
pmacctd_net: file
! pmacctd_net[print]: file
pmacctd_as: file
! pmacctd_as[print]: file

aggregate[print]: etype, proto, src_as, dst_as, src_host, dst_host, label
print_output_file[print]: /etc/pmacct/print_dump.ppp0.json
print_output[print]: json
print_history[print]: 1m
print_history_roundoff[print]: m
print_refresh_time[print]: 60
print_trigger_exec[print]: /etc/pmacct/postit.ppp0.sh


Within my pre_tag.ppp0.map file i have:

set_label=ppp0_in filter='inbound'
set_label=ppp0_out filter='outbound'


This works correctly and my labels end up with ppp0_in and ppp0_out.

The issue i'm facing is that when i replace every ppp0 occurrence with 
enp1s0, the labels do /not/ get set.


The only difference i've seen between these configs seems to be the 
'link type' which for ppp0 is 113 and for the enp1s0 is 1.


Is there something i'm missing here?
Is there a better way to correctly identify inbound vs outbound traffic?

tcpdump -i enp1s0 inbound works the same as tcpdump -i enp1s0 -Q in


Kind regards,

    Ruben


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




Re: [pmacct-discussion] Exporting BGP enriched sflow data

2022-01-14 Thread Paolo Lucente


Hi Marcel,

Thanks for the feedback - this is more in line with what i was 
expecting. So, source AS and destination AS work. I guess you should 
also see the BGP next-hop working (having peer_dst_ip specified on the 
aggregate line of your config).


Let me instead confirm that Local Preference, AS-PATH and Communities 
are not implemented although supported by the Gateway element in sFlow. 
If you would like to see this happening, can you please open an issue on 
GitHub https://github.com/pmacct/pmacct/issues ? I will flag it as an 
enhancement, do some sort of effort analysis and see when it can be 
executed. If you have relative priorities or specific interests among 
what is not supported, please specify that too.


Finally, please clarify what "AS Router" and "AS Peer" are. Is "AS Peer" 
the first AS on the AS-PATH? And is "AS Router" the BGP next-hop?


Paolo


On 13/1/22 23:11, Marcel Menzel wrote:

Hello Paolo,

sorry for the late answer. According to Wireshark, either "AS Source" or 
"AS Destination" (in "AS Set") is set, which is fine (at least I was 
being told). However, localpref, "AS Router" & "AS Peer" are always 
zero. At least localpref for outgoing packets should be 200, because I 
am setting it in my BIRD eBGP sessions to test it (it is also being 
correctly displayed in the memory tables).


I am using the config right now:
aggregate: src_host, dst_host,in_iface, out_iface, src_port, dst_port, 
proto, tos, tcpflags, tag, src_as, dst_as, peer_src_as, peer_dst_as, 
peer_src_ip, peer_dst_ip, local_pref, as_path


More info: I am using https://github.com/monogon-dev/NetMeta on the 
other end to process the generated sflow data. It is using goflow 
internally, maybe this will help to troubleshoot this.

If you want, I can send you a pcap of the generated sflow packets.

Will try latest git master the next days.

  - Marcel

Am 12.01.2022 um 04:18 schrieb Paolo Lucente:


Hi Marcel,

May i ask you one more detail since you looked into the sFlow raw data 
produced by sFlow: is that the ASN information is there but it's 
zeroes, both source and destination, or is that the ASN information is 
totally omitted? And, if possible, please perform the test with both 
peer_dst_as being part of aggregate and with peer_dst_as being removed 
from aggregate.


Paolo


On 10/1/22 17:17, Marcel Menzel wrote:

Hi Paolo,


unfortunately, that did not resolve the problem. The sflow data still 
does not contain the ASN information.


I am using a compiled version from commit 
d5e336f2d83e0ff8f0b8475238339a557fc3eae8.


Kind regards,

Marcel

Am 10.01.2022 um 02:26 schrieb Paolo Lucente:


Hi Marcel,

I tried latest & greatest code and i have the ASN info in sFlow 
using the sfprobe plugin with a config very similar to yours.


Can you try to remove peer_dst_as from 'aggregate' and give it 
another try? It is not supported anyway. Should it make the trick, 
i'll investigate deeper why that does confuse things out.


Paolo



On 9/1/22 10:02, Marcel Menzel wrote:

Hello list,

I am trying to export BGP / ASN enriched sflow data via pmacct's 
sfprobe and setting up an iBGP session with BIRD running on the 
same machine.


Using the memory plugin at the same time and viewing it with 
"pmacct -s", the ASN information gets populated there, but not in 
the exported sflow data. At first, i thought it's a problem with 
the sflow receiving side, but looking in pcaps for the sflow 
stream, that data is actually missing there.


Switching from sflow to netflow (sfprobe), the netflow data 
contains the ASN data I am interested in.


This is my sflow config:

 pcap_interface: enp43s0f1
 pcap_ifindex: sys
 plugins: sfprobe
 sampling_rate: 16
 sfprobe_receiver: 10.10.3.210:6343
 aggregate: src_host, dst_host, src_port, dst_port, proto, tos, 
src_as, dst_as, local_pref, med, as_path, peer_dst_as

 pmacctd_as: bgp
 bgp_daemon: true
 bgp_daemon_ip: 2a0f:85c1:beef:1011:1::1
 bgp_agent_map: /etc/pmacct/bgp_agent.map
 bgp_daemon_port: 17917
 bgp_daemon_interface: vrf-as207781

This is my netflow config:

 pcap_interface: enp43s0f1
 pcap_ifindex: sys
 nfprobe_receiver: 10.10.3.210:2055
 nfprobe_version: 10
 nfprobe_timeouts: expint=10:maxlife=10
 nfprobe_maxflows: 65535
 nfprobe_engine: 10
 sampling_rate: 16
 aggregate: src_host, dst_host, src_port, dst_port, proto, tos, 
src_as, dst_as, local_pref, med, as_path, peer_dst_as

 pmacctd_as: bgp
 bgp_daemon: true
 bgp_daemon_ip: 2a0f:85c1:beef:1011:1::1
 bgp_agent_map: /etc/pmacct/bgp_agent.map
 bgp_daemon_port: 17917
 bgp_daemon_interface: vrf-as207781

The bgp_acent.map file contains the following line: 
bgp_ip=2a0f:85c1:beef:1012::1 ip=0.0.0.0



Thanks & kind regards,

Marcel Menzel

___
pmacct-discussion mailing 

Re: [pmacct-discussion] Exporting BGP enriched sflow data

2022-01-11 Thread Paolo Lucente


Hi Marcel,

May i ask you one more detail, since you looked into the raw data 
produced by sFlow: is the ASN information there but zeroed, both source 
and destination, or is the ASN information omitted entirely? And, if 
possible, please perform the test both with peer_dst_as being part of 
aggregate and with peer_dst_as removed from aggregate.
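
In case it helps with that check, a capture along these lines should be 
enough to inspect the raw Gateway structures in Wireshark (the interface 
name is a placeholder; port 6343 matches the sfprobe_receiver in the 
config quoted below):

# on the host receiving the sFlow export
tcpdump -ni eth0 -w sfprobe.pcap udp port 6343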


Paolo


On 10/1/22 17:17, Marcel Menzel wrote:

Hi Paolo,


unfortunately, that did not resolve the problem. The sflow data still 
does not contain the ASN information.


I am using a compiled version from commit 
d5e336f2d83e0ff8f0b8475238339a557fc3eae8.


Kind regards,

Marcel

Am 10.01.2022 um 02:26 schrieb Paolo Lucente:


Hi Marcel,

I tried latest & greatest code and i have the ASN info in sFlow using 
the sfprobe plugin with a config very similar to yours.


Can you try to remove peer_dst_as from 'aggregate' and give it another 
try? It is not supported anyway. Should it make the trick, i'll 
investigate deeper why that does confuse things out.


Paolo



On 9/1/22 10:02, Marcel Menzel wrote:

Hello list,

I am trying to export BGP / ASN enriched sflow data via pmacct's 
sfprobe and setting up an iBGP session with BIRD running on the same 
machine.


Using the memory plugin at the same time and viewing it with "pmacct 
-s", the ASN information gets populated there, but not in the 
exported sflow data. At first, i thought it's a problem with the 
sflow receiving side, but looking in pcaps for the sflow stream, that 
data is actually missing there.


Switching from sflow to netflow (sfprobe), the netflow data contains 
the ASN data I am interested in.


This is my sflow config:

 pcap_interface: enp43s0f1
 pcap_ifindex: sys
 plugins: sfprobe
 sampling_rate: 16
 sfprobe_receiver: 10.10.3.210:6343
 aggregate: src_host, dst_host, src_port, dst_port, proto, tos, 
src_as, dst_as, local_pref, med, as_path, peer_dst_as

 pmacctd_as: bgp
 bgp_daemon: true
 bgp_daemon_ip: 2a0f:85c1:beef:1011:1::1
 bgp_agent_map: /etc/pmacct/bgp_agent.map
 bgp_daemon_port: 17917
 bgp_daemon_interface: vrf-as207781

This is my netflow config:

 pcap_interface: enp43s0f1
 pcap_ifindex: sys
 nfprobe_receiver: 10.10.3.210:2055
 nfprobe_version: 10
 nfprobe_timeouts: expint=10:maxlife=10
 nfprobe_maxflows: 65535
 nfprobe_engine: 10
 sampling_rate: 16
 aggregate: src_host, dst_host, src_port, dst_port, proto, tos, 
src_as, dst_as, local_pref, med, as_path, peer_dst_as

 pmacctd_as: bgp
 bgp_daemon: true
 bgp_daemon_ip: 2a0f:85c1:beef:1011:1::1
 bgp_agent_map: /etc/pmacct/bgp_agent.map
 bgp_daemon_port: 17917
 bgp_daemon_interface: vrf-as207781

The bgp_acent.map file contains the following line: 
bgp_ip=2a0f:85c1:beef:1012::1 ip=0.0.0.0



Thanks & kind regards,

Marcel Menzel

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists






Re: [pmacct-discussion] Exporting BGP enriched sflow data

2022-01-09 Thread Paolo Lucente


Hi Marcel,

I tried latest & greatest code and i have the ASN info in sFlow using 
the sfprobe plugin with a config very similar to yours.


Can you try to remove peer_dst_as from 'aggregate' and give it another 
try? It is not supported anyway. Should that do the trick, i'll 
investigate more deeply why it confuses things.


Paolo



On 9/1/22 10:02, Marcel Menzel wrote:

Hello list,

I am trying to export BGP / ASN enriched sflow data via pmacct's sfprobe 
and setting up an iBGP session with BIRD running on the same machine.


Using the memory plugin at the same time and viewing it with "pmacct 
-s", the ASN information gets populated there, but not in the exported 
sflow data. At first, i thought it's a problem with the sflow receiving 
side, but looking in pcaps for the sflow stream, that data is actually 
missing there.


Switching from sflow to netflow (nfprobe), the netflow data contains the 
ASN data I am interested in.


This is my sflow config:

     pcap_interface: enp43s0f1
     pcap_ifindex: sys
     plugins: sfprobe
     sampling_rate: 16
     sfprobe_receiver: 10.10.3.210:6343
     aggregate: src_host, dst_host, src_port, dst_port, proto, tos, 
src_as, dst_as, local_pref, med, as_path, peer_dst_as

     pmacctd_as: bgp
     bgp_daemon: true
     bgp_daemon_ip: 2a0f:85c1:beef:1011:1::1
     bgp_agent_map: /etc/pmacct/bgp_agent.map
     bgp_daemon_port: 17917
     bgp_daemon_interface: vrf-as207781

This is my netflow config:

     pcap_interface: enp43s0f1
     pcap_ifindex: sys
     nfprobe_receiver: 10.10.3.210:2055
     nfprobe_version: 10
     nfprobe_timeouts: expint=10:maxlife=10
     nfprobe_maxflows: 65535
     nfprobe_engine: 10
     sampling_rate: 16
     aggregate: src_host, dst_host, src_port, dst_port, proto, tos, 
src_as, dst_as, local_pref, med, as_path, peer_dst_as

     pmacctd_as: bgp
     bgp_daemon: true
     bgp_daemon_ip: 2a0f:85c1:beef:1011:1::1
     bgp_agent_map: /etc/pmacct/bgp_agent.map
     bgp_daemon_port: 17917
     bgp_daemon_interface: vrf-as207781

The bgp_agent.map file contains the following line: 
bgp_ip=2a0f:85c1:beef:1012::1 ip=0.0.0.0



Thanks & kind regards,

Marcel Menzel

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




Re: [pmacct-discussion] src_as and dst_as are always zero -- nfprobe

2021-12-20 Thread Paolo Lucente


Ciao Luca,

Apologies for the late answer.

I did manage to reproduce your issue and just pushed a fix to master 
code that seemed to work for me. It is a simple enough one-liner that, 
if you don't wish to move to master code, you could apply to 1.7.7:


https://github.com/pmacct/pmacct/commit/d5e336f2d83e0ff8f0b8475238339a557fc3eae8

Let me know if that seems to work for you too.

Paolo


On 7/12/21 19:09, Luca Cilloni wrote:

Hi,
I’m trying to export IPFIX/NetFlow9 from pmacctd/nfprobe v1.7.7 running on a 
ubuntu 20.04 in a lab environment.
The Linux box has 2 interfaces: one L2 where pmacctd listens to packets coming from 
an external router port mirror, and another L3 from which the NetFlow stream 
should be originated. pmacctd does bgp peering with the external router. I have also 
configured a memory plugin with the same aggregate set as nfprobe.
Everything works fine except the src_as and dst_as fields in the IPFIX stream: 
they are always set to zero. But if I look at the memory plugin flows table, 
using pmacct -s, the src_as and dst_as fields are correctly populated.

This is the pmacctd config file:
! General config
debug: false
daemonize: false
pcap_interface: ens4
pcap_interface_wait: true
pre_tag_map: pretag.map
pmacctd_ext_sampling_rate: 1000
pmacctd_net: bgp
pmacctd_as: bgp

! BGP Daemon config
bgp_daemon: true
bgp_daemon_ip: 10.0.224.146
bgp_daemon_id: 10.0.224.146
bgp_daemon_as: 65100
bgp_agent_map: bgp_peers.map

! Plugin declarations
plugins: nfprobe[zflow], memory[mem]

! zflow plugin config
aggregate[zflow]: src_host, dst_host, src_mask, dst_mask, src_as, dst_as
nfprobe_receiver[zflow]: 10.0.224.134:2055
nfprobe_version[zflow]: 10
nfprobe_timeouts: expint=10:maxlife=10
nfprobe_direction[zflow]: tag
nfprobe_maxflows[zflow]: 65535
nfprobe_source_ip[zflow]: 10.0.224.146
nfprobe_engine[zflow]: 10

! mem plugin config
aggregate[mem]: src_host, dst_host, src_mask, dst_mask, src_as, dst_as

This is the bgp_peers.map file:
bgp_ip=10.0.224.145 ip=10.0.224.146

And this is the pretag.map file:
set_tag=1   filter='vlan 100'
set_tag=2   filter='vlan 101’

Any help would be very appreciated.

Cheers,
Luca


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




[pmacct-discussion] pmacct 1.7.7 released !

2021-11-07 Thread Paolo Lucente


VERSION.
1.7.7


DESCRIPTION.
pmacct is a small set of multi-purpose passive network monitoring tools. It
can account, classify, aggregate, replicate and export forwarding-plane data,
ie. IPv4 and IPv6 traffic; collect and correlate control-plane data via BGP
and BMP; collect and correlate RPKI data; collect infrastructure data via
Streaming Telemetry. Each component works both as a standalone daemon and
as a thread of execution for correlation purposes (ie. enrich NetFlow with
BGP data).

A pluggable architecture allows storing collected forwarding-plane data in
memory tables, RDBMS (MySQL, PostgreSQL, SQLite), noSQL databases (MongoDB,
BerkeleyDB), AMQP (RabbitMQ) and Kafka message exchanges and flat-files.
pmacct offers customizable historical data breakdown, data enrichments like
BGP and IGP correlation and GeoIP lookups, filtering, tagging and triggers.
Libpcap, Linux Netlink/NFLOG, sFlow v2/v4/v5, NetFlow v5/v8/v9 and IPFIX are
all supported as inputs for forwarding-plane data. Replication of incoming
NetFlow, IPFIX and sFlow datagrams is also available. Collected data can
be easily exported (ie. via Kafka) to modern databases like ElasticSearch,
Apache Druid and ClickHouse and (ie. via flat-files) to classic tools like 
Cacti, RRDtool and MRTG, etc.

Control-plane and infrastructure data, collected via BGP, BMP and Streaming
Telemetry, can be all logged real-time or dumped at regular time intervals
to AMQP (RabbitMQ) and Kafka message exchanges and flat-files.


HOMEPAGE.
http://www.pmacct.net/


DOWNLOAD.
http://www.pmacct.net/pmacct-1.7.7.tar.gz


CHANGELOG.
+ BGP, BMP, Streaming Telemetry daemons: introduced parallelization
  of dump events via a configurable amount of workers where the unit
  of parallelization is the exporter (BGP, BMP, telemetry exporter),
  ie. in a scenario where there are 4 workers and 4 exporters each
  worker is assigned one exporter data to dump.
+ pmtelemetryd: added support for draft-ietf-netconf-udp-notif:
  a UDP-based notification mechanism to collect data from networking
  devices. A shim header is proposed to facilitate the data streaming
  directly from the publishing process on network processor of line
  cards to receivers. The objective is a lightweight approach to
  enable higher frequency and less performance impact on publisher
  and receiver process compared to already established notification
  mechanisms. Many thanks to Alex Huang Feng ( @ahuangfeng ) and the
  whole Unyte team.
+ BGP, BMP, Streaming Telemetry daemons: now correctly honouring the
  supplied Kafka partition key for BGP, BMP and Telemetry msg logs
  and dump events.
+ BGP, BMP daemons: a new "rd_origin" field is added to output log/
  dump to specify the source of Route Distinguisher information (ie.
  flow vs BGP vs BMP).
+ pre_tag_map: added ability to tag new NetFlow/IPFIX and sFlow
  sample_type types: "flow-ipv4", "flow-ipv6", "flow-mpls-ipv4" and
  "flow-mpls-ipv6". Also added a new "is_bi_flow" true/false key to
  tag (or exclude) NSEL bidirectional flows. Added as well a new
  "is_multicast" true/false config key to tag (or exclude) IPv4/IPv6
  multicast destinations.
+ maps_index: enables indexing of maps to increase lookup speeds on
  large maps and/or sustained lookup rates. The feature has been
  reimplemented using streamlined structures from libcdada. This is 
  a major work that helps prevent the unpredictable behaviours 
  caused by the homegrown map indexing mechanism. Many thanks to
  Marc Sune ( @msune ).
+ maps_index: support for indexing src_net and dst_net keywords has
  been added.
+ Added _ipv6_only config directives to optionally
  enable the IPV6_V6ONLY socket option. Also changed the wrong
  setsockopt() IPV6_BINDV6ONLY id to IPV6_V6ONLY.
+ Added log function to libserdes to debug transactions with the
  Schema Registry when kafka_avro_schema_registry is set.
+ nDPI: newer versions of the library (ie. >= 3.5) bring changes
  to the API. pmacct is now aligned to compile against these.
+ pmacctd: added pcap_arista_trailer_offset config directive since
  Arista has changed the structure of the trailer format in recent
  releases of EOS. Thanks to Jeremiah Millay ( @floatingstatic )
  for his patch.
+ More improvements carried out on the Continuous Integration
  (CI) side by migrating from Travis CI to GitHub Actions. Huge
  thanks to Marc Sune ( @msune ) to make all of this possible.
+ More improvements also carried out in the space of the Docker
  images being created: optimized image size and a better layered
  pipeline. Thanks to Marc Sune ( @msune ) and Daniel Caballero
  ( @dcaba ) to make all of this possible.
+ libcdada shipped with pmacct was upgraded to version 0.3.5. Many
  thanks to Marc Sune ( @msune ) for his work with libcdada.
! build system: several improvements carried out in this area,
  ie. improved MySQL checks, introduced pcap-config tool for
  libpcap, compiling on BSD/old compilers, etc. Monumental thanks
  to Marc Sune ( @msune ) for his 

Re: [pmacct-discussion] [docker-doctors] docker nfacct ... strange udp source ip !

2021-06-09 Thread Paolo Lucente



Hi Alessandro,

(thanks for the kind words, first and foremost)

Indeed, the test that Marc proposes is very sound, ie. check the actual 
packets coming in "on the wire" with tcpdump: do they really change 
sender IP address?


Let me also confirm that what is used to populate peer_ip_src is the 
sender IP address coming straight from the socket (Marc's question) and, 
contrary to sFlow, there is typically no other way to infer 
such info (Alessandro's question).
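
For reference, a quick way to run that check from inside the running 
nfacctd container (the interface name is a guess; 20013 is the 
nfacctd_port from the config below):

# after opening a shell in the container and installing tcpdump
tcpdump -ni eth0 udp port 20013
# if the source IP already shows 192.168.200.1 here, the rewrite happens
# before nfacctd ever sees the packets; if it shows the routers' real
# 151.157.228.x addresses, the issue would be on the pmacct side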


Paolo


On 9/6/21 14:51, Marc Sune wrote:

Alessandro,

inline

Missatge de Alessandro Montano | FIBERTELECOM
 del dia dc., 9 de juny 2021 a les 10:12:


Hi Paolo (and Marc),

this is my first post here ... first of all THANKS FOR YOUR GREAT JOB :)

I'm using pmacct/nfacctd container from docker-hub 
(+kafka+telegraf+influxdb+grafana) and it's really a powerful tool

The senders are JUNIPER MX204 routers, using j-flow (extended netflow)

NFACCTD VERSION:
NetFlow Accounting Daemon, nfacctd 1.7.6-git [20201226-0 (7ad9d1b)]
  '--enable-mysql' '--enable-pgsql' '--enable-sqlite3' '--enable-kafka' 
'--enable-geoipv2' '--enable-jansson' '--enable-rabbitmq' '--enable-nflog' 
'--enable-ndpi' '--enable-zmq' '--enable-avro' '--enable-serdes' 
'--enable-redis' '--enable-gnutls' 'AVRO_CFLAGS=-I/usr/local/avro/include' 
'AVRO_LIBS=-L/usr/local/avro/lib -lavro' '--enable-l2' '--enable-traffic-bins' 
'--enable-bgp-bins' '--enable-bmp-bins' '--enable-st-bins'

SYSTEM:
Linux 76afde386f6f 5.4.0-73-generic #82-Ubuntu SMP Wed Apr 14 17:39:42 UTC 2021 
x86_64 GNU/Linux

CONFIG:
debug: false
daemonize: false
pidfile: /var/run/nfacctd.pid
logfile: /var/log/pmacct/nfacctd.log
nfacctd_renormalize: true
nfacctd_port: 20013
aggregate[k]: peer_src_ip, peer_dst_ip, in_iface, out_iface, vlan, 
sampling_direction, etype, src_as, dst_as, as_path, proto, src_net, src_mask, 
dst_net, dst_mask, flows
nfacctd_time_new: true
plugins: kafka[k]
kafka_output[k]: json
kafka_topic[k]: nfacct
kafka_broker_host[k]: kafka
kafka_broker_port[k]: 9092
kafka_refresh_time[k]: 60
kafka_history[k]: 1m
kafka_history_roundoff[k]: m
kafka_max_writers[k]: 1
kafka_markers[k]: true
networks_file_no_lpm: true
use_ip_next_hop: true

DOCKER-COMPOSE:
#Docker version 20.10.2, build 20.10.2-0ubuntu1~20.04.2
#docker-compose version 1.29.2, build 5becea4c
version: "3.9"
services:
   nfacct:
 networks:
   - ingress
 image: pmacct/nfacctd
 restart: on-failure
 ports:
   - "20013:20013/udp"
 volumes:
   - /etc/localtime:/etc/localtime
   - ./nfacct/etc:/etc/pmacct
   - ./nfacct/lib:/var/lib/pmacct
   - ./nfacct/log:/var/log/pmacct
networks:
   ingress:
 name: ingress
 ipam:
   config:
   - subnet: 192.168.200.0/24

My problem is the value of the PEER_IP_SRC field ... at start everything is 
correct, and it works well for a (long) while ... hours ... days ...
I have ten routers so  "peer_ip_src": "151.157.228.xxx"  where xxx can easily 
identify the sender. Perfect.

Suddenly ... "peer_ip_src": "192.168.200.1" for all records (and I lose the 
sender info!!!) ...

It seems that docker-proxy decides to do nat/masquerading and translates 
the source_ip of the udp stream.
The only way for me to have the correct behavior again is to stop/start the 
container.

How can I fix it? Or, is there an alternative way to obtain the same info 
(router ip) from inside the netflow stream, rather than from the udp packet?


Paolo is definitely the right person to answer how "peer_ip_src" is populated.

However, there is something that I don't fully understand. To the best
of my knowledge, even when binding ports, docker (actually the kernel,
configured by docker) shouldn't masquerade traffic at all - if
masquerade is truly what happens. And certainly that wouldn't happen
"randomly" in the middle of the execution.

My first thought would be that this is something related to pmacct
itself, and that records are incorrectly generated but traffic is ok.

I doubt the linux kernel iptables rules would randomly change the way 
traffic is manipulated, unless of course, something else on that
machine/server is reloading iptables, and the resulting ruleset is
_slightly different_ for the traffic flowing towards the docker
container, effectively modifying the streams that go to pmacct (e.g.
rule priority reordering). That _could_ explain why restarting the 
daemon suddenly works, as order would be fixed.

Some more info would be needed to discard an iptables/docker issue:

* Dump the iptables -L and iptables -t nat -L before and after the
issue and compare.
* Use iptables -vL and iptables -t nat -vL to monitor counters, before
and after the issue, specially in the NAT table.
* Get inside the running container
(https://github.com/pmacct/pmacct/blob/master/docs/DOCKER.md#opening-a-shell-on-a-running-container),
install tcpdump, and write the pcap to a file, before and after the
incident.

Since these dumps might contain sensitive data, you can send them
anonymized or in private.

Hopefully 

Re: [pmacct-discussion] BMP

2021-05-31 Thread Paolo Lucente


Hi Edgar,

1) Wonderful stuff!

2) In the logfile you will find basic information on which BMP exporters 
are connected, a recap of how many of them are connected and whether 
any disconnects; should you go for dumps at regular intervals, you 
will also find there a recap of how many tables (ie. peers) and 
entries (ie. RIB info) are being sent over to Kafka; if you go for 
message logs, instead, then you can assume that it's message-in / 
message-out, 1:1.


The most beautiful thing about Kafka is that you can have N consumers 
for the very same produced data. So at any time you can have your database 
consuming data and, in parallel, "sniff" exactly what would have 
gone into the database, message by message: super useful in the 
proof-of-concept phase, then it gets boring. Kafka ships with a console 
consumer; you can find some info here:


https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L1028-#L1033

Should you be looking for yet other summaries of what goes into Kafka, 
you could enable some extra statistics in pmacct, as offered by librdkafka:


https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L1139-#L1144

Just use bmp_dump_kafka_config_file or bmp_daemon_msglog_kafka_config_file 
instead of kafka_config_file (depending on whether you are configuring 
dumps or message logs).
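
As a concrete sketch of both points (broker, topic and file names are 
placeholders; the exact layout of the librdkafka config file is in the 
QUICKSTART sections linked above):

# peek at what pmacct is producing for BMP, message by message
kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic pmacct.bmp

and, in the daemon config, for the message-log case:

bmp_daemon_msglog_kafka_config_file: /etc/pmacct/kafka_bmp.conf

with /etc/pmacct/kafka_bmp.conf containing, for example, a single line 
enabling periodic librdkafka statistics:

global, statistics.interval.ms, 60000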


Paolo


On 31/5/21 09:19, edgar lip wrote:

Hi Paolo ,

ok , i got it - fair enough.
1. Once I am able to get my head around it, and if something comes 
out of this, I will make a doc and share it for sure !!!
2. What about how-tos for just the bmp section? The quick start 
explains how to start it, but not how to drive it ( show commands , 
checking it works properly , checking that kafka sends messages , etc ... )




thanks
Lipnitsky Edgar


On Mon, May 31, 2021 at 7:13 AM Paolo Lucente <mailto:pa...@pmacct.net>> wrote:



Hi Edgar,

For end-to-end solutions you have two main choices (of course i am
excluding the obvious: buy a product or buy consultancy from somebody):

1) Google for them, and you may end up with results like this one

https://imply.io/post/an-end-to-end-streaming-analytics-stack-for-network-telemetry-data

<https://imply.io/post/an-end-to-end-streaming-analytics-stack-for-network-telemetry-data>

or

2) look on GitHub for containers, and you may end up with results like
this one https://github.com/kvitex/pmacct-kafka-clickhouse
<https://github.com/kvitex/pmacct-kafka-clickhouse>

Essentially with data pipelines, since you enter in a combinatorial
game
of choices, ie. each piece you add to the pipeline to make it
end-to-end
you add more choices that inflate a matrix of options, it gets
difficult
to find an how-to guide for exactly what you are trying to achieve.

So, very possibly, either you stick to one of the solutions you find
documented or you have to gather all the pieces together and cover as
much as possible the pipeline you have in mind; the rest, you have to
fill (and maybe be so kind to document it for others to enjoy :-)). Of
course any specific help needed in filling the gaps you may hit, i'd be
happy to help you with.

Paolo


On 30/5/21 09:19, edgar lip wrote:
 > hi ,
 > from the quick start guide i see that i can set up only the
collector
 > itself , and send what was collected to a kafka , which is a good
start,
 > but = )
 >    - there is no mention of some show commands to check what is
working
 > / if working etc ...
 >    - it is not clear which design is supported ?
 >    - is there a docs on how to  continue from there ... ?
 >   - example how to setup the kafka
 >   -  how to setup the data basse and also connect it to the kafka
 > What i am trying to say is that there is no global guideline to
follow ,
 > i am mostly dealing with network but still have a good overall look
 > For things like this, it will be great if u can point me  to some
docs /
 > howto's to read and learn more about the project.
 >
 > Also saw that someone sent me how to dump logs on the machine
itself (
 > thanks John) , it is nice but not the way that i meant to go with
this.
 > As I mentioned I am trying to go like "MX router -> bmp collector (
 > pmacct /pmbmpd ) -> kafka -> psql -> grafana."
 >
 >
 > appreciate any help
 > thanks
 > Lipnitsky Edgar
 >
 >
 >
 > On Sat, May 29, 2021 at 9:53 PM Paolo Lucente mailto:pa...@pmacct.net>
 > <mailto:pa...@pmacct.net <mailto:pa...@pmacct.net>>> wrote:
 >
 >
 >     Hi Edgar,
 >
 >     Thanks for your feedback wrt the BMP documentation. Let's try
to get
 >     you
 >    

Re: [pmacct-discussion] sql_num_hosts only giving null values in the MySQL database

2021-05-31 Thread Paolo Lucente



Hi Klaas,

Do the logs provide any hints / errors that can put us on the right track? 
Should that not help, can you please enable debug on the pmacct side 
(that is: -d or "debug: true") to see if anything more helpful pops up 
in the logs?


A total personal comment: sql_num_hosts is surely good to optimize stuff 
if you are forced into the MySQL / MariaDB world. But if you care about 
optimization and getting the right operators over the IP fields, ie. 
netmask operations and stuff, look into PostgreSQL.
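
For illustration only (column and table names follow the stock pmacct 
schema; adjust to your own), the kind of queries at stake would be, on the 
MariaDB side with VARBINARY columns populated numerically:

SELECT INET6_NTOA(ip_src) AS src, SUM(bytes) AS bytes FROM acct GROUP BY ip_src;

and, on the PostgreSQL side, where native inet columns take netmask 
operators directly:

SELECT SUM(bytes) FROM acct WHERE ip_dst <<= '192.0.2.0/24';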


Paolo


On 31/5/21 14:17, Klaas Tammling wrote:

Hi everyone,

I'm having some trouble using sql_num_hosts with nfacctd. I have 
converted all relevant columns (ip_dst, ip_src, net_src, net_dst) to 
VARBINARY(16) in my MariaDB.


However, when writing into the database, nfacctd only writes NULL 
values. When setting sql_num_hosts to the default I get the values 
back as strings.


Am I missing something additional here?

I was hoping to get better performance, using sql_num_hosts, when 
searching through the database.


Thanks for your help.

Regards,

Klaas

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists





Re: [pmacct-discussion] BMP

2021-05-30 Thread Paolo Lucente


Hi Edgar,

For end-to-end solutions you have two main choices (of course i am 
excluding the obvious: buy a product or buy consultancy from somebody):


1) Google for them, and you may end up with results like this one 
https://imply.io/post/an-end-to-end-streaming-analytics-stack-for-network-telemetry-data 
or


2) look on GitHub for containers, and you may end up with results like 
this one https://github.com/kvitex/pmacct-kafka-clickhouse


Essentially, with data pipelines you enter a combinatorial game of 
choices, ie. each piece you add to make the pipeline end-to-end adds 
more choices that inflate the matrix of options, so it gets difficult 
to find a how-to guide for exactly what you are trying to achieve.


So, very possibly, either you stick to one of the solutions you find 
documented or you gather all the pieces together and cover as much of 
the pipeline you have in mind as possible; the rest you have to fill in 
(and maybe be so kind as to document it for others to enjoy :-)). Of 
course, for any specific help needed in filling the gaps you may hit, 
i'd be happy to assist.
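
As far as the pmacct piece of that "MX -> BMP collector -> Kafka" chain 
goes, a minimal sketch could look like this (broker, topic, IP and port 
values are illustrative; CONFIG-KEYS has the full list of bmp_* 
directives, and the same keys apply whether the BMP thread runs inside 
nfacctd or as standalone pmbmpd):

bmp_daemon: true
bmp_daemon_ip: 0.0.0.0
bmp_daemon_port: 1790
bmp_daemon_msglog_kafka_broker_host: kafka.example.net
bmp_daemon_msglog_kafka_broker_port: 9092
bmp_daemon_msglog_kafka_topic: pmacct.bmp

From there the Kafka -> psql -> grafana legs sit outside pmacct, which is 
exactly the gap the links above try to cover.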


Paolo


On 30/5/21 09:19, edgar lip wrote:

hi ,
from the quick start guide i see that i can set up only the collector 
itself , and send what was collected to a kafka , which is a good start, 
but = )
   - there is no mention of some show commands to check what is working 
/ if working etc ...

   - it is not clear which design is supported ?
   - is there a docs on how to  continue from there ... ?
  - example how to setup the kafka
  -  how to setup the data basse and also connect it to the kafka
What i am trying to say is that there is no global guideline to follow , 
i am mostly dealing with network but still have a good overall look
For things like this, it will be great if u can point me  to some docs / 
howto's to read and learn more about the project.


Also saw that someone sent me how to dump logs on the machine itself ( 
thanks John) , it is nice but not the way that i meant to go with this.
As I mentioned I am trying to go like "MX router -> bmp collector ( 
pmacct /pmbmpd ) -> kafka -> psql -> grafana."



appreciate any help
thanks
Lipnitsky Edgar



On Sat, May 29, 2021 at 9:53 PM Paolo Lucente <mailto:pa...@pmacct.net>> wrote:



Hi Edgar,

Thanks for your feedback wrt the BMP documentation. Let's try to get
you
up and running and improve docs but, in order to do that, i'd need some
more specific question(s) from you. Where are you stuck? What is not
working?

Paolo

On 29/5/21 13:05, edgar lip wrote:
 > Hi pacct team / gents
 >
 > I would like to start using this project with the bmp section ,
later
 > will check the rpki and then telemetry.
 > i am an network dude mostly ( can see this based on my request =) )
 > but first thing first - bmp
 > Can you guys help with a start manual / how to's - i saw the
quick start
 > but it is very unclear and lacks details.
 >
 > as of the picture that i see right now:
 > MX router -> bmp collector ( pmacct /pmbmpd ) -> kafka -> psql ->
grafana.
 >
 >
 > thanks a lot
 > Lipnitsky Edgar
 >
 > ___
 > pmacct-discussion mailing list
 > http://www.pmacct.net/#mailinglists
<http://www.pmacct.net/#mailinglists>
 >

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists
<http://www.pmacct.net/#mailinglists>


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists



___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] BMP

2021-05-29 Thread Paolo Lucente



Hi Edgar,

Thanks for your feedback wrt the BMP documentation. Let's try to get you 
up and running and improve docs but, in order to do that, i'd need some 
more specific question(s) from you. Where are you stuck? What is not 
working?


Paolo

On 29/5/21 13:05, edgar lip wrote:

Hi pacct team / gents

I would like to start using this project with the bmp section , later 
will check the rpki and then telemetry.

i am an network dude mostly ( can see this based on my request =) )
but first thing first - bmp
Can you guys help with a start manual / how to's - i saw the quick start 
but it is very unclear and lacks details.


as of the picture that i see right now:
MX router -> bmp collector ( pmacct /pmbmpd ) -> kafka -> psql -> grafana.


thanks a lot
Lipnitsky Edgar

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists





Re: [pmacct-discussion] IPFIX - bgp_next_hop vs ip_next_hop

2021-05-24 Thread Paolo Lucente



Hi Andrej,

Thanks for your feedback & cool. It would be nice to look at your patch; 
in case you are able to share it with the rest of us, please submit a PR 
on GitHub.


Paolo

On 24/5/21 14:36, Andrej Brkic wrote:

Hi Paolo,

Unfortunately there's no change when using 'nfacctd_as: bgp' and 
'nfacctd_net: flow' combo. I still end up with the the same next hop 
regardless of the path the flow takes which makes the as-path for the 
multipathed destinations invalid since bgp lookup is always done with 
the same next hop address. From what I can see in pkt_handlers.c 
pbgp->peer_dst_ip.address.ipv4 is always set to NF9_BGP_IPV4_NEXT_HOP 
(if set in the flow export) and in my case it's always set to the first 
nexthop entry in the forwarding table for the destination that have 
multiple paths. I ended up writing a small patch that adds a new config 
directive "force_ip_next_hop" and if set and NF9_IPV4_NEXT_HOP is there 
it will use it instead of NF9_BGP_IPV4_NEXT_HOP. In my use case this 
works fine since all bgp peers are on /31 or /30 ptp links and their ip 
is always equal to the ip of the other side of the ptp link.


Andrej

On 21.5.2021. 21:18, Paolo Lucente wrote:


Hi Andrej,

It is possible that you may find joy with the following combo 
'nfacctd_as: bgp' and 'nfacctd_net: flow'. The next-hop for something 
not intuitive (but that i can explain and is documented) is tied to 
'nfacctd_net'. Can you give it a try? If positive, we can take it from 
there, keep me posted.


Paolo

On 20/5/21 09:20, Andrej Brkic wrote:

Hi,

I have quite a few Juniper boxes doing inline jflow and exporting 
flows to nfacctd.
All was working fine until we had to upgrade junos on those which 
broke nexthop-learning
for ipfix. What happens now is that for all destinations that are 
multipathed the
bgp_ipv4_next_hop will be set to the value of the first nexthop entry 
in the forwarding
table for the prefix which has multiple paths. Naturally this breaks 
the as_path lookup
(we're using BGP + add_paths to correctly resolve as path using 
bgp_next_hop). JTAC is
of no help here since they claim this was never supported on these 
platforms and the

fact it worked on the old junos versions is pure luck.

I tried setting "use_ip_next_hop" to true but it had no effect. Is it 
even possible to
use it in a combo with nfacctd_as: bgp and have internal bgpd ignore 
bgp_next_hop in
the flow and use the ip_next_hop to correctly resolve as path for 
multipathed
prefixes ? Quick look at pkt_handlers.c shows that if bgp_ip_next_hop 
is set in the
flow export peer_ds_ip.address.ipv4 will never be set to ip_next_hop 
regardless of

use_ip_next_hop being set to true.


Thanks,
Andrej


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists







Re: [pmacct-discussion] IPFIX - bgp_next_hop vs ip_next_hop

2021-05-21 Thread Paolo Lucente



Hi Andrej,

It is possible that you may find joy with the following combo: 
'nfacctd_as: bgp' and 'nfacctd_net: flow'. The next-hop, for something 
not intuitive (but that i can explain and that is documented), is tied to 
'nfacctd_net'. Can you give it a try? If positive, we can take it from 
there; keep me posted.


Paolo

On 20/5/21 09:20, Andrej Brkic wrote:

Hi,

I have quite a few Juniper boxes doing inline jflow and exporting flows 
to nfacctd.
All was working fine until we had to upgrade junos on those which broke 
nexthop-learning
for ipfix. What happens now is that for all destinations that are 
multipathed the
bgp_ipv4_next_hop will be set to the value of the first nexthop entry in 
the forwarding
table for the prefix which has multiple paths. Naturally this breaks the 
as_path lookup
(we're using BGP + add_paths to correctly resolve as path using 
bgp_next_hop). JTAC is
of no help here since they claim this was never supported on these 
platforms and the

fact it worked on the old junos versions is pure luck.

I tried setting "use_ip_next_hop" to true but it had no effect. Is it 
even possible to
use it in a combo with nfacctd_as: bgp and have internal bgpd ignore 
bgp_next_hop in
the flow and use the ip_next_hop to correctly resolve as path for 
multipathed
prefixes ? Quick look at pkt_handlers.c shows that if bgp_ip_next_hop is 
set in the
flow export peer_ds_ip.address.ipv4 will never be set to ip_next_hop 
regardless of

use_ip_next_hop being set to true.


Thanks,
Andrej


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




Re: [pmacct-discussion] Configuration when sampling from multiple routers.

2021-05-17 Thread Paolo Lucente


Hi Hendrik,

If your NetFlow/IPFIX exporter implementation is decently done, it may 
all be easier than that. There is field type 61 ( see 
https://www.iana.org/assignments/ipfix/ipfix.xhtml ) that denotes the 
sampling direction. Typically, ie. in Cisco, it is either 0x00 or absent 
if sampling is ingress, vs always present and set to 0x01 if 
sampling is egress.


You can either check if your flows have such a field and are labelled 
properly or, more quickly, you can:


1) edit a file /path/to/pretag.map with a one liner that will tag flows 
in egress direction (0x01) with a value 100:


tag=100 direction=1

2) complement your current config to read the pretag.map file and filter 
out the flows with tag 100 (we actually filter in untagged traffic, that 
is, traffic with tag 0):


pre_tag_map: /path/to/pretag.map
pre_tag_filter[foo]: 0

One note on the 'foo' part. That is a plugin name; you can't make 
pre_tag_filter a global config directive, it has to be associated to a 
specific named plugin. This means, if you are not doing it already, ie. 
running only one single unnamed plugin, give it a name. How to do it? 
Super simple: you may have a line right now a-la:


plugins: kafka

You should just change it to:

plugins: kafka[foo]

Where 'foo' can be any string of your choice.

Paolo


On 17/5/21 14:40, Hendrik Meyburgh wrote:

Hi.

I have looked at and tested the options over the past few days and 
realistically we need to sample both directions at both locations as we 
have a different use case we need to satisfy at the network edges in 
addition to accounting subscriber traffic. We are investigating with the 
routing vendor if there is a way of specifying a sampling interface to 
send to a specific collector, but we are still waiting for feedback on 
whether that is possible.


Another option I have been considering is to use multiple pmacct 
collectors, where the first one filters based on the source ip, with the 
same prefix list for src_host and dst_host and the same in_iface / 
out_iface, and then tees/replicates that to another collector to 
recombine them and sum_host. I haven't tested that yet; will it work, or 
is there something else I can try?


Thank you.

On Thu, May 13, 2021 at 2:40 AM Paolo Lucente <mailto:pa...@pmacct.net>> wrote:



Hi Hendrik,

What direction are you sampling NetFlow traffic at your edges? Is it
consistent, are you sampling at both place in the same direction,
either
ingress (which would make more sense) or egress (which would make
slight
less sense)? If so, i'd be puzzled why you would get duplicated
traffic;
if, instead, you mix directions or do both at both endpoints, etc.
then,
yeah, that makes sense (and if so we can further analize the scenario).

Paolo


On 12/5/21 11:44, Hendrik Meyburgh wrote:
 > Hi.
 >
 > I have an issue where my setup is causing double counting when using
 > sum_host using the below topology. The sampling is set up on the
 > interface where the SRC is located and also on the peering edges. My
 > config is below, is there something else which I can enable to
stop this
 > from happening? We are currently testing setting the same
 > observation-domain-id for both routers to see if that will help.
 >
 > Thank you.
 >
 >                         ++              +-+
 > SRC     - Router1   +<->+ Router2 
  +---   DST
 >          Sampling ++---+           +++. 
  Sampling

 >                                  |                        |
 >                                  |                        |
 >                                  |                        |
 >                                  |                        |
 >                                  |                        |
 >                                  |                        |
 >                                  +--+---+--+
 >                                     |                |
 >                                     |   pmacct  |
 >                                     +---+
 >
 > daemonize: true
 >
 > nfacctd_port: 2100
 >
 > logfile: /var/log/nfacctd.log
 >
 > !debug: true
 >
 > plugins: print[SUM]
 >
 >
 > ! Test2: disable below
 >
 > nfacctd_renormalize: true
 >
 > !nfacctd_ext_sampling_rate: 1024
 >
 > nfacctd_pro_rating: true
 >
 > !
 >
 > nfacctd_time_new: true
 >
 > aggregate[SUM]: sum_host
 >
 > networks_file[SUM]: /root/pmacct/TARGETS
 >
 > networks_file_filter[SUM]: true
 >
 > print_cache_entries[SUM]: 1
 &

Re: [pmacct-discussion] Configuration when sampling from multiple routers.

2021-05-12 Thread Paolo Lucente


Hi Hendrik,

In what direction are you sampling NetFlow traffic at your edges? Is it 
consistent, are you sampling at both places in the same direction, either 
ingress (which would make more sense) or egress (which would make slightly 
less sense)? If so, i'd be puzzled why you would get duplicated traffic; 
if, instead, you mix directions or do both at both endpoints, etc., then, 
yeah, that makes sense (and if so we can further analyze the scenario).


Paolo


On 12/5/21 11:44, Hendrik Meyburgh wrote:

Hi.

I have an issue where my setup is causing double counting when using 
sum_host using the below topology. The sampling is set up on the 
interface where the SRC is located and also on the peering edges. My 
config is below, is there something else which I can enable to stop this 
from happening? We are currently testing setting the same 
observation-domain-id for both routers to see if that will help.


Thank you.

              +---------+          +---------+
SRC --------- | Router1 | <------> | Router2 | --------- DST
   Sampling   +----+----+          +----+----+   Sampling
                   |                    |
                   +---------+----------+
                             |
                        +----+-----+
                        |  pmacct  |
                        +----------+

daemonize: true

nfacctd_port: 2100

logfile: /var/log/nfacctd.log

!debug: true

plugins: print[SUM]


! Test2: disable below

nfacctd_renormalize: true

!nfacctd_ext_sampling_rate: 1024

nfacctd_pro_rating: true

!

nfacctd_time_new: true

aggregate[SUM]: sum_host

networks_file[SUM]: /root/pmacct/TARGETS

networks_file_filter[SUM]: true

print_cache_entries[SUM]: 1

print_refresh_time[SUM]: 300

print_history[SUM]: 5m

print_output[SUM]: csv

print_output_file[SUM]: /root/pmacct/SUM/file-%Y%m%d-%H%M.txt

print_history_roundoff[SUM]: m


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists





Re: [pmacct-discussion] Tee and Kafka plugins

2021-05-12 Thread Paolo Lucente



Hi Wilfrid,

Ah, i see & thanks for your kind words on the pmacct project.

You can define Kafka brokers and topics as part of the tee_receivers 
map: 
https://github.com/pmacct/pmacct/blob/master/examples/tee_receivers.lst.example#L29-#L32 
. But you can have only one set of configuration for all brokers for one 
single tee plugin (where you can define SSL stuff, buffers, etc.), hence 
the separate option tee_kafka_config_file (action item for me, i will 
augment the CONFIG-KEYS doc to specifically say all of this).


The above is all for the sender end. For the receiving end, please see 
CONFIG-KEYS for nfacctd_kafka_* options. In this case binary NetFlow 
data is shuffled as-is into Kafka (prepending some socket info so that 
the collector can reckon which router originally sent the datagram) and 
unpacked at the receiving end by nfacctd - all of this is "non 
standard", i mean, there is no such thing as using Kafka as transport 
for NetFlow, so this is a pmacct-to-pmacct thing. Essentially, the big 
picture is:


router -> nfacctd (tee) -> Kafka -> nfacctd (collector)

Paolo


On 12/5/21 10:52, Grassot, Wilfrid wrote:

Paolo,

Thank you for your answer.

My bad, my question was badly expressed, as I meant to ask specifically
about "tee plugins" sending replicated data straight to kafka.
https://github.com/pmacct/pmacct/commit/f660b083b505c969c623e15b1dbb9e27ffac

and related parameter tee_kafka_config_file
https://github.com/pmacct/pmacct/blob/master/ChangeLog#L579-L582

I cannot see where the tee plugin specifies the kafka topic to send
netflow to, and how Kafka can handle the netflow replicated from the tee
plugin. Somewhere there must be something on the Kafka side that handles
the conversion of netflow so that it ends up in a topic in a data format
retrievable from that topic.

I hope my question is a bit clearer.

Thanks for your usual support and for your pmacct swiss knife

Wilfrid

  








-Original Message-
From: Paolo Lucente 
Sent: Tuesday, 11 May 2021 04:52
To: pmacct-discussion@pmacct.net; Grassot, Wilfrid

Subject: Re: [pmacct-discussion] Tee and Kafka plugins


Hi Wilfrid,

Your understanding is correct although replication and collection are two
separate pieces. You can have 1) a nfacctd replicator, that is binary
NetFlow to binary NetFlow, where you could fan-out and filter pieces of
your original export (to different collector, apps, etc.) and
2) a nfacctd collector that is parsing binary NetFlow and sending into
Kafka; this piece should be business as usual and encodings you can pick
are JSON or Apache Avro.

You can find examples and some introductory elaboration around the
replication (tee) plugin here:
https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L2105-#L2167

Paolo

On 10/5/21 12:28, Grassot, Wilfrid wrote:

Hi Paolo,

I understand that we can configure tee to replicate filtered datagram
with pre-tag.map to a kafka broker.

I am novice to Kafka, but does it mean the data sent to Kafka is in
json format ? If not which other data format is it sent to Kafka ?

I am not sure either how would look like the configuration nfacct with
the tee plugin to replicate netflow packet to a kafka broker, would
you have an example ?

Many thanks .

Wilfrid


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists







Re: [pmacct-discussion] Tee and Kafka plugins

2021-05-10 Thread Paolo Lucente



Hi Wilfrid,

Your understanding is correct although replication and collection are 
two separate pieces. You can have 1) a nfacctd replicator, that is 
binary NetFlow to binary NetFlow, where you could fan-out and filter 
pieces of your original export (to different collector, apps, etc.) and 
2) a nfacctd collector that is parsing binary NetFlow and sending into 
Kafka; this piece should be business as usual and encodings you can pick 
are JSON or Apache Avro.


You can find examples and some introductory elaboration around the 
replication (tee) plugin here: 
https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L2105-#L2167


Paolo

On 10/5/21 12:28, Grassot, Wilfrid wrote:

Hi Paolo,

I understand that we can configure tee to replicate filtered datagram 
with pre-tag.map to a kafka broker.


I am novice to Kafka, but does it mean the data sent to Kafka is in json 
format ? If not which other data format is it sent to Kafka ?


I am not sure either how would look like the configuration nfacct with 
the tee plugin to replicate netflow packet to a kafka broker, would you 
have an example ?


Many thanks .

Wilfrid


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists





Re: [pmacct-discussion] Another src_as / dst_as problem

2021-05-06 Thread Paolo Lucente


Hi Cedric,

Thanks for following up. This line in your log, "WARN ( default/core ): 
connection lost to 'ip-nfprobe'; closing connection.", tells me that 
there may be more to it: it actually seems the plugin crashes and, as a 
result, you should stop receiving any (good or bad) data.


I tried to reproduce the issue at my end but failed, ie. it all works 
fine using your config. Since we have a crash, what would help to better 
nail the problem is if you can return me some info about it; here 
is how:


https://github.com/pmacct/pmacct/blob/1.7.6/QUICKSTART#L2864-#L2884

You can return me the info here or by unicast email, as nobody else would 
be much interested in the back and forth of the troubleshooting process. 
An express route to resolution could be to get access to your environment, 
if it is reachable via ssh (no screen sharing) and non-production.
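
For reference, the generic shape of that info gathering (the linked 
QUICKSTART section is authoritative; binary and core paths are 
placeholders):

# allow core dumps, then run the daemon in foreground until it crashes
ulimit -c unlimited
pmacctd -f /etc/pmacct/pmacctd.netflow.conf
# inspect the resulting core file and send back the backtrace
gdb /usr/local/sbin/pmacctd /path/to/core
(gdb) bt full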


Paolo


On 6/5/21 08:23, BASSAGET Cédric wrote:

Hello Paolo.
Just had a remote session with Luca Dari from ntopng. Seems the 
starttime/endtime in the flows are not correct either:


     Timestamp: May  6, 2021 08:11:03.0 CEST
         ExportTime: 1620281463
     FlowSequence: 34583266
     Observation Domain Id: 0
     Set 1 [id=1024] (4 flows)
         FlowSet Id: (Data) (1024)
         FlowSet Length: 308
         [Template Frame: 9]
         Flow 1
             [Duration: 877515505.66400 seconds (milliseconds)]
                 StartTime: Nov 13, 112781 17:46:47.0 CET
                 EndTime: May 28, 511486763 17:23:04.66400 CET

I can provide  you a full capture if needed.
Regards
Cédric

On Wed 5 May 2021 at 15:26, BASSAGET Cédric 
<mailto:cedric.bassaget...@gmail.com> wrote:


Hello Paolo :)

I was running :
# pmacctd -V
Promiscuous Mode Accounting Daemon, pmacctd 1.7.2-git (20181018-00+c3)
3.0-0.bpo.2-amd64 #1 SMP Debian 5.3.9-2~bpo10+1 (2019-11-13) x86_64


I tried to compile the GitHub release yesterday but it failed. Tried
again a few minutes ago and compilation seems to work now.
pmacctd 1.7.7-git (20210505-1 (3edef0c3))

but unfortunately I have the same problem : src_as / dst_as field is
still 0 :(

Regards
Cédric


On Tue 4 May 2021 at 21:27, Paolo Lucente <mailto:pa...@pmacct.net> wrote:


Hi Cedric,

It seems this should work. Can you confirm what version are you
using? a
"pmacctd -V" would do so that i try to reproduce (and/or
encourage you
to get to 1.7.6 or master code on GitHub 8-)).

Paolo

On 4/5/21 14:56, BASSAGET Cédric wrote:
 > Hello,
 > I'm (once again) trying to export netflow from a Linux / bird
router to
 > an external probe. But I can't get src_as / dst_as in my
netflow export...
 >
 > bgp session between pmacct and bird is OK :
 > bird> show route export pmacct count
 > 871845 of 2695832 routes for 876157 networks
 >
 > if I set a "bgp_table_dump_file" file, it is filled with the
full-view
 > content (stuff like :
 >
 > {"timestamp": "2021-05-04 14:40:00", "peer_ip_src": "127.0.0.1",
 > "peer_tcp_port": 60836, "event_type": "dump", "afi": 1,
"safi": 1,
 > "ip_prefix": "1.22.148.0/24 <http://1.22.148.0/24>
<http://1.22.148.0/24 <http://1.22.148.0/24>>", "bgp_nexthop":
 > "149.14.152.113", "as_path": "174 6453 4755 45528 45528 45528
45528
 > 45528", "comms": "174:21100 174:22008", "origin": 0,
"local_pref": 100,
 > "med": 2021}
 >
 > note that pmacctd stops with the following warning when it
has finished
 > to write this file :
 > INFO ( default/core/BGP ): *** Dumping BGP tables - START
(PID: 9379) ***
 > INFO ( default/core/BGP ): *** Dumping BGP tables - END (PID:
9379,
 > TABLES: 2 ET: 8) ***
 > WARN ( default/core ): connection lost to 'ip-nfprobe';
closing connection.
 > WARN ( default/core ): no more plugins active. Shutting down.
 >
 > Here's my config :
 >
 > # cat /etc/pmacct/pmacctd.netflow.conf
 > debug: false
 > daemonize: false
 > interface: bond0
 > aggregate: etype, tag, src_host, dst_host, src_port,
dst_port, proto,
 > tos, src_as, dst_as, vlan
 >
 > nfprobe_version: 10
 > plugins: nfprobe[ip]
 >
 > nfprobe_receiver[ip]: 192.168.156.109:4739

Re: [pmacct-discussion] Another src_as / dst_as problem

2021-05-04 Thread Paolo Lucente


Hi Cedric,

It seems this should work. Can you confirm what version are you using? a 
"pmacctd -V" would do so that i try to reproduce (and/or encourage you 
to get to 1.7.6 or master code on GitHub 8-)).


Paolo

On 4/5/21 14:56, BASSAGET Cédric wrote:

Hello,
I'm (once again) trying to export netflow from a Linux / bird router to 
an external probe. But I can't get src_as / dst_as in my netflow export...


bgp session between pmacct and bird is OK :
bird> show route export pmacct count
871845 of 2695832 routes for 876157 networks

if I set a "bgp_table_dump_file" file, it is filled with the full-view 
content (stuff like :


{"timestamp": "2021-05-04 14:40:00", "peer_ip_src": "127.0.0.1", 
"peer_tcp_port": 60836, "event_type": "dump", "afi": 1, "safi": 1, 
"ip_prefix": "1.22.148.0/24 ", "bgp_nexthop": 
"149.14.152.113", "as_path": "174 6453 4755 45528 45528 45528 45528 
45528", "comms": "174:21100 174:22008", "origin": 0, "local_pref": 100, 
"med": 2021}


note that pmacctd stops with the following warning when it has finished 
to write this file :

INFO ( default/core/BGP ): *** Dumping BGP tables - START (PID: 9379) ***
INFO ( default/core/BGP ): *** Dumping BGP tables - END (PID: 9379, 
TABLES: 2 ET: 8) ***

WARN ( default/core ): connection lost to 'ip-nfprobe'; closing connection.
WARN ( default/core ): no more plugins active. Shutting down.

Here's my config :

# cat /etc/pmacct/pmacctd.netflow.conf
debug: false
daemonize: false
interface: bond0
aggregate: etype, tag, src_host, dst_host, src_port, dst_port, proto, 
tos, src_as, dst_as, vlan


nfprobe_version: 10
plugins: nfprobe[ip]

nfprobe_receiver[ip]: 192.168.156.109:4739 
nfprobe_timeouts[ip]: tcp=120:maxlife=3600
pmacctd_flow_lifetime: 30

sampling_rate: 10

pmacctd_as: bgp
bgp_daemon: true
bgp_daemon_ip: 127.0.0.1
!bgp_daemon_ip: ::
bgp_daemon_as: 203xxx
bgp_daemon_port: 17917
bgp_agent_map: /etc/pmacct/bgp_agent_map.map
bgp_peer_as_skip_subas: true
bgp_peer_src_as_type: bgp
! pre_tag_map: /etc/pmacct/pretag.map

! bgp_table_dump_file: /tmp/bgp-$peer_src_ip-%H%M.log
! bgp_table_dump_refresh_time: 600

# cat /etc/pmacct/bgp_agent_map.map
bgp_ip=185.x.y.z ip=0.0.0.0/0 


Can somebody tell me what I'm missing ? I used to make it work about 1 
year ago... long time ago !


Thanks a lot for you help.
Regards
Cédric

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists



___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Kafka purge timing

2021-04-16 Thread Paolo Lucente


Hi Hendrik,

You may see these messages appearing in your log (i can spot one in your
excerpt in your previous email): "Finished cache entries (ie.
print_cache_entries). Purging.". This is the reason for the intermediate
purges. You have more entries to store for the 300 seconds interval than
available cache entries; for the Kafka plugin the default is 16411. Set
kafka_cache_entries to something (ideally a prime number) greater than
that. Dunno, go for 1. See if it works and take it from there (to
further increase it or, if unhappy with memory consumed, reduce it).
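
For instance, a sketch of the relevant line (the value is illustrative,
a prime sized above the largest QN seen in your purge logs):

kafka_cache_entries: 524287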

Paolo

On Fri, Apr 16, 2021 at 07:36:38AM +0200, Hendrik Meyburgh wrote:
> Hi.
> 
> I have a problem where the data is being purged in multiple intervals and
> this is causing a calculation issue, as it doesn't seem to be resetting the
> values back to zero after each purge. My understanding is
> that kafka_refresh_time should set it to fixed values, in this case 300
> seconds, but from the logs it is clear that it is not happening like that.
> Am I missing a config option or am I misunderstanding the configuration?
> 
> Some sample data for the IP x.x.x.x, it is the same IP for both events:
> {"event_type": "purge", "ip_src": "x.x.x.x", "stamp_inserted": "2021-04-16
> 07:10:00", "stamp_updated": "2021-04-16 07:14:00", "packets": 12288,
> "bytes": 9193472, "writer_id": "subscriber_usage/20793"}
> 
> {"event_type": "purge", "ip_src": "x.x.x.x", "stamp_inserted": "2021-04-16
> 07:10:00", "stamp_updated": "2021-04-16 07:15:03", "packets": 12288,
> "bytes": 13235200, "writer_id": "subscriber_usage/20945"}
> 
> Some of the logs
> 
> 2021-04-16T07:20:01+02:00 INFO ( subscriber_usage/kafka ): *** Purging
> cache - START (PID: 21144) ***
> 
> 2021-04-16T07:20:05+02:00 INFO ( subscriber_usage/kafka ): *** Purging
> cache - END (PID: 21144, QN: 106679/106679, ET: 2) ***
> 
> 2021-04-16T07:23:51+02:00 INFO ( subscriber_usage/kafka ): Finished cache
> entries (ie. print_cache_entries). Purging.
> 
> 2021-04-16T07:23:51+02:00 INFO ( subscriber_usage/kafka ): *** Purging
> cache - START (PID: 21194) ***
> 
> 2021-04-16T07:23:56+02:00 INFO ( subscriber_usage/kafka ): *** Purging
> cache - END (PID: 21194, QN: 180344/180344, ET: 3) ***
> 
> 2021-04-16T07:25:01+02:00 INFO ( subscriber_usage/kafka ): *** Purging
> cache - START (PID: 21412) ***
> 
> 2021-04-16T07:25:05+02:00 INFO ( subscriber_usage/kafka ): *** Purging
> cache - END (PID: 21412, QN: 109942/109942, ET: 2) ***
> 
> Current running configuration.
> 
> daemonize: true
> 
> logfile: /var/log/nfacctd.log
> 
> !debug: true
> 
> plugins: kafka[subscriber_usage]
> 
> aggregate[subscriber_usage]: sum_host
> 
> nfacctd_renormalize: true
> 
> !nfacctd_ext_sampling_rate: 1024
> 
> nfacctd_pro_rating: true
> 
> !
> 
> nfacctd_time_new: true
> 
> !
> 
> kafka_output: json
> 
> kafka_topic: pmacct.test
> 
> kafka_refresh_time: 300
> 
> kafka_history: 5m
> 
> kafka_history_roundoff: m
> 
> kafka_broker_host: a.a.a.a
> 
> 
> Thank you.

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] crashes since update to 1.7.6

2021-04-14 Thread Paolo Lucente
)
==3277==  If you believe this happened as a result of a stack
==3277==  overflow in your program's main thread (unlikely but
==3277==  possible), you can try to increase the size of the
==3277==  main thread stack using the --main-stacksize= flag.
==3277==  The main thread stack size used in this run was 8388608.
==3271==
==3271== HEAP SUMMARY:
==3271== in use at exit: 17,759,425 bytes in 3,899 blocks
==3271==   total heap usage: 4,945 allocs, 1,046 frees, 18,407,103 bytes 
allocated

==3271==
==3277==
==3277== HEAP SUMMARY:
==3277== in use at exit: 17,789,001 bytes in 3,937 blocks
==3277==   total heap usage: 5,007 allocs, 1,070 frees, 18,471,846 bytes 
allocated

==3277==
==3271== LEAK SUMMARY:
==3271==    definitely lost: 936 bytes in 2 blocks
==3271==    indirectly lost: 32,276 bytes in 71 blocks
==3271==  possibly lost: 640 bytes in 2 blocks
==3271==    still reachable: 17,725,573 bytes in 3,824 blocks
==3271== suppressed: 0 bytes in 0 blocks
==3271== Rerun with --leak-check=full to see details of leaked memory
==3271==
==3271== For lists of detected and suppressed errors, rerun with: -s
==3271== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
==3277== LEAK SUMMARY:
==3277==    definitely lost: 936 bytes in 2 blocks
==3277==    indirectly lost: 32,276 bytes in 71 blocks
==3277==  possibly lost: 640 bytes in 2 blocks
==3277==    still reachable: 17,755,149 bytes in 3,862 blocks
==3277== suppressed: 0 bytes in 0 blocks
==3277== Rerun with --leak-check=full to see details of leaked memory
==3277==
==3277== For lists of detected and suppressed errors, rerun with: -s
==3277== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
==3267== LEAK SUMMARY:
==3267==    definitely lost: 3,528 bytes in 6 blocks
==3267==    indirectly lost: 3,452 bytes in 35 blocks
==3267==  possibly lost: 4,800 bytes in 15 blocks
==3267==    still reachable: 882,988 bytes in 493 blocks
==3267==   of which reachable via heuristic:
==3267== multipleinheritance: 6,016 bytes in 4 
blocks

==3267== suppressed: 0 bytes in 0 blocks
==3267== Rerun with --leak-check=full to see details of leaked memory
==3267==
==3267== Use --track-origins=yes to see where uninitialised values come 
from

==3267== For lists of detected and suppressed errors, rerun with: -s
==3267== ERROR SUMMARY: 3 errors from 1 contexts (suppressed: 0 from 0)

I just started up uacctd and then shout it down again to force a write 
in mysql database.

Hope it helps to find the issue.

regards

Goeran

On 14.04.2021 at 14:39, Paolo Lucente wrote:


Hi Goran,

Can you please gather more information about the crash following these 
instructions:


https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L2876-#L2896

Output from either a gdb back trace or valgrind would be of help.

Paolo 



___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] crashes since update to 1.7.6

2021-04-14 Thread Paolo Lucente


Hi Goran,

Can you please gather more information about the crash following these 
instructions:


https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L2876-#L2896

Output from either a gdb back trace or valgrind would be of help.
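
As a generic illustration (the config path is an assumption), a valgrind
run could look like this:

# run the daemon in foreground under valgrind and reproduce the crash
valgrind --leak-check=full --track-origins=yes uacctd -f /etc/pmacct/uacctd.conf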

Paolo


On 13/04/2021 19:40, Göran Bruns wrote:

Hi there,

since the update to version 1.7.6, I noticed crashes of uacctd in log 
files.


It looks like this:

2021-04-13 19:07:45.781 info kernel: uacctd[4955]: segfault at 0 ip 
7fbe399c7acd sp 7ffed50ed338 error 4 in 
libc-2.32.so[7fbe39947000+143000]
2021-04-13 19:07:45.781 info kernel: Code: ff 0f 00 00 66 0f 60 c9 48 3d 
bf 0f 00 00 66 0f 60 d2 66 0f 61 c9 66 0f 61 d2 66 0f 70 c9 00 66 0f 70 
d2 00 0f 87 03 03 00 00  0f 6f 1f 66 0f ef ed f3 0f 6f 67 01 66 0f 
6f f3 66 0f 74 d9 66


It seems related to writing to the MySQL database. It occurs in intervals 
and there is no new traffic data in the db.
I read the quickstart guide. The system is not memory constrained, uacctd 
runs as root and the configuration did not change.
So before I try to recompile with debug  ... are there any other things 
I can do to solve the problem ?


regards

Goeran


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists



___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] nfprobe vs. print plugin with ESP flows

2021-04-14 Thread Paolo Lucente



Hi Sean,

I must admit this email thread went 'Read' in my email client and i lost 
track of it. Please allow me a bit of time this week to get through it. 
Apologies for the inconvenience.


Paolo

On 13/04/2021 16:41, Sean wrote:

Hi Paolo,

I was curious if you received and have had a chance to look at the
pcap you requested.  I am still struggling to set up this netflow
accounting for my routers.  Thanks!

--Sean

On Mon, Mar 15, 2021 at 11:51 AM Sean  wrote:


Thanks for taking a look.  I have sent the attachments directly to you.

--Sean

On Sun, Mar 14, 2021 at 11:16 AM Paolo Lucente  wrote:



Hi Sean,

It smells like a bug. May i ask you to send me a brief capture of some
of these ESP packets by unicast email? It would allow me to reproduce
the issue. You can do that with tcpdump, in case you are not familiar
with it something a-la "tcpdump -i <interface> -s 0 -n -w <file>
esp" should do it; then press CTRL+C to exit and make sure the file has
a positive size.

Paolo

On 12/03/2021 19:04, Sean wrote:

Hi all,

I just joined the list, and just started tinkering at pmacct. The gist
of what I'm trying to do is generate netflow data on two linux servers
acting as routers with Free Range Routing (FRR) software.  The routers
are mostly passing IPSEC tunnels, I want to use the netflow data to
track bandwidth utilization for each tunnel.

I notice when I use the print plugin on the router(s) that I can see
flows for ESP -
SRC_IP           DST_IP           SRC_PORT  DST_PORT  PROTOCOL  TOS  PACKETS  BYTES
192.168.192.100  192.168.0.100    0         0         esp       0    44       25696
192.168.0.100    192.168.192.100  0         0         esp       0    22       12848

For the running pmacct configuration, I use the nfprobe plugin and
send to a remote netflow receiver.  The trouble is that on the
receiver, I am only seeing flows for protoid 17, which is just UDP.
Would anyone here have an idea what I need to do to get nfprobe to
send the ESP flows to my receiver?

My config -
daemonize: true
debug: true
syslog: daemon
aggregate: src_host, dst_host, src_port, dst_port, proto, tos
plugins: nfprobe
nfprobe_receiver: 192.168.192.10:9995
nfprobe_version: 10
nfprobe_source_ip: 192.168.192.2


--Sean

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists





___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] nfprobe vs. print plugin with ESP flows

2021-03-14 Thread Paolo Lucente



Hi Sean,

It smells like a bug. May i ask you to send me a brief capture of some 
of these ESP packets by unicast email? It would allow me to reproduce 
the issue. You can do that with tcpdump, in case you are not familiar 
with it something a-la "tcpdump -i <interface> -s 0 -n -w <file> 
esp" should do it; then press CTRL+C to exit and make sure the file has 
a positive size.


Paolo

On 12/03/2021 19:04, Sean wrote:

Hi all,

I just joined the list, and just started tinkering at pmacct. The gist
of what I'm trying to do is generate netflow data on two linux servers
acting as routers with Free Range Routing (FRR) software.  The routers
are mostly passing IPSEC tunnels, I want to use the netflow data to
track bandwidth utilization for each tunnel.

I notice when I use the print plugin on the router(s) that I can see
flows for ESP -
SRC_IP           DST_IP           SRC_PORT  DST_PORT  PROTOCOL  TOS  PACKETS  BYTES
192.168.192.100  192.168.0.100    0         0         esp       0    44       25696
192.168.0.100    192.168.192.100  0         0         esp       0    22       12848

For the running pmacct configuration, I use the nfprobe plugin and
send to a remote netflow receiver.  The trouble is that on the
receiver, I am only seeing flows for protoid 17, which is just UDP.
Would anyone here have an idea what I need to do to get nfprobe to
send the ESP flows to my receiver?

My config -
daemonize: true
debug: true
syslog: daemon
aggregate: src_host, dst_host, src_port, dst_port, proto, tos
plugins: nfprobe
nfprobe_receiver: 192.168.192.10:9995
nfprobe_version: 10
nfprobe_source_ip: 192.168.192.2


--Sean

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Can't add AS number to netflow export

2021-02-15 Thread Paolo Lucente


Hi Michal,

The deal breaker is the format / encoding. If you can consume 
JSON-decoded NetFlow then possibilities are pretty much infinite. If you 
want the binary NetFlow / IPFIX encoding then, unfortunately, no joy.


Paolo

On 15/02/2021 10:49, Michał Margula wrote:

Hi Paolo,

Thank you for your reply. I really was hoping it would work :). Do you 
think it is still possible with nfacctd and just dumping traffic on the 
ethernet interface instead of receiving netflow?


On Mon, 15 Feb 2021 at 01:07, Paolo Lucente <mailto:pa...@pmacct.net> wrote:



Hi Michal,

Similar topic was discussed recently on the list (*) but, as you can
see, the broad generic answer to it is negative.

Paolo

(*)
https://www.mail-archive.com/pmacct-discussion@pmacct.net/msg04028.html


On 14/02/2021 22:34, Michał Margula wrote:
 > Hi,
 >
 > I am trying to achieve following setup with pmacct:
 > - receive netflow export from X that does not contain AS numbers
 > - resend it to Y but adding AS number information
 >
 > I was able to configure BGP peering with one of our routers
(tried both
 > with Cisco and FRR). I tried both eBGP and IBGP. I confirmed both
on the
 > router and the pmacct (via bgp_table_dump_file) that I am correctly
 > receiving the BGP feed. I also tried two versions of
bgp_agent_map - one
 > with router-id of the router and another with just the IP I am
peering
 > with under bgp_ip.
 >
 > Then I tried with pmacctd instead of nfacctd  but with no luck. AS
 > numbers are always empty in netflow export, it is the same when I do
 > pmacct -s -a. This is the config I used for nfacctd:
 >
 > root@netflow:/home/alchemyx# cat /etc/pmacct/nfacctd.conf
 > ! nfacctd configuration
 > !
 > !
 > !
 > daemonize: true
 > pidfile: /var/run/nfacctd.pid
 > syslog: daemon
 >
 > nfacctd_ip: 127.0.0.1
 > nfacctd_port: 2100
 > root@netflow:/home/alchemyx# cat /etc/pmacct/nfacctd.conf
 > ! nfacctd configuration
 > !
 > !
 > !
 > daemonize: true
 > pidfile: /var/run/nfacctd.pid
 > syslog: daemon
 >
 > nfacctd_ip: 127.0.0.1
 > nfacctd_port: 2100
 >
 > bgp_daemon: true
 > bgp_daemon_ip: 192.168.223.10
 > bgp_daemon_max_peers: 100
 > bgp_daemon_as: 65535
 > bgp_agent_map: /etc/pmacct/bgp_agent.map
 > nfacctd_as: bgp
 >
 > plugins: tee[a]
 > tee_receivers[a]: /etc/pmacct/tee_nflow_receivers.lst
 > root@netflow:/home/alchemyx# cat /etc/pmacct/bgp_agent.map
 > bgp_ip=xxx.yyy.zz.1 ip=0.0.0.0/0 <http://0.0.0.0/0>
 >
 > root@netflow:/home/alchemyx# cat /etc/pmacct/tee_nflow_receivers.lst
 > id=1 ip=192.168.222.9:7779 <http://192.168.222.9:7779>
 >
 >
 > bgp_daemon: true
 > bgp_daemon_ip: 192.168.223.10
 > bgp_daemon_max_peers: 100
 > bgp_daemon_as: 65535
 > bgp_agent_map: /etc/pmacct/bgp_agent.map
 > nfacctd_as: bgp
 >
 > plugins: tee[a]
 > tee_receivers[a]: /etc/pmacct/tee_nflow_receivers.lst
 > root@netflow:/home/alchemyx# cat /etc/pmacct/bgp_agent.map
 > bgp_ip=xxx.yyy.zz.1 ip=0.0.0.0/0 <http://0.0.0.0/0>
 >
 > root@netflow:/home/alchemyx# cat /etc/pmacct/tee_nflow_receivers.lst
 > id=1 ip=192.168.222.9:7779 <http://192.168.222.9:7779>
 >
 > And this is pmacctd config I used:
 >
 > root@netflow:/home/alchemyx# cat /etc/pmacct/pmacctd.conf
 > ! pmacctd configuration
 > !
 > !
 > !
 > daemonize: true
 > pidfile: /var/run/pmacctd.pid
 > syslog: daemon
 >
 > promisc: true
 > aggregate: src_host,dst_host
 > interface: ens16f0
 > pmacctd_as: bgp
 > pmacctd_net: bgp
 >
 > nfprobe_receiver: 192.168.222.9:7779 <http://192.168.222.9:7779>
 > nfprobe_version: 9
 >
 > bgp_daemon: true
 > bgp_daemon_ip: 192.168.223.10
 > bgp_daemon_max_peers: 100
 > bgp_daemon_as: 205679
 > bgp_agent_map: /etc/pmacct/bgp_agent.map
 > plugin_buffer_size: 409600
 > plugin_pipe_size: 40960
 >
 > And bgp_agent.map is the same. I feel like I am missing something
 > obvious, but can't find it. Any help would be greatly appreciatd.
 >
 > Kind regards,
 > Michał
 >



--
Michał Margula, mic...@margula.pl <mailto:mic...@margula.pl>
"W życiu piękne są tylko chwile" [Ryszard Riedel]

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Can't add AS number to netflow export

2021-02-14 Thread Paolo Lucente


Hi Michal,

Similar topic was discussed recently on the list (*) but, as you can 
see, the broad generic answer to it is negative.


Paolo

(*) https://www.mail-archive.com/pmacct-discussion@pmacct.net/msg04028.html


On 14/02/2021 22:34, Michał Margula wrote:

Hi,

I am trying to achieve following setup with pmacct:
- receive netflow export from X that does not contain AS numbers
- resend it to Y but adding AS number information

I was able to configure BGP peering with one of our routers (tried both 
with Cisco and FRR). I tried both eBGP and IBGP. I confirmed both on the 
router and the pmacct (via bgp_table_dump_file) that I am correctly 
receiving the BGP feed. I also tried two versions of bgp_agent_map - one 
with router-id of the router and another with just the IP I am peering 
with under bgp_ip.


Then I tried with pmacctd instead of nfacctd  but with no luck. AS 
numbers are always empty in netflow export, it is the same when I do 
pmacct -s -a. This is the config I used for nfacctd:


root@netflow:/home/alchemyx# cat /etc/pmacct/nfacctd.conf
! nfacctd configuration
!
!
!
daemonize: true
pidfile: /var/run/nfacctd.pid
syslog: daemon

nfacctd_ip: 127.0.0.1
nfacctd_port: 2100
root@netflow:/home/alchemyx# cat /etc/pmacct/nfacctd.conf
! nfacctd configuration
!
!
!
daemonize: true
pidfile: /var/run/nfacctd.pid
syslog: daemon

nfacctd_ip: 127.0.0.1
nfacctd_port: 2100

bgp_daemon: true
bgp_daemon_ip: 192.168.223.10
bgp_daemon_max_peers: 100
bgp_daemon_as: 65535
bgp_agent_map: /etc/pmacct/bgp_agent.map
nfacctd_as: bgp

plugins: tee[a]
tee_receivers[a]: /etc/pmacct/tee_nflow_receivers.lst
root@netflow:/home/alchemyx# cat /etc/pmacct/bgp_agent.map
bgp_ip=xxx.yyy.zz.1 ip=0.0.0.0/0

root@netflow:/home/alchemyx# cat /etc/pmacct/tee_nflow_receivers.lst
id=1 ip=192.168.222.9:7779


bgp_daemon: true
bgp_daemon_ip: 192.168.223.10
bgp_daemon_max_peers: 100
bgp_daemon_as: 65535
bgp_agent_map: /etc/pmacct/bgp_agent.map
nfacctd_as: bgp

plugins: tee[a]
tee_receivers[a]: /etc/pmacct/tee_nflow_receivers.lst
root@netflow:/home/alchemyx# cat /etc/pmacct/bgp_agent.map
bgp_ip=xxx.yyy.zz.1 ip=0.0.0.0/0

root@netflow:/home/alchemyx# cat /etc/pmacct/tee_nflow_receivers.lst
id=1 ip=192.168.222.9:7779

And this is pmacctd config I used:

root@netflow:/home/alchemyx# cat /etc/pmacct/pmacctd.conf
! pmacctd configuration
!
!
!
daemonize: true
pidfile: /var/run/pmacctd.pid
syslog: daemon

promisc: true
aggregate: src_host,dst_host
interface: ens16f0
pmacctd_as: bgp
pmacctd_net: bgp

nfprobe_receiver: 192.168.222.9:7779
nfprobe_version: 9

bgp_daemon: true
bgp_daemon_ip: 192.168.223.10
bgp_daemon_max_peers: 100
bgp_daemon_as: 205679
bgp_agent_map: /etc/pmacct/bgp_agent.map
plugin_buffer_size: 409600
plugin_pipe_size: 40960

And bgp_agent.map is the same. I feel like I am missing something 
obvious, but can't find it. Any help would be greatly appreciated.


Kind regards,
Michał




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


[pmacct-discussion] pmacct 1.7.6 released !

2021-02-07 Thread Paolo Lucente


VERSION.
1.7.6


DESCRIPTION.
pmacct is a small set of multi-purpose passive network monitoring tools. It
can account, classify, aggregate, replicate and export forwarding-plane data,
ie. IPv4 and IPv6 traffic; collect and correlate control-plane data via BGP
and BMP; collect and correlate RPKI data; collect infrastructure data via
Streaming Telemetry. Each component works both as a standalone daemon and
as a thread of execution for correlation purposes (ie. enrich NetFlow with
BGP data).

A pluggable architecture allows storing collected forwarding-plane data into
memory tables, RDBMS (MySQL, PostgreSQL, SQLite), noSQL databases (MongoDB,
BerkeleyDB), AMQP (RabbitMQ) and Kafka message exchanges and flat-files.
pmacct offers customizable historical data breakdown, data enrichments like
BGP and IGP correlation and GeoIP lookups, filtering, tagging and triggers.
Libpcap, Linux Netlink/NFLOG, sFlow v2/v4/v5, NetFlow v5/v8/v9 and IPFIX are
all supported as inputs for forwarding-plane data. Replication of incoming
NetFlow, IPFIX and sFlow datagrams is also available. Collected data can
be easily exported (ie. via Kafka) to modern databases like ElasticSearch,
Apache Druid and ClickHouse and (ie. via flat-files) to classic tools
Cacti, RRDtool and MRTG, etc.

Control-plane and infrastructure data, collected via BGP, BMP and Streaming
Telemetry, can be all logged real-time or dumped at regular time intervals
to AMQP (RabbitMQ) and Kafka message exchanges and flat-files.


HOMEPAGE.
http://www.pmacct.net/


DOWNLOAD.
http://www.pmacct.net/pmacct-1.7.6.tar.gz


CHANGELOG.
+ Added dependency to libcdada in an effort to streamline basic
  data structures needed for everyday coding. All new structures
  will make use of libcdada, old ones will be ported over time.
  Libcdada offers basic data structures in C: ie. list, set, map/
  hash table, queue and is a libstdc++ wrapper. Many thanks to
  Marc Sune ( @msune ) for his work with libcdada and his enormous
  help facilitating the integration.
+ BGP daemon: added support for Accumulated IGP Metric Attribute
  (AIGP) and Label-Index TLV of Prefix-SID Attribute.
+ BGP daemon: added SO_KEEPALIVE TCP socket option (ie. to keep the
  sessions alive via a firewall / NAT kind of device). Thanks to
  Jared Mauch ( @jaredmauch ) for his patch.
+ BGP daemon: if comparing source TCP ports among BGP peers is
  being enabled (config directive tmp_bgp_lookup_compare_ports),
  print also BGP Router-ID as distinguisher as part of log/dump
  output.
+ BMP daemon: added support for HAProxy Proxy Protocol Header in
  the first BMP message in order to determine the original sender
  IP address and port. The new bmp_daemon_parse_proxy_header config
  directive enables the feature. Contribution is by Peter Pothier
  ( @pothier-peter ).
+ BMP daemon: improved support and brought implementation on par
  with the latest drafting efforts at IETF wrt draft-cppy-grow-bmp-
  path-marking-tlv, draft-xu-grow-bmp-route-policy-attr-trace,
  draft-ietf-grow-bmp-tlv and draft-lucente-grow-bmp-tlv-ebit.
+ BMP daemon: added 'bgp_agent_map' equivalent feature for BMP.
+ nfacctd, nfprobe plugin: added support for collection and export
  of NetFlow/IPFIX data over Datagram Transport Layer Security (in
  short DTLS). The feature depends on the GnuTLS library.
+ nfacctd: added support for deprecated NetFlow v9 IE #104
  (layer2packetSectionData) as it is implemented for NetFlow-lite
  on Cisco devices. Reused code from IPFIX IE #315.
+ nfacctd: added support for MPLS VPN RD IE #90. This comes in two
  flavours both found across vendor implementations: 1) IE present
  in flow data and 2) IE present in Options data as a lookup from
  IE #234 (ingressVRFID) and #235 (egressVRFID).
+ nfacctd: added a new timestamp_export aggregation primitive to
  record the timestamp being carried in the header of NetFlow/IPFIX
  messages (that is, the time at which the export was performed).
+ nfprobe plugin: added support for ICMP/ICMPv6 information as part
  of the NetFlow/IPFIX export. The piece of info is encoded in the
  destination port field as per the current common understandings
  across vendors. As a result of that, the 'dst_port' primitive is
  to be part of the aggregation method in order to leverage this
  feature.
+ MySQL plugin: introduced support to connect to a MySQL server
  via UNIX sockets.
+ tee plugin: added crc32 hash algorithm as a new balancing option
  for nodes in the receiving pool. It hashes original exporter IP
  address against a crc32 function. Thanks to @edge-intelligence
  for the contribution.
+ Massive improvements carried out on the Continuous Integration
  (CI) side, ie. to ensure better quality of the code, and on the
  containerization side by offering official stable / bleeding edge
  Docker images. Huge thanks to Marc Sune ( @msune ) to make all of
  this possible.
! fix, BGP daemon: re-worked internal structuring of 'modern' BGP
  attributes: for the sake of large-scale space 

Re: [pmacct-discussion] sfacctd, aggregation and reexport

2021-01-01 Thread Paolo Lucente



Hi Moo,

Unfortunately you are falling in the very same use-case of that message 
but using sFlow instead of NetFlow/IPFIX. Like you said, the first 3 can 
be done out of the box but - once sFlow is unpacked, you can't re-pack 
it with the additional info, ie. ASN information.


This is mainly because of UDP, ie. adding extra info takes more space 
and you may exceed the MTU, and trying to break one single message into 
multiple ones would screw up any original sequence numbers. In essence 
several corner cases have to be taken into account for the feature in 
order to work correctly and there is not enough traction / demand for me 
to invest time into it and put it on the roadmap. Of course, needless to 
say, any such contribution would be more than welcome.


Paolo

On 30/12/2020 13:25, mooli...@ik.me wrote:

Hello,

I would like to know if the following config is possible:

1. receive ipv4 and ipv6 sflow samples sent from a Juniper MX router
2. lookup ASN info via bgp_daemon thread
3. aggregate on etype,src_as,dst_as,peer_src_as,peer_dst_as
4. export aggregated stats to another sflow collector (as-stats)

1,2 and 3 works
but I don't see a way to do 4
nfprobe is only available for pmacctd
tee is just replicating stats

a similar question was asked in an old post : 
https://www.mail-archive.com/pmacct-discussion@pmacct.net/msg03675.html
but Paolo said it wasn't possible at that time.

is there another way to create aggregated sflow packets from nfacctd's stats?

Thanks,
Moo

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Flexible Netflow with Cisco ISR and nfacctd

2020-12-01 Thread Paolo Lucente



Hi Fabien,

With prior knowledge of the template, ie. either you start nfacctd with 
'-d' (debug) so to see the content of templates in the logs or collect 
some NetFlow in a pcap file and open it with WireShark, you could use 
the aggregate_primitives framework of pmacct to define custom primitives.


Essentially in the config you do 'aggregate_primitives: 
/path/to/primitives.lst'. Then for the actual content of the 
'primitives.lst' file, you can look here:


https://github.com/pmacct/pmacct/blob/1.7.5/examples/primitives.lst.example

Top part of the file you can read the knobs available; bottom part you 
are solely interested in the examples for NetFlow v9/IPFIX, ie. line 60, 
66 and 72.
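
As a purely illustrative sketch (primitive names and IE numbers below are
assumptions, to be adapted to what the ISR template actually exports), an
aggregate_primitives definition could look like this:

! primitives.lst
name=fwd_status  field_type=89   len=1  semantics=u_int
name=nat_event   field_type=230  len=1  semantics=u_int

! and then in the daemon config:
! aggregate_primitives: /path/to/primitives.lst
! aggregate: src_host, dst_host, proto, fwd_status, nat_event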


You can define custom primitives for pretty much anything but not for 
non-key dimensions, ie. packets and bytes, those have to be supported 
natively (although it's on the roadmap to make them also customizable) 
even though, frankly, that has never been an issue. Should you run in 
any issue with the counters, please send me an example pcap via unicast 
email and we'll find a solution.


Hope this helps for a start.

Paolo

On 30/11/2020 22:58, Fabien VINCENT wrote:

Hello,

I'm looking to do NetFlow v9, Flexible NetFlow to be honest, with 
nfacctd but can't find any good resources to play with nfacctd and 
aggregate primitives when having FNF exports.


Is there any documentation if the template is a bit "custom" on the Cisco 
ISR side? Sometimes, for some reason, the template is marked as 
unknown, or bytes/packets are null with nfacctd, and I can't find any 
information about how to configure or troubleshoot it.


Any helps / hints appreciated !




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] MySQL plugin processes terminating

2020-11-20 Thread Paolo Lucente



Hi Klaas,

Is it the main MySQL plugin failing on you or the writer processes (so 
the main MySQL plugin stays up and running)? Is it possible it is a 
simple memory issue, a-la you should throw more memory at it?


You can collect more info on the crash (which may be useful for debug 
and troubleshooting) with the instruction here:


https://github.com/pmacct/pmacct/blob/1.7.5/QUICKSTART#L2813-#L2828

As soon as you see malloc() appearing somewhere then it's a (lack of) 
memory issue.


In case, instead, writers are thrown away because they pass the 
configured writer limit (sql_max_writers, 10 by default), then you would 
find a note in the logs.
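
For reference, a minimal sketch of the knobs involved (values and paths
are illustrative):

! make plugin / writer errors visible
logfile: /var/log/pmacct/pmacct.log
! raise the cap on concurrent writer processes (default 10)
sql_max_writers: 20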


Paolo

On 20/11/2020 13:02, Klaas Tammling wrote:

Hi all,

I've got the issue that regularly the threads for the MySQL plugin seem 
to silently crash. Is there any easy way to monitor these processes and 
restart them if needed?


Thanks.

Regards,

Klaas

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Fragment/4 buffer full. Skipping fragments.

2020-11-20 Thread Paolo Lucente


Hi Pierre,

Maybe you need to increase the pmacctd_frag_buffer_size (by default 4MB 
and perhaps not sufficient for your traffic footprint):


https://github.com/pmacct/pmacct/blob/1.7.5/CONFIG-KEYS

Give that a try.
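
For example (the value is illustrative and, assuming it is expressed in
bytes, corresponds to 32MB):

pmacctd_frag_buffer_size: 33554432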

Paolo

On 20/11/2020 12:53, Pierre Grié wrote:

Hello,

We are using pmacct to generate Netflow v9 metrics. Yesterday, while we 
were under what we determined to be a heavy load of UDP fragmented 
packets, pmacct did not report any traffic peak. Our SNMP metrics 
reported 1Gbps+ of traffic at the same time.


We noticed the following log message multiple times around then:
"INFO ( default / core ): Fragment/4 buffer full. Skipping fragments.".

Could this message be linked to the behavior we're seeing? Could it be 
caused by a misconfiguration on our side?


Here's our current configuration:

-

daemonize: true
pcap_ifindex: sys
pcap_interfaces_map: /etc/pmacct/pcap_interfaces.map
aggregate: src_host, dst_host, src_port, dst_port, proto, tcpflags, 
src_as, dst_as

promisc: false
syslog: local0

plugins: nfprobe[xxx], nfprobe[xxx]
nfprobe_version: 9
nfprobe_source_ip: xxx

! Configuration for xxx

nfprobe_receiver[xxx]: xxx

nfprobe_direction[xxx]: tag
pre_tag_map[xxx]: /etc/pmacct/pretag.map
sampling_rate[xxx]: 200
plugin_pipe_size[xxx]: 12288000
plugin_buffer_size[xxx]: 122880

! Configuration for 

nfprobe_receiver[xxx]: xxx

nfprobe_direction[xxx]: tag
pre_tag_map[xxx]: /etc/pmacct/pretag.map
sampling_rate[xxx]: 1000
plugin_pipe_size[xxx]: 12288000
plugin_buffer_size[xxx]: 122880

pmacctd_as: file
networks_file: /etc/pmacct/networks.list

nfprobe_timeouts: tcp=1:maxlife=1:tcp.rst=1:tcp.fin=1:general=1:expint=5

-

Thanks!


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] pmacctd and OpenVPN

2020-11-20 Thread Paolo Lucente



Hi Erik,

Take a capture with tcpdump of some of these packets on the tun 
interface and send it via unicast email. Let's see what is possible or 
what is the issue.


Paolo

On 20/11/2020 11:34, Erik wrote:

Hi,

I am running a VPN server based on OpenVPN and recently there was
a request to analyse some of the data flows.

So I installed pmacct to do some experimenting. This is on Ubuntu 20.04
with pmacct 1.7.2 from the repository.

The software installed fine and after configuration on the main NIC
I was able to export flows in IPFIX format via nfprobe and look at
the flows using nfdump.

The next step was to configure it on the VPN server's TUN-adapter,
but that caused pmacctd to fail to start, logging:

ERROR ( default/core ): MAC aggregation not available for link type: 12

I have since looked through the archives and the Changelog for newer
versions, but could not find anything relating to this error, other
than some remark suggesting pmacct only supports or supported
real NICs.
I have not been able to test a newer version yet, to see if this has
changed and it may take some time before I will be able to.

Meanwhile, can someone confirm that pmacct does support
TUN interfaces, or not?
And maybe point me to an alternative if it doesn't?
So far I have only found nprobe, which is a commercial alternative.

Thanks,
Erik

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists



___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] nfacctd tee - filter subnets before transmit

2020-11-19 Thread Paolo Lucente


Hi Eric,

Fantastic, thanks for confirming!

Paolo

On 18/11/2020 21:08, eric c wrote:

Good afternoon Paolo,

I missed a part in the receiver config:

BEFORE:
id=100 ip=192.168.10.50:3056

AFTER:
id=100 ip=192.168.10.50:3056 tag=100

I'm sorry about that. I tested it and it worked!

Thank you again for your help,
Eric


On Wed, Nov 18, 2020 at 12:22 PM eric c <mailto:xcell...@gmail.com>> wrote:


Hello Paolo,

Thank you for the reference.  I just looked at this and tested it
but it did not filter out the network I specified.  When I
wiresharked on the receiving host it was showing all traffic but not
the specified network (src_net=192.168.0.0/24 <http://192.168.0.0/24>) .

Below are the configs I used:

# nfacctd.conf
daemonize: false
nfacctd_port: 2055
nfacctd_ip: 0.0.0.0
logfile: /var/log/nfacctd.log

tee_transparent: true
maps_index: true

plugins: tee[a]

tee_receivers[a]: tee_receivers.lst
pre_tag_map[a]: pretag.map

plugin_buffer_size: 10240
plugin_pipe_size: 1024000
nfacctd_pipe_size: 1024000

# tee_receivers.lst
id=100 ip=192.168.10.50:3056

# pretag.map
set_tag=100     ip=0.0.0.0/0     src_net=192.168.0.0/24


I'm using nfacctd 1.7.5-git (20200510-00); FYI

Is there another part I'm missing from the config?

Thank you!
Eric




On Wed, Nov 18, 2020 at 10:46 AM Paolo Lucente mailto:pa...@pmacct.net>> wrote:


Hi Eric,

You could look at this piece of documentation for what you are
trying to
do:
https://github.com/pmacct/pmacct/blob/1.7.5/QUICKSTART#L2106-#L2200

The example focuses on src_mac and dst_mac, you should be using
src_net
and dst_net instead.

Paolo

On 18/11/2020 05:38, eric c wrote:
 > Good afternoon,
 >
 > Tring to setup nfacctd as replicator but would like to filter
what
 > subnets to replicate to the next receiver.
 >
 > Below is a config that is working without filtering:
 >
 > # nfacctd.conf
 > daemonize: false
 > nfacctd_port: 2055
 > nfacctd_ip: 0.0.0.0
 > logfile: /var/log/nfacctd.log
 >
 > plugins: tee[a]
 > tee_receivers[a]: tee_nflow_receivers.lst
 > tee_transparent: true
 >
 > # tee_nflow_receivers.lst
 > id=1 ip=192.168.10.50:3056
 >
 > What config change can I add to only replicate IP src/dst to
 > 10.0.0.0/24 and 192.168.0.0/24 for example?
 >
 > Thank you!
 > Eric
 >
 > ___
 > pmacct-discussion mailing list
 > http://www.pmacct.net/#mailinglists
<http://www.pmacct.net/#mailinglists>
 >


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists
<http://www.pmacct.net/#mailinglists>


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] nfacctd tee - filter subnets before transmit

2020-11-18 Thread Paolo Lucente



Hi Eric,

You could look at this piece of documentation for what you are trying to 
do: https://github.com/pmacct/pmacct/blob/1.7.5/QUICKSTART#L2106-#L2200


The example focuses on src_mac and dst_mac, you should be using src_net 
and dst_net instead.


Paolo

On 18/11/2020 05:38, eric c wrote:

Good afternoon,

Tring to setup nfacctd as replicator but would like to filter what 
subnets to replicate to the next receiver.


Below is a config that is working without filtering:

# nfacctd.conf
daemonize: false
nfacctd_port: 2055
nfacctd_ip: 0.0.0.0
logfile: /var/log/nfacctd.log

plugins: tee[a]
tee_receivers[a]: tee_nflow_receivers.lst
tee_transparent: true

# tee_nflow_receivers.lst
id=1 ip=192.168.10.50:3056 

What config change can I add to only replicate IP src/dst to 10.0.0.0/24 
 and 192.168.0.0/24  for example?


Thank you!
Eric

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] 95 percentile (again)

2020-11-03 Thread Paolo Lucente



Hi Klaas,

Yes, you can set networks_file_filter to true (by default false):

https://github.com/pmacct/pmacct/blob/1.7.5/CONFIG-KEYS#L874=#L880
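
A minimal sketch of the relevant part of the config (path illustrative):

networks_file: /path/to/networks.lst
networks_file_filter: true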

Paolo

On 03/11/2020 12:47, Klaas Tammling wrote:

Hi Paolo,

thanks. Recording of data seems to work so far.

However I want to clean up all this a bit and what I noticed is that even 
though I only have one IPv4 prefix inside my networks_file, nfacctd is also 
dumping IPv6 information into my database.


Is there any additional filter I can set to exclude everything which is 
not defined in my networks_file?


Thanks.

-
Klaas

*From:* Paolo Lucente 
*Sent:* Tuesday, 3 November 2020 00:42
*To:* pmacct-discussion@pmacct.net ; Klaas 
Tammling 

*Subject:* Re: [pmacct-discussion] 95 percentile (again)

Hi Klaas,

You are right pmacct does not do 95th percentile calculations as these
are much better suited to be post-process actions (due to the increased
data visibility they require) than done in-line at the collector layer.

On your question about bits/s. 95th percentile bases on the assumption
you do bucket your data. One min buckets, 5 mins buckets, 1 hour
buckets, etc. You make it a discrete exercise where essentially you say
for those, say, 5 mins that is the amount of bytes accounted for. This
is what pmacct does for you. Then you take the 95th highest measurement
of the buckets within a time frame of choice an hour, a day, a week, a
month, etc. So then let us say you decide to go for 5 mins buckets, you
would just need to do for the winning bucket a "bytes * 8 / 300"
operation to convert bytes to bits (* 8) and then divide by the amount
of seconds in the bucket (/ 300).

Paolo

On 02/11/2020 13:07, Klaas Tammling wrote:

Hi,

this year I'm trying to give pmacct a try for some 95 percentile 
calculation. I understood that pmacct doesn't do the calculation by 
itself however it can assist in collecting the needed data.


I understood the following:

plugins: pgsql[in], pgsql[out]
sql_table[in]: acct_in
sql_table[out]: acct_out
aggregate[in]: dst_host
aggregate[out]: src_host
sql_history: 1h
sql_history_roundoff: h

By changing sql_history to 5m I can get the 5 minute aggregate of the 
received data.


By setting a networks_file (networks_file: ...) I would be able to only 
collect data for networks I'm defining in that list.


My question would be now if there is a way to record the bits/s for that 
flow for the given timestamp.


Or am I completely wrong with my assumptions?

Thanks very much for any help.

-
Klaas



___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists






___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] 95 percentile (again)

2020-11-02 Thread Paolo Lucente



Hi Klaas,

You are right pmacct does not do 95th percentile calculations as these 
are much better suited to be post-process actions (due to the increased 
data visibility they require) than done in-line at the collector layer.


On your question about bits/s. 95th percentile bases on the assumption 
you do bucket your data. One min buckets, 5 mins buckets, 1 hour 
buckets, etc. You make it a discrete exercise where essentially you say 
for those, say, 5 mins that is the amount of bytes accounted for. This 
is what pmacct does for you. Then you take the 95th highest measurement 
of the buckets within a time frame of choice an hour, a day, a week, a 
month, etc. So then let us say you decide to go for 5 mins buckets, you 
would just need to do for the winning bucket a "bytes * 8 / 300" 
operation to convert bytes to bits (* 8) and then divide by the amount 
of seconds in the bucket (/ 300).
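
As a purely illustrative post-processing sketch (it assumes the default
pmacct PostgreSQL schema with 'bytes' and 'stamp_inserted' columns, the
'acct_in' table above and 5 mins buckets, ie. sql_history: 5m), the 95th
percentile in bits per second over the last day could be computed like:

WITH buckets AS (
  -- one bps value per 5-min bucket, summed across rows in the bucket
  SELECT stamp_inserted, SUM(bytes) * 8.0 / 300 AS bps
  FROM acct_in
  WHERE stamp_inserted >= now() - interval '1 day'
  GROUP BY stamp_inserted
)
SELECT percentile_cont(0.95) WITHIN GROUP (ORDER BY bps) AS p95_bps
FROM buckets;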


Paolo

On 02/11/2020 13:07, Klaas Tammling wrote:

Hi,

this year I'm trying to give pmacct a try for some 95 percentile 
calculation. I understood that pmacct doesn't do the calculation by 
itself however it can assist in collecting the needed data.


I understood the following:

plugins: pgsql[in], pgsql[out]
sql_table[in]: acct_in
sql_table[out]: acct_out
aggregate[in]: dst_host
aggregate[out]: src_host
sql_history: 1h
sql_history_roundoff: h

By changing sql_history to 5m I can get the 5 minute aggregate of the 
received data.


By setting a networks_file (networks_file: ...) I would be able to only 
collect data for networks I'm defining in that list.


My question would be now if there is a way to record the bits/s for that 
flow for the given timestamp.


Or am I completely wrong with my assumptions?

Thanks very much for any help.

-
Klaas



___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Netflow Iframe Index

2020-10-28 Thread Paolo Lucente


Hi Samir,

Not sure, do you mean SNMP interface ifIndex by iframe index in/out? If 
not, please tell me more - it does not ring a bell; if yes, you already 
have it populated there, in the 'iface_in' and 'iface_out' fields.


Paolo

On 28/10/2020 19:41, Samir Faci wrote:
I'm using the nfacctd process to capture netflow data.  All the routers 
are writing to the exposed udp port.


I'm writing the output the rabbit MQ and the output looks like this:

This is an example of the output I'm getting:

|{ "event_type": "purge", "as_src": 0, "as_dst": 0, "as_path": "", 
"local_pref": 0, "med": 0, "peer_as_src": 0, "peer_as_dst": 0, 
"peer_ip_src": "192.168.32.1", "peer_ip_dst": "", "iface_in": 909, 
"iface_out": 517, "ip_src": "87.100.13.241", "ip_dst": "106.89.180.131", 
"port_src": 55664, "port_dst": 443, "ip_proto": "tcp", 
"timestamp_start": "1574664912.00", "timestamp_end": 
"1574665005.00", "packets": 700, "bytes": 39200, "writer_id": 
"default_amqp/18" } |



For the fields i'm in my config I'm using these so far:

aggregate: 
peer_src_ip,label,src_host,dst_host,src_port,dst_port,proto,in_iface,out_iface,src_as,dst_as,peer_dst_ip,timestamp_start,timestamp_end,as_path,peer_src_as,peer_dst_as,local_pref,med



What config key should I enable that allows the iframe index in/out to 
be visible?


--
Samir Faci

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] IPFIX record export with BGP Next Hop added

2020-10-22 Thread Paolo Lucente



Hi Kamiel,

Unfortunately, no, this scenario (take IPFIX, massage it & spit it out) 
is not supported.


Paolo

On 21/10/2020 09:55, Braet, Kamiel wrote:

Hello everyone,

Just wanted to know if it is possible to use PMACCT to import IPFIX 
records and BGP data. After this determine the BGP Next Hop the router 
has selected for the IP destination in the IPFIX record using the BGP 
information.


Then use this information to create an updated version of the IPFIX 
record with the BGP Next Hop as added field and export this to other 
IPFIX collectors while having the source IP address in the IP header of 
the UDP IPFIX packet set to that of the router where the original IPFIX 
record was created.


Kind Regards,

Kamiel Braet


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] DTLS encrypted flow data

2020-10-13 Thread Paolo Lucente


Hi Felix,

Thank you for your always kind words.

DE-CIX uses this piece of code to encrypt: 
https://github.com/de-cix/udp-dtls-wrapper . We could test it 
inter-working against pmacct code, ie. at my end i can decrypt the IPFIX 
templates they sent over.


As a small news this week i am starting working on making nfprobe, the 
NetFlow/IPFIX exporter plugin of pmacct, support DTLS. And, sure thing, 
let me know how your test is going - i am unfortunately unfamiliar 
myself with ncat and, in general, i am familiarizing only right now on 
how to produce UDP packets over DTLS.


Paolo


On 13/10/2020 08:16, Felix Stolba wrote:

Paolo,

it's my pleasure, hope you're doing great also. Wonderful to see all the 
progress pmacct has been making since we last met.

Thanks for confirming IPFIX/DTLS is a topic that's still ongoing. While the 
immediate need for encrypted transport can be alleviated by utilizing IPSEC 
tunnels and the like, being able to produce encrypted streams will make 
ingesting data over untrusted transport much simpler. Wondering how DE-CIX 
produces theirs.

Out of curiosity I've been playing around with ncat, trying to encrypt a 
regular IPFIX stream and sending it to nfacctd_dtls_port. While nfacctd 
acknowledges that it's receiving DTLS there seem to be some issues that prevent 
successful parsing of data. Hope I'll be able to find some more time to dig 
deeper and make it work.

Stay safe,
Felix



Am 09.10.20, 21:49 schrieb "Paolo Lucente" :

 
 Hi Felix,
 
 Monumental pleasure to read from you, hope all is well.
 
 The feature was conceived in conjunction with the great DE-CIX folks,

 you can see the announcement here:
 https://twitter.com/thking/status/1292903640877932544 .
 
 In the context of pmacct, yes, i have indeed on the roadmap to

 "disseminate" DTLS a bit further to the 'nfprobe' (export) and 'tee'
 (replication) plugins. Yet another dimension would be to apply this to
 sFlow - curious if anybody reading cares.
 
 I am not aware of any vendors supporting this at this very moment but i

 do agree with you that that would be intriguing (in general but perhaps
 specifically) for all people that do rely on 3rd party services to run
 their own infrastructure, thinking to L2/L3 MPLS VPNs and suchs.
 
 Paolo
 
 On 09/10/2020 13:28, Felix Stolba wrote:

 > Hi everyone,
 >
 > so recently the config parameter nfacctd_dtls_port was introduced. By 
using this, pmacct can consume flow data contained in a DTLS stream as specified 
in RFC5153.
 >
 > Having an integrated, secure transport for flow data is an intriguing 
idea. But that poses the question, how can such a stream be produced? Is this a 
vendor specific feature on various network operating systems or is there a 3rd 
party software that can handle the encryption? Which vendors support that? Anyone 
willing to share any experience here?
 >
 > Has this feature been considered for the pmacct roadmap? Being able to 
produce encrypted Netflow using the tee plugin would be very useful in certain 
scenarios.
 >
 > Appreciate any input on the matter.
 >
 > Thanks,
 > Felix
 >
 >
 > ___
 > pmacct-discussion mailing list
 > http://www.pmacct.net/#mailinglists
 >
 
 




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] DTLS encrypted flow data

2020-10-09 Thread Paolo Lucente



Hi Felix,

Monumental pleasure to read from you, hope all is well.

The feature was conceived in conjunction with the great DE-CIX folks, 
you can see the announcement here: 
https://twitter.com/thking/status/1292903640877932544 .


In the context of pmacct, yes, i have indeed on the roadmap to 
"disseminate" DTLS a bit further to the 'nfprobe' (export) and 'tee' 
(replication) plugins. Yet another dimension would be to apply this to 
sFlow - curious if anybody reading cares.


I am not aware of any vendors supporting this at this very moment but i 
do agree with you that that would be intriguing (in general but perhaps 
specifically) for all people that do rely on 3rd party services to run 
their own infrastructure, thinking of L2/L3 MPLS VPNs and such.


Paolo

On 09/10/2020 13:28, Felix Stolba wrote:

Hi everyone,

so recently the config parameter nfacctd_dtls_port was introduced. By using 
this, pmacct can consume flow data contained in a DTLS stream as specified in 
RFC5153.

Having an integrated, secure transport for flow data is an intriguing idea. But 
that poses the question, how can such a stream be produced? Is this a vendor 
specific feature on various network operating systems or is there a 3rd party 
software that can handle the encryption? Which vendors support that? Anyone 
willing to share any experience here?

Has this feature been considered for the pmacct roadmap? Being able to produce 
encrypted Netflow using the tee plugin would be very useful in certain 
scenarios.

Appreciate any input on the matter.

Thanks,
Felix


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] pmbgpd -> Kafka Local Queue Full

2020-09-02 Thread Paolo Lucente


Hi Andy,

I may suggest to check Kafka logs and perhaps see if anything useful 
comes out of librdkafka stats (ie. set "global, statistics.interval.ms, 
6" in your librdkafka.conf). Check also that, if you are adding load 
to existing load, the Kafka broker is not pegging 100% CPU or maxing out 
some threads count (or perhaps, if this is a testing environment, remove 
the existing load and test with only the pmbgpd export .. that may proof 
something too).


My first suggestion would have been to tune buffers in librdkafka but 
you did that already. In any case this is, yes, an interaction between 
librdkafka and the Kafka broker; i go a bit errands here: make sure you 
have recent versions of both the library and the broker.


Especially if the topic is newly provisioned i may also suggest to try 
to produce / consume some data "by hand", like using the 
kafka-console-producer.sh and kafka-console-consumer.sh scripts shipped 
with Kafka to prove that data passes through without problems.
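
For instance (the broker host is an assumption; the scripts are those
shipped with Apache Kafka and, depending on its version, the producer may
want --bootstrap-server instead of --broker-list):

# produce a test message by hand ...
echo 'hello' | kafka-console-producer.sh --broker-list kafka-host:9092 --topic bgptest
# ... and read it back
kafka-console-consumer.sh --bootstrap-server kafka-host:9092 --topic bgptest --from-beginning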


Paolo

On 02/09/2020 09:09, Andy Davidson wrote:

Hello!

I am feeding some BMP feeds via pmbmpd into Kafka and it’s working well.  I now 
want to feed some BGP feeds into a Kafka topic using pmbgpd but similar 
configuration is causing a different behaviour.

Sep  1 22:48:00 bump pmbgpd[10992]: INFO ( default/core ): Reading 
configuration file '/etc/pmacct/pmbgpd.conf'.
Sep  1 22:48:00 bump pmbgpd[10992]: INFO ( default/core ): maximum BGP peers 
allowed: 100
Sep  1 22:48:00 bump pmbgpd[10992]: INFO ( default/core ): waiting for BGP data 
on 185.1.94.6:179
Sep  1 22:48:03 bump pmbgpd[10992]: INFO ( default/core ): [185.1.94.1] BGP 
peers usage: 1/100
Sep  1 22:48:03 bump pmbgpd[10992]: INFO ( default/core ): [185.1.94.1] 
Capability: MultiProtocol [1] AFI [1] SAFI [1]
Sep  1 22:48:03 bump pmbgpd[10992]: INFO ( default/core ): [185.1.94.1] 
Capability: 4-bytes AS [41] ASN [59964]
Sep  1 22:48:03 bump pmbgpd[10992]: INFO ( default/core ): [185.1.94.1] 
BGP_OPEN: Local AS: 43470 Remote AS: 59964 HoldTime: 240
Sep  1 22:48:05 bump pmbgpd[10992]: ERROR ( default/core ): Failed to produce 
to topic bgptest partition -1: Local: Queue full
Sep  1 22:48:05 bump pmbgpd[10992]: ERROR ( default/core ): Connection failed 
to Kafka: p_kafka_close()
Sep  1 22:48:05 bump systemd[1]: pmbgpd.service: Main process exited, 
code=killed, status=11/SEGV
Sep  1 22:48:05 bump systemd[1]: pmbgpd.service: Failed with result 'signal'.

I have verified that it's not connectivity - the topic is created at the Kafka 
end of the link, and I can open a TCP socket with telnet from the computer 
running pmbgpd to the Kafka server's port 9092.

I have of course read some GitHub issues and list archives about the Local: 
Queue full fault, and they suggest some librdkafka buffer and timer tweaking. I 
have played with various values (some of them insane) and I don't see any 
different behaviour logged by pmbgpd:

root@bump:/home/andy# cat /etc/pmacct/pmbgpd.conf
bgp_daemon_ip: 185.1.94.6
bgp_daemon_max_peers: 100
bgp_daemon_as: 43470
!
syslog: user
daemonize: true
!
kafka_config_file: /etc/pmacct/librdkafka.conf
!
bgp_daemon_msglog_kafka_output: json
bgp_daemon_msglog_kafka_broker_host: .hostname
bgp_daemon_msglog_kafka_broker_port: 9092
bgp_daemon_msglog_kafka_topic: bgptest

root@bump:/home/andy# cat /etc/pmacct/librdkafka.conf
global, queue.buffering.max.messages, 800
global, batch.num.messages, 10
global, queue.buffering.max.messages, 2
global, queue.buffering.max.ms, 100
global, queue.buffering.max.kbytes, 900
global, linger.ms, 100
global, socket.request.max.bytes, 104857600
global, socket.receive.buffer.bytes, 10485760
global, socket.send.buffer.bytes, 10485760
global, queued.max.requests, 1000

Any advice on where to troubleshoot next?


Thanks
Andy

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Capturing interface traffic with pmacct and inserting the data in PostgreSQL

2020-08-26 Thread Paolo Lucente


Hi Arda,

I see that in your config you have 'daemonize: true' but no logfile 
statement set, ie. 'logfile: /tmp/pmacctd.log': this is preventing you 
from seeing any errors / warnings that pmacctd is logging and that may 
put you on the right path - is it an auth issue, is it a schema issue, 
etc. So that would be my first and foremost advice.


A second piece of advice I may give you, since you ask 'Should I expect the 
same level of detail that I see when I use tshark or tcpdump?', is to get 
started with the 'print' plugin and follow 
https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L2521-#L2542 . 
For example, given your config:


[..]
!
plugins: print[in], print[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net 10.10.10.0/24
aggregate_filter[out]: src net 10.10.10.0/24
!
print_refresh_time: 60
print_history: 1h
print_history_roundoff: h
print_output: csv
!
print_output_file[in]: /path/to/file-in-%Y%m%d-%H%M.csv
print_output_file[out]: /path/to/file-out-%Y%m%d-%H%M.csv
!
pcap_interfaces_map: /usr/local/share/pmacct/pcap_interfaces.map

This way, although in CSV format in a file, by playing with 'aggregate' 
you can get an idea of what pmacct can give you compared to tcpdump/tshark 
(it will be pretty immediate to realise given the output).
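
For reference, with 'aggregate[in]: dst_host' as above each CSV file would contain 
lines roughly along these lines; the exact header depends on the configured 
primitives, and the addresses / counters here are made up for illustration:

  DST_IP,PACKETS,BYTES
  10.10.10.15,1288,1463220
  10.10.10.22,342,51020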


Once you establish that pmacct is the tool for you and you get familiar with 
it, I guess you can complicate things by putting a SQL database in the way.


Paolo


On 26/08/2020 19:30, Arda Savran wrote:
I just installed pmacct with postgres support on CentOS8 from GitHub; 
and I think it was a successful installation based on the following:


[root@pcap pmacct]# pmacct -V

pmacct IMT plugin client, pmacct 1.7.6-git (20200826-0 (57a0334d))

'--enable-pgsql' '--enable-l2' '--enable-traffic-bins'
'--enable-bgp-bins' '--enable-bmp-bins' '--enable-st-bins'

For suggestions, critics, bugs, contact me: Paolo Lucente .

[root@pcap pmacct]# pmacctd -V

Promiscuous Mode Accounting Daemon, pmacctd 1.7.6-git [20200826-0 (57a0334d)]

Arguments:
'--enable-pgsql' '--enable-l2' '--enable-traffic-bins'
'--enable-bgp-bins' '--enable-bmp-bins' '--enable-st-bins'

Libs:
libpcap version 1.9.0-PRE-GIT (with TPACKET_V3)
PostgreSQL 120001

System:
Linux 4.18.0-193.14.2.el8_2.x86_64 #1 SMP Sun Jul 26 03:54:29 UTC 2020 x86_64

Compiler:
gcc 8.3.1

For suggestions, critics, bugs, contact me: Paolo Lucente .


My goal is to capture the in/out network traffic on this machine’s 
interfaces and record it in PostgreSQL. I created a 
pmacctd.conf file under the /usr/local/share/pmacct folder and a 
pcap_interfaces.map under the same folder. Before my question, can 
someone please confirm that my expectations of pmacct are accurate:


  * Pmacct can capture all the network traffic on the local interface
(ens192) and record it in PostgreSQL. Should I expect the same level
of detail that I see when I use tshark or tcpdump?
  * Pmacct can store all the packet details in PostgreSQL if needed. If
this is not supported, does this mean that I am obligated to
aggregate the interface traffic before it is inserted into PostgreSQL?

My issue is that I am not seeing any data being written into any of the 
following tables:


pmacct=# \dt
         List of relations
 Schema |   Name   | Type  |  Owner
--------+----------+-------+----------
 public | acct     | table | postgres
 public | acct_as  | table | postgres
 public | acct_uni | table | postgres
 public | acct_v9  | table | postgres
 public | proto    | table | postgres

I started the daemon by running: pmacctd -f pmacctd.conf

My conf file is based on what I read on the WiKi page:

!
daemonize: true
plugins: pgsql[in], pgsql[out]
aggregate[in]: dst_host
aggregate[out]: src_host
aggregate_filter[in]: dst net 10.10.10.0/24
aggregate_filter[out]: src net 10.10.10.0/24
sql_table[in]: acct_in
sql_table[out]: acct_out
sql_refresh_time: 60
sql_history: 1h
sql_history_roundoff: h
pcap_interfaces_map: /usr/local/share/pmacct/pcap_interfaces.map
! ...

I am not sure how to proceed from here. I don’t know if I am supposed to 
be creating a table in PostgreSQL manually first, based on my aggregation 
settings, and somehow include that in the config file.


Can someone please point me in the right direction?

Thanks,




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] tee plugin ipv6 problem

2020-07-28 Thread Paolo Lucente


Hey Alexander,

Can you send me a sample of the IPv6 packets by unicast email? Ideally 
two tcpdump captures, ie. 'tcpdump -i lo -n -w  port ' 
and 'tcpdump -i  -n -w  port 2101', taken in 
parallel. Should you be up for generating a sample, please do not do one 
single capture with '-i any', as that would cut out some of the 
lower-layer data which could be of interest for the analysis.
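
By way of illustration, the pair of captures being asked for could look like the 
commands below; the interface name, file names and the use of UDP port 2101 are 
assumptions to be adapted to the actual setup:

  # replicated stream, as re-emitted by the tee plugin towards the loopback
  tcpdump -i lo -n -w /tmp/tee-replicated.pcap udp port 2101

  # original stream, as it arrives on the external interface
  tcpdump -i eth0 -n -w /tmp/tee-original.pcap udp port 2101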


Paolo

On 28/07/2020 13:07, Alexander Brusilov wrote:

I've tested with the latest code, 1.7.5-git (20200510-00), with the same result.
Some clarification to my previous message:
In IPv4 all checksums and lengths in all packets are fine.
About the IPv6 bad packet example:
BAD UDP LENGTH 1332 > IP PAYLOAD LENGTH] Len=1324 [ILLEGAL CHECKSUM (0)
Data (1304 bytes)
UDP header: Length: 1332 (bogus, payload length 1312)    <<< in my 
understanding the length should be 1312 (data + 8 bytes)
IPV6 header: Length: 1332 (bogus, payload length 1312)   <<< in my 
understanding the length should be 1372 (data + 8 bytes + 40)


вт, 28 июл. 2020 г. в 12:34, Alexander Brusilov >:


Hi all,
I use the following scenario in IPv4 and it works fine:
the tee plugin listens on the external interface and replicates sFlow data in
two streams via the loopback interface; here is part of the configs:
/opt/etc/sf_tee.conf
promisc: false
interface: 
!
sfacctd_port: 2101
sfacctd_ip: 
!
plugins: tee[sf]
tee_receivers[sf]: /opt/etc/tee_receivers_sf.lst
tee_transparent: true
!
pre_tag_map: /opt/etc/pretag.map
!

/opt/etc/tee_receivers_sf.lst
id=2101 ip=127.0.0.1:2101 
id=111 ip=127.0.0.1:20111  tag=111

/opt/etc/pretag.map
set_tag=111 ip=

I am trying to do the same with IPv6, but with no success; here are the configs:
/opt/etc/sf_tee_v6.conf
promisc: false
interface: 
!
sfacctd_port: 2101
sfacctd_ip: 
!
plugins: tee[sf]
tee_receivers[sf]: /opt/etc/tee_receivers_sf_v6.lst
tee_transparent: true
!
pre_tag_map: /opt/etc/pretag.map
!

/opt/etc/tee_receivers_sf_v6.lst
id=2101 ip=[::1]:2101
id=111 ip=[::1]:20111 tag=111

The IPv6 sFlow data stream is replicated according to the configs, but
the sfacctd backend (and some other software too) ignores the replicated
packets.
I've run tcpdump on the external and lo interfaces and see that packets
on the lo interface (replicated by the tee plugin) have a wrong payload length
in the IPv6 header (in UDP maybe too). In IPv4 all checksums in
all packets are fine.
Is this normal behaviour or not? Can this cause the sfacctd backend to
ignore these packets? Or maybe I am missing something?

Here is an example with some info about a bad packet from Wireshark:
BAD UDP LENGTH 1332 > IP PAYLOAD LENGTH] Len=1324 [ILLEGAL CHECKSUM (0)
Data (1304 bytes)
UDP: Length: 1332 (bogus, payload length 1312)
IPV6: Length: 1332 (bogus, payload length 1312)   <<< in my
understanding length should be 1372

# /opt/sbin/sfacctd -V
sFlow Accounting Daemon, sfacctd 1.7.4-git (20191126-01+c6)

Arguments:
  '--prefix=/opt' '--enable-geoipv2' '--enable-jansson'
'--enable-zmq' '--enable-pgsql'
'PKG_CONFIG_PATH=/usr/pgsql-11/lib/pkgconfig' '--enable-l2'
'--enable-64bit' '--enable-traffic-bins' '--enable-bgp-bins'
'--enable-bmp-bins' '--enable-st-bins'

System:
Linux 3.10.0-1127.13.1.el7.x86_64 #1 SMP Tue Jun 23 15:46:38 UTC
2020 x86_64

# cat /etc/redhat-release
CentOS Linux release 7.8.2003 (Core)


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] master - ndpi on 32bit CentOS 6

2020-07-09 Thread Paolo Lucente


I did test on a Debian 10:

4.19.0-8-686-pae #1 SMP Debian 4.19.98-1 (2020-01-26) i686 GNU/Linux

As I was suspecting, passing the pcap you sent me through a daemon 
compiled on this box went fine (that is, I can't reproduce the issue).

From what I see, by the way, this is not something related to nDPI.

Paolo

On 09/07/2020 18:19, Steve Clark wrote:

Thanks for checking, could you tell me what distro and version you tested on?

Also, when I compile on 32 bit I get a lot of warnings about redefines 
between ndpi.h and pmacct.h.

Do you get those also?




On 07/09/2020 11:55 AM, Paolo Lucente wrote:

Hi Steve,

I do have avail of a i686-based VM. I can't say everything is tested on
i686 but i tend to check every now and then that nothing fundamental is
broken. I took the example config you used, compiled master code with
the same config switches as you did (essentially --enable-ndpi) and had
no joy reproducing the issue.

You could send me privately your capture and i may try with that one
(although i am not highly positive it will be a successful test); or you
could arrange me access to your box to read the pcap. Let me know.

Paolo

On 09/07/2020 14:54, Steve Clark wrote:

Hi Paolo,

I have compiled master with nDPI on both 32bit and 64bit CentOS 6
systems. The 64 bit pmacctd seems
to work fine. But I get bogus byte counts when I run the 32bit version
against the same pcap file.

Just wondered if you have done any testing on 32bit intel system with
the above combination.

below is the output when using 32bit pmacctd - first the pmacctd
invocation then the nfacctd output
pmacct/src/pmacctd -f ./mypaolo.conf -I v1.7.5_v9_ndpi_class_paolo.pcap
INFO ( default/core ): Promiscuous Mode Accounting Daemon, pmacctd
1.7.6-git (20200707-01)
INFO ( default/core ):  '--enable-ndpi'
'--with-ndpi-static-lib=/usr/local/lib/' '--enable-l2'
'--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
'--enable-st-bins'
INFO ( default/core ): Reading configuration file
'/var/lib/pgsql/sclark/mypaolo.conf'.
INFO ( p4p1/nfprobe ): NetFlow probe plugin is originally based on
softflowd 0.9.7 software, Copyright 2002 Damien Miller 
All rights reserved.
INFO ( p4p1/nfprobe ):   TCP timeout: 3600s
INFO ( p4p1/nfprobe ):  TCP post-RST timeout: 120s
INFO ( p4p1/nfprobe ):  TCP post-FIN timeout: 300s
INFO ( p4p1/nfprobe ):   UDP timeout: 300s
INFO ( p4p1/nfprobe ):  ICMP timeout: 300s
INFO ( p4p1/nfprobe ):   General timeout: 3600s
INFO ( p4p1/nfprobe ):  Maximum lifetime: 604800s
INFO ( p4p1/nfprobe ):   Expiry interval: 60s
INFO ( default/core ): PCAP capture file, sleeping for 2 seconds
INFO ( p4p1/nfprobe ): Exporting flows to [172.24.109.157]:rrac
WARN ( p4p1/nfprobe ): Shutting down on user request.
INFO ( default/core ): OK, Exiting ...

src/nfacctd -f examples/nfacctd-print.conf.example
INFO ( default/core ): NetFlow Accounting Daemon, nfacctd 1.7.6-git
(20200623-00)
INFO ( default/core ):  '--enable-ndpi'
'--with-ndpi-static-lib=/usr/local/lib/' '--enable-l2'
'--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
'--enable-st-bins'
INFO ( default/core ): Reading configuration file
'/var/lib/pgsql/sclark/pmacct/examples/nfacctd-print.conf.example'.
INFO ( default/core ): waiting for NetFlow/IPFIX data on :::5678
INFO ( foo/print ): cache entries=16411 base cache memory=56322552 bytes
WARN ( foo/print ): no print_output_file and no print_output_lock_file
defined.
INFO ( foo/print ): *** Purging cache - START (PID: 21926) ***
CLASS SRC_IP
DST_IP SRC_PORT  DST_PORT
PROTOCOL    PACKETS   BYTES
NetFlow   172.24.110.104
172.24.109.247 41900 2055
udp 26 1576253010996
NetFlow   172.24.110.104
172.24.109.247 58131 2055
udp 21    1576253008620
INFO ( foo/print ): *** Purging cache - END (PID: 21926, QN: 2/2, ET: 
0) ***

^CINFO ( foo/print ): *** Purging cache - START (PID: 21559) ***
INFO ( foo/print ): *** Purging cache - END (PID: 21559, QN: 0/0, ET: 
X) ***

INFO ( default/core ): OK, Exiting ...

Now the output when using and the same .pcap file 64bit version of 
pmacctd


sudo /root/pmacctd-176 -f ./mypaolo.conf -I 
v1.7.5_v9_ndpi_class_paolo.pcap

INFO ( default/core ): Promiscuous Mode Accounting Daemon, pmacctd
1.7.6-git (20200623-00)
INFO ( default/core ):  '--enable-ndpi'
'--with-ndpi-static-lib=/usr/local/lib/' '--enable-l2'
'--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
'--enable-st-bins'
INFO ( default/core ): Reading configuration file
'/var/lib/pgsql/sclark/mypaolo.conf'.
INFO ( p4p1/nfprobe ): NetFlow probe plugin is originally based on
softflowd 0.9.7 software, Copyright 2002 Damien Miller 
All rights reserved.
INFO ( default/core ): PCAP capture file, sleeping for 2 seconds
INFO ( p4p1/nfprobe ):   TCP timeout: 3600s
INFO ( p4p1/nfprobe ):  TCP post-RST timeout: 120s

Re: [pmacct-discussion] master - ndpi on 32bit CentOS 6

2020-07-09 Thread Paolo Lucente


Hi Steve,

I do have avail of a i686-based VM. I can't say everything is tested on 
i686 but i tend to check every now and then that nothing fundamental is 
broken. I took the example config you used, compiled master code with 
the same config switches as you did (essentially --enable-ndpi) and had 
no joy reproducing the issue.


You could send me privately your capture and i may try with that one 
(although i am not highly positive it will be a successful test); or you 
could arrange me access to your box to read the pcap. Let me know.


Paolo

On 09/07/2020 14:54, Steve Clark wrote:

Hi Paolo,

I have compiled master with nDPI on both 32bit and 64bit CentOS 6 
systems. The 64 bit pmacctd seems
to work fine. But I get bogus byte counts when I run the 32bit version 
against the same pcap file.


Just wondered if you have done any testing on 32bit intel system with 
the above combination.


below is the output when using 32bit pmacctd - first the pmacctd 
invocation then the nfacctd output

pmacct/src/pmacctd -f ./mypaolo.conf -I v1.7.5_v9_ndpi_class_paolo.pcap
INFO ( default/core ): Promiscuous Mode Accounting Daemon, pmacctd 
1.7.6-git (20200707-01)
INFO ( default/core ):  '--enable-ndpi' 
'--with-ndpi-static-lib=/usr/local/lib/' '--enable-l2' 
'--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins' 
'--enable-st-bins'
INFO ( default/core ): Reading configuration file 
'/var/lib/pgsql/sclark/mypaolo.conf'.
INFO ( p4p1/nfprobe ): NetFlow probe plugin is originally based on 
softflowd 0.9.7 software, Copyright 2002 Damien Miller  
All rights reserved.

INFO ( p4p1/nfprobe ):   TCP timeout: 3600s
INFO ( p4p1/nfprobe ):  TCP post-RST timeout: 120s
INFO ( p4p1/nfprobe ):  TCP post-FIN timeout: 300s
INFO ( p4p1/nfprobe ):   UDP timeout: 300s
INFO ( p4p1/nfprobe ):  ICMP timeout: 300s
INFO ( p4p1/nfprobe ):   General timeout: 3600s
INFO ( p4p1/nfprobe ):  Maximum lifetime: 604800s
INFO ( p4p1/nfprobe ):   Expiry interval: 60s
INFO ( default/core ): PCAP capture file, sleeping for 2 seconds
INFO ( p4p1/nfprobe ): Exporting flows to [172.24.109.157]:rrac
WARN ( p4p1/nfprobe ): Shutting down on user request.
INFO ( default/core ): OK, Exiting ...

src/nfacctd -f examples/nfacctd-print.conf.example
INFO ( default/core ): NetFlow Accounting Daemon, nfacctd 1.7.6-git 
(20200623-00)
INFO ( default/core ):  '--enable-ndpi' 
'--with-ndpi-static-lib=/usr/local/lib/' '--enable-l2' 
'--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins' 
'--enable-st-bins'
INFO ( default/core ): Reading configuration file 
'/var/lib/pgsql/sclark/pmacct/examples/nfacctd-print.conf.example'.

INFO ( default/core ): waiting for NetFlow/IPFIX data on :::5678
INFO ( foo/print ): cache entries=16411 base cache memory=56322552 bytes
WARN ( foo/print ): no print_output_file and no print_output_lock_file 
defined.

INFO ( foo/print ): *** Purging cache - START (PID: 21926) ***
CLASS SRC_IP 
DST_IP SRC_PORT  DST_PORT  
PROTOCOL    PACKETS   BYTES
NetFlow   172.24.110.104 
172.24.109.247 41900 2055  
udp 26 1576253010996
NetFlow   172.24.110.104 
172.24.109.247 58131 2055  
udp 21    1576253008620

INFO ( foo/print ): *** Purging cache - END (PID: 21926, QN: 2/2, ET: 0) ***
^CINFO ( foo/print ): *** Purging cache - START (PID: 21559) ***
INFO ( foo/print ): *** Purging cache - END (PID: 21559, QN: 0/0, ET: X) ***
INFO ( default/core ): OK, Exiting ...

Now the output when using and the same .pcap file 64bit version of pmacctd

sudo /root/pmacctd-176 -f ./mypaolo.conf -I v1.7.5_v9_ndpi_class_paolo.pcap
INFO ( default/core ): Promiscuous Mode Accounting Daemon, pmacctd 
1.7.6-git (20200623-00)
INFO ( default/core ):  '--enable-ndpi' 
'--with-ndpi-static-lib=/usr/local/lib/' '--enable-l2' 
'--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins' 
'--enable-st-bins'
INFO ( default/core ): Reading configuration file 
'/var/lib/pgsql/sclark/mypaolo.conf'.
INFO ( p4p1/nfprobe ): NetFlow probe plugin is originally based on 
softflowd 0.9.7 software, Copyright 2002 Damien Miller  
All rights reserved.

INFO ( default/core ): PCAP capture file, sleeping for 2 seconds
INFO ( p4p1/nfprobe ):   TCP timeout: 3600s
INFO ( p4p1/nfprobe ):  TCP post-RST timeout: 120s
INFO ( p4p1/nfprobe ):  TCP post-FIN timeout: 300s
INFO ( p4p1/nfprobe ):   UDP timeout: 300s
INFO ( p4p1/nfprobe ):  ICMP timeout: 300s
INFO ( p4p1/nfprobe ):   General timeout: 3600s
INFO ( p4p1/nfprobe ):  Maximum lifetime: 604800s
INFO ( p4p1/nfprobe ):   Expiry interval: 60s
INFO ( p4p1/nfprobe ): Exporting flows to [172.24.109.157]:rrac
WARN ( p4p1/nfprobe ): Shutting down on user request.
INFO ( default/core 

Re: [pmacct-discussion] 1.7.5 with static ndpi

2020-06-24 Thread Paolo Lucente


Hi Steve,

Apart from asking the obvious - personal curiosity! - why do you want to
link against a static nDPI library? There are a couple of main avenues I
can point you to depending on your goal:

1) You can supply configure with a --with-ndpi-static-lib knob; assuming
the static lib and the dynamic lib are in different places, you should
be good (see the sketch after this list). Even simpler: should you make
the 'shared object' library disappear, then things will be forced onto
the static library;

2) Did you see the "pmacct & Docker" email that just circulated on
the list? Looking for a static library? Perhaps it is time to look into a
container instead? :-D
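
A minimal sketch of what option 1) could look like on the command line; the library 
path is an assumption and depends on where libndpi.a actually got installed:

  ./configure --enable-ndpi --with-ndpi-static-lib=/usr/local/lib/
  make && make install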

Paolo 

On Tue, Jun 23, 2020 at 01:44:32PM -0400, Stephen Clark wrote:
> Hello,
> 
> Can anyone give me the magic configuration items I need to build using a static
> libndpi.a?
> 
> I have spent all day trying to do this without any success. It seems like I
> tried every combination
> that ./configure --help displays.
> 
> Any help would be appreciated.
> 
> Thanks,
> Steve
> 
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


[pmacct-discussion] pmacct & Docker

2020-06-24 Thread Paolo Lucente


Dears,

A brief email to say that, thanks to the monumental efforts of Marc Sune
and Claudio Ortega, we could bring pmacct a bit closer to the Docker
universe. As of today we are shipping official pmacct containers on
Docker Hub ( https://hub.docker.com/u/pmacct ), organized as follows:

* A special base container, which is the basis for the rest of the
containers, with all pmacct daemons installed and bash as the entry point.
It can be useful for debugging and for creating your own customized Docker image.

* One container per daemon (pmacctd, nfacctd, sfacctd, uacctd, pmbgpd,
pmbmpd, pmtelemetryd) where the entry point is the daemon itself and a
config file is expected in /etc/pmacct . For more info you can read the
'How to use it' section of the description on Docker Hub (ie.
https://hub.docker.com/r/pmacct/nfacctd ).
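
As an illustration only, running the nfacctd container could look something like 
the command below; the local config path, the in-container config file name and the 
UDP port mapping are assumptions to adapt to your own setup:

  docker run -d \
    -v /path/to/nfacctd.conf:/etc/pmacct/nfacctd.conf \
    -p 2055:2055/udp \
    pmacct/nfacctd:latest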

Three tags are being offered:

* latest: latest stable image of that container
* vX.Y.Z: version specific tag
* bleeding-edge: only for the brave. Latest commit on master

We also created a docker-doct...@pmacct.net email address which is going
to be used for maintenance and development. Should you have any
comments, questions, criticism or bug reports, please write to us there. Marc
and myself will be reading. We are eager to hear the good and the bad from you.

Finally, although fragmentation is not always avoidable, in an effort to
prevent confusion among users, if you have your own Dockerfile published on,
say, GitHub or Docker Hub, we would much appreciate it if you could make it
explicit / clear that it is an unofficial effort. You are very welcome
to join efforts with us if you have an interest in pmacct & Docker!

Regards,
Paolo 

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


[pmacct-discussion] pmacct 1.7.5 released !

2020-06-17 Thread Paolo Lucente
VERSION.
1.7.5


DESCRIPTION.
pmacct is a small set of multi-purpose passive network monitoring tools. It
can account, classify, aggregate, replicate and export forwarding-plane data,
ie. IPv4 and IPv6 traffic; collect and correlate control-plane data via BGP
and BMP; collect and correlate RPKI data; collect infrastructure data via
Streaming Telemetry. Each component works both as a standalone daemon and
as a thread of execution for correlation purposes (ie. enrich NetFlow with
BGP data).

A pluggable architecture allows storing collected forwarding-plane data into
memory tables, RDBMS (MySQL, PostgreSQL, SQLite), noSQL databases (MongoDB,
BerkeleyDB), AMQP (RabbitMQ) and Kafka message exchanges, and flat-files.
pmacct offers customizable historical data breakdown, data enrichments like
BGP and IGP correlation and GeoIP lookups, filtering, tagging and triggers.
Libpcap, Linux Netlink/NFLOG, sFlow v2/v4/v5, NetFlow v5/v8/v9 and IPFIX are
all supported as inputs for forwarding-plane data. Replication of incoming
NetFlow, IPFIX and sFlow datagrams is also available. Statistics can be
easily exported to time-series databases like ElasticSearch and InfluxDB and
to traditional tools like Cacti, RRDtool, MRTG, Net-SNMP, GNUPlot, etc.

Control-plane and infrastructure data, collected via BGP, BMP and Streaming
Telemetry, can be all logged real-time or dumped at regular time intervals
to AMQP (RabbitMQ) and Kafka message exchanges and flat-files.


HOMEPAGE.
http://www.pmacct.net/


DOWNLOAD.
http://www.pmacct.net/pmacct-1.7.5.tar.gz


CHANGELOG.
+ pmacct & Redis: pmacct daemons can now connect to a Redis cache.
  The main use-case currently covered is: registering every stable
  daemon component in a table so as to have, when running a cluster
  comprising several daemons / components, a holistic view of what
  is currently running and where; should a component stop running
  or crash, it will disappear from the inventory.
+ BMP daemon: as part of the IETF 107 vHackathon, preliminary support
  for draft-xu-grow-bmp-route-policy-attr-trace and draft-lucente-
  grow-bmp-tlv-ebit was introduced. Also added support for the Peer
  Distinguisher field in the BMP Per-Peer Header.
+ BMP daemon: added support for reading from savefiles in libpcap
  format (pcap_savefile, pcap_savefile_delay, pcap_savefile_replay,
  pcap_filter) as an alternative to the use of bmp_play.py.
+ BMP daemon: re-worked, improved and generalized support for TLVs
  at the end of BMP messages. In this context, unknown Stats data
  is handled as a generic TLV. 
+ BMP daemon: added SO_KEEPALIVE TCP socket option (ie. to keep the
  sessions alive via a firewall / NAT kind of device). Thanks to
  Jared Mauch ( @jaredmauch ) for his patch. 
+ nfacctd, nfprobe plugin: added usec timestamp resolution to IPFIX
  collector and export via IEs #154, #155. For export, this can be
  configured via the new nfprobe_tstamp_usec knob.
+ nfacctd: new nfacctd_templates_receiver and nfacctd_templates_port
  config directives allow respectively to specify a destination
  where to copy NetFlow v9/IPFIX templates to and a port where to
  listen for templates from. If nfacctd_templates_receiver points to
  a replicator and the replicator exports to nfacctd_templates_port
  of a set of collectors then, for example, it gets possible to share
  templates among collectors in a cluster for the purpose of seamless
  scale-out.
+ pmtelemetryd: in addition to existing TCP, UDP and ZeroMQ inputs,
  the daemon can now read Streaming Telemetry data in JSON format
  from a Kafka broker (telemetry_daemon_kafka_* config knobs).
+ pmgrpcd.py: Use of multiple processes for the Kafka Avro exporter
  to leverage the potential of multi-core/processors architectures.
  Code is from Raphael P. Barazzutti ( @rbarazzutti ).
+ pmgrpcd.py: added -F / --no-flatten command-line option to disable
  object flattening (default true for backward compatibility); also
  export to a Kafka broker for (flattened) JSON objects was added (in
  addition to existing export to ZeroMQ).
+ nDPI: introduced support for nDPI 3.2 and dropped support for all
  earlier versions of the library due to changes to the API.
+ Docker: embraced the technology for CI purposes; added a docker/
  directory in the file distribution where Dockerfile and scripts to
  build pmacct and dependencies are shared. Thanks to Claudio Ortega
  ( @claudio-ortega ) for contributing his excellent work in the area.
! fix, pmacctd: pcap_setdirection() enabled and moved to the right
  place in code. Libpcap tested for function presence. Thanks to
  Mikhail Sennikovsky for his patch.
! fix, pmacctd: SEGV has been detected if passing messages with an
  unsupported link layer. 
! fix, uacctd: handle non-ethernet packets correctly. Use mac_len = 0
  for non-ethernet packets in which case a zeroed ethernet header is
  used. Thanks to @aleksandrgilfanov for his patch.
! fix, BGP daemon: improved handling of withdrawals for label-unicast
  and mpls-vpn NLRIs.
! fix, BGP 

Re: [pmacct-discussion] networks_file reload

2020-06-08 Thread Paolo Lucente


Hi Olaf,

To confirm: the file is reloaded. Unfortunately all log messages around
loading a networks_file are related to errors, warnings and debug. There was
no info message to say that simply all went well. So I just added one as an
action item for the issue you raised:

https://github.com/pmacct/pmacct/commit/5f4c424f86d20821b4c028d9d180aa506f76

Now you can see the file is loaded upon startup and also upon sending a
SIGUSR2 to the process(es). Thank you!

Paolo

On Fri, Jun 05, 2020 at 11:16:19AM +0100, Olaf de Bree wrote:
> Hi all,
> 
> hoping someone can help.
> 
> I am using networks_file to map ASNs to prefixes under nfacctd version 1.7.5
> 
> The pmacct documentation suggests under the maps_refresh directive that
> the networks_file is reloadable via -SIGUSR2 but when I issue a "pkill
> -SIGUSR2 nfacctd" while running debug I see evidence that pre_tag_map is
> reloaded in the logs but not the networks_file.
> 
> Is the networks_file silently reloaded with no log? or could this be a bug?
> 
> Thanks in advance
> Olaf

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] pmacctd and src_std_comm aggregation

2020-05-26 Thread Paolo Lucente

Ciao Simone,

You see, that is the thing with the example you proposed: it is very
likely that one of the two paths is not in the RIB (like you said), so
you can't really make 'suggestions', you have to force it through a
static mapping; I do see people being fine with that for the Peer
Source ASN because in the majority of cases (not all, ie. some remote
peerings may escape this scheme depending on how they are built) you may
map it 1:1 to an interface, a VLAN or a MAC address. But since
communities are bound to an NLRI, anything static does not really scale (and
that is the main reason the knob is not there today).

So your alternate idea - making the path visible via ADD-PATH - sure,
that would actually be a nice test to use your lab for. A possible
logic could be (again, as you suggested): 1) a bgp_peer_src_as_map
compiled as you do today + bgp_nexthop (a key already supported, so zero
work there), to associate ASNs, the expected next-hop and interface /
VLAN / MAC address; 2) have a BGP session with ADD-PATH so that the
multiple paths remain visible (also here zero work to do); and 3) use the
BGP next-hop info in the bgp_peer_src_as_map as the selector in the ADD-PATH
vector (this logic also already exists, BUT NOT in conjunction with
a BGP next-hop fed by a bgp_peer_src_as_map or, let me re-phrase, at
least this is untested / uncharted territory). 

Sounds like fun. Shall we move to unicast email for lab access and for
arranging all the different pieces? Since it's not urgent, doing some
spare-time work on it, I guess we can converge on this in a week or two.

Paolo

On Mon, May 25, 2020 at 06:21:56PM +0200, Simone Ricci wrote:
> Ciao Paolo,
> 
> > Il giorno 25 mag 2020, alle ore 16:03, Paolo Lucente  ha 
> > scritto:
> > 
> > Ciao Simone,
> > 
> > If i got it correct you are after static mapping of communities to input
> > traffic - given an input interface / vlan or an ingress router or a
> > source MAC address.
> 
> Yes, just to be clear imagine this scenario:
> 
> route 192.0.2.0/24, originated by AS1000, coming in from two upstreams 
> (AS100, AS200) announced as follows:
> 
> 192.0.2.0/24 100 500 1000 (100:100)
> 192.0.2.0/24 200 1000 (200:100)
> 
> Obviously the path via AS200 will be the best one, so pmacctd always attaches 
> community 200:100 to inbound traffic…even if enters via AS100 (it can 
> discriminate the peer src AS thanks to the relevant map)
> 
> > It seems doable, like you said, adding a machinery
> > like it exists for the source peer ASN.
> 
> If I understand correctly, in nfacctd/sfacctd the association is done looking 
> at the BGP next-hop attribute; maybe it’s possible to sort it out just by 
> “extending” the existing map to let the user “suggest” that traffic matching 
> a relevant filter has a specific bgp next-hop as well as a peer src AS…but 
> I’m just thinking out loud here.
> 
> > I'd have one question for you:
> > 
> > How would the 'output' look like: one single community or a list of
> > communities (ths may make less sense but still i'd like to double-check
> > with you)?
> 
> Regarding the format, the actual output will be OK (single string, 
> communities separated by _), as I push everything via AMQP and parsing gets 
> done upper in the stack; it’s not a bad thing, when considering that not all 
> databases are going to accept arrays of objects and that trasformation is 
> easily supported by a lot of tooling (be it logstash, telegraph, fluentd…)
> 
> > I guess you may be interested in either standard or large
> > communities but not extended, true? And, if true, would you have any
> > preferences among the two? Perhaps the standard ones since you mention
> > 'src_std_comm’?
> 
> At the moment the support for the standard ones will suffice, for me
> 
> > It's not a biggie and i guess i can converge on this relatively soon;
> > can you confrm your priority / urgency? 
> 
> Oh it’s not urgent at all, but it would be a very nice-to-have feature which 
> helps getting a lot of interesting insights.
> Just one thing: as you may remember I’ve got a nice testing environment that 
> you’re welcome to use if it helps.
> 
> Thank you!
> Simone.
> 

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] pmacctd and src_std_comm aggregation

2020-05-25 Thread Paolo Lucente

Ciao Simone,

If I got it correct, you are after a static mapping of communities to input
traffic - given an input interface / VLAN, an ingress router or a
source MAC address. It seems doable, like you said, by adding machinery
like that which exists for the source peer ASN. I'd have one question for you:

What would the 'output' look like: one single community or a list of
communities (this may make less sense but still I'd like to double-check
with you)? I guess you may be interested in either standard or large
communities but not extended, true? And, if true, would you have any
preference between the two? Perhaps the standard ones, since you mention
'src_std_comm'?

It's not a biggie and I guess I can converge on this relatively soon;
can you confirm your priority / urgency? 

Paolo

On Sun, May 24, 2020 at 02:09:24PM +0200, Simone Ricci wrote:
> Good Evening,
> 
> I’m trying to configure pmacctd to aggregate inbound traffic by these 
> primitives:
> 
> peer_src_as, src_as, src_std_comm
> 
> The goal is to see if traffic from certain networks announced by carrier X 
> with specific communities comes in from X or another hypothetical path (in 
> that case the communities are not relevant, but that’s another story).
> 
> My configuration is the following: machine running pmacctd sees the traffic 
> thru 2 NICs, connected to SPAN port on core switches (where the carriers are 
> linked); only inbound traffic is presented. I also setup a bgp peering with 
> the border router, and enabled ADD-PATH capability on the session.
> 
> The setup seems to work, the problem being that the community list always 
> refers to the bgp best path; digging thru the documentation I see that in the 
> ADD-PATH case, the method to select the relevant entry is looking at the 
> bgp_next_hop of the flow…but I think that's actually applicable only to 
> netflow/sflow collectors, right? I were wondering if it’s possible to extend 
> bgp_peer_src_as_map to set the relevant information, so that every flow will 
> have the community field populated by leveraging the same mechanics actually 
> used to populate the peer_src_as field.
> 
> Thank you
> Simone
> 
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] BGP correlation not working with nfacctd, all BGP set to 0

2020-05-19 Thread Paolo Lucente


Hi Wilfrid,

"we already capture all the flows matching the different rd because
of our netflow setup.". Although i may appreciate if you could elaborate
more on your netflow setup (it makes the exercise less a treasure hunt
for me), I am sure you do: can you paste me the content of one of your
flows with any indication that would point a collector, ie. a RD field,
to the right VPN RIB? You know, we need a linking pin - it that's there
(i hint it is not) then we are all good. You can take a capture of your
flows with tcpdump and inspect them conveniently with Wireshark. 

So the exercise for you is the following, take this record for example:

{
   "seq": 3,
   "timestamp": "2020-05-19 07:15:00",
   "peer_ip_src": "w.x.y.z",
   "ip_prefix": "a.b.c.d/27",
   "rd": "0:ASN:900290024",
   "label": "63455"
}

We need to match . peer_ip_src is the IP of
the device exporting NetFlow, easy, check. ip_prefix is contained in the
NetFlow record, easy, check. Where is 0:ASN:900290024 being mentioned in
the flow? Hint, hint: nowhere, and you need to help the collector
derive this information with a flow_to_rd_map.
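
Purely as an illustration of the direction, an entry in such a map could look 
something like the line below; the RD, exporter address and input ifindex are 
placeholders (not values from this thread), and the exact supported keys are listed 
in examples/flow_to_rd.map.example in the pmacct distribution:

  ! flows entering ifindex 100 of exporter w.x.y.z belong to the VPN with this RD
  id=0:65512:1 ip=w.x.y.z in=100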

Paolo

On Tue, May 19, 2020 at 03:21:53PM +, Grassot, Wilfrid wrote:
> Hi Paolo,
> 
> Unless I misunderstand the flow_to_rd_map, this one would not help in
> our case.
> Indeed we already capture all the flows matching the different rd because
> of our netflow setup.
> nfacctd already receives only flows from the specific RDs involved in the
> monitored L3VPN.
> 
> My concern is about the correlation of a flow to a src_as, dst_as,
> dst_peer... retrieved from the captured BGP RIB  of this L3VPN and dumped
> to  bgp-$peer_src_ip-%H%M.log,
> 
> Would you please confirm that the flow, BGP correlation can only work only
> if router only advertise to the pmbgpd the best path
> 
> In other words, would you please confirm that our setup is not supported
> because for each prefixes there are not a unique entry in the captured BGP
> RIB, but at least 2 or 3 entries (no best path selection at the
> route-reflector because each vpnv4 address are seen as unique because of
> the different RDs involved) ?
> 
> Please see below an example of the pmbgpd dump file:
> 
> sudo jq '. | select(."ip_prefix" | contains ("a.b.c.d"))'
> bgp-w_x_y_z-0715.log | more
> {
>   "seq": 3,
>   "timestamp": "2020-05-19 07:15:00",
>   "peer_ip_src": "w.x.y.z",
>   "ip_prefix": "a.b.c.d/27",
>   "rd": "0:ASN:900290024",
>   "label": "63455"
> }
> {
>   "seq": 3,
>   "timestamp": "2020-05-19 07:15:00",
>   "peer_ip_src": " w.x.y.z ",
>   "ip_prefix": "a.b.c.d/27",
>   "rd": "0:ASN:911790015",
>   "label": "49061"
> }
> {
>   "seq": 3,
>   "timestamp": "2020-05-19 07:15:00",
>   "peer_ip_src": " w.x.y.z ",
>   "ip_prefix": "a.b.c.d/27",
>   "rd": "0:ASN:911790023",
>   "label": "49059"
> }
> 
> Thank you
> Wilfrid
> 
> -Original Message-
> From: Paolo Lucente 
> Sent: Tuesday, 19 May 2020 16:01
> To: Grassot, Wilfrid 
> Cc: pmacct-discussion@pmacct.net
> Subject: Re: [pmacct-discussion] BGP correlation not working with nfacctd,
> all BGP set to 0
> 
> 
> Hi Wilfrid,
> 
> This is very possibly point #1 of my previous email. The need for a
> flow_to_rd_map to associate flows to the right RD. You can find some
> examples here on how to compose it:
> 
> https://github.com/pmacct/pmacct/blob/1.7.5/examples/flow_to_rd.map.exampl
> e
> 
> Paolo
> 
> On Tue, May 19, 2020 at 08:17:44AM +, Grassot, Wilfrid wrote:
> > Hi Paolo,
> >
> > Could the issue be that correlation does not work because for each
> > "ip_prefix" there is not one, but two or three routes collected by
> > pmbgpd ?
> > Indeed because of redundancies, each prefixes are received by several
> > different routers in our network and by design each of the routers use
> > different route distinguisher (rd).
> > Hence the pmbgpd does not receive a unique route corresponding to best
> > path selected by the route-reflector, but the two or three different
> > vpnv4 addresses (rd:a.b.c.d) corresponding to ip_prefix = a.b.c.d ?
> >
> > Wilfrid
> >
> >
> > -Original Message-
> > From: Grassot, Wilfrid 
> > Sent: Monday, 18 May 2020 17:05
> > To: Paolo Lucente ; 

Re: [pmacct-discussion] help configuration cisco 4948E-F netflow-lite

2020-05-19 Thread Paolo Lucente

Hi Ionut,

Thanks for getting in touch with this.

From the log file you sent, apparently the switch sends element #104
(layer2packetSectionData) to include a portion of the sampled frame.
Unfortunately such element has been "deprecated in favor of 315
dataLinkFrameSection. Layer 2 packet section data." according to
IANA. Element #315, implemented by some Nexus-family kit, is instead
supported by pmacct just fine.

The above is just to say that implementing element #104 should be
relatively straightforward, pretty much handling it the same as #315, which is
already implemented; we are talking about adding a couple of lines of code.

Can you send me privately a sample of your data, to see if the theory
holds and we are not thrown off by any pesky details? 

Paolo
 
On Tue, May 19, 2020 at 04:45:34PM +0300, Ionuț Bîru wrote:
> Hi guys,
> 
> I'm struggling a bit to collect netflow-v9 lite from this particular device.
> 
> cisco configuration: https://paste.xinu.at/QLW0j/
> nfacctd config: https://paste.xinu.at/bnKHc3/
> nfacctd -f netflow.conf -d log: https://paste.xinu.at/oaJ/
> 
> pmacct -s doesn't show any information; it is like nfacctd doesn't receive any
> information from the Cisco related to src_ip and so on.
> 
> Is there somebody who managed to collect flow information using netflow-lite?

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] BGP correlation not working with nfacctd, all BGP set to 0

2020-05-19 Thread Paolo Lucente


Hi Wilfrid,

This is very possibly point #1 of my previous email. The need for a
flow_to_rd_map to associate flows to the right RD. You can find some
examples here on how to compose it:

https://github.com/pmacct/pmacct/blob/1.7.5/examples/flow_to_rd.map.example

Paolo 

On Tue, May 19, 2020 at 08:17:44AM +, Grassot, Wilfrid wrote:
> Hi Paolo,
> 
> Could the issue be that correlation does not work because for each
> "ip_prefix" there is not one, but two or three routes collected by pmbgpd
> ?
> Indeed because of redundancies, each prefixes are received by several
> different routers in our network and by design each of the routers use
> different route distinguisher (rd).
> Hence the pmbgpd does not receive a unique route corresponding to best
> path selected by the route-reflector, but the two or three different vpnv4
> addresses (rd:a.b.c.d) corresponding to ip_prefix = a.b.c.d ?
> 
> Wilfrid
> 
> 
> -Original Message-
> From: Grassot, Wilfrid 
> Sent: Monday, 18 May 2020 17:05
> To: Paolo Lucente ; pmacct-discussion@pmacct.net
> Subject: RE: [pmacct-discussion] BGP correlation not working with nfacctd,
> all BGP set to 0
> 
> Hi Paolo,
> 
> Thank you for your answer.
> 
> My bad in the description of the issue:
> w.x.y.z is indeed the ipv4 address of the router loop0 which is also its
> router-id.
> 
> Currently our setup is to iBGP peer with the router (router-id w.x.y.z) at
> the address-family vpnv4.
> We already filter out using route-target on the router for nfacctd  to
> receive only ipv4 routes from the monitored L3VPN.
> So the BGP daemon is only collecting routes of the monitored L3VPN
> 
> On nfacctd collector we also receive only the netflow from routers
> interfaces configured on this vrf.
> If I manually make the correlation of the captured netflow, I can see in
> the BGP dump files the corresponding src_as, dest_as, peer_dst_ip
> 
> So netflow and BGP are fine and bgp_agent_map file is  bgp_ip=w.x.y.z.
> ip=0.0.0.0/0 where w.x.y.z is the loopback0 (router-id) of the router,
> and nfacctd is peering with it (sorry again for the mishap).
> 
> I use the latest pmacctd 1.7.4 and I compile with ./configure
> --enable-jansson  (--enable-threads is not available)
> 
> And yes our network is a confederation of 6 sub_as.
> 
> Thank you
> 
> Wilfrid Grassot
> 
> 
> 
> 
> 
> 
> -Original Message-
> From: Paolo Lucente 
> Sent: Monday, 18 May 2020 16:30
> To: pmacct-discussion@pmacct.net; Grassot, Wilfrid
> 
> Subject: Re: [pmacct-discussion] BGP correlation not working with nfacctd,
> all BGP set to 0
> 
> 
> Hi Wilfrid,
> 
> Thanks for getting in touch. A couple of notes:
> 
> 1) if you are sending vpnv4 routes - and if that is a requirement - then
> you will need a flow_to_rd_map to map flows to the right VPN (maybe basing
> on the input interface at the ingress router? just an idea);
> 
> 2) Confederations always do add up to the fun :-) I may not have the
> complete info at the moment in order to comment further on this;
> 
> 3) bgp_ip in the bgp_agent_map may have been set incorrectly; in the
> comment you say "where w.x.y.z is the IP address of the nfacctd collector"
> but, according to docs, it should be set to the "IPv4/IPv6 session address
> or Router ID of the BGP peer.".
> 
> You may start working on #1 and #3. Probably more info is needed for #2
> and for this reason I suggest that, if things do not just work out at this
> round, we move the conversation to unicast email.
> 
> Paolo
> 
> 
> On 17/05/2020 16:24, Grassot, Wilfrid wrote:
> > Good afternoon
> >
> > I cannot have my netflow augmented with bgp data (src_as, dst_as,
> > peer_dst_ip…) all of the BGP data stay 0 or are empty
> >
> > An output of the csv file is:
> >
> > 0,0,63.218.164.15,,62.140.128.166,220.206.187.242,2123,2123,udp,1,40
> >
> > Where 0,0 are the missing src_as, dst_as  and , , is the missing
> > peer_dst_ip
> >
> > I try to monitor traffic of a L3VPN by having all routers sending
> > netflow to nfacctd and augment them with BGP data.
> >
> > The nfacctd collector peers with the route-reflector on address-family
> > vpnv4.
> >
> > _Please mind the network is a confederation network with sub-as_
> >
> > __
> >
> > I cannot figure out what is wrong
> >
> > __
> >
> > BGP session is up,
> >
> > bgp_table_dump_file collects properly all routes from the vrf
> >
> > netflow is properly collected by nfacctd
> >
> > But all aggregate values that should augment the data stay at zero for
> > the 

Re: [pmacct-discussion] BGP correlation not working with nfacctd, all BGP set to 0

2020-05-18 Thread Paolo Lucente



Hi Wilfrid,

Thanks for getting in touch. A couple of notes:

1) if you are sending vpnv4 routes - and if that is a requirement - then 
you will need a flow_to_rd_map to map flows to the right VPN (maybe 
based on the input interface at the ingress router? just an idea);


2) Confederations always do add up to the fun :-) I may not have the 
complete info at the moment in order to comment further on this;


3) bgp_ip in the bgp_agent_map may have been set incorrectly; in the 
comment you say "where w.x.y.z is the IP address of the nfacctd 
collector" but, according to the docs, it should be set to the "IPv4/IPv6 
session address or Router ID of the BGP peer." (a minimal sketch follows 
this list).
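
The sketch below is illustrative only; w.x.y.z here stands for the BGP peer's 
session address or Router ID (that is, the router feeding the routes, not the 
collector's own address), and the catch-all prefix maps every flow exporter to 
that single BGP feed:

  ! bgp_agent_map
  bgp_ip=w.x.y.z ip=0.0.0.0/0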


You may start working on #1 and #3. Probably more info is needed for #2 
and for this reason I suggest that, if things do not just work out at 
this round, we move the conversation to unicast email.


Paolo


On 17/05/2020 16:24, Grassot, Wilfrid wrote:

Good afternoon

I cannot get my netflow augmented with BGP data (src_as, dst_as, 
peer_dst_ip…); all of the BGP data stays 0 or is empty


An output of the csv file is:

0,0,63.218.164.15,,62.140.128.166,220.206.187.242,2123,2123,udp,1,40

Where 0,0 are the missing src_as, dst_as  and , , is the missing peer_dst_ip

I try to monitor traffic of a L3VPN by having all routers sending 
netflow to nfacctd and augment them with BGP data.


The nfacctd collector peers with the route-reflector on address-family 
vpnv4.


_Please mind the network is a confederation network with sub-as_

__

I cannot figure out what is wrong

__

BGP session is up,

bgp_table_dump_file collects properly all routes from the vrf

netflow is properly collected by nfacctd

But all aggregate values that should augment the data stay at zero for 
the AS, or empty like peer_dst_ip


My bgp_agent_map file has the below entry

bgp_ip=w.x.y.z.   ip=0.0.0.0/0     where w.x.y.z is the IP address of 
the nfacctd collector


my nfacctd config file is:

daemonize: false

debug: true

bgp_peer_as_skip_subas: true

bgp_src_std_comm_type: bgp

bgp_src_ext_comm_type: bgp

bgp_src_as_path_type: bgp

bgp_agent_map: /usr/local/etc/pmacct/map.txt

nfacctd_as_new: bgp

nfacctd_net: bgp

nfacctd_as: bgp

nfacctd_port: 2055

nfacctd_templates_file: /usr/local/etc/pmacct/nfacctd-template.txt

nfacctd_time_new: true

plugin_buffer_size: 70240

plugin_pipe_size: 2024000

bgp_daemon: true

bgp_daemon_ip: w.x.y.z

bgp_daemon_id: w.x.y.z

bgp_daemon_max_peers: 100

bgp_table_dump_file: /var/spool/bgp-$peer_src_ip-%H%M.log

plugins: print

print_output_file: /var/spool/plugin.log

print_output_file_append: true

print_refresh_time: 3

print_output: csv

aggregate: proto, src_host, src_port, dst_host, dst_port, src_as, 
dst_as, peer_src_ip, peer_dst_ip


Thank you in advance

Wilfrid


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


[pmacct-discussion] pmacct 1.7.5 code freeze

2020-05-10 Thread Paolo Lucente


Dears,

pmacct 1.7.5 has entered code freeze today with the outlook of having the
official release wrapped up in approx one month. The code has been
branched out on GitHub:

https://github.com/pmacct/pmacct/tree/1.7.5

Code freeze means that until release time only critical bug fixes will be
committed to this code branch. To allow us all to benefit from an improved
quality of released code, I encourage everybody to test this code (should
you have a non-production environment available) and report any issues
you may stumble upon.

To clone code in a specific branch you need to use the -b knob, ie.:

git clone -b 1.7.5 https://github.com/pmacct/pmacct.git

Regards,
Paolo


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Tracking ingress throughput

2020-04-30 Thread Paolo Lucente


Hi,

By sending a SIGUSR1 to the daemon you are returned some stats information
in the log. Please see here:

https://github.com/pmacct/pmacct/blob/1.7.4/docs/SIGNALS#L17-#L40
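
A minimal sketch of how to trigger this from a shell; the daemon name is an 
assumption (use whichever pmacct daemon you are actually running):

  # ask a running nfacctd to log its stats counters
  pkill -USR1 nfacctd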

Paolo 

On Wed, Apr 29, 2020 at 10:12:53AM +0530, HEMA CHANDRA YEDDULA wrote:
> 
> Hi paolo,
> 
> Is there any way to track the amount data pmacct is receiving. Is there any 
> counter 
> for this in the code ?
> 
> Any help regarding the query is appreciated.
> 
> Thanks & Regards,
> Hema Chandra

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




Re: [pmacct-discussion] BGP attributes are empty for almost all the data

2020-04-17 Thread Paolo Lucente


Hi Alexandre,

Why don't you try to do a dump of routes received by pmacct? Like:

https://github.com/pmacct/pmacct/blob/1.7.4/QUICKSTART#L1780-#L1781

This test may require compiling pmacct with JSON / Jansson support.
Also, for a test you could add 'dst_host' to your 'aggregate'
config directive so you can see what comes in via flows (that is, before
pmacct tries to perform its magic with network masking). With all of
this you should have a full view: what you get from BGP, what you get
from flows, what works and what does not work, and perhaps establish a
pattern (if not outright find a cause, ie. a partial BGP view and
such).
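
For illustration, the 'dst_host' tweak against the 'aggregate' line quoted further 
down in this thread would look like this (everything else unchanged), so that the 
raw destination addresses seen in flows can be compared against the prefixes in the 
BGP table dump:

  aggregate: dst_net, dst_mask, dst_as, dst_host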

Paolo
 
On Thu, Apr 16, 2020 at 05:17:55PM +0200, alexandre S. wrote:
> Hello,
> 
> I am trying to log sflow data in a sqlite database, aggregated by
> destination AS and prefix.
> 
> Currently I have configured the bgp_deamon plugin, along with the sqlite3
> one to run with sfacctd.
> 
> 
> The problem is that almost all the traffic has its AS set to 0 and
> its destination prefix set to 0.0.0.0/0. By looking at the database entries
> I found that a small part of the data is saved with the right values, and I
> can't find out why.
> 
> My guess was that the first hour had wrong values because of the time it
> took to receive all bgp information, so I let the daemon run for a few hours
> but it has happened each time the data was saved.
> 
> The configuration file look like this :
> 
> 
> 
> #daemonize: true
> 
> #debug: true
> #debug_internal_msg: true
> 
> bgp_daemon: true
> bgp_daemon_ip: 127.0.0.1
> bgp_daemon_port: 1180
> bgp_daemon_as: 
> bgp_daemon_max_peers: 1
> bgp_table_dump_file:
> /pmacct-1.7.3/output/bgp-$peer_src_ip-%Y_%m_%dT%H_%M_%S.txt
> bgp_table_dump_refresh_time: 3600
> bgp_agent_map: /pmacct-1.7.3/etc/sfacctd_bgp.map
> 
> plugins: sqlite3[simple]
> 
> sql_db[simple]: /pmacct-1.7.3/output/pmacct.db
> sql_refresh_time[simple]: 3600
> sql_history[simple]: 60m
> sql_history_roundoff[simple]: h
> sql_table[simple]: acct
> sql_table_version[simple]: 9
> 
> aggregate: dst_net, dst_mask, dst_as
> 
> sfacctd_as: bgp
> sfacctd_net: bgp
> sfacctd_port: 2602
> sfacctd_ip: 
> sfacctd_time_new: true
> 
> --
> 
> BGP agent map:
> 
> bgp_ip=127.0.0.1 ip=
> 
> --
> 
> Regards, Alexandre
> 
> 
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

