Re: [pmacct-discussion] Multiple nfacctd daemons writing to same Kafka topic

2020-04-22 Thread Emanuel dos Reis Rodrigues
Thank you so much!

On Wed, Apr 22, 2020, 12:01 PM Brian Solar  wrote:

> use the named configuration feature:
>
> kafka_topic[config_name]: netflow
> kafka_broker_host[config_name]: 192.168100.105
> ...
> ...
>
>
>
> ‐‐‐ Original Message ‐‐‐
> On Sunday, April 19, 2020 5:51 PM, Emanuel dos Reis Rodrigues <
> emanueldosr...@gmail.com> wrote:
>
> I see, I actually tried it before and then realized the writer_id was
> changing based on the PID of nfacctd. Do you know which parameter
> customizes the writer_id?
>
> Thanks !
>
> Best Regards,
> Emanuel
>
>
> On Sun, Apr 19, 2020 at 11:42 AM Brian Solar  wrote:
>
>> You already seem to have a solution, but to me the writer_id is what you
>> want.  Change the name of the process in your configuration file.
>>
>>
>>
>>
>>
>> ‐‐‐ Original Message ‐‐‐
>> On Wednesday, April 15, 2020 7:33 PM, Emanuel dos Reis Rodrigues <
>> emanueldosr...@gmail.com> wrote:
>>
>> Hey, I just realized it worked. I think I was a little behind on the
>> messages parked in my Kafka topic; now I can see the tag.
>>
>> Thank you so much for your help.
>>
>> On Wed, Apr 15, 2020 at 10:33 AM Emanuel dos Reis Rodrigues <
>> emanueldosr...@gmail.com> wrote:
>>
>>> I am using:
>>>
>>> NetFlow Accounting Daemon, nfacctd 1.7.2-git (20181018-00+c3)
>>>
>>> Arguments:
>>>  '--enable-kafka' '--enable-jansson'
>>> 'JANSSON_CFLAGS=-I/usr/local/include/' 'JANSSON_LIBS=-L/usr/local/lib
>>> -ljansson' '--enable-l2' '--enable-ipv6' '--enable-64bit'
>>> '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
>>> '--enable-st-bins'
>>>
>>> Libs:
>>> libpcap version 1.5.3
>>> rdkafka 0.11.4
>>> jansson 2.12
>>>
>>> I can upgrade it to a newer version and try again.
>>>
>>>
>>> On Wed, Apr 15, 2020 at 8:59 AM Paolo Lucente  wrote:
>>>

 Hey Emanuel,

 The config is correct and I did try your same config and that does work
 for me, ie.:

 $ ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092
 --topic pmacct.flows
 {"event_type": "purge", "tag": 1, [ .. ]}

 What version of the software are you using? Is it 1.7.4p1 (latest
 stable) or master code from GitHub? If so, is it possible an old running
 nfacctd process is reading the data instead of the newly configured one?

 Paolo

 On Wed, Apr 15, 2020 at 12:17:43AM -0400, Emanuel dos Reis Rodrigues
 wrote:
 > I tried it; my config follows:
 >
 > kafka_topic: netflow
 > kafka_broker_host: 192.168100.105
 > kafka_broker_port: 9092
 > kafka_refresh_time: 1
 > #daemonize: true
 > plugins: kafka
 > nfacctd_port: 9995
 > post_tag: 1
 > aggregate: tag, peer_src_ip, src_host, dst_host, timestamp_start,
 > timestamp_end, src_port, dst_port, proto
 >
 >
 > I kept the peer_src_ip, but the tag one is not being posted to Kafka.
 >
 > {'event_type': 'purge', 'peer_ip_src': '172.18.0.2', 'ip_src':
 > '192.168.1.100', 'ip_dst': 'x.46.x.245', 'port_src': 51184,
 'port_dst':
 > 443, 'ip_proto': 'tcp', 'timestamp_start': '2020-04-14
 14:15:39.00',
 > 'timestamp_end': '2020-04-14 14:15:54.00', 'packets': 5, 'bytes':
 260,
 > 'writer_id': 'default_kafka/75091'}
 >
 > Did I miss anything ?
 >
 >
 > Thanks !
 >
 >
 >
 > On Tue, Apr 14, 2020 at 10:26 AM Paolo Lucente 
 wrote:
 >
 > >
 > > I may have skipped the important detail you need to add the 'tag'
 key to
 > > your 'aggregate' line in the config, my bad. This is in addition
 to, say,
 > > 'post_tag: 1' to identify collector 1. Let me know how it goes.
 > >
 > > Paolo
 > >
 > > On Tue, Apr 14, 2020 at 10:18:55AM -0400, Emanuel dos Reis
 Rodrigues wrote:
 > > > Thank you man, I did this test but I did not see the id being
 pushed
 > > along
 > > > with the NetFlow info to the Kafka topic. Where would the information
 > > > show up?
 > > >
 > > >
 > > > On Tue, Apr 14, 2020 at 9:15 AM Paolo Lucente 
 wrote:
 > > >
 > > > >
 > > > > Hi Emanuel,
 > > > >
 > > > > Apologies, I did not get that you wanted an ID for the collector. The
 > > > > simplest way of achieving that is 'post_tag' as you just have
 to supply
 > > > > a number as ID; pre_tag_map expects a map and may be better to
 be
 > > > > reserved for more complex use-cases.
 > > > >
 > > > > Paolo
 > > > >
 > > > > On Mon, Apr 13, 2020 at 03:35:52PM -0400, Emanuel dos Reis
 Rodrigues
 > > wrote:
 > > > > > Thank you for your help. Appreciate it !
 > > > > >
 > > > > > See, I did use it for testing after I sent this email.
 However, the
 > > ip
 > > > > > showed there was the IP from my nfacctd machine, the collector
 > > itself.
 > > > > Not
 > > > > > the exporter.
 > > > > >
 > > > > > peer_src_ip  : IP address 

Re: [pmacct-discussion] Multiple nfacctd daemons writing to same Kafka topic

2020-04-22 Thread Brian Solar
use the named configuration feature:

kafka_topic[config_name]: netflow
kafka_broker_host[config_name]: 192.168100.105
...
...
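
For reference, a fuller per-daemon sketch of the named-configuration approach might look like the following. The directive names are real pmacct keys; the instance name "site_a", the broker address, and the aggregate list are illustrative. The bracketed name ties each directive to the plugin instance declared in "plugins:":

```
! global settings for this nfacctd instance
nfacctd_port: 9995
post_tag: 1
!
! named kafka plugin instance; "site_a" is an illustrative name
plugins: kafka[site_a]
kafka_topic[site_a]: netflow
kafka_broker_host[site_a]: 192.168.100.105
kafka_broker_port[site_a]: 9092
aggregate[site_a]: tag, peer_src_ip, src_host, dst_host, src_port, dst_port, proto
```

Each nfacctd daemon would carry its own config file with a distinct post_tag (and, if desired, a distinct plugin instance name), while all of them point at the same topic.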

‐‐‐ Original Message ‐‐‐
On Sunday, April 19, 2020 5:51 PM, Emanuel dos Reis Rodrigues 
 wrote:

> I see, I actually tried it before and then realized the writer_id was changing
> based on the PID of nfacctd. Do you know which parameter customizes
> the writer_id?
>
> Thanks !
>
> Best Regards,
> Emanuel
>
> On Sun, Apr 19, 2020 at 11:42 AM Brian Solar  wrote:
>
>> You already seem to have a solution, but to me the writer_id is what you 
>> want.  Change the name of the process in your configuration file.
>>
>> ‐‐‐ Original Message ‐‐‐
>> On Wednesday, April 15, 2020 7:33 PM, Emanuel dos Reis Rodrigues 
>>  wrote:
>>
>>> Hey, I just realized it worked. I think I was a little behind on the
>>> messages parked in my Kafka topic; now I can see the tag.
>>>
>>> Thank you so much for your help.
>>>
>>> On Wed, Apr 15, 2020 at 10:33 AM Emanuel dos Reis Rodrigues 
>>>  wrote:
>>>
 I am using:

 NetFlow Accounting Daemon, nfacctd 1.7.2-git (20181018-00+c3)

 Arguments:
  '--enable-kafka' '--enable-jansson' 
 'JANSSON_CFLAGS=-I/usr/local/include/' 'JANSSON_LIBS=-L/usr/local/lib 
 -ljansson' '--enable-l2' '--enable-ipv6' '--enable-64bit' 
 '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins' 
 '--enable-st-bins'

 Libs:
 libpcap version 1.5.3
 rdkafka 0.11.4
 jansson 2.12

 I can upgrade it to a newer version and try again.

 On Wed, Apr 15, 2020 at 8:59 AM Paolo Lucente  wrote:

> Hey Emanuel,
>
> The config is correct and I did try your same config and that does work
> for me, ie.:
>
> $ ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 
> --topic pmacct.flows
> {"event_type": "purge", "tag": 1, [ .. ]}
>
> What version of the software are you using? Is it 1.7.4p1 (latest
> stable) or master code from GitHub? If so, is it possible an old running
> nfacctd process is reading the data instead of the newly configured one?
>
> Paolo
>
> On Wed, Apr 15, 2020 at 12:17:43AM -0400, Emanuel dos Reis Rodrigues 
> wrote:
>> I tried it; my config follows:
>>
>> kafka_topic: netflow
>> kafka_broker_host: 192.168100.105
>> kafka_broker_port: 9092
>> kafka_refresh_time: 1
>> #daemonize: true
>> plugins: kafka
>> nfacctd_port: 9995
>> post_tag: 1
>> aggregate: tag, peer_src_ip, src_host, dst_host, timestamp_start,
>> timestamp_end, src_port, dst_port, proto
>>
>>
>> I kept the peer_src_ip, but the tag one is not being posted to Kafka.
>>
>> {'event_type': 'purge', 'peer_ip_src': '172.18.0.2', 'ip_src':
>> '192.168.1.100', 'ip_dst': 'x.46.x.245', 'port_src': 51184, 'port_dst':
>> 443, 'ip_proto': 'tcp', 'timestamp_start': '2020-04-14 14:15:39.00',
>> 'timestamp_end': '2020-04-14 14:15:54.00', 'packets': 5, 'bytes': 
>> 260,
>> 'writer_id': 'default_kafka/75091'}
>>
>> Did I miss anything ?
>>
>>
>> Thanks !
>>
>>
>>
>> On Tue, Apr 14, 2020 at 10:26 AM Paolo Lucente  wrote:
>>
>> >
>> > I may have skipped the important detail you need to add the 'tag' key 
>> > to
>> > your 'aggregate' line in the config, my bad. This is in addition to, 
>> > say,
>> > 'post_tag: 1' to identify collector 1. Let me know how it goes.
>> >
>> > Paolo
>> >
>> > On Tue, Apr 14, 2020 at 10:18:55AM -0400, Emanuel dos Reis Rodrigues 
>> > wrote:
>> > > Thank you man, I did this test but I did not see the id being pushed
>> > along
>> > > with the NetFlow info to the Kafka topic. Where would the information
>> > > show up?
>> > >
>> > >
>> > > On Tue, Apr 14, 2020 at 9:15 AM Paolo Lucente  
>> > > wrote:
>> > >
>> > > >
>> > > > Hi Emanuel,
>> > > >
>> > > > Apologies, I did not get that you wanted an ID for the collector. The
>> > > > simplest way of achieving that is 'post_tag' as you just have to 
>> > > > supply
>> > > > a number as ID; pre_tag_map expects a map and may be better to be
>> > > > reserved for more complex use-cases.
>> > > >
>> > > > Paolo
>> > > >
>> > > > On Mon, Apr 13, 2020 at 03:35:52PM -0400, Emanuel dos Reis 
>> > > > Rodrigues
>> > wrote:
>> > > > > Thank you for your help. Appreciate it !
>> > > > >
>> > > > > See, I did use it for testing after I sent this email. However, 
>> > > > > the
>> > ip
>> > > > > showed there was the IP from my nfacctd machine, the collector
>> > itself.
>> > > > Not
>> > > > > the exporter.
>> > > > >
>> > > > > peer_src_ip  : IP address or identificator of
>> > > > telemetry
>> > > > > exporting device
>> > > > >
>> > > > > 

Re: [pmacct-discussion] Multiple nfacctd daemons writing to same Kafka topic

2020-04-19 Thread Emanuel dos Reis Rodrigues
I see, I actually tried it before and then realized the writer_id was
changing based on the PID of nfacctd. Do you know which parameter
customizes the writer_id?

Thanks !

Best Regards,
Emanuel


On Sun, Apr 19, 2020 at 11:42 AM Brian Solar  wrote:

> You already seem to have a solution, but to me the writer_id is what you
> want.  Change the name of the process in your configuration file.
>
>
>
>
>
> ‐‐‐ Original Message ‐‐‐
> On Wednesday, April 15, 2020 7:33 PM, Emanuel dos Reis Rodrigues <
> emanueldosr...@gmail.com> wrote:
>
> Hey, I just realized it worked. I think I was a little behind on the
> messages parked in my Kafka topic; now I can see the tag.
>
> Thank you so much for your help.
>
> On Wed, Apr 15, 2020 at 10:33 AM Emanuel dos Reis Rodrigues <
> emanueldosr...@gmail.com> wrote:
>
>> I am using:
>>
>> NetFlow Accounting Daemon, nfacctd 1.7.2-git (20181018-00+c3)
>>
>> Arguments:
>>  '--enable-kafka' '--enable-jansson'
>> 'JANSSON_CFLAGS=-I/usr/local/include/' 'JANSSON_LIBS=-L/usr/local/lib
>> -ljansson' '--enable-l2' '--enable-ipv6' '--enable-64bit'
>> '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
>> '--enable-st-bins'
>>
>> Libs:
>> libpcap version 1.5.3
>> rdkafka 0.11.4
>> jansson 2.12
>>
>> I can upgrade it to a newer version and try again.
>>
>>
>> On Wed, Apr 15, 2020 at 8:59 AM Paolo Lucente  wrote:
>>
>>>
>>> Hey Emanuel,
>>>
>>> The config is correct and I did try your same config and that does work
>>> for me, ie.:
>>>
>>> $ ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092
>>> --topic pmacct.flows
>>> {"event_type": "purge", "tag": 1, [ .. ]}
>>>
>>> What version of the software are you using? Is it 1.7.4p1 (latest
>>> stable) or master code from GitHub? If so, is it possible an old running
>>> nfacctd process is reading the data instead of the newly configured one?
>>>
>>> Paolo
>>>
>>> On Wed, Apr 15, 2020 at 12:17:43AM -0400, Emanuel dos Reis Rodrigues
>>> wrote:
>>> > I tried it; my config follows:
>>> >
>>> > kafka_topic: netflow
>>> > kafka_broker_host: 192.168100.105
>>> > kafka_broker_port: 9092
>>> > kafka_refresh_time: 1
>>> > #daemonize: true
>>> > plugins: kafka
>>> > nfacctd_port: 9995
>>> > post_tag: 1
>>> > aggregate: tag, peer_src_ip, src_host, dst_host, timestamp_start,
>>> > timestamp_end, src_port, dst_port, proto
>>> >
>>> >
>>> > I kept the peer_src_ip, but the tag one is not being posted to Kafka.
>>> >
>>> > {'event_type': 'purge', 'peer_ip_src': '172.18.0.2', 'ip_src':
>>> > '192.168.1.100', 'ip_dst': 'x.46.x.245', 'port_src': 51184, 'port_dst':
>>> > 443, 'ip_proto': 'tcp', 'timestamp_start': '2020-04-14
>>> 14:15:39.00',
>>> > 'timestamp_end': '2020-04-14 14:15:54.00', 'packets': 5, 'bytes':
>>> 260,
>>> > 'writer_id': 'default_kafka/75091'}
>>> >
>>> > Did I miss anything ?
>>> >
>>> >
>>> > Thanks !
>>> >
>>> >
>>> >
>>> > On Tue, Apr 14, 2020 at 10:26 AM Paolo Lucente 
>>> wrote:
>>> >
>>> > >
>>> > > I may have skipped the important detail you need to add the 'tag'
>>> key to
>>> > > your 'aggregate' line in the config, my bad. This is in addition to,
>>> say,
>>> > > 'post_tag: 1' to identify collector 1. Let me know how it goes.
>>> > >
>>> > > Paolo
>>> > >
>>> > > On Tue, Apr 14, 2020 at 10:18:55AM -0400, Emanuel dos Reis Rodrigues
>>> wrote:
>>> > > > Thank you man, I did this test but I did not see the id being
>>> pushed
>>> > > along
>>> > > > with the NetFlow info to the Kafka topic. Where would the
>>> > > > information show up?
>>> > > >
>>> > > >
>>> > > > On Tue, Apr 14, 2020 at 9:15 AM Paolo Lucente 
>>> wrote:
>>> > > >
>>> > > > >
>>> > > > > Hi Emanuel,
>>> > > > >
>>> > > > > Apologies, I did not get that you wanted an ID for the collector. The
>>> > > > > simplest way of achieving that is 'post_tag' as you just have to
>>> supply
>>> > > > > a number as ID; pre_tag_map expects a map and may be better to be
>>> > > > > reserved for more complex use-cases.
>>> > > > >
>>> > > > > Paolo
>>> > > > >
>>> > > > > On Mon, Apr 13, 2020 at 03:35:52PM -0400, Emanuel dos Reis
>>> Rodrigues
>>> > > wrote:
>>> > > > > > Thank you for your help. Appreciate it !
>>> > > > > >
>>> > > > > > See, I did use it for testing after I sent this email.
>>> However, the
>>> > > ip
>>> > > > > > showed there was the IP from my nfacctd machine, the collector
>>> > > itself.
>>> > > > > Not
>>> > > > > > the exporter.
>>> > > > > >
>>> > > > > > peer_src_ip  : IP address or identificator
>>> of
>>> > > > > telemetry
>>> > > > > > exporting device
>>> > > > > >
>>> > > > > > In fact, it may have to do with the fact that I currently have an SSH
>>> > > tunnel
>>> > > > > with
>>> > > > > > socat with the remote machine in order to collect the data.
>>> This may
>>> > > be
>>> > > > > the
>>> > > > > > reason why, which is definitely not an ordinary condition. :)
>>> > > > > >
>>> > > > > > I am wondering if I could use this one to include a different
>>> tag on
>>> > > it
>>> > > > > > 

Re: [pmacct-discussion] Multiple nfacctd daemons writing to same Kafka topic

2020-04-19 Thread Brian Solar
You already seem to have a solution, but to me the writer_id is what you want.  
Change the name of the process in your configuration file.
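
A note on writer_id: in the purge records it appears as "default_kafka/75091", i.e. the plugin/process name followed by the PID of the writing process. Giving the plugin instance an explicit name, as in the hedged sketch below, replaces the "default_kafka" part; the PID suffix still varies across restarts, so downstream filtering should match on the name prefix (or on the tag) rather than the full string. The name "collector_a" is illustrative:

```
! naming the plugin instance changes the process-name part of writer_id,
! e.g. "default_kafka/75091" becomes "collector_a/<pid>"
plugins: kafka[collector_a]
kafka_topic[collector_a]: netflow
```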

‐‐‐ Original Message ‐‐‐
On Wednesday, April 15, 2020 7:33 PM, Emanuel dos Reis Rodrigues 
 wrote:

> Hey, I just realized it worked. I think I was a little behind on the
> messages parked in my Kafka topic; now I can see the tag.
>
> Thank you so much for your help.
>
> On Wed, Apr 15, 2020 at 10:33 AM Emanuel dos Reis Rodrigues 
>  wrote:
>
>> I am using:
>>
>> NetFlow Accounting Daemon, nfacctd 1.7.2-git (20181018-00+c3)
>>
>> Arguments:
>>  '--enable-kafka' '--enable-jansson' 'JANSSON_CFLAGS=-I/usr/local/include/' 
>> 'JANSSON_LIBS=-L/usr/local/lib -ljansson' '--enable-l2' '--enable-ipv6' 
>> '--enable-64bit' '--enable-traffic-bins' '--enable-bgp-bins' 
>> '--enable-bmp-bins' '--enable-st-bins'
>>
>> Libs:
>> libpcap version 1.5.3
>> rdkafka 0.11.4
>> jansson 2.12
>>
>> I can upgrade it to a newer version and try again.
>>
>> On Wed, Apr 15, 2020 at 8:59 AM Paolo Lucente  wrote:
>>
>>> Hey Emanuel,
>>>
>>> The config is correct and I did try your same config and that does work
>>> for me, ie.:
>>>
>>> $ ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic 
>>> pmacct.flows
>>> {"event_type": "purge", "tag": 1, [ .. ]}
>>>
>>> What version of the software are you using? Is it 1.7.4p1 (latest
>>> stable) or master code from GitHub? If so, is it possible an old running
>>> nfacctd process is reading the data instead of the newly configured one?
>>>
>>> Paolo
>>>
>>> On Wed, Apr 15, 2020 at 12:17:43AM -0400, Emanuel dos Reis Rodrigues wrote:
 I tried it; my config follows:

 kafka_topic: netflow
 kafka_broker_host: 192.168100.105
 kafka_broker_port: 9092
 kafka_refresh_time: 1
 #daemonize: true
 plugins: kafka
 nfacctd_port: 9995
 post_tag: 1
 aggregate: tag, peer_src_ip, src_host, dst_host, timestamp_start,
 timestamp_end, src_port, dst_port, proto


 I kept the peer_src_ip, but the tag one is not being posted to Kafka.

 {'event_type': 'purge', 'peer_ip_src': '172.18.0.2', 'ip_src':
 '192.168.1.100', 'ip_dst': 'x.46.x.245', 'port_src': 51184, 'port_dst':
 443, 'ip_proto': 'tcp', 'timestamp_start': '2020-04-14 14:15:39.00',
 'timestamp_end': '2020-04-14 14:15:54.00', 'packets': 5, 'bytes': 260,
 'writer_id': 'default_kafka/75091'}

 Did I miss anything ?


 Thanks !



 On Tue, Apr 14, 2020 at 10:26 AM Paolo Lucente  wrote:

 >
 > I may have skipped the important detail you need to add the 'tag' key to
 > your 'aggregate' line in the config, my bad. This is in addition to, say,
 > 'post_tag: 1' to identify collector 1. Let me know how it goes.
 >
 > Paolo
 >
 > On Tue, Apr 14, 2020 at 10:18:55AM -0400, Emanuel dos Reis Rodrigues 
 > wrote:
 > > Thank you man, I did this test but I did not see the id being pushed
 > along
 > > with the NetFlow info to the Kafka topic. Where would the information
 > > show up?
 > >
 > >
 > > On Tue, Apr 14, 2020 at 9:15 AM Paolo Lucente  wrote:
 > >
 > > >
 > > > Hi Emanuel,
 > > >
 > > > Apologies, I did not get that you wanted an ID for the collector. The
 > > > simplest way of achieving that is 'post_tag' as you just have to 
 > > > supply
 > > > a number as ID; pre_tag_map expects a map and may be better to be
 > > > reserved for more complex use-cases.
 > > >
 > > > Paolo
 > > >
 > > > On Mon, Apr 13, 2020 at 03:35:52PM -0400, Emanuel dos Reis Rodrigues
 > wrote:
 > > > > Thank you for your help. Appreciate it !
 > > > >
 > > > > See, I did use it for testing after I sent this email. However, the
 > ip
 > > > > showed there was the IP from my nfacctd machine, the collector
 > itself.
 > > > Not
 > > > > the exporter.
 > > > >
 > > > > peer_src_ip  : IP address or identificator of
 > > > telemetry
 > > > > exporting device
 > > > >
 > > > > In fact, it may have to do with the fact that I currently have an SSH
 > tunnel
 > > > with
 > > > > socat with the remote machine in order to collect the data. This 
 > > > > may
 > be
 > > > the
 > > > > reason why, which is definitely not an ordinary condition. :)
 > > > >
 > > > > I am wondering if I could use this one to include a different tag 
 > > > > on
 > it
 > > > > process/collector, but have not yet figured out how. Any thoughts ?
 > > > >
 > > > > label: String label, ie. as result of
 > > > > pre_tag_map evaluation
 > > > >
 > > > >
 > > > > Thank you again.
 > > > >
 > > > > On Mon, Apr 13, 2020 at 9:07 AM Paolo Lucente 
 > wrote:
 > > > >
 > > > > >
 > > > > > Hi Emanuel,
 > > > > >
 > > > > > I 

Re: [pmacct-discussion] Multiple nfacctd daemons writing to same Kafka topic

2020-04-15 Thread Emanuel dos Reis Rodrigues
Hey, I just realized it worked. I think I was a little behind on the
messages parked in my Kafka topic; now I can see the tag.

Thank you so much for your help.

On Wed, Apr 15, 2020 at 10:33 AM Emanuel dos Reis Rodrigues <
emanueldosr...@gmail.com> wrote:

> I am using:
>
> NetFlow Accounting Daemon, nfacctd 1.7.2-git (20181018-00+c3)
>
> Arguments:
>  '--enable-kafka' '--enable-jansson'
> 'JANSSON_CFLAGS=-I/usr/local/include/' 'JANSSON_LIBS=-L/usr/local/lib
> -ljansson' '--enable-l2' '--enable-ipv6' '--enable-64bit'
> '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
> '--enable-st-bins'
>
> Libs:
> libpcap version 1.5.3
> rdkafka 0.11.4
> jansson 2.12
>
> I can upgrade it to a newer version and try again.
>
>
> On Wed, Apr 15, 2020 at 8:59 AM Paolo Lucente  wrote:
>
>>
>> Hey Emanuel,
>>
>> The config is correct and I did try your same config and that does work
>> for me, ie.:
>>
>> $ ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092
>> --topic pmacct.flows
>> {"event_type": "purge", "tag": 1, [ .. ]}
>>
>> What version of the software are you using? Is it 1.7.4p1 (latest
>> stable) or master code from GitHub? If so, is it possible an old running
>> nfacctd process is reading the data instead of the newly configured one?
>>
>> Paolo
>>
>> On Wed, Apr 15, 2020 at 12:17:43AM -0400, Emanuel dos Reis Rodrigues
>> wrote:
>> > I tried it; my config follows:
>> >
>> > kafka_topic: netflow
>> > kafka_broker_host: 192.168100.105
>> > kafka_broker_port: 9092
>> > kafka_refresh_time: 1
>> > #daemonize: true
>> > plugins: kafka
>> > nfacctd_port: 9995
>> > post_tag: 1
>> > aggregate: tag, peer_src_ip, src_host, dst_host, timestamp_start,
>> > timestamp_end, src_port, dst_port, proto
>> >
>> >
>> > I kept the peer_src_ip, but the tag one is not being posted to Kafka.
>> >
>> > {'event_type': 'purge', 'peer_ip_src': '172.18.0.2', 'ip_src':
>> > '192.168.1.100', 'ip_dst': 'x.46.x.245', 'port_src': 51184, 'port_dst':
>> > 443, 'ip_proto': 'tcp', 'timestamp_start': '2020-04-14 14:15:39.00',
>> > 'timestamp_end': '2020-04-14 14:15:54.00', 'packets': 5, 'bytes':
>> 260,
>> > 'writer_id': 'default_kafka/75091'}
>> >
>> > Did I miss anything ?
>> >
>> >
>> > Thanks !
>> >
>> >
>> >
>> > On Tue, Apr 14, 2020 at 10:26 AM Paolo Lucente 
>> wrote:
>> >
>> > >
>> > > I may have skipped the important detail you need to add the 'tag' key
>> to
>> > > your 'aggregate' line in the config, my bad. This is in addition to,
>> say,
>> > > 'post_tag: 1' to identify collector 1. Let me know how it goes.
>> > >
>> > > Paolo
>> > >
>> > > On Tue, Apr 14, 2020 at 10:18:55AM -0400, Emanuel dos Reis Rodrigues
>> wrote:
>> > > > Thank you man, I did this test but I did not see the id being pushed
>> > > along
>> > > > with the NetFlow info to the Kafka topic. Where would the
>> > > > information show up?
>> > > >
>> > > >
>> > > > On Tue, Apr 14, 2020 at 9:15 AM Paolo Lucente 
>> wrote:
>> > > >
>> > > > >
>> > > > > Hi Emanuel,
>> > > > >
>> > > > > Apologies, I did not get that you wanted an ID for the collector. The
>> > > > > simplest way of achieving that is 'post_tag' as you just have to
>> supply
>> > > > > a number as ID; pre_tag_map expects a map and may be better to be
>> > > > > reserved for more complex use-cases.
>> > > > >
>> > > > > Paolo
>> > > > >
>> > > > > On Mon, Apr 13, 2020 at 03:35:52PM -0400, Emanuel dos Reis
>> Rodrigues
>> > > wrote:
>> > > > > > Thank you for your help. Appreciate it !
>> > > > > >
>> > > > > > See, I did use it for testing after I sent this email. However,
>> the
>> > > ip
>> > > > > > showed there was the IP from my nfacctd machine, the collector
>> > > itself.
>> > > > > Not
>> > > > > > the exporter.
>> > > > > >
>> > > > > > peer_src_ip  : IP address or identificator
>> of
>> > > > > telemetry
>> > > > > > exporting device
>> > > > > >
>> > > > > > In fact, it may have to do with the fact that I currently have an SSH
>> > > tunnel
>> > > > > with
>> > > > > > socat with the remote machine in order to collect the data.
>> This may
>> > > be
>> > > > > the
>> > > > > > reason why, which is definitely not an ordinary condition. :)
>> > > > > >
>> > > > > > I am wondering if I could use this one to include a different
>> tag on
>> > > it
>> > > > > > process/collector, but have not yet figured out how. Any
>> thoughts ?
>> > > > > >
>> > > > > > label: String label, ie. as result
>> of
>> > > > > > pre_tag_map evaluation
>> > > > > >
>> > > > > >
>> > > > > > Thank you again.
>> > > > > >
>> > > > > > On Mon, Apr 13, 2020 at 9:07 AM Paolo Lucente > >
>> > > wrote:
>> > > > > >
>> > > > > > >
>> > > > > > > Hi Emanuel,
>> > > > > > >
>> > > > > > > I think you are looking for (i admit, non-intuitive)
>> 'peer_src_ip'
>> > > > > > > primitive:
>> > > > > > >
>> > > > > > > $ nfacctd -a | grep peer_src_ip
>> > > > > > > peer_src_ip  : IP address or
>> identificator of
>> > > > > > > telemetry exporting 

Re: [pmacct-discussion] Multiple nfacctd daemons writing to same Kafka topic

2020-04-15 Thread Emanuel dos Reis Rodrigues
I am using:

NetFlow Accounting Daemon, nfacctd 1.7.2-git (20181018-00+c3)

Arguments:
 '--enable-kafka' '--enable-jansson' 'JANSSON_CFLAGS=-I/usr/local/include/'
'JANSSON_LIBS=-L/usr/local/lib -ljansson' '--enable-l2' '--enable-ipv6'
'--enable-64bit' '--enable-traffic-bins' '--enable-bgp-bins'
'--enable-bmp-bins' '--enable-st-bins'

Libs:
libpcap version 1.5.3
rdkafka 0.11.4
jansson 2.12

I can upgrade it to a newer version and try again.


On Wed, Apr 15, 2020 at 8:59 AM Paolo Lucente  wrote:

>
> Hey Emanuel,
>
> The config is correct and I did try your same config and that does work
> for me, ie.:
>
> $ ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092
> --topic pmacct.flows
> {"event_type": "purge", "tag": 1, [ .. ]}
>
> What version of the software are you using? Is it 1.7.4p1 (latest
> stable) or master code from GitHub? If so, is it possible an old running
> nfacctd process is reading the data instead of the newly configured one?
>
> Paolo
>
> On Wed, Apr 15, 2020 at 12:17:43AM -0400, Emanuel dos Reis Rodrigues wrote:
> > I tried it; my config follows:
> >
> > kafka_topic: netflow
> > kafka_broker_host: 192.168100.105
> > kafka_broker_port: 9092
> > kafka_refresh_time: 1
> > #daemonize: true
> > plugins: kafka
> > nfacctd_port: 9995
> > post_tag: 1
> > aggregate: tag, peer_src_ip, src_host, dst_host, timestamp_start,
> > timestamp_end, src_port, dst_port, proto
> >
> >
> > I kept the peer_src_ip, but the tag one is not being posted to Kafka.
> >
> > {'event_type': 'purge', 'peer_ip_src': '172.18.0.2', 'ip_src':
> > '192.168.1.100', 'ip_dst': 'x.46.x.245', 'port_src': 51184, 'port_dst':
> > 443, 'ip_proto': 'tcp', 'timestamp_start': '2020-04-14 14:15:39.00',
> > 'timestamp_end': '2020-04-14 14:15:54.00', 'packets': 5, 'bytes':
> 260,
> > 'writer_id': 'default_kafka/75091'}
> >
> > Did I miss anything ?
> >
> >
> > Thanks !
> >
> >
> >
> > On Tue, Apr 14, 2020 at 10:26 AM Paolo Lucente  wrote:
> >
> > >
> > > I may have skipped the important detail you need to add the 'tag' key
> to
> > > your 'aggregate' line in the config, my bad. This is in addition to,
> say,
> > > 'post_tag: 1' to identify collector 1. Let me know how it goes.
> > >
> > > Paolo
> > >
> > > On Tue, Apr 14, 2020 at 10:18:55AM -0400, Emanuel dos Reis Rodrigues
> wrote:
> > > > Thank you man, I did this test but I did not see the id being pushed
> > > along
> > > > with the NetFlow info to the Kafka topic. Where would the
> > > > information show up?
> > > >
> > > >
> > > > On Tue, Apr 14, 2020 at 9:15 AM Paolo Lucente 
> wrote:
> > > >
> > > > >
> > > > > Hi Emanuel,
> > > > >
> > > > > Apologies, I did not get that you wanted an ID for the collector. The
> > > > > simplest way of achieving that is 'post_tag' as you just have to
> supply
> > > > > a number as ID; pre_tag_map expects a map and may be better to be
> > > > > reserved for more complex use-cases.
> > > > >
> > > > > Paolo
> > > > >
> > > > > On Mon, Apr 13, 2020 at 03:35:52PM -0400, Emanuel dos Reis
> Rodrigues
> > > wrote:
> > > > > > Thank you for your help. Appreciate it !
> > > > > >
> > > > > > See, I did use it for testing after I sent this email. However,
> the
> > > ip
> > > > > > showed there was the IP from my nfacctd machine, the collector
> > > itself.
> > > > > Not
> > > > > > the exporter.
> > > > > >
> > > > > > peer_src_ip  : IP address or identificator of
> > > > > telemetry
> > > > > > exporting device
> > > > > >
> > > > > > In fact, it may have to do with the fact that I currently have an SSH
> > > tunnel
> > > > > with
> > > > > > socat with the remote machine in order to collect the data. This
> may
> > > be
> > > > > the
> > > > > > reason why, which is definitely not an ordinary condition. :)
> > > > > >
> > > > > > I am wondering if I could use this one to include a different
> tag on
> > > it
> > > > > > process/collector, but have not yet figured out how. Any
> thoughts ?
> > > > > >
> > > > > > label: String label, ie. as result of
> > > > > > pre_tag_map evaluation
> > > > > >
> > > > > >
> > > > > > Thank you again.
> > > > > >
> > > > > > On Mon, Apr 13, 2020 at 9:07 AM Paolo Lucente 
> > > wrote:
> > > > > >
> > > > > > >
> > > > > > > Hi Emanuel,
> > > > > > >
> > > > > > > I think you are looking for (i admit, non-intuitive)
> 'peer_src_ip'
> > > > > > > primitive:
> > > > > > >
> > > > > > > $ nfacctd -a | grep peer_src_ip
> > > > > > > peer_src_ip  : IP address or identificator
> of
> > > > > > > telemetry exporting device
> > > > > > >
> > > > > > > Without the grep you can see all supported primitives by the
> > > nfacctd
> > > > > > > release you are using along with a text explanation.
> > > > > > >
> > > > > > > Paolo
> > > > > > >
> > > > > > > On Sun, Apr 12, 2020 at 06:55:26PM -0400, Emanuel dos Reis
> > > Rodrigues
> > > > > wrote:
> > > > > > > > Hello guys,
> > > > > > > >
> > > > > > > > I implemented nfacctd acting as a Netflow 

Re: [pmacct-discussion] Multiple nfacctd daemons writing to same Kafka topic

2020-04-15 Thread Paolo Lucente


Hey Emanuel,

The config is correct and I did try your same config and that does work
for me, ie.:

$ ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic 
pmacct.flows
{"event_type": "purge", "tag": 1, [ .. ]}
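
Once the 'tag' key lands in the purge records, a downstream consumer can split traffic by collector on that field alone. A real consumer would read the records from the Kafka topic with a client library; the minimal stdlib-only sketch below uses hard-coded JSON strings shaped like the purge events quoted in this thread (field set trimmed, values illustrative, function name `by_collector` is mine):

```python
import json

# Sample records shaped like the purge events shown in this thread
# (fields trimmed; values are illustrative).
raw_messages = [
    '{"event_type": "purge", "tag": 1, "peer_ip_src": "172.18.0.2", "bytes": 260}',
    '{"event_type": "purge", "tag": 2, "peer_ip_src": "172.18.0.3", "bytes": 512}',
]

def by_collector(messages):
    """Group purge events by the integer 'tag' set via post_tag."""
    groups = {}
    for raw in messages:
        msg = json.loads(raw)
        if msg.get("event_type") != "purge":
            continue  # skip any non-purge control events
        groups.setdefault(msg.get("tag"), []).append(msg)
    return groups

groups = by_collector(raw_messages)
print(sorted(groups))  # -> [1, 2]
```

With one post_tag per nfacctd daemon, this grouping cleanly separates records from each collector even though they share one topic.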

What version of the software are you using? Is it 1.7.4p1 (latest
stable) or master code from GitHub? If so, is it possible an old running
nfacctd process is reading the data instead of the newly configured one?

Paolo 

On Wed, Apr 15, 2020 at 12:17:43AM -0400, Emanuel dos Reis Rodrigues wrote:
> I tried it; my config follows:
> 
> kafka_topic: netflow
> kafka_broker_host: 192.168100.105
> kafka_broker_port: 9092
> kafka_refresh_time: 1
> #daemonize: true
> plugins: kafka
> nfacctd_port: 9995
> post_tag: 1
> aggregate: tag, peer_src_ip, src_host, dst_host, timestamp_start,
> timestamp_end, src_port, dst_port, proto
> 
> 
> I kept the peer_src_ip, but the tag one is not being posted to Kafka.
> 
> {'event_type': 'purge', 'peer_ip_src': '172.18.0.2', 'ip_src':
> '192.168.1.100', 'ip_dst': 'x.46.x.245', 'port_src': 51184, 'port_dst':
> 443, 'ip_proto': 'tcp', 'timestamp_start': '2020-04-14 14:15:39.00',
> 'timestamp_end': '2020-04-14 14:15:54.00', 'packets': 5, 'bytes': 260,
> 'writer_id': 'default_kafka/75091'}
> 
> Did I miss anything ?
> 
> 
> Thanks !
> 
> 
> 
> On Tue, Apr 14, 2020 at 10:26 AM Paolo Lucente  wrote:
> 
> >
> > I may have skipped the important detail you need to add the 'tag' key to
> > your 'aggregate' line in the config, my bad. This is in addition to, say,
> > 'post_tag: 1' to identify collector 1. Let me know how it goes.
> >
> > Paolo
> >
> > On Tue, Apr 14, 2020 at 10:18:55AM -0400, Emanuel dos Reis Rodrigues wrote:
> > > Thank you man, I did this test but I did not see the id being pushed
> > along
> > > with the NetFlow info to the Kafka topic. Where would the information
> > > show up?
> > >
> > >
> > > On Tue, Apr 14, 2020 at 9:15 AM Paolo Lucente  wrote:
> > >
> > > >
> > > > Hi Emanuel,
> > > >
> > > > Apologies, I did not get that you wanted an ID for the collector. The
> > > > simplest way of achieving that is 'post_tag' as you just have to supply
> > > > a number as ID; pre_tag_map expects a map and may be better to be
> > > > reserved for more complex use-cases.
> > > >
> > > > Paolo
> > > >
> > > > On Mon, Apr 13, 2020 at 03:35:52PM -0400, Emanuel dos Reis Rodrigues
> > wrote:
> > > > > Thank you for your help. Appreciate it !
> > > > >
> > > > > See, I did use it for testing after I sent this email. However, the
> > ip
> > > > > showed there was the IP from my nfacctd machine, the collector
> > itself.
> > > > Not
> > > > > the exporter.
> > > > >
> > > > > peer_src_ip  : IP address or identificator of
> > > > telemetry
> > > > > exporting device
> > > > >
> > > > > In fact, it may have to do with the fact that I currently have an SSH
> > tunnel
> > > > with
> > > > > socat with the remote machine in order to collect the data. This may
> > be
> > > > the
> > > > > reason why which is definitively not a ordinary condition. :)
> > > > >
> > > > > I am wondering if I could use this one to include a different tag on
> > it
> > > > > process/collector, but have not yet figured out how. Any thoughts ?
> > > > >
> > > > > label: String label, ie. as result of
> > > > > pre_tag_map evaluation
> > > > >
> > > > >
> > > > > Thank you again.
> > > > >
> > > > > On Mon, Apr 13, 2020 at 9:07 AM Paolo Lucente 
> > wrote:
> > > > >
> > > > > >
> > > > > > Hi Emanuel,
> > > > > >
> > > > > > I think you are looking for (i admit, non-intuitive) 'peer_src_ip'
> > > > > > primitive:
> > > > > >
> > > > > > $ nfacctd -a | grep peer_src_ip
> > > > > > peer_src_ip  : IP address or identificator of
> > > > > > telemetry exporting device
> > > > > >
> > > > > > Without the grep you can see all supported primitives by the
> > nfacctd
> > > > > > release you are using along with a text explanation.
> > > > > >
> > > > > > Paolo
> > > > > >
> > > > > > On Sun, Apr 12, 2020 at 06:55:26PM -0400, Emanuel dos Reis
> > Rodrigues
> > > > wrote:
> > > > > > > Hello guys,
> > > > > > >
> > > > > > > I implemented nfacctd acting as a Netflow collector using
> > pmacct. It
> > > > is
> > > > > > > working perfectly and writing the flows to a Kafka topic which I
> > > > have an
> > > > > > > application processing it.
> > > > > > >
> > > > > > > Following is my configuration:
> > > > > > >
> > > > > > > kafka_topic: netflow
> > > > > > > kafka_broker_host: Kafka-host
> > > > > > > kafka_broker_port: 9092
> > > > > > > kafka_refresh_time: 1
> > > > > > > daemonize: true
> > > > > > > plugins: kafka
> > > > > > > pcap_interface: enp0s8
> > > > > > > nfacctd_ip: 192.168.1.100
> > > > > > > nfacctd_port: 9995
> > > > > > > aggregate: src_host, dst_host, timestamp_start, timestamp_end,
> > > > src_port,
> > > > > > > dst_port, proto
> > > > > > >
> > > > > > > Currently, there is only one Netflow exporter 

Re: [pmacct-discussion] Multiples nfacctd deamons writing to same Kafka topic

2020-04-14 Thread Emanuel dos Reis Rodrigues
I tried; here is my config:

kafka_topic: netflow
kafka_broker_host: 192.168.100.105
kafka_broker_port: 9092
kafka_refresh_time: 1
#daemonize: true
plugins: kafka
nfacctd_port: 9995
post_tag: 1
aggregate: tag, peer_src_ip, src_host, dst_host, timestamp_start,
timestamp_end, src_port, dst_port, proto


I kept the peer_src_ip, but the tag one is not being posted to Kafka.

{'event_type': 'purge', 'peer_ip_src': '172.18.0.2', 'ip_src':
'192.168.1.100', 'ip_dst': 'x.46.x.245', 'port_src': 51184, 'port_dst':
443, 'ip_proto': 'tcp', 'timestamp_start': '2020-04-14 14:15:39.00',
'timestamp_end': '2020-04-14 14:15:54.00', 'packets': 5, 'bytes': 260,
'writer_id': 'default_kafka/75091'}

Did I miss anything?


Thanks!
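[Editor's note: once 'tag' is emitted in the purge records (as resolved later in the thread), a consuming application can route records per exporter. A minimal sketch, assuming plain JSON records and a hypothetical tag-to-exporter mapping; no Kafka client shown:]

```python
import json

# Hypothetical purge record, as the nfacctd Kafka plugin would emit it once
# 'tag' is part of 'aggregate' and 'post_tag' is set on the collector.
raw = ('{"event_type": "purge", "tag": 1, "peer_ip_src": "172.18.0.2",'
       ' "port_src": 51184, "port_dst": 443, "ip_proto": "tcp"}')

def exporter_of(rec):
    # Assumed mapping: post_tag 1 -> exporter A, post_tag 2 -> exporter B.
    return {1: "exporter-A", 2: "exporter-B"}.get(rec.get("tag"), "unknown")

record = json.loads(raw)
print(exporter_of(record))  # tag 1 maps to exporter-A
```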



On Tue, Apr 14, 2020 at 10:26 AM Paolo Lucente  wrote:

>
> I may have skipped the important detail you need to add the 'tag' key to
> your 'aggregate' line in the config, my bad. This is in addition to, say,
> 'post_tag: 1' to identify collector 1. Let me know how it goes.
>
> Paolo
>
> On Tue, Apr 14, 2020 at 10:18:55AM -0400, Emanuel dos Reis Rodrigues wrote:
> > Thank you man, I did this test but I did not see the id being pushed
> along
> > with the Netflow info to Kafka topic. Is there the place the information
> > would show up ?
> >
> >
> > On Tue, Apr 14, 2020 at 9:15 AM Paolo Lucente  wrote:
> >
> > >
> > > Hi Emanuel,
> > >
> > > Apologies i did not get you wanted and ID for the collector. The
> > > simplest way of achieving that is 'post_tag' as you just have to supply
> > > a number as ID; pre_tag_map expects a map and may be better to be
> > > reserved for more complex use-cases.
> > >
> > > Paolo
> > >
> > > On Mon, Apr 13, 2020 at 03:35:52PM -0400, Emanuel dos Reis Rodrigues
> wrote:
> > > > Thank you for your help. Appreciate it !
> > > >
> > > > See, I did use it for testing after I sent this email. However, the
> ip
> > > > showed there was the IP from my nfacctd machine, the collector
> itself.
> > > Not
> > > > the exporter.
> > > >
> > > > peer_src_ip  : IP address or identificator of
> > > telemetry
> > > > exporting device
> > > >
> > > > In fact, it may have todo with the fact I currently have an SSH
> tunnel
> > > with
> > > > socat with the remote machine in order to collect the data. This may
> be
> > > the
> > > > reason why which is definitively not a ordinary condition. :)
> > > >
> > > > I am wondering if I could use this one to include a different tag on
> it
> > > > process/collector, but have not yet figured out how. Any thoughts ?
> > > >
> > > > label: String label, ie. as result of
> > > > pre_tag_map evaluation
> > > >
> > > >
> > > > Thank you again.
> > > >
> > > > On Mon, Apr 13, 2020 at 9:07 AM Paolo Lucente 
> wrote:
> > > >
> > > > >
> > > > > Hi Emanuel,
> > > > >
> > > > > I think you are looking for (i admit, non-intuitive) 'peer_src_ip'
> > > > > primitive:
> > > > >
> > > > > $ nfacctd -a | grep peer_src_ip
> > > > > peer_src_ip  : IP address or identificator of
> > > > > telemetry exporting device
> > > > >
> > > > > Without the grep you can see all supported primitives by the
> nfacctd
> > > > > release you are using along with a text explanation.
> > > > >
> > > > > Paolo
> > > > >
> > > > > On Sun, Apr 12, 2020 at 06:55:26PM -0400, Emanuel dos Reis
> Rodrigues
> > > wrote:
> > > > > > Hello guys,
> > > > > >
> > > > > > I implemented nfacctd acting as a Netflow collector using
> pmacct. It
> > > is
> > > > > > working perfectly and writing the flows to a Kafka topic which I
> > > have an
> > > > > > application processing it.
> > > > > >
> > > > > > Following is my configuration:
> > > > > >
> > > > > > kafka_topic: netflow
> > > > > > kafka_broker_host: Kafka-host
> > > > > > kafka_broker_port: 9092
> > > > > > kafka_refresh_time: 1
> > > > > > daemonize: true
> > > > > > plugins: kafka
> > > > > > pcap_interface: enp0s8
> > > > > > nfacctd_ip: 192.168.1.100
> > > > > > nfacctd_port: 9995
> > > > > > aggregate: src_host, dst_host, timestamp_start, timestamp_end,
> > > src_port,
> > > > > > dst_port, proto
> > > > > >
> > > > > > Currently, there is only one Netflow exporter sending data to
> this
> > > > > > demon and I would like to add another exporter. The problem is
> that
> > > I am
> > > > > > not finding a way to differentiate the flows coming from
> different
> > > > > > exporters.
> > > > > >
> > > > > > Let's say I have the exporter A currently sending data to nfacctd
> > > running
> > > > > > at port 9995 and the data is being written to Kafka topic
> Netflow.
> > > > > >
> > > > > > Now I want a new exporter B to start sending data to nfacctd port
> > > 9996
> > > > > which
> > > > > > will be running as a separate demon ( just because I though so,
> not
> > > sure
> > > > > > yet if it is a necessary approach)  and writing the data to the
> > > > > > same Netflow topic in Kafka.
> > > > > >
> > > > > > When the data comes from Kafka to my application, I 

Re: [pmacct-discussion] Multiples nfacctd deamons writing to same Kafka topic

2020-04-14 Thread Paolo Lucente


I may have skipped an important detail: you need to add the 'tag' key to
your 'aggregate' line in the config, my bad. This is in addition to, say,
'post_tag: 1' to identify collector 1. Let me know how it goes.
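[Editor's note: for concreteness, a sketch of what the two collector configs might look like with this approach; ports and tag values are illustrative, not from the thread:]

```
! nfacctd instance for exporter A (illustrative values)
plugins: kafka
kafka_topic: netflow
nfacctd_port: 9995
post_tag: 1
aggregate: tag, peer_src_ip, src_host, dst_host, src_port, dst_port, proto
!
! A second nfacctd instance for exporter B would be identical except for:
!   nfacctd_port: 9996
!   post_tag: 2
```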

Paolo

On Tue, Apr 14, 2020 at 10:18:55AM -0400, Emanuel dos Reis Rodrigues wrote:
> Thank you man, I did this test but I did not see the id being pushed along
> with the Netflow info to Kafka topic. Is there the place the information
> would show up ?
> 
> 
> On Tue, Apr 14, 2020 at 9:15 AM Paolo Lucente  wrote:
> 
> >
> > Hi Emanuel,
> >
> > Apologies i did not get you wanted and ID for the collector. The
> > simplest way of achieving that is 'post_tag' as you just have to supply
> > a number as ID; pre_tag_map expects a map and may be better to be
> > reserved for more complex use-cases.
> >
> > Paolo
> >
> > On Mon, Apr 13, 2020 at 03:35:52PM -0400, Emanuel dos Reis Rodrigues wrote:
> > > Thank you for your help. Appreciate it !
> > >
> > > See, I did use it for testing after I sent this email. However, the ip
> > > showed there was the IP from my nfacctd machine, the collector itself.
> > Not
> > > the exporter.
> > >
> > > peer_src_ip  : IP address or identificator of
> > telemetry
> > > exporting device
> > >
> > > In fact, it may have todo with the fact I currently have an SSH tunnel
> > with
> > > socat with the remote machine in order to collect the data. This may be
> > the
> > > reason why which is definitively not a ordinary condition. :)
> > >
> > > I am wondering if I could use this one to include a different tag on it
> > > process/collector, but have not yet figured out how. Any thoughts ?
> > >
> > > label: String label, ie. as result of
> > > pre_tag_map evaluation
> > >
> > >
> > > Thank you again.
> > >
> > > On Mon, Apr 13, 2020 at 9:07 AM Paolo Lucente  wrote:
> > >
> > > >
> > > > Hi Emanuel,
> > > >
> > > > I think you are looking for (i admit, non-intuitive) 'peer_src_ip'
> > > > primitive:
> > > >
> > > > $ nfacctd -a | grep peer_src_ip
> > > > peer_src_ip  : IP address or identificator of
> > > > telemetry exporting device
> > > >
> > > > Without the grep you can see all supported primitives by the nfacctd
> > > > release you are using along with a text explanation.
> > > >
> > > > Paolo
> > > >
> > > > On Sun, Apr 12, 2020 at 06:55:26PM -0400, Emanuel dos Reis Rodrigues
> > wrote:
> > > > > Hello guys,
> > > > >
> > > > > I implemented nfacctd acting as a Netflow collector using pmacct. It
> > is
> > > > > working perfectly and writing the flows to a Kafka topic which I
> > have an
> > > > > application processing it.
> > > > >
> > > > > Following is my configuration:
> > > > >
> > > > > kafka_topic: netflow
> > > > > kafka_broker_host: Kafka-host
> > > > > kafka_broker_port: 9092
> > > > > kafka_refresh_time: 1
> > > > > daemonize: true
> > > > > plugins: kafka
> > > > > pcap_interface: enp0s8
> > > > > nfacctd_ip: 192.168.1.100
> > > > > nfacctd_port: 9995
> > > > > aggregate: src_host, dst_host, timestamp_start, timestamp_end,
> > src_port,
> > > > > dst_port, proto
> > > > >
> > > > > Currently, there is only one Netflow exporter sending data to this
> > > > > demon and I would like to add another exporter. The problem is that
> > I am
> > > > > not finding a way to differentiate the flows coming from different
> > > > > exporters.
> > > > >
> > > > > Let's say I have the exporter A currently sending data to nfacctd
> > running
> > > > > at port 9995 and the data is being written to Kafka topic Netflow.
> > > > >
> > > > > Now I want a new exporter B to start sending data to nfacctd port
> > 9996
> > > > which
> > > > > will be running as a separate demon ( just because I though so, not
> > sure
> > > > > yet if it is a necessary approach)  and writing the data to the
> > > > > same Netflow topic in Kafka.
> > > > >
> > > > > When the data comes from Kafka to my application, I cannot tell from
> > > > > which exporter the data came from. I would need some sort of
> > > > identification
> > > > > in order to make this differentiation. It is important for me,
> > because my
> > > > > application may treat differently Netflow traffic coming from these
> > > > > two Netflow exporters.
> > > > >
> > > > > Thanks in advance.
> > > > >
> > > > > Emanuel
> > > >
> > > > > ___
> > > > > pmacct-discussion mailing list
> > > > > http://www.pmacct.net/#mailinglists
> > > >
> > > >
> > > > ___
> > > > pmacct-discussion mailing list
> > > > http://www.pmacct.net/#mailinglists
> > > >
> >

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Multiples nfacctd deamons writing to same Kafka topic

2020-04-14 Thread Emanuel dos Reis Rodrigues
Thank you! I ran this test but did not see the ID being pushed along
with the Netflow info to the Kafka topic. Where would the information
show up?


On Tue, Apr 14, 2020 at 9:15 AM Paolo Lucente  wrote:

>
> Hi Emanuel,
>
> Apologies i did not get you wanted and ID for the collector. The
> simplest way of achieving that is 'post_tag' as you just have to supply
> a number as ID; pre_tag_map expects a map and may be better to be
> reserved for more complex use-cases.
>
> Paolo
>
> On Mon, Apr 13, 2020 at 03:35:52PM -0400, Emanuel dos Reis Rodrigues wrote:
> > Thank you for your help. Appreciate it !
> >
> > See, I did use it for testing after I sent this email. However, the ip
> > showed there was the IP from my nfacctd machine, the collector itself.
> Not
> > the exporter.
> >
> > peer_src_ip  : IP address or identificator of
> telemetry
> > exporting device
> >
> > In fact, it may have todo with the fact I currently have an SSH tunnel
> with
> > socat with the remote machine in order to collect the data. This may be
> the
> > reason why which is definitively not a ordinary condition. :)
> >
> > I am wondering if I could use this one to include a different tag on it
> > process/collector, but have not yet figured out how. Any thoughts ?
> >
> > label: String label, ie. as result of
> > pre_tag_map evaluation
> >
> >
> > Thank you again.
> >
> > On Mon, Apr 13, 2020 at 9:07 AM Paolo Lucente  wrote:
> >
> > >
> > > Hi Emanuel,
> > >
> > > I think you are looking for (i admit, non-intuitive) 'peer_src_ip'
> > > primitive:
> > >
> > > $ nfacctd -a | grep peer_src_ip
> > > peer_src_ip  : IP address or identificator of
> > > telemetry exporting device
> > >
> > > Without the grep you can see all supported primitives by the nfacctd
> > > release you are using along with a text explanation.
> > >
> > > Paolo
> > >
> > > On Sun, Apr 12, 2020 at 06:55:26PM -0400, Emanuel dos Reis Rodrigues
> wrote:
> > > > Hello guys,
> > > >
> > > > I implemented nfacctd acting as a Netflow collector using pmacct. It
> is
> > > > working perfectly and writing the flows to a Kafka topic which I
> have an
> > > > application processing it.
> > > >
> > > > Following is my configuration:
> > > >
> > > > kafka_topic: netflow
> > > > kafka_broker_host: Kafka-host
> > > > kafka_broker_port: 9092
> > > > kafka_refresh_time: 1
> > > > daemonize: true
> > > > plugins: kafka
> > > > pcap_interface: enp0s8
> > > > nfacctd_ip: 192.168.1.100
> > > > nfacctd_port: 9995
> > > > aggregate: src_host, dst_host, timestamp_start, timestamp_end,
> src_port,
> > > > dst_port, proto
> > > >
> > > > Currently, there is only one Netflow exporter sending data to this
> > > > demon and I would like to add another exporter. The problem is that
> I am
> > > > not finding a way to differentiate the flows coming from different
> > > > exporters.
> > > >
> > > > Let's say I have the exporter A currently sending data to nfacctd
> running
> > > > at port 9995 and the data is being written to Kafka topic Netflow.
> > > >
> > > > Now I want a new exporter B to start sending data to nfacctd port
> 9996
> > > which
> > > > will be running as a separate demon ( just because I though so, not
> sure
> > > > yet if it is a necessary approach)  and writing the data to the
> > > > same Netflow topic in Kafka.
> > > >
> > > > When the data comes from Kafka to my application, I cannot tell from
> > > > which exporter the data came from. I would need some sort of
> > > identification
> > > > in order to make this differentiation. It is important for me,
> because my
> > > > application may treat differently Netflow traffic coming from these
> > > > two Netflow exporters.
> > > >
> > > > Thanks in advance.
> > > >
> > > > Emanuel
> > >
> > > > ___
> > > > pmacct-discussion mailing list
> > > > http://www.pmacct.net/#mailinglists
> > >
> > >
> > > ___
> > > pmacct-discussion mailing list
> > > http://www.pmacct.net/#mailinglists
> > >
>
___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

Re: [pmacct-discussion] Multiples nfacctd deamons writing to same Kafka topic

2020-04-14 Thread Paolo Lucente


Hi Emanuel,

Apologies, I did not get that you wanted an ID for the collector. The
simplest way of achieving that is 'post_tag', as you just have to supply
a number as the ID; 'pre_tag_map' expects a map and may be better
reserved for more complex use cases.
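[Editor's note: for the more complex case, a single collector can tag by exporter address via a map; a hedged sketch, with the file path and addresses purely hypothetical:]

```
! In the nfacctd config:
!   pre_tag_map: /etc/pmacct/pretag.map
!   aggregate: tag, ...
!
! /etc/pmacct/pretag.map -- one entry per exporting device:
set_tag=1  ip=192.168.1.1/32
set_tag=2  ip=192.168.1.2/32
```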

Paolo

On Mon, Apr 13, 2020 at 03:35:52PM -0400, Emanuel dos Reis Rodrigues wrote:
> Thank you for your help. Appreciate it !
> 
> See, I did use it for testing after I sent this email. However, the ip
> showed there was the IP from my nfacctd machine, the collector itself. Not
> the exporter.
> 
> peer_src_ip  : IP address or identificator of telemetry
> exporting device
> 
> In fact, it may have todo with the fact I currently have an SSH tunnel with
> socat with the remote machine in order to collect the data. This may be the
> reason why which is definitively not a ordinary condition. :)
> 
> I am wondering if I could use this one to include a different tag on it
> process/collector, but have not yet figured out how. Any thoughts ?
> 
> label: String label, ie. as result of
> pre_tag_map evaluation
> 
> 
> Thank you again.
> 
> On Mon, Apr 13, 2020 at 9:07 AM Paolo Lucente  wrote:
> 
> >
> > Hi Emanuel,
> >
> > I think you are looking for (i admit, non-intuitive) 'peer_src_ip'
> > primitive:
> >
> > $ nfacctd -a | grep peer_src_ip
> > peer_src_ip  : IP address or identificator of
> > telemetry exporting device
> >
> > Without the grep you can see all supported primitives by the nfacctd
> > release you are using along with a text explanation.
> >
> > Paolo
> >
> > On Sun, Apr 12, 2020 at 06:55:26PM -0400, Emanuel dos Reis Rodrigues wrote:
> > > Hello guys,
> > >
> > > I implemented nfacctd acting as a Netflow collector using pmacct. It is
> > > working perfectly and writing the flows to a Kafka topic which I have an
> > > application processing it.
> > >
> > > Following is my configuration:
> > >
> > > kafka_topic: netflow
> > > kafka_broker_host: Kafka-host
> > > kafka_broker_port: 9092
> > > kafka_refresh_time: 1
> > > daemonize: true
> > > plugins: kafka
> > > pcap_interface: enp0s8
> > > nfacctd_ip: 192.168.1.100
> > > nfacctd_port: 9995
> > > aggregate: src_host, dst_host, timestamp_start, timestamp_end, src_port,
> > > dst_port, proto
> > >
> > > Currently, there is only one Netflow exporter sending data to this
> > > demon and I would like to add another exporter. The problem is that I am
> > > not finding a way to differentiate the flows coming from different
> > > exporters.
> > >
> > > Let's say I have the exporter A currently sending data to nfacctd running
> > > at port 9995 and the data is being written to Kafka topic Netflow.
> > >
> > > Now I want a new exporter B to start sending data to nfacctd port 9996
> > which
> > > will be running as a separate demon ( just because I though so, not sure
> > > yet if it is a necessary approach)  and writing the data to the
> > > same Netflow topic in Kafka.
> > >
> > > When the data comes from Kafka to my application, I cannot tell from
> > > which exporter the data came from. I would need some sort of
> > identification
> > > in order to make this differentiation. It is important for me, because my
> > > application may treat differently Netflow traffic coming from these
> > > two Netflow exporters.
> > >
> > > Thanks in advance.
> > >
> > > Emanuel
> >
> > > ___
> > > pmacct-discussion mailing list
> > > http://www.pmacct.net/#mailinglists
> >
> >
> > ___
> > pmacct-discussion mailing list
> > http://www.pmacct.net/#mailinglists
> >

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Multiples nfacctd deamons writing to same Kafka topic

2020-04-13 Thread Emanuel dos Reis Rodrigues
Thank you for your help. Appreciate it !

I did use it for testing after I sent this email. However, the IP shown
there was the IP of my nfacctd machine, the collector itself, not
the exporter.

peer_src_ip  : IP address or identificator of telemetry
exporting device

In fact, it may have to do with the fact that I currently have an SSH
tunnel with socat to the remote machine in order to collect the data. That
may be the reason, which is definitely not an ordinary condition. :)

I am wondering if I could use this one to include a different tag per
process/collector, but I have not yet figured out how. Any thoughts?

label: String label, ie. as result of
pre_tag_map evaluation


Thank you again.

On Mon, Apr 13, 2020 at 9:07 AM Paolo Lucente  wrote:

>
> Hi Emanuel,
>
> I think you are looking for (i admit, non-intuitive) 'peer_src_ip'
> primitive:
>
> $ nfacctd -a | grep peer_src_ip
> peer_src_ip  : IP address or identificator of
> telemetry exporting device
>
> Without the grep you can see all supported primitives by the nfacctd
> release you are using along with a text explanation.
>
> Paolo
>
> On Sun, Apr 12, 2020 at 06:55:26PM -0400, Emanuel dos Reis Rodrigues wrote:
> > Hello guys,
> >
> > I implemented nfacctd acting as a Netflow collector using pmacct. It is
> > working perfectly and writing the flows to a Kafka topic which I have an
> > application processing it.
> >
> > Following is my configuration:
> >
> > kafka_topic: netflow
> > kafka_broker_host: Kafka-host
> > kafka_broker_port: 9092
> > kafka_refresh_time: 1
> > daemonize: true
> > plugins: kafka
> > pcap_interface: enp0s8
> > nfacctd_ip: 192.168.1.100
> > nfacctd_port: 9995
> > aggregate: src_host, dst_host, timestamp_start, timestamp_end, src_port,
> > dst_port, proto
> >
> > Currently, there is only one Netflow exporter sending data to this
> > demon and I would like to add another exporter. The problem is that I am
> > not finding a way to differentiate the flows coming from different
> > exporters.
> >
> > Let's say I have the exporter A currently sending data to nfacctd running
> > at port 9995 and the data is being written to Kafka topic Netflow.
> >
> > Now I want a new exporter B to start sending data to nfacctd port 9996
> which
> > will be running as a separate demon ( just because I though so, not sure
> > yet if it is a necessary approach)  and writing the data to the
> > same Netflow topic in Kafka.
> >
> > When the data comes from Kafka to my application, I cannot tell from
> > which exporter the data came from. I would need some sort of
> identification
> > in order to make this differentiation. It is important for me, because my
> > application may treat differently Netflow traffic coming from these
> > two Netflow exporters.
> >
> > Thanks in advance.
> >
> > Emanuel
>
> > ___
> > pmacct-discussion mailing list
> > http://www.pmacct.net/#mailinglists
>
>
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists
>
___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

Re: [pmacct-discussion] Multiples nfacctd deamons writing to same Kafka topic

2020-04-13 Thread Paolo Lucente


Hi Emanuel,

I think you are looking for the (I admit, non-intuitive) 'peer_src_ip'
primitive:

$ nfacctd -a | grep peer_src_ip
peer_src_ip  : IP address or identificator of telemetry 
exporting device

Without the grep you can see all primitives supported by the nfacctd
release you are using, along with a text explanation.

Paolo

On Sun, Apr 12, 2020 at 06:55:26PM -0400, Emanuel dos Reis Rodrigues wrote:
> Hello guys,
> 
> I implemented nfacctd acting as a Netflow collector using pmacct. It is
> working perfectly and writing the flows to a Kafka topic which I have an
> application processing it.
> 
> Following is my configuration:
> 
> kafka_topic: netflow
> kafka_broker_host: Kafka-host
> kafka_broker_port: 9092
> kafka_refresh_time: 1
> daemonize: true
> plugins: kafka
> pcap_interface: enp0s8
> nfacctd_ip: 192.168.1.100
> nfacctd_port: 9995
> aggregate: src_host, dst_host, timestamp_start, timestamp_end, src_port,
> dst_port, proto
> 
> Currently, there is only one Netflow exporter sending data to this
> daemon and I would like to add another exporter. The problem is that I am
> not finding a way to differentiate the flows coming from different
> exporters.
> 
> Let's say I have the exporter A currently sending data to nfacctd running
> at port 9995 and the data is being written to Kafka topic Netflow.
> 
> Now I want a new exporter B to start sending data to nfacctd port 9996, which
> will be running as a separate daemon (just because I thought so, not sure
> yet if it is a necessary approach), and writing the data to the
> same Netflow topic in Kafka.
> 
> When the data comes from Kafka to my application, I cannot tell
> which exporter the data came from. I would need some sort of identification
> in order to make this differentiation. It is important for me, because my
> application may treat Netflow traffic coming from these two Netflow
> exporters differently.
> 
> Thanks in advance.
> 
> Emanuel

> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists