[pmacct-discussion] sending one netflow stream to different NF_PROBE receivers

2019-01-15 Thread Edvinas K
Hello,

Is it possible to send one NetFlow stream to different NF_PROBE receivers, or
do I need to run separate instances?
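
(For reference, a minimal sketch of the single-instance approach, using
pmacct's bracketed plugin-instance syntax; the second receiver address is an
example:)

   !
   plugins: nfprobe[a], nfprobe[b]
   nfprobe_receiver[a]: 10.3.14.101:2101
   nfprobe_receiver[b]: 10.3.14.102:2101
   nfprobe_version[a]: 9
   nfprobe_version[b]: 9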

Thanks

Re: [pmacct-discussion] pmacct to netflow collector

2019-01-03 Thread Edvinas K
Hello,

How can I check whether PF_RING is in action and active? Via some
forwarded-packet counters, or similar?
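
(One way to check, assuming the PF_RING kernel module is what is in use; a
sketch:)

   # confirm the kernel module is loaded
   lsmod | grep pf_ring
   # global status: version, ring slots, number of active rings
   cat /proc/net/pf_ring/info
   # one entry per active ring; per-socket packet counters live in here
   ls /proc/net/pf_ring/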

Thanks

On Thu, Dec 27, 2018 at 3:00 PM Edvinas K  wrote:

> Thank you,
>
> It seems all the easy things didn't help.
>
> I tried to set up the buffer size in kernel:
>
> prod [root@netvpn001prpjay pmacct-1.7.2]# cat
> /proc/sys/net/core/[rw]mem_max
> 20
> 20
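>
> (For reference, a sketch of how these kernel ceilings are usually raised;
> the 2 GB value follows Paolo's earlier suggestion and needs root:)
>
>    sysctl -w net.core.rmem_max=2000000000
>    sysctl -w net.core.wmem_max=2000000000
>    # persist the change across reboots
>    echo 'net.core.rmem_max=2000000000' >> /etc/sysctl.conf
>    echo 'net.core.wmem_max=2000000000' >> /etc/sysctl.conf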
>
> and then
>
> prod [root@netvpn001prpjay pmacct-1.7.2]# cat flowexport.cfg
>!
>daemonize: no
>aggregate: src_host, dst_host, src_port, dst_port, proto, tos
>plugins: nfprobe
>nfprobe_receiver: 10.3.14.101:2101
>nfprobe_version: 9
>
>pmacctd_pipe_size: 20
>plugin_pipe_size: 100
>plugin_buffer_size: 1
>
>! nfprobe_engine: 1:1
>! nfprobe_timeouts: tcp=120:maxlife=3600
>!
>! networks_file: /path/to/networks.lst
>!...
>
> Maybe after setting plugin_pipe_size and plugin_buffer_size the drops got
> a little lower, but there are still a lot.
> I also noticed a strange log message: "INFO ( default/core ): short IPv4
> packet read (36/38/frags). Snaplen issue ?"
>
> I'm going to try that PF_RING stuff.
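>
> (On the snaplen warning above: pmacct has a snaplen directive that controls
> the pcap snapshot length; a sketch, the value being an example:)
>
>    ! capture more bytes per packet if reads come up short
>    snaplen: 128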
>
> On Thu, Dec 20, 2018 at 10:08 PM Paolo Lucente  wrote:
>
>>
>> Hi Edvinas,
>>
>> I wanted to confirm that when you changed pmacctd_pipe_size to 2GB you
>> ALSO changed /proc/sys/net/core/[rw]mem_max to 2GB and ALSO restarted
>> pmacctd after having done so.
>>
>> Wrt PF_RING: I can't comment since I don't use it myself. While I have
>> never heard any horror story about it (thumbs up!), I think doing a proof
>> of concept first is always a good idea; this also answers your second
>> question: it will improve things for sure, but how much, you have to test.
>>
>> Another thing you may do is increase the buffering internal to
>> pmacct (it may reduce CPU cycles in the core process and hence help
>> it process more data); I see that in your config you have NO buffering
>> enabled. For a quick test you could set:
>>
>> plugin_pipe_size: 100
>> plugin_buffer_size: 1
>>
>> Depending on whether you see any benefit/improvement, and if you have
>> memory, you could ramp these values up. Alternatively you could introduce
>> ZeroMQ. Again, this is internal queueing (whereas in my previous email
>> I was tackling the queueing between the kernel and pmacct):
>>
>> https://github.com/pmacct/pmacct/blob/master/CONFIG-KEYS#L234-#L292
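>>
>> (A minimal sketch of the ZeroMQ option, assuming pmacct was built with
>> --enable-zmq; the profile value is an example:)
>>
>>    plugin_pipe_zmq: true
>>    plugin_pipe_zmq_profile: medium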
>>
>> Paolo
>>
>> On Wed, Dec 19, 2018 at 06:40:14PM +0200, Edvinas K wrote:
>> > Hello,
>> >
>> > How would you recommend testing PF_RING?
>> >
>> > Some questions:
>> >
>> > Is it safe to install it on a production server?
>> > Can I hope that PF_RING will solve all the discards?
>> >
>> > Thanks
>> >
>> > On Tue, Dec 18, 2018 at 5:59 PM Edvinas K 
>> wrote:
>> >
>> > > Thanks,
>> > >
>> > > I tried to change the pipe size. As I noticed, my OS (CentOS)
>> > > default and max sizes are the same:
>> > >
>> > > prod [root@netvpn001prpjay pmacct-1.7.2]# cat
>> > > /proc/sys/net/core/[rw]mem_default
>> > > 212992
>> > > 212992
>> > >
>> > > prod [root@netvpn001prpjay pmacct-1.7.2]# cat
>> > > /proc/sys/net/core/[rw]mem_max
>> > > 212992
>> > > 212992
>> > >
>> > > I tried to set pmacctd_pipe_size to 20 and later to 212992.
>> > > The drops still seem to be occurring.
>> > > Tomorrow I will try to look at that PF_RING thing.
>> > >
>> > > Thanks
>> > >
>> > > On Tue, Dec 18, 2018 at 5:32 PM Paolo Lucente 
>> wrote:
>> > >
>> > >>
>> > >> Hi Edvinas,
>> > >>
>> > >> Easier thing first: I recommend injecting some test traffic and
>> > >> seeing how it looks.
>> > >>
>> > >> The dropped packets highlight a buffering issue. You could take an
>> > >> intermediate step and see if enlarging buffers helps. Configure
>> > >> pmacctd_pipe_size to 20 and follow instructions here for the
>> > >> /proc files to touch:
>> > >>
>> > >> https://github.com/pmacct/pmacct/blob/1.7.2/CONFIG-KEYS#L203-#L216
>> > >>
>> > >> If it helps, good. If not: you should really look into one of the
>> > >> frameworks I was pointing you to in my previous email.

Re: [pmacct-discussion] pmacct to netflow collector

2018-12-19 Thread Edvinas K
Hello,

How would you recommend testing PF_RING?

Some questions:

Is it safe to install it on a production server?
Can I hope that PF_RING will solve all the discards?

Thanks

On Tue, Dec 18, 2018 at 5:59 PM Edvinas K  wrote:

> Thanks,
>
> I tried to change the pipe size. As I noticed, my OS (CentOS) default and
> max sizes are the same:
>
> prod [root@netvpn001prpjay pmacct-1.7.2]# cat
> /proc/sys/net/core/[rw]mem_default
> 212992
> 212992
>
> prod [root@netvpn001prpjay pmacct-1.7.2]# cat
> /proc/sys/net/core/[rw]mem_max
> 212992
> 212992
>
> I tried to set pmacctd_pipe_size to 20 and later to 212992.
> The drops still seem to be occurring.
> Tomorrow I will try to look at that PF_RING thing.
>
> Thanks
>
> On Tue, Dec 18, 2018 at 5:32 PM Paolo Lucente  wrote:
>
>>
>> Hi Edvinas,
>>
>> Easier thing first: I recommend injecting some test traffic and seeing
>> how it looks.
>>
>> The dropped packets highlight a buffering issue. You could take an
>> intermediate step and see if enlarging buffers helps. Configure
>> pmacctd_pipe_size to 20 and follow instructions here for the
>> /proc files to touch:
>>
>> https://github.com/pmacct/pmacct/blob/1.7.2/CONFIG-KEYS#L203-#L216
>>
>> If it helps, good. If not: you should really look into one of the
>> frameworks i was pointing you to in my previous email. PF_RING, for
>> example, can do sampling and/or balancing. Sampling should not be done
>> inside pmacct because the dropped packets are between the kernel and the
>> application.
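>>
>> (One way to produce such a controlled stream is tcpreplay, named here as
>> one option among others; interface, rate and capture file are examples:)
>>
>>    # replay a known pcap at a fixed rate onto the monitored interface
>>    tcpreplay --intf1=ens1f0.432 --mbps=100 sample.pcap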
>>
>> Paolo
>>
>> On Mon, Dec 17, 2018 at 02:52:48PM +0200, Edvinas K wrote:
>> > It seems there are lots of dropped packets:
>> >
>> > prod [root@netvpn001prpjay pmacct-1.7.2]# pmacctd -i ens1f0.432 -f
>> > flowexport.cfg
>> > WARN: [flowexport.cfg:2] Invalid value. Ignored.
>> > INFO ( default/core ): Promiscuous Mode Accounting Daemon, pmacctd
>> > 1.7.2-git (20181018-00+c3)
>> > INFO ( default/core ):  '--enable-l2' '--enable-ipv6' '--enable-64bit'
>> > '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
>> > '--enable-st-bins'
>> > INFO ( default/core ): Reading configuration file
>> > '/opt/pmacct-1.7.2/flowexport.cfg'.
>> > INFO ( default_nfprobe/nfprobe ): NetFlow probe plugin is originally
>> based
>> > on softflowd 0.9.7 software, Copyright 2002 Damien Miller <
>> d...@mindrot.org>
>> > All rights reserved.
>> > INFO ( default_nfprobe/nfprobe ):   TCP timeout: 3600s
>> > INFO ( default_nfprobe/nfprobe ):  TCP post-RST timeout: 120s
>> > INFO ( default_nfprobe/nfprobe ):  TCP post-FIN timeout: 300s
>> > INFO ( default_nfprobe/nfprobe ):   UDP timeout: 300s
>> > INFO ( default_nfprobe/nfprobe ):  ICMP timeout: 300s
>> > INFO ( default_nfprobe/nfprobe ):   General timeout: 3600s
>> > INFO ( default_nfprobe/nfprobe ):  Maximum lifetime: 604800s
>> > INFO ( default_nfprobe/nfprobe ):   Expiry interval: 60s
>> > INFO ( default_nfprobe/nfprobe ): Exporting flows to
>> > [10.3.14.101]:rtcm-sc104
>> > INFO ( default/core ): [ens1f0.432,0] link type is: 1
>> > ^C^C^C^C^C^C^C^C
>> >
>> > after 1 minute:
>> >
>> > WARN ( default_nfprobe/nfprobe ): Shutting down on user request.
>> > INFO ( default/core ): OK, Exiting ...
>> > NOTICE ( default/core ): +++
>> > NOTICE ( default/core ): [ens1f0.432,0] received_packets=3441854
>> > dropped_packets=2365166
>> >
>> > About 1GB of traffic is passing through the router where I'm capturing
>> > the packets. Isn't that too much traffic for nfprobe to process? The
>> > CPUs don't seem to be at 100% usage. We're using an Intel Xeon E5-2620
>> > 0 @ 2.00GHz x 24
>> > <http://netmon.adform.com/device/device=531/tab=health/metric=processor/processor_id=1466/>.
>> >
>> > prod [root@netvpn001prpjay ~]# ps -aux | grep pmacct
>> > root 41840 30.9  0.0  18964  7760 ?  Rs  Dec14 1309:50 pmacctd:
>> > Core Process [default]
>> > root 41841 68.4  0.0  22932  9756 ?  R   Dec14 2898:29 pmacctd:
>> > Netflow Probe Plugin [default_nfprobe]
>> > root 41869 32.5  0.0  19360  8128 ?  Ss  Dec14 1378:29 pmacctd:
>> > Core Process [default]
>> > root 41870 67.6  0.0  22928  9760 ?  R   Dec14 2865:35 pmacctd:
>> > Netflow Probe Plugin [default_nfprobe]

Re: [pmacct-discussion] pmacct to netflow collector

2018-12-18 Thread Edvinas K
Thanks,

I tried to change the pipe size. As I noticed, my OS (CentOS) default and
max sizes are the same:

prod [root@netvpn001prpjay pmacct-1.7.2]# cat
/proc/sys/net/core/[rw]mem_default
212992
212992

prod [root@netvpn001prpjay pmacct-1.7.2]# cat /proc/sys/net/core/[rw]mem_max
212992
212992

I tried to set pmacctd_pipe_size to 20 and later to 212992.
The drops still seem to be occurring.
Tomorrow I will try to look at that PF_RING thing.

Thanks

On Tue, Dec 18, 2018 at 5:32 PM Paolo Lucente  wrote:

>
> Hi Edvinas,
>
> Easier thing first: I recommend injecting some test traffic and seeing
> how it looks.
>
> The dropped packets highlight a buffering issue. You could take an
> intermediate step and see if enlarging buffers helps. Configure
> pmacctd_pipe_size to 20 and follow instructions here for the
> /proc files to touch:
>
> https://github.com/pmacct/pmacct/blob/1.7.2/CONFIG-KEYS#L203-#L216
>
> If it helps, good. If not: you should really look into one of the
> frameworks i was pointing you to in my previous email. PF_RING, for
> example, can do sampling and/or balancing. Sampling should not be done
> inside pmacct because the dropped packets are between the kernel and the
> application.
>
> Paolo
>
> On Mon, Dec 17, 2018 at 02:52:48PM +0200, Edvinas K wrote:
> > It seems there are lots of dropped packets:
> >
> > prod [root@netvpn001prpjay pmacct-1.7.2]# pmacctd -i ens1f0.432 -f
> > flowexport.cfg
> > WARN: [flowexport.cfg:2] Invalid value. Ignored.
> > INFO ( default/core ): Promiscuous Mode Accounting Daemon, pmacctd
> > 1.7.2-git (20181018-00+c3)
> > INFO ( default/core ):  '--enable-l2' '--enable-ipv6' '--enable-64bit'
> > '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
> > '--enable-st-bins'
> > INFO ( default/core ): Reading configuration file
> > '/opt/pmacct-1.7.2/flowexport.cfg'.
> > INFO ( default_nfprobe/nfprobe ): NetFlow probe plugin is originally
> based
> > on softflowd 0.9.7 software, Copyright 2002 Damien Miller <
> d...@mindrot.org>
> > All rights reserved.
> > INFO ( default_nfprobe/nfprobe ):   TCP timeout: 3600s
> > INFO ( default_nfprobe/nfprobe ):  TCP post-RST timeout: 120s
> > INFO ( default_nfprobe/nfprobe ):  TCP post-FIN timeout: 300s
> > INFO ( default_nfprobe/nfprobe ):   UDP timeout: 300s
> > INFO ( default_nfprobe/nfprobe ):  ICMP timeout: 300s
> > INFO ( default_nfprobe/nfprobe ):   General timeout: 3600s
> > INFO ( default_nfprobe/nfprobe ):  Maximum lifetime: 604800s
> > INFO ( default_nfprobe/nfprobe ):   Expiry interval: 60s
> > INFO ( default_nfprobe/nfprobe ): Exporting flows to
> > [10.3.14.101]:rtcm-sc104
> > INFO ( default/core ): [ens1f0.432,0] link type is: 1
> > ^C^C^C^C^C^C^C^C
> >
> > after 1 minute:
> >
> > WARN ( default_nfprobe/nfprobe ): Shutting down on user request.
> > INFO ( default/core ): OK, Exiting ...
> > NOTICE ( default/core ): +++
> > NOTICE ( default/core ): [ens1f0.432,0] received_packets=3441854
> > dropped_packets=2365166
> >
> > About 1GB of traffic is passing through the router where I'm capturing
> > the packets. Isn't that too much traffic for nfprobe to process? The
> > CPUs don't seem to be at 100% usage. We're using an Intel Xeon E5-2620
> > 0 @ 2.00GHz x 24
> > <http://netmon.adform.com/device/device=531/tab=health/metric=processor/processor_id=1466/>.
> >
> > prod [root@netvpn001prpjay ~]# ps -aux | grep pmacct
> > root 41840 30.9  0.0  18964  7760 ?  Rs  Dec14 1309:50 pmacctd:
> > Core Process [default]
> > root 41841 68.4  0.0  22932  9756 ?  R   Dec14 2898:29 pmacctd:
> > Netflow Probe Plugin [default_nfprobe]
> > root 41869 32.5  0.0  19360  8128 ?  Ss  Dec14 1378:29 pmacctd:
> > Core Process [default]
> > root 41870 67.6  0.0  22928  9760 ?  R   Dec14 2865:35 pmacctd:
> > Netflow Probe Plugin [default_nfprobe]
> >
> > Before starting with the 'steroid' things you mentioned, I would like to
> > ask: is it really worth going into those kernel things, or should I
> > start with techniques such as sampling or, as Nikola recommended, try to
> > fiddle with the nfprobe_engine settings?
> >
> > Thanks
> >
> > On Sun, Dec 16, 2018 at 6:25 PM Paolo Lucente  wrote:
> >
> > >
> > > Hi Edvinas,
> > >
> > > You may want to check whether libpcap is dropping packets on input to
> > > pmacctd.

Re: [pmacct-discussion] pmacct to netflow collector

2018-12-17 Thread Edvinas K
It seems there are lots of dropped packets:

prod [root@netvpn001prpjay pmacct-1.7.2]# pmacctd -i ens1f0.432 -f
flowexport.cfg
WARN: [flowexport.cfg:2] Invalid value. Ignored.
INFO ( default/core ): Promiscuous Mode Accounting Daemon, pmacctd
1.7.2-git (20181018-00+c3)
INFO ( default/core ):  '--enable-l2' '--enable-ipv6' '--enable-64bit'
'--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
'--enable-st-bins'
INFO ( default/core ): Reading configuration file
'/opt/pmacct-1.7.2/flowexport.cfg'.
INFO ( default_nfprobe/nfprobe ): NetFlow probe plugin is originally based
on softflowd 0.9.7 software, Copyright 2002 Damien Miller 
All rights reserved.
INFO ( default_nfprobe/nfprobe ):   TCP timeout: 3600s
INFO ( default_nfprobe/nfprobe ):  TCP post-RST timeout: 120s
INFO ( default_nfprobe/nfprobe ):  TCP post-FIN timeout: 300s
INFO ( default_nfprobe/nfprobe ):   UDP timeout: 300s
INFO ( default_nfprobe/nfprobe ):  ICMP timeout: 300s
INFO ( default_nfprobe/nfprobe ):   General timeout: 3600s
INFO ( default_nfprobe/nfprobe ):  Maximum lifetime: 604800s
INFO ( default_nfprobe/nfprobe ):   Expiry interval: 60s
INFO ( default_nfprobe/nfprobe ): Exporting flows to
[10.3.14.101]:rtcm-sc104
INFO ( default/core ): [ens1f0.432,0] link type is: 1
^C^C^C^C^C^C^C^C

after 1 minute:

WARN ( default_nfprobe/nfprobe ): Shutting down on user request.
INFO ( default/core ): OK, Exiting ...
NOTICE ( default/core ): +++
NOTICE ( default/core ): [ens1f0.432,0] received_packets=3441854
dropped_packets=2365166

About 1GB of traffic is passing through the router where I'm capturing
the packets. Isn't that too much traffic for nfprobe to process? The CPUs
don't seem to be at 100% usage. We're using an Intel Xeon E5-2620 0 @
2.00GHz x 24
<http://netmon.adform.com/device/device=531/tab=health/metric=processor/processor_id=1466/>.

prod [root@netvpn001prpjay ~]# ps -aux | grep pmacct
root 41840 30.9  0.0  18964  7760 ?  Rs  Dec14 1309:50 pmacctd:
Core Process [default]
root 41841 68.4  0.0  22932  9756 ?  R   Dec14 2898:29 pmacctd:
Netflow Probe Plugin [default_nfprobe]
root 41869 32.5  0.0  19360  8128 ?  Ss  Dec14 1378:29 pmacctd:
Core Process [default]
root 41870 67.6  0.0  22928  9760 ?  R   Dec14 2865:35 pmacctd:
Netflow Probe Plugin [default_nfprobe]

Before starting with the 'steroid' things you mentioned, I would like to
ask: is it really worth going into those kernel things, or should I start
with techniques such as sampling or, as Nikola recommended, try to fiddle
with the nfprobe_engine settings?
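
(For reference, a sketch of the nfprobe_engine idea: giving each pmacctd
instance a distinct engine ID lets the collector tell the two exporters
apart; the values are examples:)

   ! in the config used with ens1f0.432
   nfprobe_engine: 1:1
   ! in the config used with ens1f1.433
   nfprobe_engine: 2:2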

Thanks

On Sun, Dec 16, 2018 at 6:25 PM Paolo Lucente  wrote:

>
> Hi Edvinas,
>
> You may want to check whether libpcap is dropping packets on input to
> pmacctd. You can achieve that by sending a SIGUSR1 and checking the output
> in the logfile/syslog/console. You will get something a-la:
>
> https://github.com/pmacct/pmacct/blob/master/docs/SIGNALS#L16-#L34
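>
> (A sketch of triggering that report; pmacctd logs its received/dropped
> counters upon SIGUSR1:)
>
>    # ask every running pmacctd process for its stats
>    pkill -USR1 pmacctd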
>
> Should the amount of dropped packets be non-zero and visibly increasing,
> then you may want to put your libpcap on steroids:
>
> https://github.com/pmacct/pmacct/blob/master/FAQS#L71-#L101
>
> Should that instead not be the case, I am unsure and it would need
> further investigation. You could try to produce a controlled stream of
> data and sniff the nfprobe output, or collect with different software for
> a quick counter-test (nfacctd itself or another of your choice).
>
> Paolo
>
> On Fri, Dec 14, 2018 at 03:02:35PM +0200, Edvinas K wrote:
> > Thanks, I really appreciate your help.
> >
> > Everything seems to be working OK; the flow statistics graphs in NFSEN
> > (NFDUMP) look good, but the traffic rate (45 Mb/s) is somehow 10x lower
> > than it really is. Maybe some tips to troubleshoot that?
> >
> > Are there any hidden things to check?
> >
> > My config:
> >
> > 1050  pmacctd -i ens1f0.432 -f flowexport.cfg
> > 1051  pmacctd -i ens1f1.433 -f flowexport.cfg
> >
> > cat flowexport.cfg
> >!
> >daemonize: true
> >aggregate: src_host, dst_host, src_port, dst_port, proto, tos
> >plugins: nfprobe
> >nfprobe_receiver: 10.3.14.101:2101
> >nfprobe_version: 9
> >! nfprobe_engine: 1:1
> >! nfprobe_timeouts: tcp=120:maxlife=3600
> >!
> >! networks_file: /path/to/networks.lst
> >
> > On Thu, Dec 13, 2018 at 4:32 AM Paolo Lucente  wrote:
> >
> > >
> > > Hi Nikola,
> > >
> > > I see, makes sense. Thanks very much for clarifying.
> > >
> > > Paolo
> > >
> > > On Wed, Dec 12, 2018 at 06:20:58PM -0800, Nikola Kolev wrote:
> > > > Hi Paollo,
> > > >
> > > > Sorry for being cryptic.

Re: [pmacct-discussion] pmacct to netflow collector

2018-12-14 Thread Edvinas K
Thanks, I really appreciate your help.

Everything seems to be working OK; the flow statistics graphs in NFSEN
(NFDUMP) look good, but the traffic rate (45 Mb/s) is somehow 10x lower than
it really is. Maybe some tips to troubleshoot that?

Are there any hidden things to check?

My config:

1050  pmacctd -i ens1f0.432 -f flowexport.cfg
1051  pmacctd -i ens1f1.433 -f flowexport.cfg

cat flowexport.cfg
   !
   daemonize: true
   aggregate: src_host, dst_host, src_port, dst_port, proto, tos
   plugins: nfprobe
   nfprobe_receiver: 10.3.14.101:2101
   nfprobe_version: 9
   ! nfprobe_engine: 1:1
   ! nfprobe_timeouts: tcp=120:maxlife=3600
   !
   ! networks_file: /path/to/networks.lst

On Thu, Dec 13, 2018 at 4:32 AM Paolo Lucente  wrote:

>
> Hi Nikola,
>
> I see, makes sense. Thanks very much for clarifying.
>
> Paolo
>
> On Wed, Dec 12, 2018 at 06:20:58PM -0800, Nikola Kolev wrote:
> > Hi Paollo,
> >
> > Sorry for being cryptic - what I meant was that I wasn't able to
> > launch pmacctd/uacctd in a way that it deals with dynamic interfaces such
> > as ppp. Basically I failed to find any reference in the docs on how to
> > make it run in such a way that it collects info from ppp* (a-la the ppp+
> > syntax of iptables) without launching a separate pmacctd instance for
> > each interface, hence the complicated setup with
> > iptables-nflog-uacctd-nfdump.
> >
> > On Thu, 13 Dec 2018 01:35:00 +
> > Paolo Lucente  wrote:
> >
> > >
> > > Hi Nikola,
> > >
> > > Can you please elaborate a bit more? The cryptic part for me is "as
> > > nfacctd is not supporting wildcard addresses to be bound to".
> > >
> > > Thanks,
> > > Paolo
> > >
> > > On Wed, Dec 12, 2018 at 04:50:33PM -0800, Nikola Kolev wrote:
> > > > Hey,
> > > >
> > > > If I may add to that:
> > > >
> > > > I'm doing something similar, but in a slightly different manner:
> > > >
> > > > as nfacctd does not support binding to wildcard addresses, I'm
> > > > using iptables rules to export via nflog to uacctd, which can then
> > > > send to nfdump. Just food for thought...
> > > >
> > > > On 2018-12-12 14:58, Paolo Lucente wrote:
> > > > >Hi Edvinas,
> > > > >
> > > > >You are looking for the nfprobe plugin. You can follow the relevant
> > > > >section in the QUICKSTART to get going:
> > > > >
> > > > >https://github.com/pmacct/pmacct/blob/1.7.2/QUICKSTART#L1167-#L1302
> > > > >
> > > > >Paolo
> > > > >
> > > > >On Wed, Dec 12, 2018 at 03:12:39PM +0200, Edvinas K wrote:
> > > > >>Hello,
> > > > >>
> > > > >>I managed to run basic pmacct to capture Linux router (FRR) flows
> > > > >>from libpcap:
> > > > >>"pmacctd -P print -O formatted -r 10 -i bond0.2170 -c
> > > > >>src_host,dst_host,src_port,dst_port,proto"
> > > > >>
> > > > >>Now I need to push all the flows in NetFlow format to the
> > > > >>NetFlow collector (nfdump). Could you give me some advice on how
> > > > >>to configure that?
> > > > >>Thank you
> > > > >
> > > >
> > > > --
> > > > Nikola
> >
> >
> > --
> > Nikola
>

[pmacct-discussion] Pmacct tips and tricks (how to see all daemonized processes and how to kill them)

2018-12-13 Thread Edvinas K
Hello,

Is it possible to see all daemonized processes, and is there an option to
kill them? Or do I need to use basic Linux commands like kill, etc.?
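
(A sketch using standard tools; pmacct shuts down in an orderly way on
SIGINT:)

   # list the core process and the plugin processes
   ps aux | grep '[p]macctd'
   # graceful shutdown of all of them
   pkill -INT pmacctd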

Thanks

[pmacct-discussion] pmacct to netflow collector

2018-12-12 Thread Edvinas K
Hello,

I managed to run basic pmacct to capture Linux router (FRR) flows from
libpcap:
"pmacctd -P print -O formatted -r 10 -i bond0.2170 -c
src_host,dst_host,src_port,dst_port,proto"

Now I need to push all the flows in NetFlow format to the NetFlow
collector (nfdump). Could you give me some advice on how to configure that?
Thank you
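
(A minimal sketch of both ends, assuming nfdump's nfcapd as the collector;
addresses, port and directory are examples:)

   ! probe side: flowexport.cfg
   plugins: nfprobe
   nfprobe_receiver: 10.3.14.101:2101
   nfprobe_version: 9
   aggregate: src_host, dst_host, src_port, dst_port, proto, tos

   # collector side: nfcapd writes flow files for nfdump to read
   nfcapd -D -p 2101 -l /var/cache/nfdump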