Re: [pmacct-discussion] pmacct to netflow collector

2019-01-03 Thread Paolo Lucente


Hi Edvinas,

'pmacctd -V' returns all the libs it is linked against, including their
versions. There you *should* find an indication that the PF_RING-enabled
libpcap is being used.
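
A minimal sketch of how one might check this from a shell, assuming pmacctd
is on the PATH and dynamically linked (not something shown in this thread):

  # Print the libraries pmacctd was built against, libpcap among them
  pmacctd -V

  # Independently inspect which libpcap the binary resolves at runtime;
  # a PF_RING-enabled build would typically point at PF_RING's libpcap
  ldd "$(command -v pmacctd)" | grep -i pcap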

Paolo
 
On Thu, Jan 03, 2019 at 10:46:55AM +0200, Edvinas K wrote:
> Hello,
> 
> How can I check whether PF_RING is in action and active? Some forwarded
> packet counters, or the like?
> 
> Thanks
> 
> On Thu, Dec 27, 2018 at 3:00 PM Edvinas K  wrote:
> 
> > thank you,
> >
> > It seems all the easy things didn't help.
> >
> > I tried to set the buffer size in the kernel:
> >
> > prod [root@netvpn001prpjay pmacct-1.7.2]# cat
> > /proc/sys/net/core/[rw]mem_max
> > 20
> > 20
> >
> > and then
> >
> > prod [root@netvpn001prpjay pmacct-1.7.2]# cat flowexport.cfg
> >!
> >daemonize: no
> >aggregate: src_host, dst_host, src_port, dst_port, proto, tos
> >plugins: nfprobe
> >nfprobe_receiver: 10.3.14.101:2101
> >nfprobe_version: 9
> >
> >pmacctd_pipe_size: 20
> >plugin_pipe_size: 100
> >plugin_buffer_size: 1
> >
> >! nfprobe_engine: 1:1
> >! nfprobe_timeouts: tcp=120:maxlife=3600
> >!
> >! networks_file: /path/to/networks.lst
> >!...
> >
> > Maybe after setting plugin_pipe_size and plugin_buffer_size the drops got a
> > little lower, but there are still a lot.
> > I also noticed a strange log message: "INFO ( default/core ): short IPv4
> > packet read (36/38/frags). Snaplen issue ?"
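
On that "Snaplen issue ?" hint: the warning suggests the capture snapshot
length is shorter than the packets being read. A config sketch, assuming the
snaplen key from CONFIG-KEYS, with a value that is only an example:

  ! sketch: enlarge the libpcap snapshot length so whole headers are captured
  ! (1518 is an example value, not one used in this thread)
  snaplen: 1518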
> >
> > I'm going to try that PF_RING stuff.
> >
> > On Thu, Dec 20, 2018 at 10:08 PM Paolo Lucente  wrote:
> >
> >>
> >> Hi Edvinas,
> >>
> >> I wanted to confirm that when you changed pmacctd_pipe_size to 2GB you
> >> ALSO changed /proc/sys/net/core/[rw]mem_max to 2GB and ALSO restarted
> >> pmacctd after having done so.
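
For reference, a minimal sketch of raising those kernel limits at runtime;
the value below is only an example following the 2GB figure above, and
pmacctd needs a restart afterwards:

  # raise the kernel's maximum socket receive/send buffer sizes
  sysctl -w net.core.rmem_max=2000000000
  sysctl -w net.core.wmem_max=2000000000
  # then restart pmacctd so it can request the larger buffer via
  # pmacctd_pipe_size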
> >>
> >> Wrt PF_RING: i can't really comment since i don't use it myself. While i
> >> never heard any horror story about it (thumbs up!), i think doing a proof of
> >> concept first is always a good idea; this also answers your second
> >> question: it will improve things for sure, but by how much you will have to test.
> >>
> >> Another thing you may do is increase the buffering internal to
> >> pmacct (it may reduce the CPU cycles spent by the core process and hence help
> >> it process more data); i see that in your config you have NO buffering
> >> enabled. For a quick test you could set:
> >>
> >> plugin_pipe_size: 100
> >> plugin_buffer_size: 1
> >>
> >> Depending on whether you see any benefit/improvement, and if you have the
> >> memory, you could ramp these values up. Alternatively you could introduce
> >> ZeroMQ. Again, this is internal queueing (whereas in my previous email
> >> i was tackling the queueing between the kernel and pmacct):
> >>
> >> https://github.com/pmacct/pmacct/blob/master/CONFIG-KEYS#L234-#L292
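
A sketch of that ZeroMQ alternative, assuming pmacct was built with ZeroMQ
support and using the plugin_pipe_zmq keys described in CONFIG-KEYS:

  ! sketch: move core-to-plugin queueing onto ZeroMQ instead of hand-sizing
  ! plugin_pipe_size / plugin_buffer_size
  plugin_pipe_zmq: true
  plugin_pipe_zmq_profile: large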
> >>
> >> Paolo
> >>
> >> On Wed, Dec 19, 2018 at 06:40:14PM +0200, Edvinas K wrote:
> >> > Hello,
> >> >
> >> > How would you recommend testing PF_RING?
> >> >
> >> > Some questions:
> >> >
> >> > Is it safe to install it on a production server?
> >> > Is it reasonable to hope that PF_RING will solve all the discards?
> >> >
> >> > Thanks
> >> >
> >> > On Tue, Dec 18, 2018 at 5:59 PM Edvinas K  wrote:
> >> >
> >> > > thanks,
> >> > >
> >> > > I tried to change the pipe size. As i noticed, my OS (CentOS) default
> >> > > and max sizes are the same:
> >> > >
> >> > > prod [root@netvpn001prpjay pmacct-1.7.2]# cat
> >> > > /proc/sys/net/core/[rw]mem_default
> >> > > 212992
> >> > > 212992
> >> > >
> >> > > prod [root@netvpn001prpjay pmacct-1.7.2]# cat
> >> > > /proc/sys/net/core/[rw]mem_max
> >> > > 212992
> >> > > 212992
> >> > >
> >> > > I tried to set pmacctd_pipe_size: to 20 and later to 212992.
> >> > > Seems the drops are still occurring.
> >> > > Tomorrow i will try to look at that PF_RING thing.
> >> > >
> >> > > Thanks
> >> > >
> >> > >
> >> > >
> >> > >
> >> > >
> >> > > On Tue, Dec 18, 2018 at 5:32 PM Paolo Lucente  wrote:
> >> > >
> >> > >>
> >> > >> Hi Edvinas,
> >> > >>
> >> > >> Easier thing first: i recommend injecting some test traffic and
> >> > >> seeing how that one looks.
> >> > >>
> >> > >> The dropped packets highlight a buffering issue. You could take an
> >> > >> intermediate step and see if enlarging buffers helps. Configure
> >> > >> pmacctd_pipe_size to 20 and follow instructions here for the
> >> > >> /proc files to touch:
> >> > >>
> >> > >> https://github.com/pmacct/pmacct/blob/1.7.2/CONFIG-KEYS#L203-#L216
> >> > >>
> >> > >> If it helps, good. If not: you should really look into one of the
> >> > >> frameworks i was pointing you to in my previous email. PF_RING, for
> >> > >> example, can do sampling and/or balancing. Sampling should not be done
> >> > >> inside pmacct because the dropped packets are between the kernel and
> >> > >> the application.
> >> > >>
> >> > >> Paolo
> >> > >>
> >> > >> On Mon, Dec 17, 2018 at 02:52:48PM +0200, Edvinas K wrote:
> >> > >> > Seems there're lots of dropped packets:

Re: [pmacct-discussion] pmacct to netflow collector

2019-01-03 Thread Edvinas K
Hello,

How can I check whether PF_RING is in action and active? Some forwarded
packet counters, or the like?

Thanks
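
One way to sanity-check this, assuming the PF_RING kernel module is in use
(the /proc paths below belong to PF_RING, not pmacct, and are a sketch only):

  # confirm the kernel module is loaded
  lsmod | grep pf_ring

  # PF_RING exposes global counters plus one entry per active capture socket;
  # a pmacctd running on the PF_RING-enabled libpcap should show up here
  cat /proc/net/pf_ring/info
  ls /proc/net/pf_ring/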

On Thu, Dec 27, 2018 at 3:00 PM Edvinas K  wrote:

> thank you,
>
> It seems all the easy things didn't help.
>
> I tried to set the buffer size in the kernel:
>
> prod [root@netvpn001prpjay pmacct-1.7.2]# cat
> /proc/sys/net/core/[rw]mem_max
> 20
> 20
>
> and then
>
> prod [root@netvpn001prpjay pmacct-1.7.2]# cat flowexport.cfg
>!
>daemonize: no
>aggregate: src_host, dst_host, src_port, dst_port, proto, tos
>plugins: nfprobe
>nfprobe_receiver: 10.3.14.101:2101
>nfprobe_version: 9
>
>pmacctd_pipe_size: 20
>plugin_pipe_size: 100
>plugin_buffer_size: 1
>
>! nfprobe_engine: 1:1
>! nfprobe_timeouts: tcp=120:maxlife=3600
>!
>! networks_file: /path/to/networks.lst
>!...
>
> Maybe after setting plugin_pipe_size and plugin_buffer_size the drops got a
> little lower, but there are still a lot.
> I also noticed a strange log message: "INFO ( default/core ): short IPv4
> packet read (36/38/frags). Snaplen issue ?"
>
> I'm going to try that PF_RING stuff.
>
> On Thu, Dec 20, 2018 at 10:08 PM Paolo Lucente  wrote:
>
>>
>> Hi Edvinas,
>>
>> I wanted to confirm that when you changed pmacctd_pipe_size to 2GB you
>> ALSO changed /proc/sys/net/core/[rw]mem_max to 2GB and ALSO restarted
>> pmacctd after having done so.
>>
>> Wrt PF_RING: i can't really comment since i don't use it myself. While i
>> never heard any horror story about it (thumbs up!), i think doing a proof of
>> concept first is always a good idea; this also answers your second
>> question: it will improve things for sure, but by how much you will have to test.
>>
>> Another thing you may do is increase the buffering internal to
>> pmacct (it may reduce the CPU cycles spent by the core process and hence help
>> it process more data); i see that in your config you have NO buffering
>> enabled. For a quick test you could set:
>>
>> plugin_pipe_size: 100
>> plugin_buffer_size: 1
>>
>> Depending on whether you see any benefit/improvement, and if you have the
>> memory, you could ramp these values up. Alternatively you could introduce
>> ZeroMQ. Again, this is internal queueing (whereas in my previous email
>> i was tackling the queueing between the kernel and pmacct):
>>
>> https://github.com/pmacct/pmacct/blob/master/CONFIG-KEYS#L234-#L292
>>
>> Paolo
>>
>> On Wed, Dec 19, 2018 at 06:40:14PM +0200, Edvinas K wrote:
>> > Hello,
>> >
>> > How would you recommend testing PF_RING?
>> >
>> > Some questions:
>> >
>> > Is it safe to install it on a production server?
>> > Is it reasonable to hope that PF_RING will solve all the discards?
>> >
>> > Thanks
>> >
>> > On Tue, Dec 18, 2018 at 5:59 PM Edvinas K  wrote:
>> >
>> > > thanks,
>> > >
>> > > I tried to change the pipe size. As i noticed, my OS (CentOS) default
>> > > and max sizes are the same:
>> > >
>> > > prod [root@netvpn001prpjay pmacct-1.7.2]# cat
>> > > /proc/sys/net/core/[rw]mem_default
>> > > 212992
>> > > 212992
>> > >
>> > > prod [root@netvpn001prpjay pmacct-1.7.2]# cat
>> > > /proc/sys/net/core/[rw]mem_max
>> > > 212992
>> > > 212992
>> > >
>> > > I tried to set pmacctd_pipe_size: to 20 and later to 212992.
>> > > Seems the drops are still occurring.
>> > > Tomorrow i will try to look at that PF_RING thing.
>> > >
>> > > Thanks
>> > >
>> > >
>> > >
>> > >
>> > >
>> > > On Tue, Dec 18, 2018 at 5:32 PM Paolo Lucente  wrote:
>> > >
>> > >>
>> > >> Hi Edvinas,
>> > >>
>> > >> Easier thing first: i recommend injecting some test traffic and
>> > >> seeing how that one looks.
>> > >>
>> > >> The dropped packets highlight a buffering issue. You could take an
>> > >> intermediate step and see if enlarging buffers helps. Configure
>> > >> pmacctd_pipe_size to 20 and follow instructions here for the
>> > >> /proc files to touch:
>> > >>
>> > >> https://github.com/pmacct/pmacct/blob/1.7.2/CONFIG-KEYS#L203-#L216
>> > >>
>> > >> If it helps, good. If not: you should really look into one of the
>> > >> frameworks i was pointing you to in my previous email. PF_RING, for
>> > >> example, can do sampling and/or balancing. Sampling should not be done
>> > >> inside pmacct because the dropped packets are between the kernel and
>> > >> the application.
>> > >>
>> > >> Paolo
>> > >>
>> > >> On Mon, Dec 17, 2018 at 02:52:48PM +0200, Edvinas K wrote:
>> > >> > Seems there're lots of dropped packets:
>> > >> >
>> > >> > prod [root@netvpn001prpjay pmacct-1.7.2]# pmacctd -i ens1f0.432 -f
>> > >> > flowexport.cfg
>> > >> > WARN: [flowexport.cfg:2] Invalid value. Ignored.
>> > >> > INFO ( default/core ): Promiscuous Mode Accounting Daemon, pmacctd
>> > >> > 1.7.2-git (20181018-00+c3)
>> > >> > INFO ( default/core ):  '--enable-l2' '--enable-ipv6' '--enable-64bit'
>> > >> > '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
>> > >> > '--enable-st-bins'
>> > >> > INFO ( default/core ): Reading configuration file
>> >