Re: [pmacct-discussion] Pmacct configuration with direction of traffic

2020-02-27 Thread Alex K
Thank you Paolo,

I see I can use aggregation filters as well, so I guess I will find a way to
implement what is needed without a convoluted configuration file.

cheers,
Alex

On Thu, Feb 27, 2020 at 12:24 PM Paolo Lucente  wrote:

>
> Hi Alex,
>
> Ack. The other way you could "filter" out is with a networks_file: in
> there you specify the network(s) you are interested in, following the
> example here:
>
> https://github.com/pmacct/pmacct/blob/master/examples/networks.lst.example
>
> In the simplest case, you just want to list networks of interest one per
> line. Then in the config you want to set 'networks_file_filter: true' as
> well. This is kind-of filtering: networks / IPs not of interest will be
> just zeroed out and rolled up as a 0.0.0.0 src_host / dst_host.
>
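The zero-out behaviour described above can be sketched in a few lines. This is a hypothetical illustration, not pmacct code: the network list and Python's ipaddress module stand in for the contents of networks.lst and pmacct's internal lookup.

```python
import ipaddress

# Hypothetical networks.lst contents; pmacct would read these from the file.
NETWORKS = [ipaddress.ip_network("192.168.28.0/24")]

def roll_up(host: str) -> str:
    # Mimics networks_file_filter: a host outside every listed network is
    # zeroed out and rolled up under 0.0.0.0 as src_host / dst_host.
    ip = ipaddress.ip_address(host)
    return host if any(ip in net for net in NETWORKS) else "0.0.0.0"
```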

> Paolo
>
> On Wed, Feb 26, 2020 at 11:32:31AM +0200, Alex K wrote:
> > Hi Paolo,
> >
> > On Tue, Feb 25, 2020 at 6:41 PM Paolo Lucente  wrote:
> >
> > >
> > > Hi Alex,
> > >
> > > Thanks for your feedback. I see you did run "tcpdump -n -vv -i nflog:1",
> > > which is equivalent to running uacctd without any filters; as you may
> > > know, you can append a BPF-style filter to the tcpdump command-line,
> > > precisely as you express it in pre_tag_map. Can you give that a try and
> > > see if you get any luck?
> > >
> > Bad luck... I get:
> > tcpdump -nvv -i nflog:1 src net 192.168.28.0/24
> > tcpdump: NFLOG link-layer type filtering not implemented
> > It seems that filtering at the nflog: interface is not supported.
> > Running tcpdump -nvv -i eth0 src net 192.168.28.0/24 does capture traffic
> > normally.
> > Is there any other way I could apply some filtering with uacctd? I need
> > to use uacctd since it gives me the pre-NAT and post-NAT details of the
> > flows, so that I can account traffic at the WAN interfaces with the real
> > source details.
> >
> >
> > > My expectation is: if something does not work with pre_tag_map, it
> > > should also not work with tcpdump; if you work out a filter that works
> > > against tcpdump, it should work in pre_tag_map as well. Any disconnect
> > > between the two may bring the scent of a bug.
> > >
> > > Paolo
> > >
> > > On Tue, Feb 25, 2020 at 11:20:21AM +0200, Alex K wrote:
> > > > Here is the output when running in debug mode:
> > > >
> > > > INFO ( default/core ): Linux NetFilter NFLOG Accounting Daemon,
> uacctd
> > > > (20200222-01)
> > > > INFO ( default/core ):  '--prefix=/usr' '--enable-mysql'
> '--enable-nflog'
> > > > '--enable-l2' '--enable-traffic-bins' '--enable-bgp-bins'
> > > > '--enable-bmp-bins' '--enable-st-bins'
> > > > INFO ( default/core ): Reading configuration file
> > > > '/root/pmacct/uacctd2.conf'.
> > > > INFO ( print_wan0_in/print ): plugin_pipe_size=4096000 bytes
> > > > plugin_buffer_size=280 bytes
> > > > INFO ( print_wan0_in/print ): ctrl channel: obtained=212992 bytes
> > > > target=117024 bytes
> > > > INFO ( print_wan0_out/print ): plugin_pipe_size=4096000 bytes
> > > > plugin_buffer_size=280 bytes
> > > > INFO ( print_wan0_out/print ): ctrl channel: obtained=212992 bytes
> > > > target=117024 bytes
> > > > INFO ( print_wan0_in/print ): cache entries=16411 base cache
> > > > memory=54878384 bytes
> > > > INFO ( default/core ): [pretag2.map] (re)loading map.
> > > > INFO ( print_wan0_out/print ): cache entries=16411 base cache
> > > > memory=54878384 bytes
> > > > INFO ( default/core ): [pretag2.map] map successfully (re)loaded.
> > > > INFO ( default/core ): [pretag2.map] (re)loading map.
> > > > INFO ( default/core ): [pretag2.map] map successfully (re)loaded.
> > > > INFO ( default/core ): [pretag2.map] (re)loading map.
> > > > INFO ( default/core ): [pretag2.map] map successfully (re)loaded.
> > > > INFO ( default/core ): Successfully connected Netlink NFLOG socket
> > > >
> > > > It doesn't seem to have any issues loading the maps, though it is not
> > > > collecting anything. When capturing with tcpdump I see packets going
> > > > through:
> > > >
> > > > tcpdump -n -vv -i nflog:1
> > > > 09:16:05.831131 IP (tos 0x0, ttl 64, id 36511, offset 0, flags [DF], proto ICMP (1), length 84)
> > > > 192.168.28.11 > 8.8.8.8: ICMP echo request, id 17353, seq 1, length

Re: [pmacct-discussion] Pmacct configuration with direction of traffic

2020-02-26 Thread Alex K
Hi Paolo,

On Tue, Feb 25, 2020 at 6:41 PM Paolo Lucente  wrote:

>
> Hi Alex,
>
> Thanks for your feedback. I see you did run "tcpdump -n -vv -i nflog:1",
> which is equivalent to running uacctd without any filters; as you may know,
> you can append a BPF-style filter to the tcpdump command-line, precisely
> as you express it in pre_tag_map. Can you give that a try and see if you
> get any luck?
>
Bad luck... I get:
tcpdump -nvv -i nflog:1 src net 192.168.28.0/24
tcpdump: NFLOG link-layer type filtering not implemented
It seems that filtering at the nflog: interface is not supported.
Running tcpdump -nvv -i eth0 src net 192.168.28.0/24 does capture traffic
normally.
Is there any other way I could apply some filtering with uacctd? I need to
use uacctd since it gives me the pre-NAT and post-NAT details of the flows,
so that I can account traffic at the WAN interfaces with the real source
details.
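Since a BPF filter cannot be attached to the nflog: pseudo-interface, one possible workaround, sketched here as an untested assumption, is to do the narrowing in the iptables rules that feed the NFLOG group, so that only the networks of interest ever reach uacctd. A hypothetical mangle-table fragment:

```
# Hypothetical rules: only packets matching the network of interest are
# logged to NFLOG group 1; everything else bypasses accounting entirely.
-A FORWARD -s 192.168.28.0/24 -j NFLOG --nflog-group 1
-A FORWARD -d 192.168.28.0/24 -j NFLOG --nflog-group 1
```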


> My expectation is: if something does not work with pre_tag_map, it
> should also not work with tcpdump; if you work out a filter that works
> against tcpdump, it should work in pre_tag_map as well. Any disconnect
> between the two may bring the scent of a bug.
>
> Paolo
>
> On Tue, Feb 25, 2020 at 11:20:21AM +0200, Alex K wrote:
> > Here is the output when running in debug mode:
> >
> > INFO ( default/core ): Linux NetFilter NFLOG Accounting Daemon, uacctd
> > (20200222-01)
> > INFO ( default/core ):  '--prefix=/usr' '--enable-mysql' '--enable-nflog'
> > '--enable-l2' '--enable-traffic-bins' '--enable-bgp-bins'
> > '--enable-bmp-bins' '--enable-st-bins'
> > INFO ( default/core ): Reading configuration file
> > '/root/pmacct/uacctd2.conf'.
> > INFO ( print_wan0_in/print ): plugin_pipe_size=4096000 bytes
> > plugin_buffer_size=280 bytes
> > INFO ( print_wan0_in/print ): ctrl channel: obtained=212992 bytes
> > target=117024 bytes
> > INFO ( print_wan0_out/print ): plugin_pipe_size=4096000 bytes
> > plugin_buffer_size=280 bytes
> > INFO ( print_wan0_out/print ): ctrl channel: obtained=212992 bytes
> > target=117024 bytes
> > INFO ( print_wan0_in/print ): cache entries=16411 base cache
> > memory=54878384 bytes
> > INFO ( default/core ): [pretag2.map] (re)loading map.
> > INFO ( print_wan0_out/print ): cache entries=16411 base cache
> > memory=54878384 bytes
> > INFO ( default/core ): [pretag2.map] map successfully (re)loaded.
> > INFO ( default/core ): [pretag2.map] (re)loading map.
> > INFO ( default/core ): [pretag2.map] map successfully (re)loaded.
> > INFO ( default/core ): [pretag2.map] (re)loading map.
> > INFO ( default/core ): [pretag2.map] map successfully (re)loaded.
> > INFO ( default/core ): Successfully connected Netlink NFLOG socket
> >
> > It doesn't seem to have any issues loading the maps, though it is not
> > collecting anything. When capturing with tcpdump I see packets going
> > through:
> >
> > tcpdump -n -vv -i nflog:1
> > 09:16:05.831131 IP (tos 0x0, ttl 64, id 36511, offset 0, flags [DF], proto ICMP (1), length 84)
> > 192.168.28.11 > 8.8.8.8: ICMP echo request, id 17353, seq 1, length 64
> > 09:16:05.831362 IP (tos 0x0, ttl 49, id 0, offset 0, flags [none], proto ICMP (1), length 84)
> > 8.8.8.8 > 192.168.28.11: ICMP echo reply, id 17353, seq 1, length 64
> > 09:16:05.831392 IP (tos 0x0, ttl 64, id 36682, offset 0, flags [DF], proto ICMP (1), length 84)
> > 192.168.28.11 > 8.8.8.8: ICMP echo request, id 17353, seq 2, length 64
> > 09:16:06.855200 IP (tos 0x0, ttl 49, id 0, offset 0, flags [none], proto ICMP (1), length 84)
> > 8.8.8.8 > 192.168.28.11: ICMP echo reply, id 17353, seq 2, length 64
> >
> > The pmacct version I am running is the latest master.
> > Thank you for your assistance.
> >
> > Alex
> >
> >
> > On Mon, Feb 24, 2020 at 6:20 PM Alex K  wrote:
> >
> > > Hi Paolo,
> > >
> > > On Sat, Feb 22, 2020 at 4:18 PM Paolo Lucente 
> wrote:
> > >
> > >>
> > >> Hi Alex,
> > >>
> > >> Is it possible with the new setup - the one where pre_tag_map does not
> > >> match anything - the traffic is VLAN-tagged (or MPLS-labelled)? If so,
> > >> you should adjust filters accordingly and add 'vlan and', ie. "vlan
> and
> > >> src net 192.168.28.0/24 or vlan and src net 192.168.100.0/24".
> > >>
> > > The traffic is not VLAN or MPLS; it is plain, untagged traffic. I confirm
> > > I can collect traffic when removing the pretag directives. Also, when
> > > stopping uacctd, I can capture traffic at nflog:1 

Re: [pmacct-discussion] Pmacct configuration with direction of traffic

2020-02-25 Thread Alex K
Here is the output when running in debug mode:

INFO ( default/core ): Linux NetFilter NFLOG Accounting Daemon, uacctd
(20200222-01)
INFO ( default/core ):  '--prefix=/usr' '--enable-mysql' '--enable-nflog'
'--enable-l2' '--enable-traffic-bins' '--enable-bgp-bins'
'--enable-bmp-bins' '--enable-st-bins'
INFO ( default/core ): Reading configuration file
'/root/pmacct/uacctd2.conf'.
INFO ( print_wan0_in/print ): plugin_pipe_size=4096000 bytes
plugin_buffer_size=280 bytes
INFO ( print_wan0_in/print ): ctrl channel: obtained=212992 bytes
target=117024 bytes
INFO ( print_wan0_out/print ): plugin_pipe_size=4096000 bytes
plugin_buffer_size=280 bytes
INFO ( print_wan0_out/print ): ctrl channel: obtained=212992 bytes
target=117024 bytes
INFO ( print_wan0_in/print ): cache entries=16411 base cache
memory=54878384 bytes
INFO ( default/core ): [pretag2.map] (re)loading map.
INFO ( print_wan0_out/print ): cache entries=16411 base cache
memory=54878384 bytes
INFO ( default/core ): [pretag2.map] map successfully (re)loaded.
INFO ( default/core ): [pretag2.map] (re)loading map.
INFO ( default/core ): [pretag2.map] map successfully (re)loaded.
INFO ( default/core ): [pretag2.map] (re)loading map.
INFO ( default/core ): [pretag2.map] map successfully (re)loaded.
INFO ( default/core ): Successfully connected Netlink NFLOG socket

It doesn't seem to have any issues loading the maps, though it is not
collecting anything. When capturing with tcpdump I see packets going
through:

tcpdump -n -vv -i nflog:1
09:16:05.831131 IP (tos 0x0, ttl 64, id 36511, offset 0, flags [DF], proto
ICMP (1), length 84)
192.168.28.11 > 8.8.8.8: ICMP echo request, id 17353, seq 1, length 64
09:16:05.831362 IP (tos 0x0, ttl 49, id 0, offset 0, flags [none], proto
ICMP (1), length 84)
8.8.8.8 > 192.168.28.11: ICMP echo reply, id 17353, seq 1, length 64
09:16:05.831392 IP (tos 0x0, ttl 64, id 36682, offset 0, flags [DF], proto
ICMP (1), length 84)
192.168.28.11 > 8.8.8.8: ICMP echo request, id 17353, seq 2, length 64
09:16:06.855200 IP (tos 0x0, ttl 49, id 0, offset 0, flags [none], proto
ICMP (1), length 84)
8.8.8.8 > 192.168.28.11: ICMP echo reply, id 17353, seq 2, length 64

The pmacct version I am running is the latest master.
Thank you for your assistance.

Alex


On Mon, Feb 24, 2020 at 6:20 PM Alex K  wrote:

> Hi Paolo,
>
> On Sat, Feb 22, 2020 at 4:18 PM Paolo Lucente  wrote:
>
>>
>> Hi Alex,
>>
>> Is it possible with the new setup - the one where pre_tag_map does not
>> match anything - the traffic is VLAN-tagged (or MPLS-labelled)? If so,
>> you should adjust filters accordingly and add 'vlan and', ie. "vlan and
>> src net 192.168.28.0/24 or vlan and src net 192.168.100.0/24".
>>
> The traffic is not VLAN or MPLS; it is plain, untagged traffic. I confirm I can collect
> traffic when removing the pretag directives. Also when stopping uacctd, I
> can capture traffic at nflog:1 interface.
> I simplified the configuration as below:
>
> !
> daemonize: true
> promisc:   false
> uacctd_group: 1
> !
> pre_tag_map: pretag2.map
> pre_tag_filter[print_wan0_in]: 1
> pre_tag_filter[print_wan0_out]: 2
> !
> !-
> plugins: print[print_wan0_in], print[print_wan0_out]
> print_refresh_time: 10
> print_history: 15m
> print_output_file_append: true
> !
> print_output[print_wan0_in]: csv
> print_output[print_wan0_out]: csv
> print_output_file[print_wan0_in]: traffic-wan0-in.csv
> print_output_file[print_wan0_out]: traffic-wan0-out.csv
> !
> aggregate[print_wan0_in]: tag, src_host, dst_host, src_port, dst_port,
> proto
> aggregate[print_wan0_out]: tag, src_host, dst_host, src_port, dst_port,
> proto
> !
>
> with pretag2.map
> set_tag=1 filter='src net 192.168.28.0/24'
> set_tag=2 filter='dst net 192.168.28.0/24'
>
> As soon as I enable the pretag directives as below, I do not see any
> traffic being collected by uacctd at NFLOG group 1
>
> pre_tag_map: pretag2.map
> pre_tag_filter[print_wan0_in]: 1
> pre_tag_filter[print_wan0_out]: 2
>
> I am running pmacct 1.7.4.
>
>
>> Paolo
>>
>> On Fri, Feb 21, 2020 at 01:04:25PM +0200, Alex K wrote:
>> > Working further on this, it seems that for pmacct it is sufficient to
>> > filter traffic using only pre_tag_filter, so there is no need for the
>> > aggregation filters.
>> > The issue with this setup, though, is that I lose the pre-NAT source IP
>> > address when monitoring at the WAN interfaces. Because of this I am
>> > switching to uacctd as follows:
>> >
>> > !
>> > daemonize: true
>> > promisc:   false
>> > uacctd_group: 1
>> > !networks_file: networks.lst
>> > !ports_file: ports.lst
>

Re: [pmacct-discussion] Pmacct configuration with direction of traffic

2020-02-24 Thread Alex K
Hi Paolo,

On Sat, Feb 22, 2020 at 4:18 PM Paolo Lucente  wrote:

>
> Hi Alex,
>
> Is it possible with the new setup - the one where pre_tag_map does not
> match anything - the traffic is VLAN-tagged (or MPLS-labelled)? If so,
> you should adjust filters accordingly and add 'vlan and', ie. "vlan and
> src net 192.168.28.0/24 or vlan and src net 192.168.100.0/24".
>
The traffic is not VLAN or MPLS; it is plain, untagged traffic. I confirm I can collect
traffic when removing the pretag directives. Also when stopping uacctd, I
can capture traffic at nflog:1 interface.
I simplified the configuration as below:

!
daemonize: true
promisc:   false
uacctd_group: 1
!
pre_tag_map: pretag2.map
pre_tag_filter[print_wan0_in]: 1
pre_tag_filter[print_wan0_out]: 2
!
!-
plugins: print[print_wan0_in], print[print_wan0_out]
print_refresh_time: 10
print_history: 15m
print_output_file_append: true
!
print_output[print_wan0_in]: csv
print_output[print_wan0_out]: csv
print_output_file[print_wan0_in]: traffic-wan0-in.csv
print_output_file[print_wan0_out]: traffic-wan0-out.csv
!
aggregate[print_wan0_in]: tag, src_host, dst_host, src_port, dst_port, proto
aggregate[print_wan0_out]: tag, src_host, dst_host, src_port, dst_port,
proto
!

with pretag2.map
set_tag=1 filter='src net 192.168.28.0/24'
set_tag=2 filter='dst net 192.168.28.0/24'
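The intent of the two map rules above can be mimicked in a short sketch. This is a hypothetical illustration only (pmacct compiles the filter expressions as BPF internally); Python's ipaddress module stands in for the network match:

```python
import ipaddress

LAN = ipaddress.ip_network("192.168.28.0/24")

def tag_for(src: str, dst: str) -> int:
    # tag 1 = source in the LAN (outbound), tag 2 = destination in the LAN
    # (inbound); 0 = neither rule matched, so the flow stays untagged.
    if ipaddress.ip_address(src) in LAN:
        return 1
    if ipaddress.ip_address(dst) in LAN:
        return 2
    return 0
```

With pre_tag_filter then steering tag 1 to print_wan0_in and tag 2 to print_wan0_out, each plugin sees one direction only.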

As soon as I enable the pretag directives as below, I do not see any
traffic being collected by uacctd at NFLOG group 1

pre_tag_map: pretag2.map
pre_tag_filter[print_wan0_in]: 1
pre_tag_filter[print_wan0_out]: 2

I am running pmacct 1.7.4.


> Paolo
>
> On Fri, Feb 21, 2020 at 01:04:25PM +0200, Alex K wrote:
> > Working further on this, it seems that for pmacct it is sufficient to filter
> > traffic using only pre_tag_filter, so there is no need for the aggregation
> > filters.
> > The issue with this setup, though, is that I lose the pre-NAT source IP
> > address when monitoring at the WAN interfaces. Because of this I am
> > switching to uacctd as follows:
> >
> > !
> > daemonize: true
> > promisc:   false
> > uacctd_group: 1
> > !networks_file: networks.lst
> > !ports_file: ports.lst
> > !
> > pre_tag_map: pretag2.map
> > pre_tag_filter[print_wan0_in]: 1
> > pre_tag_filter[print_wan0_out]: 2
> > pre_tag_filter[wan0_in]: 1
> > pre_tag_filter[wan0_out]: 2
> > !
> > plugins: print[print_wan0_in], print[print_wan0_out], mysql[wan0_in],
> > mysql[wan0_out]
> > plugin_pipe_size[wan0_in]: 1024000
> > plugin_pipe_size[wan0_out]: 1024000
> > print_refresh_time: 10
> > print_history: 15m
> > print_output_file_append: true
> > !
> > print_output[print_wan0_in]: csv
> > print_output_file[print_wan0_in]: in_traffic.csv
> > print_output[print_wan0_out]: csv
> > print_output_file[print_wan0_out]: out_traffic.csv
> > !
> > aggregate[print_wan0_in]: dst_host, src_port, dst_port, proto
> > aggregate[print_wan0_out]: src_host, src_port, dst_port, proto
> > !
> > sql_table[wan0_in]: traffic_wan0_in_%Y%m%d_%H%M
> > sql_table[wan0_out]: traffic_wan0_out_%Y%m%d_%H%M
> > !
> > sql_table_schema[wan0_in]: traffic_wan0_in.schema
> > sql_table_schema[wan0_out]: traffic_wan0_out.schema
> > !
> > sql_host: localhost
> > sql_db : uacct
> > sql_user : uacct
> > sql_passwd: uacct
> > sql_refresh_time: 30
> > sql_optimize_clauses: true
> > sql_history : 24h
> > sql_history_roundoff: mhd
> > !
> > aggregate[wan0_in]: dst_host, src_port, dst_port, proto
> > aggregate[wan0_out]: src_host, src_port, dst_port, proto
> >
> > Where pretag2.map:
> > set_tag=1 filter='src net 192.168.28.0/24 or src net 192.168.100.0/24'
> > set_tag=2 filter='dst net 192.168.28.0/24 or dst net 192.168.100.0/24'
> >
> > The issue I have with the above config is that no traffic is being
> > collected at all. I confirm that when removing the pre_tag filters,
> > traffic is collected, though it is not split per direction as I would
> > like.
> > Can I use pre_tag_map and pre_tag_filter with uacctd? I don't see any
> > examples for uacctd at
> > https://github.com/pmacct/pmacct/blob/master/examples/pretag.map.example
> .
> >
> > Thanx,
> > Alex
> >
> > On Thu, Feb 20, 2020 at 6:33 PM Alex K  wrote:
> >
> > > Hi all,
> > >
> > > I have a router with multiple interfaces and need to account traffic
> > > at its several WAN interfaces. My purpose is to account the traffic
> > > with the tuple details and the direction.
> > >
> > > As a test I have compiled the fo

Re: [pmacct-discussion] Pmacct configuration with direction of traffic

2020-02-21 Thread Alex K
Working further on this, it seems that for pmacct it is sufficient to filter
traffic using only pre_tag_filter, so there is no need for the aggregation
filters.
The issue with this setup, though, is that I lose the pre-NAT source IP
address when monitoring at the WAN interfaces. Because of this I am
switching to uacctd as follows:

!
daemonize: true
promisc:   false
uacctd_group: 1
!networks_file: networks.lst
!ports_file: ports.lst
!
pre_tag_map: pretag2.map
pre_tag_filter[print_wan0_in]: 1
pre_tag_filter[print_wan0_out]: 2
pre_tag_filter[wan0_in]: 1
pre_tag_filter[wan0_out]: 2
!
plugins: print[print_wan0_in], print[print_wan0_out], mysql[wan0_in],
mysql[wan0_out]
plugin_pipe_size[wan0_in]: 1024000
plugin_pipe_size[wan0_out]: 1024000
print_refresh_time: 10
print_history: 15m
print_output_file_append: true
!
print_output[print_wan0_in]: csv
print_output_file[print_wan0_in]: in_traffic.csv
print_output[print_wan0_out]: csv
print_output_file[print_wan0_out]: out_traffic.csv
!
aggregate[print_wan0_in]: dst_host, src_port, dst_port, proto
aggregate[print_wan0_out]: src_host, src_port, dst_port, proto
!
sql_table[wan0_in]: traffic_wan0_in_%Y%m%d_%H%M
sql_table[wan0_out]: traffic_wan0_out_%Y%m%d_%H%M
!
sql_table_schema[wan0_in]: traffic_wan0_in.schema
sql_table_schema[wan0_out]: traffic_wan0_out.schema
!
sql_host: localhost
sql_db : uacct
sql_user : uacct
sql_passwd: uacct
sql_refresh_time: 30
sql_optimize_clauses: true
sql_history : 24h
sql_history_roundoff: mhd
!
aggregate[wan0_in]: dst_host, src_port, dst_port, proto
aggregate[wan0_out]: src_host, src_port, dst_port, proto

Where pretag2.map:
set_tag=1 filter='src net 192.168.28.0/24 or src net 192.168.100.0/24'
set_tag=2 filter='dst net 192.168.28.0/24 or dst net 192.168.100.0/24'
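A side note on the config above: the %Y%m%d_%H%M specifiers in sql_table are strftime-style, so each refresh interval lands in its own per-period table. A hypothetical timestamp illustrates the expansion:

```python
from datetime import datetime

# Hypothetical timestamp; sql_table's %-specifiers follow strftime rules,
# so this shows which table name a write at that moment would target.
ts = datetime(2020, 2, 21, 13, 5)
table = ts.strftime("traffic_wan0_in_%Y%m%d_%H%M")
print(table)  # traffic_wan0_in_20200221_1305
```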

The issue I have with the above config is that no traffic is being
collected at all. I confirm that when removing the pre_tag filters, traffic
is collected, though it is not split per direction as I would like.
Can I use pre_tag_map and pre_tag_filter with uacctd? I don't see any
examples for uacctd at
https://github.com/pmacct/pmacct/blob/master/examples/pretag.map.example.

Thanx,
Alex

On Thu, Feb 20, 2020 at 6:33 PM Alex K  wrote:

> Hi all,
>
> I have a router with multiple interfaces and need to account traffic
> at its several WAN interfaces. My purpose is to account the traffic with
> the tuple details and the direction.
>
> As a test I have compiled the following simple configuration for pmacctd:
>
> !
> daemonize: true
> plugins: print[wan0_in], print[wan0_out]
> print_refresh_time: 10
> print_history: 15m
> !
> print_output[wan0_in]: csv
> print_output_file[wan0_in]: in_traffic.csv
> print_output[wan0_out]: csv
> print_output_file[wan0_out]: out_traffic.csv
> !
> aggregate[wan0_in]: src_host, dst_host, src_port, dst_port, tag
> aggregate[wan0_out]: src_host, dst_host, src_port, dst_port, tag
> !
> pre_tag_filter[wan0_in]:1
> pre_tag_filter[wan0_out]:2
> !
> pcap_interface: eth0
> pre_tag_map: pretag.map
> networks_file: networks.lst
> ports_file: ports.lst
> !
>
> where pretag.map is:
> set_tag=1 filter='ether dst 52:54:00:69:a6:0b'
> set_tag=2 filter='ether src 52:54:00:69:a6:0b'
>
> and networks.lst is:
> 10.100.100.0/24
>
> It seems that the details output to the CSV are correctly filtered
> according to the tag, thus also recording the direction, based on the MAC
> address of the wan0 interface.
>
> Is this the correct approach to achieve this or is there any other
> recommended way? Do I need to use aggregate_filters?
>
> Also, although I have set a network filter to capture only 10.100.100.0/24,
> I observe several networks in/out being collected, indicating that the
> networks_file directive is ignored or I have misunderstood its purpose. My
> purpose is to collect only traffic generated from subnets that belong to
> configured interfaces of the router.
>
> Thanx for your feedback!
> Alex
>
>
>
___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

[pmacct-discussion] Pmacct configuration with direction of traffic

2020-02-20 Thread Alex K
Hi all,

I have a router with multiple interfaces and need to account traffic
at its several WAN interfaces. My purpose is to account the traffic with the
tuple details and the direction.

As a test I have compiled the following simple configuration for pmacctd:

!
daemonize: true
plugins: print[wan0_in], print[wan0_out]
print_refresh_time: 10
print_history: 15m
!
print_output[wan0_in]: csv
print_output_file[wan0_in]: in_traffic.csv
print_output[wan0_out]: csv
print_output_file[wan0_out]: out_traffic.csv
!
aggregate[wan0_in]: src_host, dst_host, src_port, dst_port, tag
aggregate[wan0_out]: src_host, dst_host, src_port, dst_port, tag
!
pre_tag_filter[wan0_in]:1
pre_tag_filter[wan0_out]:2
!
pcap_interface: eth0
pre_tag_map: pretag.map
networks_file: networks.lst
ports_file: ports.lst
!

where pretag.map is:
set_tag=1 filter='ether dst 52:54:00:69:a6:0b'
set_tag=2 filter='ether src 52:54:00:69:a6:0b'

and networks.lst is:
10.100.100.0/24

It seems that the details output to the CSV are correctly filtered
according to the tag, thus also recording the direction, based on the MAC
address of the wan0 interface.
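The direction logic that pretag.map encodes via the wan0 MAC address can be sketched as follows. This is a hypothetical helper for illustration; the real matching is done by the BPF 'ether src'/'ether dst' filters inside pmacctd:

```python
WAN0_MAC = "52:54:00:69:a6:0b"  # MAC of wan0, as used in pretag.map

def direction_tag(eth_src: str, eth_dst: str) -> int:
    # tag 1: frame addressed to wan0's MAC -> inbound;
    # tag 2: frame sent from wan0's MAC -> outbound; 0 otherwise.
    if eth_dst.lower() == WAN0_MAC:
        return 1
    if eth_src.lower() == WAN0_MAC:
        return 2
    return 0
```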

Is this the correct approach to achieve this or is there any other
recommended way? Do I need to use aggregate_filters?

Also, although I have set a network filter to capture only 10.100.100.0/24,
I observe several networks in/out being collected, indicating that the
networks_file directive is ignored or I have misunderstood its purpose. My
purpose is to collect only traffic generated from subnets that belong to
configured interfaces of the router.

Thanx for your feedback!
Alex

Re: [pmacct-discussion] pmacct on ppp interface

2019-05-30 Thread Alex K
Hi Paolo,

After a reboot I can see outgoing traffic being captured by uacctd at the
nflog:1 interface.
I run uacctd as below for debugging:

uacctd -r 5 -g 1 -P print -c 'src_host,dst_host,src_port,dst_port,proto'

The issue remains the same with the latest version. Incoming traffic is not
being captured at nflog:1 for the sim0 ppp interface. On other, non-ppp
interfaces, capturing works fine for both IN and OUT.

Thanx

On Wed, May 29, 2019 at 6:08 PM Alex K  wrote:

> Hi Paolo,
>
>
> On Wed, May 29, 2019 at 4:31 PM Alex K  wrote:
>
>> Hi Paolo,
>>
>> You just caught me doing the upgrade :)
>> I will let you know the outcome.
>> Thank you!
>>
>>
>> On Wed, May 29, 2019 at 4:17 PM Paolo Lucente  wrote:
>>
>>>
>>> Hi Alex,
>>>
>>> First things first: 1.6.1 is a release from almost 3 years ago; I can't
>>> support that - please upgrade to 1.7.3 or master code. That said, I can
>>> confirm pmacctd/uacctd should support PPP-encapsulated traffic. Also, you
>>> may send me a trace of the NFLOG traffic (as captured by tcpdump) via
>>> unicast email for some troubleshooting.
>>>
> I have installed version 1.7.4. I can confirm that I can get traffic from
> physical network interfaces or VPN (OpenVPN) tunnel interfaces that go
> inside the ppp interface. With this new version, I do not get either IN or
> OUT traffic. With the previous version, OUT traffic was being captured by
> uacctd and printed to CSV. Attached is the tcpdump capture at the nflog:1
> interface.
>
> I also ran uacctd -d -r 5 -g 1 and I get the following, which might help:
>
> WARN: [cmdline] No plugin has been activated; defaulting to in-memory
> table.
> DEBUG: [cmdline] plugin name/type: 'default'/'core'.
> DEBUG: [cmdline] plugin name/type: 'default_memory'/'memory'.
> DEBUG: [cmdline] debug:true
> DEBUG: [cmdline] sql_refresh_time:5
> DEBUG: [cmdline] uacctd_group:1
> INFO ( default/core ): Linux NetFilter NFLOG Accounting Daemon, uacctd
> (20190528-00)
> INFO ( default/core ):  '--prefix=/usr' '--enable-mysql' '--enable-nflog'
> '--enable-l2' '--enable-64bit' '--enable-traffic-bins' '--enable-bgp-bins'
> '--enable-bmp-bins' '--enable-st-bins'
> INFO ( default/core ): Reading configuration from cmdline.
> WARN ( default_memory/memory ): defaulting to SRC HOST aggregation.
> INFO ( default_memory/memory ): plugin_pipe_size=4096000 bytes
> plugin_buffer_size=280 bytes
> INFO ( default_memory/memory ): ctrl channel: obtained=212992 bytes
> target=117024 bytes
> INFO ( default/core ): Successfully connected Netlink NFLOG socket
> DEBUG ( default_memory/memory ): allocating a new memory segment.
> DEBUG ( default_memory/memory ): allocating a new memory segment.
> OK ( default_memory/memory ): waiting for data on: '/tmp/collect.pipe'
> DEBUG ( default_memory/memory ): Selecting bucket 12551.
> DEBUG ( default_memory/memory ): Selecting bucket 12551.
> DEBUG ( default_memory/memory ): Selecting bucket 12551.
> DEBUG ( default_memory/memory ): Selecting bucket 12551.
> DEBUG ( default_memory/memory ): Selecting bucket 12551.
> DEBUG ( default_memory/memory ): Selecting bucket 12551.
> DEBUG ( default_memory/memory ): Selecting bucket 12551.
> DEBUG ( default_memory/memory ): Selecting bucket 12551.
> DEBUG ( default_memory/memory ): Selecting bucket 12551.
> DEBUG ( default_memory/memory ): Selecting bucket 12551.
> DEBUG ( default_memory/memory ): Selecting bucket 12551.
> DEBUG ( default_memory/memory ): Selecting bucket 12551.
> DEBUG ( default_memory/memory ): Selecting bucket 12551.
>
> Thank you for your assistance!
>
>>
>>> Paolo
>>>
>>> On Wed, May 29, 2019 at 12:37:40PM +0300, Alex K wrote:
>>> > Hi All,
>>> >
>>> > I am facing the following issue:
>>> >
>>> > I have configured iptables to log packets coming through a ppp
>>> interface
>>> > (named sim0) using NFLOG target. These packets are forwarded to uacctd
>>> to
>>> > the respective uacctd group, as below, which are printed in a CSV file
>>> > using the print plugin:
>>> >
>>> >
>>> > iptables (mangle table):
>>> > -A INPUT -i sim0 -j NFLOG --nflog-group 1 --nflog-size 40
>>> --nflog-threshold
>>> > 10 --nflog-prefix sim0in
>>> > -A FORWARD -i sim0 -j NFLOG --nflog-group 1 --nflog-size 40
>>> > --nflog-threshold 10 --nflog-prefix sim0in
>>> > -A POSTROUTING -o sim0 -j NFLOG --nflog-group 1 --nflog-size 40
>>> > --nflog-threshold 10 --nflog-prefix sim0out
>>> >
>>> >
>>> > uacctd config

Re: [pmacct-discussion] pmacct on ppp interface

2019-05-29 Thread Alex K
Hi Paolo,

You just caught me doing the upgrade :)
I will let you know the outcome.
Thank you!


On Wed, May 29, 2019 at 4:17 PM Paolo Lucente  wrote:

>
> Hi Alex,
>
> First things first: 1.6.1 is a release from almost 3 years ago; I can't
> support that - please upgrade to 1.7.3 or master code. That said, I can
> confirm pmacctd/uacctd should support PPP-encapsulated traffic. Also, you
> may send me a trace of the NFLOG traffic (as captured by tcpdump) via
> unicast email for some troubleshooting.
>
> Paolo
>
> On Wed, May 29, 2019 at 12:37:40PM +0300, Alex K wrote:
> > Hi All,
> >
> > I am facing the following issue:
> >
> > I have configured iptables to log packets coming through a ppp interface
> > (named sim0) using NFLOG target. These packets are forwarded to uacctd to
> > the respective uacctd group, as below, which are printed in a CSV file
> > using the print plugin:
> >
> >
> > iptables (mangle table):
> > -A INPUT -i sim0 -j NFLOG --nflog-group 1 --nflog-size 40
> --nflog-threshold
> > 10 --nflog-prefix sim0in
> > -A FORWARD -i sim0 -j NFLOG --nflog-group 1 --nflog-size 40
> > --nflog-threshold 10 --nflog-prefix sim0in
> > -A POSTROUTING -o sim0 -j NFLOG --nflog-group 1 --nflog-size 40
> > --nflog-threshold 10 --nflog-prefix sim0out
> >
> >
> > uacctd config:
> > ! Collect traffic on sim0
> > daemonize: true
> > debug:  true
> > promisc:   false
> > pidfile:   /var/run/uacctd_sim0.pid
> > imt_path:  /tmp/uacctd_sim0.pipe
> > !syslog: daemon
> > logfile: /var/log/uacct/uacct_sim0.log
> > uacctd_group: 1
> > plugins: print[in_out_sim0]
> > aggregate[in_out_sim0]:src_host,dst_host,src_port,dst_port,proto
> > print_output[in_out_sim0]: csv
> > print_output_file[in_out_sim0]: /var/lib/uacctd-sim0-%Y%m%d.csv
> > print_output_file_append[in_out_sim0]: true
> > print_refresh_time: 10
> > print_history: 24h
> >
> > Outgoing traffic is received normally and logged to the CSV file.
> > Using tcpdump I can see all the in/out traffic, and the iptables counters
> > are increasing at the respective chains. The sim0 interface is dynamically
> > brought up by a ppp connection.
> >
> > Do you have any idea why uacctd is not getting those incoming packets
> > (INPUT and FORWARD chains) or how I can troubleshoot this? I am using
> > pmacct 1.6.1-1.
> >
> > Thank you!
> > Alex
>

[pmacct-discussion] pmacct on ppp interface

2019-05-29 Thread Alex K
Hi All,

I am facing the following issue:

I have configured iptables to log packets coming through a ppp interface
(named sim0) using NFLOG target. These packets are forwarded to uacctd to
the respective uacctd group, as below, which are printed in a CSV file
using the print plugin:


iptables (mangle table):
-A INPUT -i sim0 -j NFLOG --nflog-group 1 --nflog-size 40 --nflog-threshold
10 --nflog-prefix sim0in
-A FORWARD -i sim0 -j NFLOG --nflog-group 1 --nflog-size 40
--nflog-threshold 10 --nflog-prefix sim0in
-A POSTROUTING -o sim0 -j NFLOG --nflog-group 1 --nflog-size 40
--nflog-threshold 10 --nflog-prefix sim0out


uacctd config:
! Collect traffic on sim0
daemonize: true
debug:  true
promisc:   false
pidfile:   /var/run/uacctd_sim0.pid
imt_path:  /tmp/uacctd_sim0.pipe
!syslog: daemon
logfile: /var/log/uacct/uacct_sim0.log
uacctd_group: 1
plugins: print[in_out_sim0]
aggregate[in_out_sim0]:src_host,dst_host,src_port,dst_port,proto
print_output[in_out_sim0]: csv
print_output_file[in_out_sim0]: /var/lib/uacctd-sim0-%Y%m%d.csv
print_output_file_append[in_out_sim0]: true
print_refresh_time: 10
print_history: 24h

Outgoing traffic is received normally and logged to the CSV file.
Using tcpdump I can see all the in/out traffic, and the iptables counters
are increasing at the respective chains. The sim0 interface is brought up
dynamically by a ppp connection.

Do you have any idea why uacctd is not getting those incoming packets
(INPUT and FORWARD chains), or how I can troubleshoot this? I am using
pmacct 1.6.1-1.

Thank you!
Alex
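
Not from the thread, but a quick way to narrow this down (assuming tcpdump is
available on the box) is to listen on the NFLOG pseudo-interface itself and
check whether the kernel delivers the INPUT/FORWARD packets to group 1 at all:

```shell
# If incoming packets show up here but not in uacctd, the problem is on the
# uacctd side; if they don't show up at all, suspect the iptables rules or
# the dynamically created ppp interface (sim0).
tcpdump -n -vv -i nflog:1

# Cross-check the NFLOG rule counters while traffic is flowing:
iptables -t mangle -L INPUT -v -n | grep NFLOG
```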
___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

Re: [pmacct-discussion] uacctd memory usage

2018-06-25 Thread Alex K
I have no experience with RabbitMQ, but I've seen it around in high-volume
data transfer scenarios.
A message-queuing approach sounds more resilient to me, but this will need
consideration at a later point. Thanx though for the thoughts.
What I currently do is: iptables -> uacctd -> mysql -> processing of the
mysql data and further aggregation -> json file -> push to central hub
(rsync) -> import to DB -> visualize... (I've been running it like this for
at least 3 years without issues.)
It is interesting to hear how you tackle such issues.

Alex


On Mon, Jun 25, 2018 at 7:51 PM, Dariush Marsh-Mossadeghi <
dari...@gravitas.co.uk> wrote:

> Rasto's comments got me thinking…
>
> Not being privy to your application and its architecture, this may not
> work for you at all.
>
> I’ve had success in the past using rabbitmq to offload flow logs.
> We used it to deliver netflow to ELK stacks running on AWS, and it worked
> really well.
> It has the major advantage of operational resilience.
>
> Something like this:
>
> netflow sources -> nfacctd -> rabbitmq -> big long unreliable network path
> across public internet -> rabbitmq -> logstash -> elasticsearch
>
> as ever YMMV :-)
> Dariush
>
>
> On 25 Jun 2018, at 16:53, Alex K  wrote:
>
>
>
> On Mon, Jun 25, 2018 at 6:49 PM, Rasto Rickardt  wrote:
>
>> Hello,
>>
>> If there is no more RAM available, I would test the sqlite3 plugin, as
>> sqlite3 is better suited to limited-resource usage. You will possibly
>> need to change your workflow to export sqlite3 files and load them
>> somewhere else, but it should be a lot cheaper memory-wise.
>>
>> Thanx. It seems I have to stick with mysql, at least at this phase.
>
>> This one is swapping pretty heavily; I would suggest setting it to around 10
>> for 4GB RAM.
>>
>> sysctl vm.swappiness=10, so it will start using swap when available RAM
>> is around 400MB.
>>
> This is already done.
>
>
>> r.
>>
>> On 06/25/2018 05:35 PM, Alex K wrote:
>> > Thanks Dariush. Appreciate your feedback.
>> >
>> > I was testing several stripped-down kernels, compiling them with most
>> > unused modules removed. The gain was in the range of a few MB.
>> > It seems I have to find a more elegant approach to the uacctd
>> > configuration, since with the current setup I am loading several mysql
>> > plugins so as to filter traffic by direction, network and port.
>> > I was testing an edge case where these plugins may rise to 210
>> > instances, leading to this memory usage. Normally this will not be
>> > done, so the typical case uses 2.5 GB of RAM instead of 3.5 GB.
>> > Thus in normal cases it seems I am OK.
>> >
>> > Cheers
>> >
>> >
>> > On Mon, Jun 25, 2018 at 6:07 PM, Dariush Marsh-Mossadeghi
>> > mailto:dari...@gravitas.co.uk>> wrote:
>> >
>> > OK, so it’s effectively an embedded-system scenario, with
>> > fixed-configuration hardware, or similar.
>> > No silver bullets or one-liner fixes here :-\
>> >
>> > You’ve a number of options to slim down your memory footprint
>> > - Strip Debian of all the packages you don’t need, learn a lot about
>> > kernel tuning, and tune the kernel to your needs.
>> > - Start looking at one of the more embedded systems oriented
>> > distros, rather than OOTB Debian. Although Debian is skinnier than
>> > most, there’s a lot that can be done to minimise footprint. The fact
>> > that you’ve got a 64bit Debian9 running on it makes me fairly
>> > optimistic that you could run pretty much any Debian derivative.
>> > - Profile your userspace software stack to see if there are any
>> > dependencies you don’t need, strip them out. Whether you can do this
>> > will very much depend on the modularity of the components.
>> >
>> > All of the above lead you down the path of maintaining forks and
>> > compiling your own packages, to a greater or lesser extent. That’s a
>> > maintenance overhead you want to avoid unless you have to.
>> > Having said all that, if you’re looking at deploying thousands of
>> > units, the economics of software maintenance may stack up for you.
>> >
>> > HTH
>> > Dariush
>> >
>> >
>> >> On 25 Jun 2018, at 14:32, Alex K > >> <mailto:rightkickt...@gmail.com>> wrote:
>> >>
>> >> Let me change the posting s

Re: [pmacct-discussion] uacctd memory usage

2018-06-25 Thread Alex K
On Mon, Jun 25, 2018 at 6:49 PM, Rasto Rickardt  wrote:

> Hello,
>
> If there is no more RAM available, I would test the sqlite3 plugin, as
> sqlite3 is better suited to limited-resource usage. You will possibly
> need to change your workflow to export sqlite3 files and load them
> somewhere else, but it should be a lot cheaper memory-wise.
>
> Thanx. It seems I have to stick with mysql, at least at this phase.

> This one is swapping pretty heavily; I would suggest setting it to around 10
> for 4GB RAM.
>
> sysctl vm.swappiness=10, so it will start using swap when available RAM
> is around 400MB.
>
This is already done.
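
A side note for the archive (not from the thread): a sysctl set from the
command line does not survive a reboot; to persist the value suggested above,
a standard approach is a sysctl drop-in file. A sketch only; the file name is
illustrative:

```shell
# Apply immediately:
sysctl vm.swappiness=10
# Persist across reboots (file name is arbitrary):
echo 'vm.swappiness = 10' > /etc/sysctl.d/90-swappiness.conf
```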


> r.
>
> On 06/25/2018 05:35 PM, Alex K wrote:
> > Thanks Dariush. Appreciate your feedback.
> >
> > I was testing several stripped-down kernels, compiling them with most
> > unused modules removed. The gain was in the range of a few MB.
> > It seems I have to find a more elegant approach to the uacctd
> > configuration, since with the current setup I am loading several mysql
> > plugins so as to filter traffic by direction, network and port.
> > I was testing an edge case where these plugins may rise to 210
> > instances, leading to this memory usage. Normally this will not be
> > done, so the typical case uses 2.5 GB of RAM instead of 3.5 GB.
> > Thus in normal cases it seems I am OK.
> >
> > Cheers
> >
> >
> > On Mon, Jun 25, 2018 at 6:07 PM, Dariush Marsh-Mossadeghi
> > mailto:dari...@gravitas.co.uk>> wrote:
> >
> > OK, so it’s effectively an embedded-system scenario, with
> > fixed-configuration hardware, or similar.
> > No silver bullets or one-liner fixes here :-\
> >
> > You’ve a number of options to slim down your memory footprint
> > - Strip Debian of all the packages you don’t need, learn a lot about
> > kernel tuning, and tune the kernel to your needs.
> > - Start looking at one of the more embedded systems oriented
> > distros, rather than OOTB Debian. Although Debian is skinnier than
> > most, there’s a lot that can be done to minimise footprint. The fact
> > that you’ve got a 64bit Debian9 running on it makes me fairly
> > optimistic that you could run pretty much any Debian derivative.
> > - Profile your userspace software stack to see if there are any
> > dependencies you don’t need, strip them out. Whether you can do this
> > will very much depend on the modularity of the components.
> >
> > All of the above lead you down the path of maintaining forks and
> > compiling your own packages, to a greater or lesser extent. That’s a
> > maintenance overhead you want to avoid unless you have to.
> > Having said all that, if you’re looking at deploying thousands of
> > units, the economics of software maintenance may stack up for you.
> >
> > HTH
> > Dariush
> >
> >
> >> On 25 Jun 2018, at 14:32, Alex K  >> <mailto:rightkickt...@gmail.com>> wrote:
> >>
> >> Let me change the posting style... :)
> >>
> >> On Mon, Jun 25, 2018 at 3:51 PM, Dariush
> >> Marsh-Mossadeghi  >> <mailto:dari...@gravitas.co.uk>> wrote:
> >>
> >> OK, so we’re moving from bottom-posting to top-posting…
> >> that’ll make it interesting for other readers ;-)
> >>
> >> The output of free doesn’t look desperate, but it is starting
> >> to look a bit tight.
> >> You’ve got about a gig of buffer/cache, which the kernel will
> >> evict if it needs it.
> >> You’ve got 200M of genuinely free memory.
> >> What is potentially a little concerning is the gig of swap in
> >> use, that may or may not be a problem depending on what else
> >> is running on the box and how its memory use varies over time.
> >>
> >> I’m not intimately familiar with the internals of uacctd’s
> >> mysql plugins, but my advice would be:
> >>
> >> - Monitor the memory usage. If it doesn’t vary much over time
> >> you could go on for years and be just fine. Keep a close watch
> >> on swap usage, if that varies a lot or grows over time you’ll
> >> want to do something about it.
> >>
> >> Monitoring over the last 2 weeks shows that free memory fluctuates
> >> from 50 MB to 220 MB and swap remains steady. That's why I am trying
> >> to find a solution.
> >>

Re: [pmacct-discussion] uacctd memory usage

2018-06-25 Thread Alex K
Thanks Dariush. Appreciate your feedback.

I was testing several stripped-down kernels, compiling them with most unused
modules removed. The gain was in the range of a few MB.
It seems I have to find a more elegant approach to the uacctd configuration,
since with the current setup I am loading several mysql plugins so as to
filter traffic by direction, network and port.
I was testing an edge case where these plugins may rise to 210 instances,
leading to this memory usage. Normally this will not be done, so the typical
case uses 2.5 GB of RAM instead of 3.5 GB.
Thus in normal cases it seems I am OK.

Cheers


On Mon, Jun 25, 2018 at 6:07 PM, Dariush Marsh-Mossadeghi <
dari...@gravitas.co.uk> wrote:

> OK, so it’s effectively an embedded-system scenario, with fixed-configuration
> hardware, or similar.
> No silver bullets or one-liner fixes here :-\
>
> You’ve a number of options to slim down your memory footprint
> - Strip Debian of all the packages you don’t need, learn a lot about
> kernel tuning, and tune the kernel to your needs.
> - Start looking at one of the more embedded systems oriented distros,
> rather than OOTB Debian. Although Debian is skinnier than most, there’s a
> lot that can be done to minimise footprint. The fact that you’ve got a
> 64bit Debian9 running on it makes me fairly optimistic that you could run
> pretty much any Debian derivative.
> - Profile your userspace software stack to see if there are any
> dependencies you don’t need, strip them out. Whether you can do this will
> very much depend on the modularity of the components.
>
> All of the above lead you down the path of maintaining forks and compiling
> your own packages, to a greater or lesser extent. That’s a maintenance
> overhead you want to avoid unless you have to.
> Having said all that, if you’re looking at deploying thousands of units,
> the economics of software maintenance may stack up for you.
>
> HTH
> Dariush
>
>
> On 25 Jun 2018, at 14:32, Alex K  wrote:
>
> Let me change the posting style... :)
>
> On Mon, Jun 25, 2018 at 3:51 PM, Dariush Marsh-Mossadeghi  gravitas.co.uk> wrote:
>
>> OK, so we’re moving from bottom-posting to top-posting… that’ll make it
>> interesting for other readers ;-)
>>
>> The output of free doesn’t look desperate, but it is starting to look a
>> bit tight.
>> You’ve got about a gig of buffer/cache, which the kernel will evict if it
>> needs it.
>> You’ve got 200M of genuinely free memory.
>> What is potentially a little concerning is the gig of swap in use, that
>> may or may not be a problem depending on what else is running on the box
>> and how its memory use varies over time.
>>
>> I’m not intimately familiar with the internals of uacctd’s mysql plugins,
>> but my advice would be:
>>
>> - Monitor the memory usage. If it doesn’t vary much over time you could
>> go on for years and be just fine. Keep a close watch on swap usage, if that
>> varies a lot or grows over time you’ll want to do something about it.
>>
> Monitoring over the last 2 weeks shows that free memory fluctuates from
> 50 MB to 220 MB and swap remains steady. That's why I am trying to find a
> solution.
>
>
>> - Read up on OOM Killer, what it does and how it behaves. If you see OOM
>> Killer entries in your logs, it’s time to think again
>>
> I am aware of that. Thanx.
>
>>
>> - Can you put some more memory in the box? 8GB of RAM will cost you about
>> £60. How much is your time worth ?
>>
> This is not an option. What I have is 4 GB. This is not just a personal
> project. There are going to be thousands of such installations
>
>
> HTH
> Dariush
>
>
> On 25 Jun 2018, at 12:32, Alex K  wrote:
>
> Thanx for the reply.
>
> The output of free is the following:
>
> free
>               total        used        free      shared  buff/cache   available
> Mem:        4046572     2832576      152012      784240     1061984      204248
> Swap:       3906556     1086080     2820476
>
> While it is as below when stopped:
>
> free
>               total        used        free      shared  buff/cache   available
> Mem:        4046572      520352     3223884       14916      302336     3286952
> Swap:       3906556      485040     3421516
>
> It seems the mysql plugins are reserving quite some memory, as they list
> first in htop when sorted by memory.
>
> Thanx,
> Alex
>
>
> On Mon, Jun 25, 2018 at 2:22 PM, Dariush Marsh-Mossadeghi  gravitas.co.uk> wrote:
>
>>
>> On 25 Jun 2018, at 11:54, Alex K  wrote:
>>
>> Hi all,
>>
>> I have a setup with uacctd monitoring traffic of several int

Re: [pmacct-discussion] uacctd memory usage

2018-06-25 Thread Alex K
Let me change the posting style... :)

On Mon, Jun 25, 2018 at 3:51 PM, Dariush Marsh-Mossadeghi <
dari...@gravitas.co.uk> wrote:

> OK, so we’re moving from bottom-posting to top-posting… that’ll make it
> interesting for other readers ;-)
>
> The output of free doesn’t look desperate, but it is starting to look a
> bit tight.
> You’ve got about a gig of buffer/cache, which the kernel will evict if it
> needs it.
> You’ve got 200M of genuinely free memory.
> What is potentially a little concerning is the gig of swap in use, that
> may or may not be a problem depending on what else is running on the box
> and how its memory use varies over time.
>
> I’m not intimately familiar with the internals of uacctd’s mysql plugins,
> but my advice would be:
>
> - Monitor the memory usage. If it doesn’t vary much over time you could go
> on for years and be just fine. Keep a close watch on swap usage, if that
> varies a lot or grows over time you’ll want to do something about it.
>
Monitoring over the last 2 weeks shows that free memory fluctuates from 50 MB
to 220 MB and swap remains steady. That's why I am trying to find a
solution.


> - Read up on OOM Killer, what it does and how it behaves. If you see OOM
> Killer entries in your logs, it’s time to think again
>
I am aware of that. Thanx.

>
> - Can you put some more memory in the box? 8GB of RAM will cost you about
> £60. How much is your time worth ?
>
This is not an option. What I have is 4 GB. This is not just a personal
project. There are going to be thousands of such installations...

HTH
Dariush


On 25 Jun 2018, at 12:32, Alex K  wrote:

Thanx for the reply.

The output of free is the following:

free
              total        used        free      shared  buff/cache   available
Mem:        4046572     2832576      152012      784240     1061984      204248
Swap:       3906556     1086080     2820476

While it is as below when stopped:

free
              total        used        free      shared  buff/cache   available
Mem:        4046572      520352     3223884       14916      302336     3286952
Swap:       3906556      485040     3421516

It seems the mysql plugins are reserving quite some memory, as they list first
in htop when sorted by memory.

Thanx,
Alex


On Mon, Jun 25, 2018 at 2:22 PM, Dariush Marsh-Mossadeghi <
dari...@gravitas.co.uk> wrote:

>
> On 25 Jun 2018, at 11:54, Alex K  wrote:
>
> Hi all,
>
> I have a setup with uacctd monitoring traffic of several interfaces
> through NFLOG.
> With uacctd stopped I see that the server (a relatively small device with
> 4 GB of RAM) consumes 450MB of RAM. Once I start uacctd the mem usage goes
> up to 3.5 GB. I am using mysql plugin and this is running on Debian9 64 bit.
>
> Are there any tweaks I can use to put a limit on the memory usage of
> uacctd?
>
> Thanx,
> Alex
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists
>
>
>
> tl;dr: post the output of the free command when uacctd is running and I’ll
> do my best to interpret it for you :-)
>
> Is uacctd _really_ using that memory, or is it just not sitting completely unused?
>
> The Linux kernel generally does a pretty good job of keeping stuff that
> might be useful in memory, but getting rid of it very quickly if the
> space is needed for something else.
> The challenge you face is that most of the userspace tools come with a
> long list of caveats about what they appear to report vs what’s really
> happening; this is mainly due to the way the kernel shares memory between
> processes, and its use of as much memory as possible for various caches.
>
> Some threads worth reading if you’re not familiar with the ins and outs
> of Linux kernel memory management:
>
> https://www.linuxatemyram.com/
> https://stackoverflow.com/questions/4802481/how-to-see-top-processes-sorted-by-actual-memory-usage
> https://stackoverflow.com/questions/3784974/want-to-know-whether-enough-memory-is-free-on-a-linux-machine-to-deploy-a-new-app/
>
> HTH
> Dariush
>
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists
>

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists




Re: [pmacct-discussion] uacctd memory usage

2018-06-25 Thread Alex K
Thanx for the reply.

The output of free is the following:

free
              total        used        free      shared  buff/cache   available
Mem:        4046572     2832576      152012      784240     1061984      204248
Swap:       3906556     1086080     2820476

While it is as below when stopped:

free
              total        used        free      shared  buff/cache   available
Mem:        4046572      520352     3223884       14916      302336     3286952
Swap:       3906556      485040     3421516

It seems the mysql plugins are reserving quite some memory, as they list first
in htop when sorted by memory.

Thanx,
Alex
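
As a back-of-the-envelope check, the running-vs-stopped comparison can be
computed directly from the two `free` snapshots above (an illustrative script,
not from the thread; the figures are the KiB values pasted above):

```python
# Difference in "used" RAM and swap between the uacctd-running and
# uacctd-stopped snapshots (values in KiB, as `free` prints them).
running = {"used": 2832576, "swap_used": 1086080}
stopped = {"used": 520352, "swap_used": 485040}

ram_delta_kib = running["used"] - stopped["used"]
swap_delta_kib = running["swap_used"] - stopped["swap_used"]

print(f"RAM delta: {ram_delta_kib / 1024:.0f} MiB")    # ~2258 MiB
print(f"Swap delta: {swap_delta_kib / 1024:.0f} MiB")  # ~587 MiB
```

That is roughly 2.2 GiB of resident memory plus ~0.6 GiB of swap attributable
to uacctd and its plugins, consistent with the "3.5 GB vs 450 MB" observation
earlier in the thread.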


On Mon, Jun 25, 2018 at 2:22 PM, Dariush Marsh-Mossadeghi <
dari...@gravitas.co.uk> wrote:

>
> On 25 Jun 2018, at 11:54, Alex K  wrote:
>
> Hi all,
>
> I have a setup with uacctd monitoring traffic of several interfaces
> through NFLOG.
> With uacctd stopped I see that the server (a relatively small device with
> 4 GB of RAM) consumes 450MB of RAM. Once I start uacctd the mem usage goes
> up to 3.5 GB. I am using mysql plugin and this is running on Debian9 64 bit.
>
> Are there any tweaks I can use to put a limit on the memory usage of
> uacctd?
>
> Thanx,
> Alex
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists
>
>
>
> tl;dr: post the output of the free command when uacctd is running and I’ll
> do my best to interpret it for you :-)
>
> Is uacctd _really_ using that memory, or is it just not sitting completely unused?
>
> The Linux kernel generally does a pretty good job of keeping stuff that
> might be useful in memory, but getting rid of it very quickly if the
> space is needed for something else.
> The challenge you face is that most of the userspace tools come with a
> long list of caveats about what they appear to report vs what’s really
> happening; this is mainly due to the way the kernel shares memory between
> processes, and its use of as much memory as possible for various caches.
>
> Some threads worth reading if you’re not familiar with the ins and outs
> of Linux kernel memory management:
>
> https://www.linuxatemyram.com/
> https://stackoverflow.com/questions/4802481/how-to-see-top-processes-sorted-by-actual-memory-usage
> https://stackoverflow.com/questions/3784974/want-to-know-whether-enough-memory-is-free-on-a-linux-machine-to-deploy-a-new-app/
>
> HTH
> Dariush
>
> ___
> pmacct-discussion mailing list
> http://www.pmacct.net/#mailinglists
>
___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists
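
A small addendum to the advice above (illustrative Python, not from the
thread): on Linux 3.14+ the kernel exports its own estimate of memory
available without swapping, which sidesteps most of the free-output caveats
discussed here.

```python
def mem_available_kib(path="/proc/meminfo"):
    """Return the kernel's MemAvailable estimate in KiB, or None if absent.

    MemAvailable (Linux 3.14+) is the kernel's own estimate of how much
    memory a new workload could use without pushing the system into swap.
    """
    with open(path) as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1])  # second field is the KiB value
    return None
```

On the box discussed above, this would report roughly the 204248 kB that
`free` shows in its "available" column while uacctd is running.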

Re: [pmacct-discussion] pmacct 1.7.1 released !

2018-05-06 Thread Alex K
Keep up the good work Paolo and thanx for this excellent software!

Alex

On Sun, May 6, 2018 at 4:44 PM, Paolo Lucente  wrote:

> VERSION.
> 1.7.1
>
>
> DESCRIPTION.
> pmacct is a small set of multi-purpose passive network monitoring tools. It
> can account, classify, aggregate, replicate and export forwarding-plane
> data,
> ie. IPv4 and IPv6 traffic; collect and correlate control-plane data via BGP
> and BMP; collect infrastructure data via Streaming Telemetry. Each
> component
> works both as a standalone daemon and as a thread of execution for
> correlation
> purposes (ie. enrich NetFlow with BGP data).
>
> A pluggable architecture allows storing collected forwarding-plane data into
> memory tables, RDBMS (MySQL, PostgreSQL, SQLite), noSQL databases (MongoDB,
> BerkeleyDB), AMQP (RabbitMQ) and Kafka message exchanges and flat-files.
> pmacct offers customizable historical data breakdown, data enrichments like
> BGP and IGP correlation and GeoIP lookups, filtering, tagging and triggers.
> Libpcap, Linux Netlink/NFLOG, sFlow v2/v4/v5, NetFlow v5/v8/v9 and IPFIX
> are
> all supported as inputs for forwarding-plane data. Replication of incoming
> NetFlow, IPFIX and sFlow datagrams is also available. Statistics can be
> easily exported to time-series databases like ElasticSearch and InfluxDB
> and traditional tools like Cacti, RRDtool, MRTG, Net-SNMP, GNUPlot, etc.
>
> Control-plane and infrastructure data, collected via BGP, BMP and Streaming
> Telemetry, can be all logged real-time or dumped at regular time intervals
> to AMQP (RabbitMQ) and Kafka message exchanges and flat-files.
>
>
> HOMEPAGE.
> http://www.pmacct.net/
>
>
> DOWNLOAD.
> http://www.pmacct.net/pmacct-1.7.1.tar.gz
>
>
> CHANGELOG.
> + pmbgpd: introduced a BGP x-connect feature meant to map BGP peers
>   (ie. PE routers) to BGP collectors (ie. nfacctd, sfacctd) via a
>   standalone BGP daemon (pmbgpd). The aim is to facilitate operations
>   when re-sizing/re-balancing the collection infrastructure without
>   impacting (ie. re-configuring) BGP peers. bgp_daemon_xconnect_map
>   expects full pathname to a file where cross-connects are defined;
>   mapping works only against the IP source address and not the BGP
>   Router ID, only 1:1 relationships can be formed (ie. this is about
>   cross-connecting, not replication) and only one session per BGP
>   peer is supported (ie. multiple BGP agents are running on the same
>   IP address or NAT traversal scenarios are not supported [yet]).
>   A sample map is provided in 'examples/bgp_xconnects.map.example'.
> + pmbgpd: introduced a BGP Looking Glass server allowing to perform
>   queries, ie. lookup of IP addresses/prefixes or get the list of BGP
> peers, against available BGP RIBs. The server is asynchronous and
>   uses ZeroMQ as transport layer to serve incoming queries. Sample
>   C/Python LG clients are available in 'examples/lg'. A sample LG
>   server config is available in QUICKSTART. Request/Reply Looking
>   Glass formats are documented in 'docs/LOOKING_GLASS_FORMAT'.
> + pmacctd: a single daemon can now listen for traffic on multiple
>   interfaces via a polling mechanism. This can be configured via a
>   pcap_interfaces_map feature (interface/pcap_interface can still be
> used for backward compatibility to listen on a single interface). The
>   map allows to define also ifindex mapping and capturing direction on
>   a per-interface basis. The map can be reloaded at runtime via a USR2
>   signal and a sample map is in examples/pcap_interfaces.map.example.
> + Kafka plugin: dynamic partitioning via kafka_partition_dynamic and
>   kafka_partition_key knobs is introduced. The Kafka topic can contain
>   variables, ie. $peer_src_ip, $src_host, $dst_port, $tag, etc., which
>   are all computed when data is purged to the backend. This feature is
>   in addition to the existing kafka_partition feature which allows to
>   rely on the built-in Kafka partitioning to assign data statically to
>   one partition or rely dynamically on the default partitioner. The
> feature is courtesy of Corentin Neau / Codethink ( @weyfonk ).
> + Introduced rfc3339 formatted timestamps: in logs, ie. UTC timezone
> represented as yyyy-MM-ddTHH:mm:ss(.ss)Z; for aggregation primitives
>   the timestamps_rfc3339 knob can be used to enable this feature (left
>   disabled by default for backward compatibility).
> + timestamps_utc: new knob to decode timestamps to UTC timezone even
>   if the Operating System is set to a different timezone. On the goods
>   of running a system set to UTC please read Q18 of FAQS.
> + sfacctd: implemented mpls_label_top, mpls_label_bottom and
>   mpls_stack_depth primitives decoded from sFlow flow sample headers.
>   Thanks to David Barroso ( @dbarrosop ) for his support.
> + nfacctd: added support for IEs 130 (exporterIPv4Address) and 131
>   (exporterIPv6Address) when passed as part of NetFlow v9/IPFIX
>   option packets (these IEs were already supported when