Re: [pmacct-discussion] Multiple updates per interval (print plugin)

2014-02-05 Thread Ruben Laban

The cause of the memory usage has been found:


print_cache_entries: 91


This was needed when I was testing with random destination IP 
addresses. As our production environment will only be accounting for say 
1 IPs, this value might not need tweaking.
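
For anyone tuning this later: as far as I understand, the cache only
needs headroom above the number of distinct aggregation keys expected
per refresh interval, so with a small, known set of destination hosts a
much smaller value should do. A purely hypothetical sizing, not taken
from this setup:

! hypothetical value; size it above the number of distinct dst_host
! keys expected per refresh interval
print_cache_entries: 16411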


Regards,
Ruben

On 2014-02-05 08:28, Ruben Laban wrote:

Hi Paolo,

See inline...

On 2014-02-04 18:26, Paolo Lucente wrote:

Configuration looks all right. And with libpcap it supposedly can't
happen that you have packets for the current (and/or future) time
slots when writing to the file: this might happen with nfacctd
instead, i.e. in case NetFlow timestamps are in the future because
routers are not NTP sync'd.

I was wondering if plugin_buffer_size is not maybe too much for
your scenario, but then again you would not see a smaller amount
of packets before a larger one. It may be worth trying to reduce it
anyway and seeing if this has any effect.


After cutting down both plugin_buffer_size and plugin_pipe_size by a
factor of 10, I get the same behavior. Same with a factor of 100,
though then I get a lot of:

Feb  5 07:53:39 gw02 pmacctd[3806]: ERROR ( traffic/print ): We are missing data.
Feb  5 07:53:39 gw02 pmacctd[3806]: If you see this message once in a while, discard it. Otherwise some solutions follow:
Feb  5 07:53:39 gw02 pmacctd[3806]: - increase shared memory size, 'plugin_pipe_size'; now: '4024000'.
Feb  5 07:53:39 gw02 pmacctd[3806]: - increase buffer size, 'plugin_buffer_size'; now: '4024'.
Feb  5 07:53:39 gw02 pmacctd[3806]: - increase system maximum socket size.#012

(Which also shows a \n that shouldn't be there.)
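
As a general rule of thumb (not specific to this box), plugin_pipe_size
is kept a large multiple of plugin_buffer_size, so the core process has
room to queue many buffers towards the plugin before data gets dropped.
An illustrative pairing, with made-up values:

! made-up values, for illustration only: the pipe holds 1000 buffers
plugin_buffer_size: 10240
plugin_pipe_size: 10240000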


What pmacct version are you running? Should the buffering idea
above not work, would it be an option to get temporary remote
access to your box for some troubleshooting?


These tests are performed using 1.5.0rc2. As for getting access, I'll
have to see if I can place this test environment somewhere safe
network-wise so you could access it.

Not sure if it's related (was gonna keep all issues separate at
first, but perhaps they're somehow linked), but the memory usage is
something that caught my attention as well:

vanilla pmacctd:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
31909 root      20   0 2577m 2.5g 384m R   92 32.2 487:02.71 pmacctd: Print Plugin [traffic]
31908 root      20   0  397m 387m 386m R   72  4.9 381:43.56 pmacctd: Core Process [default]

pmacctd with PF_RING support:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
14556 root      20   0 2575m 2.5g 384m S   20 32.2  51:54.62 pmacctd: Print Plugin [traffic]
14555 root      20   0  394m 385m 385m R   95  4.8 196:28.39 pmacctd: Core Process [default]


The above were using the initial config, thus with sizes set to
402400/40240. When reducing those by a factor of 100, I still get
this (with PF_RING support):

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 3806 root      20   0 2195m 2.1g 4536 S   31 27.4   3:07.41 pmacctd: Print Plugin [traffic]
 3805 root      20   0 14552 5800 5240 R   97  0.1   9:54.50 pmacctd: Core Process [default]

This is on a Dell R210-II with an Intel(R) Xeon(R) CPU E31230 @
3.20GHz and 8GB RAM. Traffic being monitored is sent using pfsend on
an identically spec'ed box at the following rate:
TX rate: [current 862'368.05 pps/0.58 Gbps][average 863'190.72
pps/0.58 Gbps][total 70'765'733'449.00 pkts]

I don't expect this kind of traffic in my live environment under
normal circumstances, but one of the goals of this project is to make
sure everything keeps working properly during a (D)DoS as well.

Regards,
Ruben



Cheers,
Paolo

On Tue, Feb 04, 2014 at 08:31:42AM +0100, Ruben Laban wrote:

Hi,

I'll start with my config:

daemonize: true
pidfile: /var/run/pmacctd.pid
syslog: daemon
aggregate: dst_host
interface: eth5
plugins: print[traffic]
print_output_file[traffic]: /tmp/traffic-eth5-%Y%m%d_%H%M.txt
print_output[traffic]: csv
print_refresh_time[traffic]: 60
print_history[traffic]: 1m
plugin_buffer_size: 402400
plugin_pipe_size: 40240
print_cache_entries: 91
print_output_file_append: true
print_history_roundoff: m

What I observe is that every minute, when the data gets flushed to
disk, two files get updated: the file for the previous minute and the
file for the current minute. This leads to files containing the
following:

# for i in /tmp/traffic-eth5-20140204_09* ; do echo $i: ; cat $i ; done

/tmp/traffic-eth5-20140204_0900.txt:
DST_IP,PACKETS,BYTES
192.168.0.1,1496262,68828052
192.168.0.1,87794632,4038553072
/tmp/traffic-eth5-20140204_0901.txt:
DST_IP,PACKETS,BYTES
192.168.0.1,662553,30477438
192.168.0.1,45962195,2114260970
/tmp/traffic-eth5-20140204_0902.txt:
DST_IP,PACKETS,BYTES
192.168.0.1,1495840,68808640

(This time I'm using pfsend to send traffic through the monitored
interface, which uses a single destination IP.)

As you can see this leads to "duplicate" entries (more than one
entry per aggregate). One way to get rid of the duplicates would be
to disab

Re: [pmacct-discussion] Multiple updates per interval (print plugin)

2014-02-05 Thread Ruben Laban
Another thing I noticed was the lack of markers in the output files.
I had forgotten this is now controlled by the config. So after enabling
it, I get files like these:


--START (1391596920+60)--
DST_IP,PACKETS,BYTES
192.168.0.1,1059620,48742520
--END--
192.168.0.1,62895128,2893175888
--END--

Not sure if this is useful in any way, but I thought I'd share it.
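
For anyone wanting to reproduce this, the directive involved is, I
believe, the markers switch:

! assumed directive; frames each purge with the --START/--END markers
! shown above
print_markers: true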

Regards,
Ruben


[pmacct-discussion] Strange results on nfdump when using networks_file

2014-02-05 Thread Joan
I am trying to set up again a system to export flows with AS numbers by
using the networks_file. Since creating a full networks_file with the
script at (https://github.com/paololucente/pmacct-contrib/tree/master/st1)
failed, leaving all the AS fields as 0, I simplified the file to a minimal
case (only Google's 8.8.8.x and 8.8.4.x):


! generated by quagga_gen_as_network.pl at 20140205-11:25.51
193.149.55.94,15169,8.8.4.0/24
193.149.55.94,15169,8.8.8.0/24


Now I'm getting the src AS and dst AS set for all the traffic, as if it
were originated by and destined to Google.
I'm using the current 1.5.0rc2.
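
For reference, the directives involved boil down to something like the
following (a partial sketch reconstructed from the log below; anything
not visible there, such as the aggregation method in use, is left out):

! partial sketch, inferred from the log lines that follow
plugins: nfprobe
nfprobe_receiver: 192.168.1.123:2591
networks_file: /etc/pmacct/networks.lst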
Feb  5 11:37:43 flower pmacctd[9562]: INFO ( default/core ): Start logging ...
Feb  5 11:37:43 flower pmacctd[9562]: INFO ( default/nfprobe ): plugin_pipe_size=4096000 bytes plugin_buffer_size=4096 bytes
Feb  5 11:37:43 flower pmacctd[9562]: INFO ( default/nfprobe ): ctrl channel: obtained=163840 bytes target=4000 bytes
Feb  5 11:37:43 flower pmacctd[9562]: DEBUG ( /etc/pmacct/networks.lst ): [networks table IPv4] nh: 193.150.1.123 peer asn: 0 asn: 15169 net: 8.8.4.0 mask: 24
Feb  5 11:37:43 flower pmacctd[9563]: INFO ( default/nfprobe ): NetFlow probe plugin is originally based on softflowd 0.9.7 software, Copyright 2002 Damien Miller  All rights reserved.
Feb  5 11:37:43 flower pmacctd[9562]: DEBUG ( /etc/pmacct/networks.lst ): [networks table IPv4] nh: 193.150.1.123 peer asn: 0 asn: 15169 net: 8.8.8.0 mask: 24
Feb  5 11:37:43 flower pmacctd[9563]: INFO ( default/nfprobe ):   TCP timeout: 3600s
Feb  5 11:37:43 flower pmacctd[9563]: INFO ( default/nfprobe ):  TCP post-RST timeout: 120s
Feb  5 11:37:43 flower pmacctd[9562]: DEBUG ( /etc/pmacct/networks.lst ): IPv4 Networks Cache successfully created: 1 entries.
Feb  5 11:37:43 flower pmacctd[9563]: INFO ( default/nfprobe ):  TCP post-FIN timeout: 300s
Feb  5 11:37:43 flower pmacctd[9563]: INFO ( default/nfprobe ):   UDP timeout: 300s
Feb  5 11:37:43 flower pmacctd[9563]: INFO ( default/nfprobe ):  ICMP timeout: 300s
Feb  5 11:37:43 flower pmacctd[9563]: INFO ( default/nfprobe ):   General timeout: 3600s
Feb  5 11:37:43 flower pmacctd[9563]: INFO ( default/nfprobe ):  Maximum lifetime: 604800s
Feb  5 11:37:43 flower pmacctd[9563]: INFO ( default/nfprobe ):   Expiry interval: 60s
Feb  5 11:37:43 flower pmacctd[9562]: DEBUG ( /etc/pmacct/networks.lst ): [networks table IPv6] nh: 193.150.1.123 peer_asn: 0 asn: 15169 net: :: mask: 0
Feb  5 11:37:43 flower pmacctd[9562]: DEBUG ( /etc/pmacct/networks.lst ): [networks table IPv6] contains a default route
Feb  5 11:37:43 flower pmacctd[9562]: DEBUG ( /etc/pmacct/networks.lst ): IPv6 Networks Cache successfully created: 32771 entries.
Feb  5 11:37:43 flower pmacctd[9563]: INFO ( default/nfprobe ): Exporting flows to [192.168.1.123]:2591
Feb  5 11:37:43 flower pmacctd[9563]: DEBUG ( /etc/pmacct/networks.lst ): [networks table IPv4] nh: 193.150.1.123 peer asn: 0 asn: 15169 net: 8.8.4.0 mask: 24
Feb  5 11:37:43 flower pmacctd[9563]: DEBUG ( /etc/pmacct/networks.lst ): [networks table IPv4] nh: 193.150.1.123 peer asn: 0 asn: 15169 net: 8.8.8.0 mask: 24
Feb  5 11:37:43 flower pmacctd[9563]: DEBUG ( /etc/pmacct/networks.lst ): IPv4 Networks Cache successfully created: 1 entries.
Feb  5 11:37:43 flower pmacctd[9563]: DEBUG ( /etc/pmacct/networks.lst ): [networks table IPv6] nh: 193.150.1.123 peer_asn: 0 asn: 15169 net: :: mask: 0
Feb  5 11:37:43 flower pmacctd[9563]: DEBUG ( /etc/pmacct/networks.lst ): [networks table IPv6] contains a default route
Feb  5 11:37:43 flower pmacctd[9563]: DEBUG ( /etc/pmacct/networks.lst ): IPv6 Networks Cache successfully created: 32771 entries.
Feb  5 11:37:43 flower pmacctd[9562]: OK ( default/core ): link type is: 1

 Dst IP Addr FlowsBytes  Packets Src AS Dst AS
   209.23.235.22 1   921  15169  15169
88.26.252.71 1  3855  15169  15169
  166.78.151.214 1   871  15169  15169
88.26.252.71 1  4185  15169  15169
  162.242.162.82 1   811  15169  15169
69.28.95.170 1   801  15169  15169
69.28.95.154 1   781  15169  15169
218.189.3.34 1   761  15169  15169
   64.132.253.13 1   741  15169  15169
88.26.252.71 1  4185  15169  15169
   195.55.157.82 1  1561  15169  15169
  205.251.194.67 1   861  15169  15169
88.26.252.71 1  4185  15169  15169
   178.79.150.32 1   921  15169  15169
  176.58.111.122 1   921  15169  15169
   209.59.139.12 1   731  15169  15169
   178.79.150.32 1  1101  15169  15169
54.248.92.63 1   761  15169  15169


networks.lst
Description: Binary data

Re: [pmacct-discussion] Strange results on nfdump when using networks_file

2014-02-05 Thread Paolo Lucente
Hi Joan,

I verified the issue you describe and fixed it in the CVS. Can you give
it a try and see if that works for you?

Cheers,
Paolo
