Re: [pmacct-discussion] [pmacct] Can I use pmactt to measure consumption per program and month? (#16)

2016-03-24 Thread Stig Thormodsrud
I hope this isn't against the forum rules to mention another project, but
like Victor we have a lot of customers that need monthly usage.  I've been
using vnstat on my home router.  It doesn't do application breakdowns,
since that needs DPI, but if monthly usage helps then it might be worth a
try:  http://humdi.net/vnstat/

Example output:

Per month:

admin@stig-home:~$ vnstat -m

 eth0  /  monthly

       month        rx      |     tx      |    total    |   avg. rate
    ------------------------+-------------+-------------+---------------
      Oct '15     49.94 GiB |    3.67 GiB |   53.60 GiB |  167.87 kbit/s
      Nov '15    106.27 GiB |    9.26 GiB |  115.53 GiB |  373.89 kbit/s
      Dec '15    166.67 GiB |    8.37 GiB |  175.04 GiB |  548.23 kbit/s
      Jan '16    185.51 GiB |    6.83 GiB |  192.33 GiB |  602.38 kbit/s
      Feb '16    118.45 GiB |    7.06 GiB |  125.51 GiB |  420.21 kbit/s
      Mar '16    109.82 GiB |    5.70 GiB |  115.52 GiB |  476.77 kbit/s
    ------------------------+-------------+-------------+---------------
    estimated    144.71 GiB |    7.52 GiB |  152.23 GiB |


admin@stig-home:~$ vnstat -d

 eth0  /  daily

        day          rx      |     tx      |    total    |   avg. rate
     ------------------------+-------------+-------------+---------------
      02/24/16      1.93 GiB |  512.70 MiB |    2.43 GiB |  236.03 kbit/s
      02/25/16    980.94 MiB |  265.50 MiB |    1.22 GiB |  118.18 kbit/s
      02/26/16      6.34 GiB |  305.11 MiB |    6.64 GiB |  644.96 kbit/s
      02/27/16     13.19 GiB |  414.19 MiB |   13.60 GiB |    1.32 Mbit/s
      02/28/16      5.37 GiB |  224.21 MiB |    5.59 GiB |  542.47 kbit/s
      02/29/16      1.50 GiB |  450.65 MiB |    1.94 GiB |  188.62 kbit/s
      03/01/16      2.30 GiB |  214.90 MiB |    2.51 GiB |  243.49 kbit/s
      03/02/16      1.24 GiB |  177.02 MiB |    1.41 GiB |  136.85 kbit/s
      03/03/16      1.32 GiB |  203.94 MiB |    1.52 GiB |  147.29 kbit/s
      03/04/16      5.38 GiB |  248.26 MiB |    5.62 GiB |  545.56 kbit/s
      03/05/16     16.40 GiB |  371.50 MiB |   16.76 GiB |    1.63 Mbit/s
      03/06/16     14.60 GiB |  374.61 MiB |   14.97 GiB |    1.45 Mbit/s
      03/07/16      1.28 GiB |  462.27 MiB |    1.73 GiB |  167.85 kbit/s
      03/08/16      1.74 GiB |  274.97 MiB |    2.00 GiB |  194.64 kbit/s
      03/09/16    748.99 MiB |  217.34 MiB |  966.33 MiB |   91.62 kbit/s
      03/10/16      1.86 GiB |  209.54 MiB |    2.06 GiB |  200.37 kbit/s
      03/11/16      5.02 GiB |  262.04 MiB |    5.28 GiB |  512.60 kbit/s
      03/12/16     10.41 GiB |  304.21 MiB |   10.70 GiB |    1.04 Mbit/s
      03/13/16      9.72 GiB |  326.63 MiB |   10.04 GiB |  974.51 kbit/s
      03/14/16      1.83 GiB |  197.54 MiB |    2.02 GiB |  196.01 kbit/s
      03/15/16      3.39 GiB |  186.71 MiB |    3.57 GiB |  346.90 kbit/s
      03/16/16      1.40 GiB |  185.31 MiB |    1.58 GiB |  153.47 kbit/s
      03/17/16      1.37 GiB |  180.40 MiB |    1.55 GiB |  150.07 kbit/s
      03/18/16      1.71 GiB |  213.81 MiB |    1.91 GiB |  185.81 kbit/s
      03/19/16     12.28 GiB |  412.81 MiB |   12.69 GiB |    1.23 Mbit/s
      03/20/16      8.77 GiB |  284.41 MiB |    9.04 GiB |  878.12 kbit/s
      03/21/16    872.39 MiB |  135.04 MiB |    0.98 GiB |   95.52 kbit/s
      03/22/16      3.39 GiB |  192.81 MiB |    3.58 GiB |  347.33 kbit/s
      03/23/16      2.68 GiB |  149.60 MiB |    2.82 GiB |  274.16 kbit/s
      03/24/16    179.13 MiB |   55.53 MiB |  234.66 MiB |   39.25 kbit/s
     ------------------------+-------------+-------------+---------------
     estimated       315 MiB |      97 MiB |     412 MiB |



On Thu, Mar 24, 2016 at 12:18 PM, Paolo Lucente wrote:

> Dear Victor: no, pmacct is not meant to count traffic per application on a
> system. It is an interesting case but I also can't help pointing you to an
> application able to do that. Cheers, Paolo
>
___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

Re: [pmacct-discussion] json output of iface

2014-02-21 Thread Stig Thormodsrud
Sure I can try that.  I'm wondering if it's a big endian issue?  My
platform is MIPS64.

stig


On Fri, Feb 21, 2014 at 11:44 AM, Paolo Lucente pa...@pmacct.net wrote:

 Hi Stig,

 I tried to reproduce the issue with no joy, ie. all works good. I
 suspect this might be something architecture specific - what CPU
 is this?

 Thing is input/output interface fields are u_int32_t and using 'i'
 would get us in trouble reading from NetFlow/sFlow exporters where
 interfaces have high numbering; can you test you actually get the
 same issue against some other fields packed as 'I', ie. src/dst
 network masks (defined as u_int8_t) and UDP/TCP ports (defined as
 u_int16_t)?

 Cheers,
 Paolo


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

Re: [pmacct-discussion] json output of iface

2014-02-21 Thread Stig Thormodsrud
Yes, the ports, vlan and tos are also jacked-up for me:

TAG, SRC_MAC,   DST_MAC,   VLAN, SRC_IP,DST_IP,
SRC_PORT, DST_PORT, PROTOCOL, TOS, PACKETS, FLOWS, BYTES
7,   dc:9f:db:28:ff:aa, 24:a4:3c:3d:51:e2, 0,10.1.0.91, 10.2.0.195,
1935, 55750,tcp,  0,   957956,  1, 1338738381


{"tag": 7, "tos": 2155905152, "ip_proto": "tcp", "mac_dst":
"24:a4:3c:3d:51:e2", "mac_src": "dc:9f:db:28:ff:aa", "vlan": 2155905152,
"ip_src": "10.1.0.91", "port_src": 8312917622912, "ip_dst": "10.2.0.195",
"port_dst": 239446582657152,
"packets": 962493, "flows": 1, "bytes": 1345089033}
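
As an aside, the bogus numbers above decompose cleanly: each one is the
expected CSV value in the upper 32 bits plus a constant 0x80808080 in the
lower 32 bits, which is consistent with a narrower field being read through
a 64-bit varargs slot on a big-endian 64-bit build.  A quick standalone
check (plain C, nothing pmacct-specific):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
  /* Values taken from the JSON output above; the CSV shows the true
   * vlan/tos, port_src and port_dst were 0, 1935 and 55750. */
  uint64_t bogus[] = { 2155905152ULL,         /* vlan / tos */
                       8312917622912ULL,      /* port_src   */
                       239446582657152ULL };  /* port_dst   */
  size_t i;

  for (i = 0; i < sizeof(bogus) / sizeof(bogus[0]); i++)
    printf("%llu = (%llu << 32) + 0x%08llx\n",
           (unsigned long long) bogus[i],
           (unsigned long long) (bogus[i] >> 32),            /* true value        */
           (unsigned long long) (bogus[i] & 0xffffffffULL)); /* always 0x80808080 */
  return 0;
}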




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

[pmacct-discussion] minor patch for geoip and json output

2014-02-21 Thread Stig Thormodsrud
I noticed when using geoip and json output that the country_ip_src was
displayed but not the country_ip_dst.  This patch seems to fix it:

diff --git a/src/pmacct.c b/src/pmacct.c
index d62ba44..b29c7a3 100644
--- a/src/pmacct.c
+++ b/src/pmacct.c
@@ -2996,7 +2996,7 @@ char *pmc_compose_json(u_int64_t wtc, u_int64_t wtc_2, u_int8_t flow_type, struc
     json_decref(kv);
   }

-  if (wtc & COUNT_DST_HOST_COUNTRY) {
+  if (wtc_2 & COUNT_DST_HOST_COUNTRY) {
     if (pbase->dst_ip_country > 0)
       kv = json_pack("{ss}", "country_ip_dst", GeoIP_code_by_id(pbase->dst_ip_country));
     else
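
For readers unfamiliar with the internals: pmc_compose_json() receives the
aggregation method as two 64-bit bitmasks (wtc and wtc_2, visible in the
hunk header above), presumably because the set of primitives outgrew a
single word, and the destination-country flag is registered in the second
one.  A toy illustration of why testing the wrong word silently drops the
field (names and bit positions below are made up, not pmacct's):

#include <stdio.h>
#include <stdint.h>

#define COUNT_DST_HOST_COUNTRY (1ULL << 2)   /* hypothetical bit in the *second* word */

static void compose(uint64_t wtc, uint64_t wtc_2)
{
  /* Buggy check: looks in the first word, where this flag is never set,
   * so country_ip_dst silently never makes it into the JSON object. */
  if (wtc & COUNT_DST_HOST_COUNTRY)
    puts("emit country_ip_dst (buggy check: never reached)");

  /* Fixed check: looks in the word the flag actually lives in. */
  if (wtc_2 & COUNT_DST_HOST_COUNTRY)
    puts("emit country_ip_dst (fixed check)");
}

int main(void)
{
  compose(0, COUNT_DST_HOST_COUNTRY);   /* aggregate includes dst_host_country */
  return 0;
}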
___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

Re: [pmacct-discussion] minor patch for geoip and json output

2014-02-21 Thread Stig Thormodsrud
Also related to GeoIP lookups, it might be worth adding to the
documentation that this is for GeoIP (legacy) version 1, not GeoIP2.  I just
googled "maxmind free geoip database" and downloaded the free GeoIP2
database (not knowing there was a GeoIP1).  When I fire up pmacct pointing
at the GeoIP2 database it segfaults in a hurry:

Program received signal SIGSEGV, Segmentation fault.
INFO ( 10.1.1.205-9996/nfprobe ): Exporting flows to [10.1.1.205]:9996
0x77f5d7b0 in _GeoIP_seek_record () from /usr/lib/libGeoIP.so.1
(gdb) where
#0  0x77f5d7b0 in _GeoIP_seek_record () from /usr/lib/libGeoIP.so.1
#1  0x77f5e03c in GeoIP_id_by_ipnum () from /usr/lib/libGeoIP.so.1
#2  0x00425300 in src_host_country_handler (chptr=<value optimized out>,
    pptrs=0x7fff3c90, data=<value optimized out>) at pkt_handlers.c:4008
#3  0x0041c388 in exec_plugins (pptrs=0x7fff3c90) at plugin_hooks.c:252
#4  0x004581c8 in pcap_cb (user=0x7fff3ee8 "", pkthdr=<value optimized out>,
    buf=<value optimized out>) at nl.c:80
#5  0x004137e0 in main (argc=3, argv=0x7fff68f4, envp=0x7fff6904) at uacctd.c:830


Of course using the right database solves that crash.
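
For anyone hitting the same thing: pmacct here links against the legacy
libGeoIP (the GeoIP.dat-style databases, as the backtrace shows), while
GeoIP2 .mmdb files are a different format read by libmaxminddb.  A small
defensive sketch using the legacy C API, with a hypothetical database path,
that checks what it was given before doing per-packet lookups (compile with
-lGeoIP):

#include <stdio.h>
#include <GeoIP.h>

int main(int argc, char **argv)
{
  const char *path = argc > 1 ? argv[1] : "/usr/share/GeoIP/GeoIP.dat";  /* example path */
  GeoIP *gi = GeoIP_open(path, GEOIP_STANDARD);

  if (!gi) {   /* libGeoIP could not open/parse the file at all */
    fprintf(stderr, "not a usable GeoIP Legacy database: %s\n", path);
    return 1;
  }

  /* Sanity-check the edition before trusting lookups; a country edition is
   * what country-code lookups like the ones above expect. */
  if (GeoIP_database_edition(gi) != GEOIP_COUNTRY_EDITION) {
    fprintf(stderr, "unexpected database edition: %d\n", GeoIP_database_edition(gi));
    GeoIP_delete(gi);
    return 1;
  }

  {
    const char *cc = GeoIP_country_code_by_addr(gi, "198.51.100.1");  /* TEST-NET-2 example */
    printf("198.51.100.1 -> %s\n", cc ? cc : "(unknown)");
  }

  GeoIP_delete(gi);
  return 0;
}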




___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

[pmacct-discussion] json output of iface

2014-02-20 Thread Stig Thormodsrud
Hi Paolo,

I've been experimenting with the new json output - nice addition.  One
thing I ran into was that the iface value was wrong.  For example here's
the csv output compared with the json output:

root@ubnt-SJ:/etc/pmacct# pmacct -p /tmp/uacctd-e.pipe -s -T bytes -O csv
TAG,IN_IFACE,OUT_IFACE,DST_IP,PACKETS,BYTES
7,7,15,10.1.6.191,55864,81578675

root@ubnt-SJ:/etc/pmacct# pmacct -p /tmp/uacctd-e.pipe -s -T bytes -O json
{"tag": 7, "ip_dst": "10.1.10.10", "iface_out": 36515643520, "iface_in":
32220676224, "packets": 59789, "bytes": 85446840}

I think the problem might be that the jansson library is treating a 16-bit
value as 64 bits.  If I change it to:

diff --git a/src/pmacct.c b/src/pmacct.c
index 2fb915a..ea79788 100644
--- a/src/pmacct.c
+++ b/src/pmacct.c
@@ -2929,13 +2929,13 @@ char *pmc_compose_json(u_int64_t wtc, u_int64_t wtc_2, u
   }

   if (wtc & COUNT_IN_IFACE) {
-    kv = json_pack("{sI}", "iface_in", pbase->ifindex_in);
+    kv = json_pack("{si}", "iface_in", pbase->ifindex_in);
     json_object_update_missing(obj, kv);
     json_decref(kv);
   }

   if (wtc & COUNT_OUT_IFACE) {
-    kv = json_pack("{sI}", "iface_out", pbase->ifindex_out);
+    kv = json_pack("{si}", "iface_out", pbase->ifindex_out);
     json_object_update_missing(obj, kv);
     json_decref(kv);
   }


Then I get:

root@ubnt-SJ:/etc/pmacct# pmacct -p /tmp/uacctd-e.pipe -s -T bytes -O json
{"tag": 7, "ip_dst": "10.1.10.10", "iface_out": 8, "iface_in": 7,
"packets": 119479, "bytes": 170724634}

Not sure if that is the correct fix.

stig
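
A standalone jansson sketch of the 'I' vs 'i' difference (illustration only,
not pmacct code; assumes jansson 2.x and linking with -ljansson).  An
alternative to switching the format to 'i' is keeping 'I' and widening the
argument explicitly, which also preserves ifindex values above 2^31:

#include <stdio.h>
#include <stdlib.h>
#include <jansson.h>

int main(void)
{
  unsigned int ifindex = 7;   /* stands in for pbase->ifindex_in (u_int32_t) */
  json_t *obj;
  char *s;

  /* json_pack("{sI}", "iface_in", ifindex) would make jansson read a 64-bit
   * json_int_t from the varargs while only a 32-bit value was passed --
   * undefined behaviour, and apparently the source of the huge numbers
   * above on big-endian MIPS64.  Two well-defined variants: */

  obj = json_pack("{si}", "iface_in", (int) ifindex);        /* the patch above: 'i' reads an int */
  s = json_dumps(obj, 0);
  printf("with 'i': %s\n", s);
  free(s);
  json_decref(obj);

  obj = json_pack("{sI}", "iface_in", (json_int_t) ifindex); /* keep 'I', widen the argument */
  s = json_dumps(obj, 0);
  printf("with 'I': %s\n", s);
  free(s);
  json_decref(obj);

  return 0;
}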
___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

[pmacct-discussion] plugin_pipe_size = 0 ???

2013-11-20 Thread Stig Thormodsrud
Hi Paolo,

The Ubiquiti fork of Vyatta is using an old version of pmacct (0.12.5), so
I'm in the process of updating it to 1.5rc1.  In testing the new code I've
noticed some differences in how the plugin_pipe_size I'm using is handled.  With the
following config:

vbash-4.1$ cat uacctd-i.conf
!
! autogenerated by /opt/vyatta/sbin/vyatta-netflow.pl
!
daemonize: true
promisc:   false
pidfile:   /var/run/uacctd-i.pid
imt_path:  /tmp/uacctd-i.pipe
imt_mem_pools_number: 169
uacctd_group: 2
uacctd_nl_size: 2097152
snaplen: 32768
refresh_maps: true
pre_tag_map: /etc/pmacct/int_map
aggregate:
tag,src_mac,dst_mac,vlan,src_host,dst_host,src_port,dst_port,proto,tos,flows
plugin_pipe_size: 10485760
plugin_buffer_size: 10240
syslog: daemon
plugins: ,sfprobe[10.1.7.227-6343]
sfprobe_receiver[10.1.7.227-6343]: 10.1.7.227:6343
sfprobe_agentip[10.1.7.227-6343]: 10.1.1.153
sfprobe_direction[10.1.7.227-6343]: in


Using that config I eventually start getting lots of the following log
message:

Nov 20 01:39:19 ubnt-netflow pmacctd[1048]: ERROR ( 10.1.7.227-6343/sfprobe
): We are missing data.
Nov 20 01:39:19 ubnt-netflow pmacctd[1048]: If you see this message once in
a while, discard it. Otherwise some solutions follow:
Nov 20 01:39:19 ubnt-netflow pmacctd[1048]: - increase shared memory size,
'plugin_pipe_size'; now: '0'.
Nov 20 01:39:19 ubnt-netflow pmacctd[1048]: - increase buffer size,
'plugin_buffer_size'; now: '0'.
Nov 20 01:39:19 ubnt-netflow pmacctd[1048]: - increase system maximum
socket size.#012


It says my pipe and buffer size are 0.   So I added some logging in
load_plugin() to see what the values were at the beginning and end of the
function and see:

root@ubnt-netflow:/etc/pmacct# uacctd -f uacctd-i.conf
load_plugins:pipe_size: 0, buffer_size 10485760
load_plugins end: pipe_size: 0, buffer_size 10485760
INFO ( default/core ): Successfully connected Netlink ULOG socket
INFO ( default/core ): Netlink receive buffer size set to 2097152
INFO ( default/core ): Netlink ULOG: binding to group 2
INFO ( default/core ): Trying to (re)load map: /etc/pmacct/int_map
INFO ( default/core ): map '/etc/pmacct/int_map' successfully (re)loaded.
INFO ( 10.1.7.227-6343/sfprobe ): Exporting flows to [10.1.7.227]:6343
INFO ( 10.1.7.227-6343/sfprobe ): Sampling at: 1/1
INFO ( 10.1.7.227-6343/sfprobe ):'plugin_pipe_size'; now: '0'.
INFO ( 10.1.7.227-6343/sfprobe ):'plugin_buffer_size'; now: '0'.

So load_plugins has pipe_size 0 and buffer_size set to the value of
plugin_pipe_size in the config file.  By the time sfprobe starts both are 0.

If I do the same with version 0.12.5 I see what I expect:

uacctd -f uacctd-i.conf
load_plugins: pipe_size: 10485760, buffer_size 10240
load_plugins end: pipe_size: 10485760, buffer_size 32888
INFO ( default/core ): Successfully connected Netlink ULOG socket
INFO ( default/core ): Netlink receive buffer size set to 2097152
INFO ( default/core ): Netlink ULOG: binding to group 2
INFO ( default/core ): Trying to (re)load map: /etc/pmacct/int_map
INFO ( default/core ): map '/etc/pmacct/int_map' successfully (re)loaded.
INFO ( 10.1.7.227-6343/sfprobe ): Exporting flows to [10.1.7.227]:6343
INFO ( 10.1.7.227-6343/sfprobe ): Sampling at: 1/1
INFO ( 10.1.7.227-6343/sfprobe ):'plugin_pipe_size'; now: '10485760'.
INFO ( 10.1.7.227-6343/sfprobe ):'plugin_buffer_size'; now: '32888'.


So has the behavior of pipe_size/buffer_size changed in the newer version
or could this be a bug?

stig
___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

Re: [pmacct-discussion] plugin_pipe_size = 0 ???

2013-11-20 Thread Stig Thormodsrud
Ok, false alarm.  I did some more debugging and noticed that one difference
between the old version and the current version is that both pipe_size and
buffer_size have changed from int to u_int64_t.  So if I change the log
messages to:

--- a/src/sfprobe_plugin/sfprobe_plugin.c
+++ b/src/sfprobe_plugin/sfprobe_plugin.c
@@ -673,8 +675,8 @@ read_data:
   if (config.debug || (rg_err_count > MAX_RG_COUNT_ERR)) {
     Log(LOG_ERR, "ERROR ( %s/%s ): We are missing data.\n", config.name, config.type);
     Log(LOG_ERR, "If you see this message once in a while, discard it. Otherwise some solutions follow:\n");
-    Log(LOG_ERR, "- increase shared memory size, 'plugin_pipe_size'; now: '%u'.\n", config.pipe_size);
-    Log(LOG_ERR, "- increase buffer size, 'plugin_buffer_size'; now: '%u'.\n", config.buffer_size);
+    Log(LOG_ERR, "- increase shared memory size, 'plugin_pipe_size'; now: '%llu'.\n", config.pipe_size);
+    Log(LOG_ERR, "- increase buffer size, 'plugin_buffer_size'; now: '%llu'.\n", config.buffer_size);
     Log(LOG_ERR, "- increase system maximum socket size.\n\n");

Then I see that my pipe and buffer size are not actually 0:

ERROR ( 10.1.7.227-6343/sfprobe ): We are missing data.
If you see this message once in a while, discard it. Otherwise some
solutions follow:
- increase shared memory size, 'plugin_pipe_size'; now: '10485760'.
- increase buffer size, 'plugin_buffer_size'; now: '32912'.
- increase system maximum socket size.


Guess I'll try increasing the values.
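
The underlying issue is just a printf-style format mismatch: '%u' makes the
Log() call consume 32 bits where a u_int64_t was passed, and on this
big-endian MIPS64 build the 32 bits it picks up are evidently the zero
high-order half, hence the misleading '0'.  A tiny standalone illustration
using plain printf (the explicit cast, or the <inttypes.h> macro, avoids
guessing whether u_int64_t is long or long long on a given platform):

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
  uint64_t pipe_size = 10485760;   /* the plugin_pipe_size value from the config above */

  /* printf("%u", pipe_size) would be undefined behaviour: only 32 of the
   * 64 bits get consumed, which is how the bogus '0' in the log came about. */

  printf("plugin_pipe_size: %llu\n", (unsigned long long) pipe_size);  /* explicit widening cast */
  printf("plugin_pipe_size: %" PRIu64 "\n", pipe_size);                /* portable format macro   */
  return 0;
}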




[pmacct-discussion] sfprobe|nfprobe dying with IMT

2013-11-20 Thread Stig Thormodsrud
Great to hear from you again too Paolo.  I knew I should've checked if you
had fixed it already.

Anyway, another issue I'm looking into.  I don't think this is a new issue
because there was a bug open for it back at Vyatta (
https://bugzilla.vyatta.com/show_bug.cgi?id=7693).  Basically if I'm using
either netflow or sflow along with IMT, often the netflow/sflow daemon
will die.  Running in the foreground I see:

root@ubnt-netflow:/etc/pmacct# uacctd -f uacctd-i.conf
OK ( default/memory ): waiting for data on: '/tmp/uacctd-i.pipe'
INFO ( default/core ): Successfully connected Netlink ULOG socket
INFO ( default/core ): Netlink receive buffer size set to 2097152
INFO ( default/core ): Netlink ULOG: binding to group 2
INFO ( default/core ): Trying to (re)load map: /etc/pmacct/int_map
INFO ( default/core ): map '/etc/pmacct/int_map' successfully (re)loaded.
INFO ( 10.1.7.227-6343/sfprobe ): Exporting flows to [10.1.7.227]:6343
INFO ( 10.1.7.227-6343/sfprobe ): Sampling at: 1/1
WARN ( default/memory ): Unable to allocate more memory pools, clear stats
manually!
INFO: connection lost to '10.1.7.227-6343-sfprobe'; closing connection.


After the 'connection lost' message the sflow daemon is gone, but IMT is
still fine.  Any thoughts on how to further debug this (other than not using
IMT ;-)?
___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

Re: [pmacct-discussion] uacctd documentation

2010-03-24 Thread Stig Thormodsrud
Ross Vandegrift wrote:
 Hello,
 
 Is there any documentation describing the setup required for uacctd?
 I'd like to check this out, but can't quite figure out all the steps I
 need to do in order to get things working.
 
 Thanks,
 Ross

Below is an example of the uacctd config that I'm running:

vya...@r1# cat /etc/pmacct/uacctd.conf
! autogenerated by /opt/vyatta/sbin/vyatta-netflow.pl
daemonize: true
promisc:   false
pidfile:   /var/run/uacctd.pid
imt_path:  /tmp/uacctd.pipe
uacctd_group: 2
refresh_maps: true
pre_tag_map: /etc/pmacct/int_map
aggregate:
tag,src_mac,dst_mac,vlan,src_host,dst_host,src_port,dst_port,proto,tos,flows
syslog: daemon
plugins: memory,nfprobe[10.1.0.21-2055]
nfprobe_receiver[10.1.0.21-2055]: 10.1.0.21:2055
nfprobe_version[10.1.0.21-2055]: 9
nfprobe_engine[10.1.0.21-2055]: 0:0


Then I add ULOG rules to iptables for the interfaces I want to account:

vya...@r1# iptables -t raw -nvL PREROUTING
Chain PREROUTING (policy ACCEPT 30 packets, 4236 bytes)
 pkts bytes target  prot opt in        out  source      destination
 1608 85539 ULOG    all  --  eth1      *    0.0.0.0/0   0.0.0.0/0    ULOG copy_range 64 nlgroup 2 queue_threshold 10
    0     0 ULOG    all  --  eth1.101  *    0.0.0.0/0   0.0.0.0/0    ULOG copy_range 64 nlgroup 2 queue_threshold 10
 4710 1027K ULOG    all  --  eth0      *    0.0.0.0/0   0.0.0.0/0    ULOG copy_range 64 nlgroup 2 queue_threshold 10


I happened to choose the raw table to see the packets before nat/firewall,
but if you hook into netfilter in the POSTROUTING chain then you can
also get the output interface in the netflow records.

cheers,

stig

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


[pmacct-discussion] pmacctd commands to match cisco ip flow-cache timeout

2009-08-19 Thread Stig Thormodsrud
I'm using pmacctd to export netflow v5 and am experimenting with a netflow
collector that suggests the following cisco flow timeout values:

ip flow-cache timeout active 1
ip flow-cache timeout inactive 15

I'm trying to figure out which nfprobe_timeout vars are the equivalent.
Would it be:

nfprobe_timeouts: maxlife=60:general=15


thanks,

stig

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] segv with memory,sfprobe plugins

2009-08-18 Thread Stig Thormodsrud
Great!  Thanks for the quick fix.

stig

 Hi Stig,
 
 thanks very much for having reported the issue. This is now solved
 in the CVS. I managed to reproduce it.
 
 It was lying in the fact that initialization of the sfprobe plugin
 was explicitly disabling the IP fragment handler in pmacctd; this
 was causing the IMT plugin, configured with L4 primitives (src_port
 for example), to crash because it expects the IP fragment handler
 to be there.
 
  The one-liner fix basically prevents sfprobe from turning the IP fragment
  handler off in case it was previously turned on (hence why the position
  of the plugins was relevant) as part of the operations of a concurrent
  plugin.
 
 Cheers,
 Paolo
 
 
 On Mon, Aug 17, 2009 at 08:27:51PM -0700, Stig Thormodsrud wrote:
  I'm getting a segv fault when using the following conf file:
 
  s...@io:~/git/pmacct-0.11.4/src$ cat pm.conf
  daemonize: false
  debug: true
  promisc: true
  pidfile:   /var/run/pmacctd-eth0.pid
  imt_path:  /tmp/pmacctd-eth0.pipe
  aggregate: src_host,dst_host,proto,src_port,dst_port,tos,flows
  interface: eth0
  !syslog: daemon
  pcap_filter: !ether src 00:15:17:0b:d2:16
  plugins: memory,sfprobe
  sfprobe_agentsubid: 5
  sfprobe_receiver: 172.16.117.25:6343
 
  s...@io:~/git/pmacct-0.11.4/src$ sudo ./pmacctd -f pm.conf
  INFO ( default/memory ): 131070 bytes are available to address shared
  memory segment; buffer size is 132 bytes.
  INFO ( default/memory ): Trying to allocate a shared memory segment of
  4325244 bytes.
  INFO ( default/sfprobe ): Pipe size obtained: 131070 / 49348.
  OK ( default/core ): link type is: 1
  DEBUG ( default/sfprobe ): Creating sFlow agent.
  INFO ( default/sfprobe ): Exporting flows to [172.16.117.25]:6343
  INFO ( default/sfprobe ): Sampling at: 1/1
  DEBUG ( default/memory ): allocating a new memory segment.
  DEBUG ( default/memory ): allocating a new memory segment.
  OK ( default/memory ): waiting for data on: '/tmp/pmacctd-eth0.pipe'
  DEBUG ( default/memory ): Selecting bucket 16151.
  Segmentation fault
 
 
  In gdb it stops at:
 
  (gdb) run -f pm.conf
  Starting program: /home/stig/git/pmacct-0.11.4/src/pmacctd -f pm.conf
  [Thread debugging using libthread_db enabled]
  INFO ( default/memory ): 131070 bytes are available to address shared
  memory segment; buffer size is 132 bytes.
  INFO ( default/memory ): Trying to allocate a shared memory segment of
  4325244 bytes.
  INFO ( default/sfprobe ): Pipe size obtained: 131070 / 49348.
  DEBUG ( default/memory ): allocating a new memory segment.
  DEBUG ( default/sfprobe ): Creating sFlow agent.
  INFO ( default/sfprobe ): Exporting flows to [172.16.117.25]:6343
  INFO ( default/sfprobe ): Sampling at: 1/1
  DEBUG ( default/memory ): allocating a new memory segment.
  OK ( default/memory ): waiting for data on: '/tmp/collect.pipe'
  OK ( default/core ): link type is: 1
  [New Thread 0xb788fa90 (LWP 23213)]
 
  Program received signal SIGSEGV, Segmentation fault.
  [Switching to Thread 0xb788fa90 (LWP 23213)]
  0x080649f3 in src_port_handler (chptr=0x80c3ce0, pptrs=0xbf90dca8,
  data=0xbf90dc6c) at pkt_handlers.c:353
  (gdb)
  (gdb) where
   #0  0x080649f3 in src_port_handler (chptr=0x80c3ce0, pptrs=0xbf90dca8,
   data=0xbf90dc6c) at pkt_handlers.c:353
   #1  0x0805d218 in exec_plugins (pptrs=0xbf90dca8) at plugin_hooks.c:219
   #2  0x08059b72 in pcap_cb (user=0xbf90de8c \031\, pkthdr=0xbf90dd88,
   buf=0x883d1ba "") at pmacctd.c:665
   #3  0xb7ebbd45 in ?? () from /usr/lib/libpcap.so.0.8
   #4  0xbf90de8c in ?? ()
   #5  0xbf90dd88 in ?? ()
   #6  0x0883d1ba in ?? ()
   #7  0x0020 in ?? ()
   #8  0xbf90dd74 in ?? ()
   #9  0xbf90dd98 in ?? ()
   #10 0x in ?? ()
   (gdb)
   (gdb) p *pptrs
   $1 = {pkthdr = 0xbf90dd88, f_agent = 0xb7e52219 "SMP", f_header = 0x0,
   f_data = 0x1 <Address 0x1 out of bounds>, f_tpl = 0x0, f_status = 0x1
   <Address 0x1 out of bounds>, idtable = 0x0, bpas_table = 0x756e694c
   <Address 0x756e694c out of bounds>, bta_table = 0xbf90e09c "\220\223",
   packet_ptr = 0x883d1ba "", mac_ptr = 0x883d1ba "", l3_proto = 2048,
   l3_handler = 0x8059c77 <ip_handler>, l4_proto = 6, tag = 0, bpas = 0, bta
   = 0, bgp_src = 0xb78900f0 "\003\210\020ii\r", bgp_dst = 0x1 <Address 0x1
   out of bounds>, bgp_peer = 0x1 <Address 0x1 out of bounds>, pf = 0,
   new_flow = 0 '\0', tcp_flags = 0 '\0', vlan_ptr = 0x0, mpls_ptr = 0x0,
   iph_ptr = 0x883d1c8 "E", tlh_ptr = 0x29370 <Address 0x29370 out of
   bounds>, payload_ptr = 0x0, class = 0, cst = {tentatives = 20 '\024',
   stamp = {tv_sec = 0, tv_usec = 0}, ba = 3213942184, pa = 25312, fa = 240
   ''}, shadow = 0 '\0', tag_dist = 1 '\001'}
  (gdb)
 
   void src_port_handler(struct channels_list_entry *chptr, struct packet_ptrs *pptrs, char **data)
   {
     struct pkt_data *pdata = (struct pkt_data *) *data;

     if (pptrs->l4_proto == IPPROTO_UDP || pptrs->l4_proto == IPPROTO_TCP)
       pdata->primitives.src_port = ntohs(((struct my_tlhdr *) pptrs->tlh_ptr)->src_port);
     else pdata->primitives.src_port = 0;
   }
 
 
  Seems like the problem

[pmacct-discussion] multiple interfaces uni-directional flows

2009-08-04 Thread Stig Thormodsrud
I notice with multiple interfaces that I get duplicate flows.  If I recall
correctly a cisco router does netflow only on input, while it seems pcap
captures both inbound & outbound packets.  My workaround to filter out
the output flows was to use a pcap_filter such as:

!
daemonize: true
promisc:   false
pidfile:   /var/run/pmacctd-eth0.pid
imt_path:  /tmp/pmacctd-eth0.pipe
plugins: nfprobe, memory
aggregate: src_host,dst_host,src_port,dst_port,proto,tos,flows,tag
interface: eth0
syslog: daemon
! filter out packets with the mac address of eth0
pcap_filter: !ether src 00:0c:29:8c:53:7c
nfprobe_receiver: 172.16.117.25:2100
nfprobe_version: 5
nfprobe_engine: 1:2
post_tag: 2


Is this the approach others are using with multiple interfaces or is there
a better way?

Thanks,

stig
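
As a side note, pcap_filter takes a standard libpcap filter expression ('!'
and 'not' are equivalent), so an expression can be sanity-checked offline
before it goes into the config.  A small sketch, assuming libpcap 1.1+ for
PCAP_NETMASK_UNKNOWN (link with -lpcap):

#include <stdio.h>
#include <pcap.h>

int main(void)
{
  /* Same exclusion as the config above, written with 'not' instead of '!'. */
  const char *filter = "not ether src 00:0c:29:8c:53:7c";
  struct bpf_program prog;
  pcap_t *p = pcap_open_dead(DLT_EN10MB, 65535);   /* dead handle: no live capture needed */

  if (!p) {
    fprintf(stderr, "pcap_open_dead failed\n");
    return 1;
  }
  if (pcap_compile(p, &prog, filter, 1, PCAP_NETMASK_UNKNOWN) == -1) {
    fprintf(stderr, "bad filter: %s\n", pcap_geterr(p));
    pcap_close(p);
    return 1;
  }
  printf("filter compiles OK: %s\n", filter);
  pcap_freecode(&prog);
  pcap_close(p);
  return 0;
}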


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


[pmacct-discussion] running pmacct on multiple interfaces of a router

2009-08-03 Thread Stig Thormodsrud
I've been searching the mail archives for info on this topic and found a
thread suggesting running an instance of pmacctd per interface (using
the ifindex as the engine_id) and using the nfprobe plugin to export all
instances to nfacctd on localhost.  Then in nfacctd I added a pre_tag_map
to map the engine_id to an id tag.  For nfacctd I was using the memory
plugin and was able to see the different interfaces in the id field.  This
all seems to work OK, but I really don't want all the netflow data on the
router, but rather exported to a netflow collector.  So then I tried to
add an nfprobe plugin to the nfacctd to get it off the router, but the
external collector only seems to see flows from one interface and all the
engine_ids are reset to 0.

The next thing I tried was to have each interface/instance of pmacctd use
the nfprobe to export to an external collector (again using engine_id per
ifindex).  This seemed to work better, but I'm still wondering if there is
a way to actually set the input interface in the netflow record going out
from nfprobe?

Thanks,

stig


___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists