[pmacct-discussion] Splitting In and Out traffic, and others questions

2014-06-23 Thread Raphael Mazelier

Hi Paolo, All,

First, I would like to thank you, Paolo, for this great piece of software!
Thanks to my predecessor (hi Pym), I already have a working pmacctd 
installation doing accounting on my network :)


I have some questions though:

I have enabled inbound accounting in my network.
I want to distinguish in and out traffic.
For now I do something like this, using a pre_tag filter:

# more /etc/pmacct/pretag.map
set_tag=100 ip=158.58.176.2 in=527
set_tag=100 ip=158.58.176.2 in=528
set_tag=100 ip=158.58.176.2 in=530
...

set_tag=200 ip=158.58.176.2 out=527
set_tag=200 ip=158.58.176.2 out=528
set_tag=200 ip=158.58.176.2 out=530
...

# more /etc/pmacct/nfacctd.conf

...
pre_tag_filter[in_hour]: 100
pre_tag_filter[out_hour]: 200
...

! sql outbound by hour
sql_refresh_time[out_hour]: 300
sql_history[out_hour]: 5m
sql_history_roundoff[out_hour]: m
sql_table[out_hour]: netflow_out_hour_%Y%m%d_%H
sql_table_schema[out_hour]: /etc/pmacct/netflow_out_hour.schema

! sql inbound by hour
sql_refresh_time[in_hour]: 300
sql_history[in_hour]: 5m
sql_history_roundoff[in_hour]: m
sql_table[in_hour]: netflow_in_hour_%Y%m%d_%H
sql_table_schema[in_hour]: /etc/pmacct/netflow_in_hour.schema


It's working well, but I wonder whether there is another, clearer or 
simpler method, because I have to maintain the pretag.map.
Or perhaps I could mix the In and Out flows in the same sql table (but 
that would make the table much bigger).


A side question about the pretag filter: the tag field in sql is always 
'0'. This is not blocking, but I wonder why?


Another question about the BGP src_as and dst_as fields:
depending on the direction, either src_as or dst_as is correctly 
filled, but the other is always '0'. I would assume it should be my AS 
number? Do I have to deal with a network filter?



I have many other questions, but for now I think that is sufficient :)

best,


--
Raphael Mazelier
AS39605

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] 1.5.0rc3 nfacctd segfaults

2014-06-23 Thread Tim Jackson
It looks like the IMT plugin I keep clearing the statistics on is
ballooning.. It starts around 200-300 MB, then climbs up:

USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
root  4516  0.0  6.4 208372 123172 ?   Ss   Jun19   3:03
nfacctd: Core Process [default]
root  4518  0.0  6.4 210908 123264 ?   SJun19   2:40
nfacctd: Tee Plugin [fanout]
root  4553  0.0  6.5 211168 125608 ?   Ss   Jun19   3:21
nfacctd: Core Process [default]
root  4555  0.0  7.1 221392 137116 ?   SJun19   2:51
nfacctd: PostgreSQL Plugin [as]
root 10522  0.1  5.0 340124 96760 ?Ss   11:04   0:09
nfacctd: Core Process [default]
root 10524  0.2  9.2 302656 176924 ?   S11:04   0:13
nfacctd: IMT Plugin [full]
root 10525  0.3 34.1 854392 656748 ?   S11:04   0:17
nfacctd: IMT Plugin [dst]
root 12282  0.0  0.0 103256   832 pts/1S+   12:38   0:00 grep
-e USER\|nfacct

USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
root  4516  0.0  6.6 208372 128152 ?   Ss   Jun19   3:03
nfacctd: Core Process [default]
root  4518  0.0  6.6 210908 128248 ?   SJun19   2:40
nfacctd: Tee Plugin [fanout]
root  4553  0.0  6.6 211168 128440 ?   Ss   Jun19   3:21
nfacctd: Core Process [default]
root  4555  0.0  7.2 221392 139928 ?   SJun19   2:52
nfacctd: PostgreSQL Plugin [as]
root 10522  0.1 10.5 340124 203344 ?   Ss   11:04   0:10
nfacctd: Core Process [default]
root 10524  0.2 11.6 302656 222992 ?   S11:04   0:13
nfacctd: IMT Plugin [full]
root 10525  0.3 38.9 885676 748416 ?   S11:04   0:18
nfacctd: IMT Plugin [dst]
root 12306  0.2  0.7 222848 13896 ?S12:39   0:00
nfacctd: pgsql Plugin -- DB Writer [as]
root 12362  0.0  0.0 103252   784 pts/1D+   12:42   0:00 grep
-e USER\|nfacct

USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
root  4516  0.0  6.1 208372 119088 ?   Ss   Jun19   3:03
nfacctd: Core Process [default]
root  4518  0.0  6.2 210908 119184 ?   SJun19   2:40
nfacctd: Tee Plugin [fanout]
root  4553  0.0  6.6 211168 127832 ?   Ss   Jun19   3:21
nfacctd: Core Process [default]
root  4555  0.0  7.2 221392 139260 ?   SJun19   2:52
nfacctd: PostgreSQL Plugin [as]
root 10522  0.1 10.8 340124 208884 ?   Ss   11:04   0:10
nfacctd: Core Process [default]
root 10524  0.2 11.0 302656 211920 ?   S11:04   0:13
nfacctd: IMT Plugin [full]
root 10525  0.3 40.0 901516 769124 ?   S11:04   0:19
nfacctd: IMT Plugin [dst]
root 12401  0.0  0.0 103252   652 pts/1D+   12:45   0:00 grep
-e USER\|nfacct


The [full] IMT is never cleared and doesn't seem to exhibit this
behavior... I'm now performing the queries in this instance with a
lock as well.

On Sat, Jun 21, 2014 at 10:05 AM, Paolo Lucente pa...@pmacct.net wrote:
 Hi Tim,

 Can you please track down memory utilization to see if it could
 be something related to that? Also, can you try performing a query
 with lock:

 shell> pmacct -l <.. parameters ..>

 If none of this helps, then yes, proceed to capture segfault data
 with gdb.

 Cheers,
 Paolo

 On Fri, Jun 20, 2014 at 11:45:57AM -0700, Tim Jackson wrote:
 We're having some issues using nfacctd with IMT.. After running for
 ~6-8 hours ingesting flow data, we see segfaults, and the pmacct client
 ceases to function properly, returning:

 ERROR: missing EOF from server

 Querying pmacct client every 2 minutes with:

 pmacct -p nfacctd-dst.pipe -O json -a -c tag2 -M 2;3 -T packets,1000

 If that returns data, we then:

 pmacct -p nfacctd-dst.pipe -e

 Associated segfault from nfacctd daemon:

 Jun 20 10:32:02 kernel: nfacctd[21874]: segfault at 21 ip
 0047613d sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
 Jun 20 10:36:02 kernel: nfacctd[21930]: segfault at 21 ip
 0047613d sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
 Jun 20 10:40:02 kernel: nfacctd[21983]: segfault at 21 ip
 0047613d sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
 Jun 20 10:46:02 kernel: nfacctd[22068]: segfault at 21 ip
 0047613d sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
 Jun 20 10:54:02 kernel: nfacctd[22188]: segfault at 21 ip
 0047613d sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
 Jun 20 11:02:02 kernel: nfacctd[22350]: segfault at 21 ip
 0047613d sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
 Jun 20 11:04:02 kernel: nfacctd[22374]: segfault at 21 ip
 0047613d sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
 Jun 20 11:32:02 kernel: nfacctd[22903]: segfault at 4d8e6600 ip
 00476103 sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]

 nfacctd Config:

 !daemonize: true
 nfacctd_port: 5680
 plugins: memory[full], memory[dst]

 aggregate[full]: tag, tag2, in_iface, out_iface, src_as, dst_as,
 src_host, dst_host, proto, src_port, dst_port, tcpflags, ext_comm,
 

Re: [pmacct-discussion] Splitting In and Out traffic, and others questions

2014-06-23 Thread Paolo Lucente
Hi Raphael,

Thanks for your kind words about the pmacct project. 

In-line:

On Mon, Jun 23, 2014 at 02:30:35PM +0200, Raphael Mazelier wrote:

 It's working well, but I wonder whether there is another, clearer or
 simpler method, because I have to maintain the pretag.map.
 Or perhaps I could mix the In and Out flows in the same sql table (but
 that would make the table much bigger).

For sure you have to maintain a map to say what is input and what
is output - it would be great to find a way that is as static as
possible for you. What comes to mind for the purpose - it all depends
on whether you have downstream ASNs, get at least a BGP feed or get
src_as and dst_as populated from NetFlow, get MAC addresses from
NetFlow, etc. - is that you can use ASNs, IP prefixes, MAC addresses
or interfaces (this last one is what you are doing at present).

For example, should you not have downstream ASNs and get src_as and
dst_as correctly populated by your router(s) via NetFlow, you could
simply match input traffic as dst_as=0 and output traffic as src_as=0
in your pre_tag_map.
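
That suggestion could be sketched as a pretag.map along these lines (a
sketch only; it assumes the exporter at 158.58.176.2 populates
src_as/dst_as in its NetFlow records, and reuses the tag values from the
map earlier in the thread):

```
! hypothetical pretag.map: classify direction by ASN instead of interface
! inbound: destination is own address space, so dst_as is left at 0
set_tag=100 ip=158.58.176.2 dst_as=0
! outbound: source is own address space, so src_as is left at 0
set_tag=200 ip=158.58.176.2 src_as=0
```

This removes the need to track interface index changes, at the cost of
relying on the router's AS lookups being correct.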

 A side question about the pretag filter: the tag field in sql is
 always '0'. This is not blocking, but I wonder why?

Is 'tag' part of your aggregation scheme, i.e. the 'aggregate' keyword
in your config? If not, then that's the reason: zero is simply the
default value imposed on the field in the SQL schema. 
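
For instance, a minimal sketch (the plugin name and the other
aggregation primitives are illustrative; including 'tag' in 'aggregate'
is the point):

```
! include 'tag' in the aggregation so the pre_tag value is written to SQL
aggregate[in_hour]: tag, src_host, dst_host
```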

 Another question about the BGP src_as and dst_as fields:
 depending on the direction, either src_as or dst_as is correctly
 filled, but the other is always '0'. I would assume it should be my
 AS number? Do I have to deal with a network filter?

Correct: when the ASN is zero, it's traffic delivered to/sourced by
your own IP address space. You won't see your own ASN filled in - just
as you don't see it in your own BGP routing table. But you can use some
tricks, e.g. a networks_map, to do that. Let me know if you are
interested.
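
As a sketch of that trick, assuming the networks_file directive and its
`asn,prefix` line format (the prefix length below is illustrative, not
taken from this thread):

```
! hypothetical /etc/pmacct/networks.lst, referenced via networks_file
! maps own address space to own ASN so it no longer shows up as 0
39605,158.58.176.0/24
```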

Cheers,
Paolo



Re: [pmacct-discussion] 1.5.0rc3 nfacctd segfaults

2014-06-23 Thread Paolo Lucente
Hi,

Can you then verify/confirm whether it's the clearing of the statistics
that generates the issue? Determining how to reproduce it would help a
lot in solving the bug quickly.

Cheers,
Paolo

On Mon, Jun 23, 2014 at 12:47:19PM -0700, Tim Jackson wrote:
 It looks like the IMT plugin I keep clearing the statistics on is
 ballooning.. It starts around 200-300 MB, then climbs up:
 
 [...]
 
 The [full] IMT is never cleared and doesn't seem to exhibit this
 behavior... I'm now performing the queries in this instance with a
 lock as well.
 
 [...]