Re: [pmacct-discussion] Next-hop not populated when using networks file

2014-04-08 Thread Joan
Ok, I think I got it now (still not working though); there were several
wrong assumptions on my part:

- The next hop is only (logically) stored for outgoing packets.

- I am using nfsen (nfcapd) to capture the flows. By default, nfcapd
captures NetFlow v9 but stores only extensions 1 (input/output interface SNMP
numbers) and 2 (src/dst AS numbers); the next-hop IP address is extension 4.
So I had to reconfigure nfsen so it adds -T +4 to the nfcapd daemon.

- A very nice way to debug the flow data is with tshark (even on
non-standard ports):
  tshark -i eth1 host 192.168.1.22 -d udp.port==2591,cflow -s0 -V
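For reference, the nfsen change from the second point can be sketched in nfsen.conf roughly like this (the source name, port, and the 'optarg' key are assumptions based on a stock nfsen setup; adapt to your own %sources):

```perl
# nfsen.conf: pass -T +4 to nfcapd so extension 4 (IPv4 next hop)
# is stored in addition to the default extensions 1 and 2
%sources = (
    'router1' => { 'port' => '2591', 'col' => '#0000ff',
                   'type' => 'netflow', 'optarg' => '-T +4' },
);
```

After changing %sources, nfsen usually needs a reconfig/restart for the new nfcapd options to take effect.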

Thanks for all your help,

Joan


2014-04-07 20:56 GMT+02:00 Paolo Lucente pa...@pmacct.net:

 Hi Joan,

 I've just tried to reproduce the issue with latest CVS with
 no luck, ie. BGP next-hop information is inserted just fine.

 If you make a pcap capture of the NetFlow traffic produced
 by nfprobe (or are able to debug NetFlow v9 templates in the
 collector tool) do you reckon the BGP next-hop field is part
 of the template (and hence left as 0.0.0.0)?

 Cheers,
 Paolo

 On Mon, Apr 07, 2014 at 04:37:29PM +0200, Joan wrote:
  Just tried it; it seems that pmacct isn't adding the next-hop
  information yet. This is my current config; I added the
  peer_src_ip,peer_dst_ip primitives and the nfacctd_net: file key.
  Maybe I'm missing something.
 
  ! pmacctd configuration
  !
  daemonize: true
  pidfile: /var/run/pmacctd.pid
  syslog: daemon
  !
  ! interested in in- and outbound traffic
  !aggregate: src_host,dst_host,dst_as,src_as,src_port,dst_port,proto,tos
  aggregate: src_host,dst_host,dst_as,src_as,src_port,dst_port,proto,tos,peer_src_ip,peer_dst_ip
  ! on this network
  !pcap_filter: net 0.0.0.0/0
  ! on this interface
  interface: eth0
  !
  plugins: nfprobe
  networks_file: /etc/pmacct/networks.lst
  refresh_maps: true
  nfprobe_receiver: 192.168.1.123:2591
  nfprobe_version: 9
  pmacctd_as: file
  ! added after last email
  nfacctd_net: file
  !plugin_pipe_size: 2048000
  !plugin_buffer_size: 2048
  plugin_pipe_size: 4096000
  plugin_buffer_size: 4096
  debug: false
 
 
 
  Sample file:
   123.123.123.123,17766,223.255.235.0/24
   123.123.123.123,56000,223.255.236.0/24
   123.123.123.123,56000,223.255.237.0/24
   123.123.123.123,56000,223.255.238.0/24
   123.123.123.123,56000,223.255.239.0/24
   123.123.123.123,55649,223.255.240.0/22
   123.123.123.123,55649,223.255.240.0/24
   123.123.123.123,55649,223.255.241.0/24
   123.123.123.123,55649,223.255.242.0/24
   123.123.123.123,55649,223.255.243.0/24
   123.123.123.123,45954,223.255.244.0/24
   123.123.123.123,45954,223.255.245.0/24
   123.123.123.123,45954,223.255.246.0/24
   123.123.123.123,45954,223.255.247.0/24
   123.123.123.123,55415,223.255.254.0/24
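Since a malformed networks_file line silently breaks the lookup, a quick format sanity check can help. The helper below is hypothetical (not part of pmacct); it only verifies that each `nexthop,asn,prefix` line parses cleanly:

```python
import ipaddress

def parse_networks_file(lines):
    """Parse pmacct networks_file lines of the form 'nexthop,asn,prefix'.

    Hypothetical validation helper, not part of pmacct itself; it simply
    checks that each field parses as an IP, an integer ASN, and a prefix."""
    entries = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith('!'):
            continue  # skip blanks and comments
        nexthop, asn, prefix = line.split(',')
        entries.append((ipaddress.ip_address(nexthop),
                        int(asn),
                        ipaddress.ip_network(prefix)))
    return entries

sample = [
    "123.123.123.123,17766,223.255.235.0/24",
    "123.123.123.123,55649,223.255.240.0/22",
]
entries = parse_networks_file(sample)
assert entries[0][1] == 17766
```

Running this over the real file before reloading pmacct catches typos such as a prefix that is not aligned to its mask.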
 
 
 
 
  2014-04-07 16:16 GMT+02:00 Joan aseq...@gmail.com:
 
  The date I have in the checkout folder is Feb 17th, so it's probably from
  around then (it's also trunk code); I'll update to the current head and
  test it again.
  
  
  
   2014-04-05 4:22 GMT+02:00 Paolo Lucente pa...@pmacct.net:
  
   Hi Joan,
  
   Can you confirm you do not run a CVS build past Feb, 5th
   and you want the BGP next-hop taken from a networks_file
   in conjunction with the nfprobe plugin? If yes, you should
   be sorted if downloading latest CVS:
  
   https://www.mail-archive.com/pmacct-commits@pmacct.net/msg00981.html
  
   For the BGP next-hop to be taken from a networks_file you
   should also configure nfacctd_net to 'file': as you might
   see from docs that's the one influencing 'peer_dst_ip' (or
   BGP next-hop). Let me know if this is of help.
  
   Cheers,
   Paolo
  
   On Fri, Apr 04, 2014 at 11:39:28AM +0200, Joan wrote:
I am using a networks_file such as this, with 123.123.123.123 as the next
hop; I do have other BGP providers for other routes.
   
123.123.123.123,17766,223.255.235.0/24
123.123.123.123,56000,223.255.236.0/24
123.123.123.123,56000,223.255.237.0/24
123.123.123.123,56000,223.255.238.0/24
123.123.123.123,56000,223.255.239.0/24
123.123.123.123,55649,223.255.240.0/22
123.123.123.123,55649,223.255.240.0/24
123.123.123.123,55649,223.255.241.0/24
123.123.123.123,55649,223.255.242.0/24
123.123.123.123,55649,223.255.243.0/24
123.123.123.123,45954,223.255.244.0/24
123.123.123.123,45954,223.255.245.0/24
123.123.123.123,45954,223.255.246.0/24
123.123.123.123,45954,223.255.247.0/24
123.123.123.123,55415,223.255.254.0/24
   
   
The issue I am having is that although the AS numbers are properly
populated, the BGPNextHop field is always 0.0.0.0.
   
I am using this aggregate list:
aggregate:
  
 src_host,dst_host,dst_as,src_as,src_port,dst_port,proto,tos,peer_src_ip,peer_dst_ip
   
   
From the config keys (http://wiki.pmacct.net/OfficialConfigKeys) I
   read:
 when 'true' ('file' being an alias of 'true') it instructs 

[pmacct-discussion] Multiple plugins/ summary pmacct.conf/sfacct.conf.

2014-04-08 Thread Rik Bruggink - Fundaments B.V.
Hello List,

I started using pmacct a while ago to monitor our traffic streams, and now I
want to segment the traffic into two different tables: one with a five-minute
average and one with a daily average.

I am using this config to do so:

! sfacctd configuration

daemonize: true
syslog: user

interface: eth0

plugins: mysql[inbound5m], mysql[outbound5m], mysql[inbounddaily], 
mysql[outbounddaily]

sql_table_version: 7
sql_optimize_clauses: true
sql_refresh_time: 60
sql_dont_try_update: true
sql_use_copy: true
sql_host: hostname
sql_passwd: password
sql_db: pmacct
sql_user: pmacct

sql_history[inbound5m]: 5m
sql_history_roundoff[inbound5m]: m
sql_table[inbound5m]: acct_v7_in_5m
aggregate[inbound5m]: dst_host
aggregate_filter[inbound5m]: vlan and (src net 10.0.0.0/8) and not (dst net 
10.0.0.0/8)

sql_history[outbound5m]: 5m
sql_history_roundoff[outbound5m]: m
sql_table[outbound5m]: acct_v7_out_5m
aggregate[outbound5m]: src_host
aggregate_filter[outbound5m]: vlan and (src net 10.0.0.0/8) and not (dst net 
10.0.0.0/8)

sql_history[inbounddaily]: 1d
sql_history_roundoff[inbounddaily]: d
sql_table[inbounddaily]: acct_v7_in_daily
aggregate[inbounddaily]: dst_host
aggregate_filter[inbounddaily]: vlan and (src net 10.0.0.0/8) and not (dst net 
10.0.0.0/8)

sql_history[outbounddaily]: 1d
sql_history_roundoff[outbounddaily]: d
sql_table[outbounddaily]: acct_v7_out_daily
aggregate[outbounddaily]: src_host
aggregate_filter[outbounddaily]: vlan and (src net 10.0.0.0/8) and not (dst net 
10.0.0.0/8)

sfacctd_renormalize: true

sample mysql table structure:

CREATE TABLE acct_v7_out_5m (
  agent_id int NOT NULL,
  class_id char(16) NOT NULL,
  mac_src char(17) NOT NULL,
  mac_dst char(17) NOT NULL,
  vlan int NOT NULL,
  as_src int NOT NULL,
  as_dst int NOT NULL,
  ip_src char(45) NOT NULL,
  ip_dst char(45) NOT NULL,
  src_port int NOT NULL,
  dst_port int NOT NULL,
  tcp_flags int NOT NULL,
  ip_proto char(6) NOT NULL,
  tos int NOT NULL,
  packets int NOT NULL,
  bytes bigint NOT NULL,
  flows int NOT NULL,
  stamp_inserted datetime NOT NULL,
  stamp_updated datetime DEFAULT NULL,
  PRIMARY KEY 
(agent_id,class_id,mac_src,mac_dst,vlan,as_src,as_dst,ip_src,ip_dst,src_port,dst_port,ip_proto,tos,stamp_inserted),
  KEY stamp_inserted (stamp_inserted),
  KEY bytes (bytes),
  KEY ip_src (ip_src),
  KEY ip_dst (ip_dst)
);

The problem I'm facing now is that it reports a duplicate key error for the
SQL service. Is the config and table structure I am using correct, or am I
missing something?

With kind regards,

Rik Bruggink

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

[pmacct-discussion] a single aggregate misses almost all traffic

2014-04-08 Thread Johannes Formann
Hi,

I have a strange problem again. I already tested the newest CVS version but it 
persists:

I use four aggregates:
 - inbound: incoming traffic for local IPs
 - outbound: outgoing traffic for local ips
 - TMPflowSRC: short time local outgoing udp traffic (with a short port list)
 - TMPflowDST: short time udp traffic (after destination address) to be able to 
identify potential outgoing ddos

Everything worked fine for some time, but today I found that TMPflowSRC
accounts almost no traffic anymore, and what it does account is mostly IPv6
addresses. Sadly, IPv6 traffic in this network is less than 4%, so there must
be some fault in my configuration.

With debug enabled, the initialization of the three aggregates with a network
filter looks identical, so the problem shouldn't be there. Commenting out the
ports file didn't help either.

My configuration:

! pmacctd configuration
!
!
!
daemonize: true
pidfile: /var/run/pmacctd.pid
syslog: daemon
!
aggregate[inbound]: dst_host
aggregate[outbound]: src_host,proto
aggregate[TMPflowSRC]: src_host,src_port,proto
aggregate[TMPflowDST]: dst_host,proto
aggregate_filter[TMPflowSRC]: udp
aggregate_filter[TMPflowDST]: udp
plugins: mysql[inbound], mysql[outbound], mysql[TMPflowSRC], mysql[TMPflowDST]
sql_table[inbound]: acct_%Y_%m_in
sql_table[outbound]: acct_%Y_%m_out
sql_table[TMPflowSRC]: acct_TMPflowSRC
sql_table[TMPflowDST]: acct_TMPflowDST
sql_table_schema[inbound]: /etc/pmacct/inbound.schema
sql_table_schema[outbound]: /etc/pmacct/outbound.schema
networks_file[inbound]: /etc/pmacct/networks
networks_file[outbound]: /etc/pmacct/networks
networks_file[TMPflowSRC]: /etc/pmacct/networks
ports_file[TMPflowSRC]: /etc/pmacct/portsudp 

networks_file_filter: true

interface: eth0

! storage methods
sql_db: pmacct
sql_table_version: 4 
sql_passwd: secret
sql_user: pmacct
sql_refresh_time: 60
sql_optimize_clauses: true
sql_history: 1m 
sql_history_roundoff: m
sql_multi_values: 1200
sql_cache_entries: 64000

pmacctd_flow_buffer_buckets: 4096

sql_dont_try_update: true

plugin_buffer_size: 163840
plugin_pipe_size: 4096

Any ideas where to look?

greetings

Johannes 


Re: [pmacct-discussion] pmacct performance

2014-04-08 Thread Paolo Lucente
Hi Stathis,

Since you use PF_RING, you can review an advice i gave to Joan a
couple months back when he was asking how to scale up a pmacctd
deployment; see specifically the replication idea i gave in the
following email:

https://www.mail-archive.com/pmacct-discussion@pmacct.net/msg02447.html

Speaking specifically of the classification part: gut feeling is
this is a bit too much resources for only a single classifier that
is looking for an HTTP hostname (i'm not necessarily implying your
shared object is culprit here). It would be great if we could
debug/review this together. Shall we follow-up privately on this?

Cheers,
Paolo

On Mon, Apr 07, 2014 at 11:39:24PM +0300, Stathis Gkotsis wrote:
 Hi Paolo,
 Yes, I use PF_RING. It is both traffic rate and classification which cause
 the CPU to go to 100%. If I do not use any classifiers, CPU is around 40%;
 when I enable the classifier, CPU goes above 95%. The classifier is a shared
 library which tries to match a series of bytes in the packet payload;
 basically it searches for a hostname in the packet payload (I am interested
 in HTTP traffic).
 Thanks,
 Stathis
 
  Date: Mon, 7 Apr 2014 18:48:55 +
  From: pa...@pmacct.net
  To: pmacct-discussion@pmacct.net
  Subject: Re: [pmacct-discussion] pmacct performance
  
  Hi Stathis,
  
  Two questions on your current setup: 1) are you already using pmacct
  against a PF_RING-enabled libpcap? You made reference to this in your
  email; 2) Can you determine what makes CPU go to 100%? Is it traffic
  rate or classification? Determining this is key to steer further
  recommendations.
  
  Cheers,
  Paolo
  
  On Sun, Apr 06, 2014 at 08:17:07PM +0300, Stathis Gkotsis wrote:
   Hi all,
   
   I am using pmacctd with libpcap. My configuration is the following:
    daemonize: false
    ! only interested in HTTP traffic
    pcap_filter: port 80
    plugin_pipe_size: 10240
    plugin_buffer_size: 102400
    aggregate: src_host,dst_host,src_port,dst_port,proto,class
    classifiers: [path_to_classifier]
    snaplen: 500
    interface: any
    plugins: print
    print_num_protos: true
    print_cache_entries: 15485863
    print_output: csv
    print_time_roundoff: mhd
    print_output_file: file.%s.%Y%m%d-%H%M.txt
    print_refresh_time: 300
    I have defined one classifier and, on the machine I am using, CPU usage
    of the core process is close to 100%. I have read the relevant FAQ
    question about high CPU usage and applied what it proposes.
    The question now is how pmacct could cope with more traffic:
    - are there any other ways to optimize pmacct itself or its configuration?
    - I was thinking of launching multiple pmacctd instances, each instance
      receiving a portion of the traffic. This split could be done through a
      BPF filter. How would you split the traffic? For example, you could
      split based on one bit of the IP address... The goal would be that the
      separate instances are balanced in terms of CPU usage.
    - Is pmacct compiled with all relevant gcc optimizations?
    Thanks,
    Stathis
  


Re: [pmacct-discussion] Multiple plugins/ summary pmacct.conf/sfacct.conf.

2014-04-08 Thread Paolo Lucente
Hi Rik,

This is because you violate the index on the table. You have
to either align sql_refresh_time with sql_history or introduce
an auto-increment field and make it part of the index. This is
all explained in the following page:

http://wiki.pmacct.net/CustomizingTheSqlIndexes
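The auto-increment variant can be sketched along these lines (a hedged example against the acct_v7_in_daily table; the column name `id` and the secondary key are assumptions, not the exact recipe from the wiki page):

```sql
-- Add a surrogate auto-increment key so periodic INSERTs with
-- sql_dont_try_update no longer collide on the old composite key.
ALTER TABLE acct_v7_in_daily
  DROP PRIMARY KEY,
  ADD COLUMN id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  ADD KEY net_key (ip_src, ip_dst, stamp_inserted);
```

The alternative fix is to leave the schema alone and raise the per-plugin sql_refresh_time so only one INSERT per history bucket is attempted.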

Cheers,
Paolo

On Tue, Apr 08, 2014 at 10:22:08AM +, Rik Bruggink - Fundaments B.V. wrote:
 Hello List,
 
 I started using pmacct a while ago to monitor our traffic streams, and now I
 want to segment the traffic into two different tables: one with a five-minute
 average and one with a daily average.
 
 I am using this config to do so:
 
 ! sfacctd configuration
 
 daemonize: true
 syslog: user
 
 interface: eth0
 
 plugins: mysql[inbound5m], mysql[outbound5m], mysql[inbounddaily], 
 mysql[outbounddaily]
 
 sql_table_version: 7
 sql_optimize_clauses: true
 sql_refresh_time: 60
 sql_dont_try_update: true
 sql_use_copy: true
 sql_host: hostname
 sql_passwd: password
 sql_db: pmacct
 sql_user: pmacct
 
 sql_history[inbound5m]: 5m
 sql_history_roundoff[inbound5m]: m
 sql_table[inbound5m]: acct_v7_in_5m
 aggregate[inbound5m]: dst_host
 aggregate_filter[inbound5m]: vlan and (src net 10.0.0.0/8) and not (dst net 
 10.0.0.0/8)
 
 sql_history[outbound5m]: 5m
 sql_history_roundoff[outbound5m]: m
 sql_table[outbound5m]: acct_v7_out_5m
 aggregate[outbound5m]: src_host
 aggregate_filter[outbound5m]: vlan and (src net 10.0.0.0/8) and not (dst net 
 10.0.0.0/8)
 
 sql_history[inbounddaily]: 1d
 sql_history_roundoff[inbounddaily]: d
 sql_table[inbounddaily]: acct_v7_in_daily
 aggregate[inbounddaily]: dst_host
 aggregate_filter[inbounddaily]: vlan and (src net 10.0.0.0/8) and not (dst net 
 10.0.0.0/8)
 
 sql_history[outbounddaily]: 1d
 sql_history_roundoff[outbounddaily]: d
 sql_table[outbounddaily]: acct_v7_out_daily
 aggregate[outbounddaily]: src_host
 aggregate_filter[outbounddaily]: vlan and (src net 10.0.0.0/8) and not (dst 
 net 10.0.0.0/8)
 
 sfacctd_renormalize: true
 
 sample mysql table structure:
 
 CREATE TABLE acct_v7_out_5m (
   agent_id int NOT NULL,
   class_id char(16) NOT NULL,
   mac_src char(17) NOT NULL,
   mac_dst char(17) NOT NULL,
   vlan int NOT NULL,
   as_src int NOT NULL,
   as_dst int NOT NULL,
   ip_src char(45) NOT NULL,
   ip_dst char(45) NOT NULL,
   src_port int NOT NULL,
   dst_port int NOT NULL,
   tcp_flags int NOT NULL,
   ip_proto char(6) NOT NULL,
   tos int NOT NULL,
   packets int NOT NULL,
   bytes bigint NOT NULL,
   flows int NOT NULL,
   stamp_inserted datetime NOT NULL,
   stamp_updated datetime DEFAULT NULL,
   PRIMARY KEY 
 (agent_id,class_id,mac_src,mac_dst,vlan,as_src,as_dst,ip_src,ip_dst,src_port,dst_port,ip_proto,tos,stamp_inserted),
   KEY stamp_inserted (stamp_inserted),
   KEY bytes (bytes),
   KEY ip_src (ip_src),
   KEY ip_dst (ip_dst)
 );
 
 The problem I'm facing now is that it reports a duplicate key error for the
 SQL service. Is the config and table structure I am using correct, or am I
 missing something?
 
 With kind regards,
 
 Rik Bruggink
 



Re: [pmacct-discussion] pmacct performance

2014-04-08 Thread Stathis Gkotsis
Hi Paolo,
In your advice to Joan, I guess you refer to Libzero:
http://www.ntop.org/products/pf_ring/libzero-for-dna/ ; it should not be too
difficult to leverage this in pmacct...
Regards,
Stathis

 Date: Tue, 8 Apr 2014 16:51:00 +
 From: pa...@pmacct.net
 To: stathisgot...@hotmail.com
 CC: pmacct-discussion@pmacct.net
 Subject: Re: [pmacct-discussion] pmacct performance
 
 Hi Stathis,
 
 Since you use PF_RING, you can review an advice i gave to Joan a
 couple months back when he was asking how to scale up a pmacctd
 deployment; see specifically the replication idea i gave in the
 following email:
 
 https://www.mail-archive.com/pmacct-discussion@pmacct.net/msg02447.html
 
 Speaking specifically of the classification part: gut feeling is
 this is a bit too much resources for only a single classifier that
 is looking for an HTTP hostname (i'm not necessarily implying your
 shared object is culprit here). It would be great if we could
 debug/review this together. Shall we follow-up privately on this?
 
 Cheers,
 Paolo
 
 On Mon, Apr 07, 2014 at 11:39:24PM +0300, Stathis Gkotsis wrote:
  Hi Paolo,
  Yes, I use PF_RING. It is both traffic rate and classification which cause
  the CPU to go to 100%. If I do not use any classifiers, CPU is around 40%;
  when I enable the classifier, CPU goes above 95%. The classifier is a
  shared library which tries to match a series of bytes in the packet
  payload; basically it searches for a hostname in the packet payload (I am
  interested in HTTP traffic).
  Thanks,
  Stathis
  
   Date: Mon, 7 Apr 2014 18:48:55 +
   From: pa...@pmacct.net
   To: pmacct-discussion@pmacct.net
   Subject: Re: [pmacct-discussion] pmacct performance
   
   Hi Stathis,
   
   Two questions on your current setup: 1) are you already using pmacct
   against a PF_RING-enabled libpcap? You made reference to this in your
   email; 2) Can you determine what makes CPU go to 100%? Is it traffic
   rate or classification? Determining this is key to steer further
   recommendations.
   
   Cheers,
   Paolo
   
   On Sun, Apr 06, 2014 at 08:17:07PM +0300, Stathis Gkotsis wrote:
Hi all,

I am using pmacctd with libpcap. My configuration is the following:
 daemonize: false
 ! only interested in HTTP traffic
 pcap_filter: port 80
 plugin_pipe_size: 10240
 plugin_buffer_size: 102400
 aggregate: src_host,dst_host,src_port,dst_port,proto,class
 classifiers: [path_to_classifier]
 snaplen: 500
 interface: any
 plugins: print
 print_num_protos: true
 print_cache_entries: 15485863
 print_output: csv
 print_time_roundoff: mhd
 print_output_file: file.%s.%Y%m%d-%H%M.txt
 print_refresh_time: 300
 I have defined one classifier and, on the machine I am using, CPU usage
 of the core process is close to 100%. I have read the relevant FAQ
 question about high CPU usage and applied what it proposes.
 The question now is how pmacct could cope with more traffic:
 - are there any other ways to optimize pmacct itself or its configuration?
 - I was thinking of launching multiple pmacctd instances, each instance
   receiving a portion of the traffic. This split could be done through a
   BPF filter. How would you split the traffic? For example, you could
   split based on one bit of the IP address... The goal would be that the
   separate instances are balanced in terms of CPU usage.
 - Is pmacct compiled with all relevant gcc optimizations?
 Thanks,
 Stathis
   

Re: [pmacct-discussion] a single aggregate misses almost all traffic

2014-04-08 Thread Paolo Lucente
Hi Johannes,

The issue is persistent across reloads, correct? 

Enable debug only for the plugin, ie. debug[TMPflowSRC]: true,
then check the log file to see if the problem is somehow with
the database. If the log is clear, meaning no entries are written
out, I'd recommend removing anything that can potentially be
filtering, so: aggregate_filter, networks_file and ports_file;
see if this changes anything. Let's take it from there.
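Paolo's suggestion maps to a small config change; a sketch (the commented-out lines are the candidate filters to disable one at a time while testing):

```
! enable debug for the TMPflowSRC plugin only
debug[TMPflowSRC]: true
!
! then, if the log stays clear, re-enable/disable these one by one:
! aggregate_filter[TMPflowSRC]: udp
! networks_file[TMPflowSRC]: /etc/pmacct/networks
! ports_file[TMPflowSRC]: /etc/pmacct/portsudp
```

Toggling one filter per restart isolates which of the three is discarding the IPv4 traffic.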

Cheers,
Paolo
 
On Tue, Apr 08, 2014 at 05:50:50PM +0200, Johannes Formann wrote:
 Hi,
 
 I have a strange problem again. I already tested the newest CVS version but 
 it persists:
 
 I use four aggregates:
  - inbound: incoming traffic for local IPs
  - outbound: outgoing traffic for local ips
  - TMPflowSRC: short time local outgoing udp traffic (with a short port list)
  - TMPflowDST: short time udp traffic (after destination address) to be able 
 to identify potential outgoing ddos
 
 Everything worked fine for some time, but today I found that TMPflowSRC
 accounts almost no traffic anymore, and what it does account is mostly IPv6
 addresses. Sadly, IPv6 traffic in this network is less than 4%, so there
 must be some fault in my configuration.
 
 With debug enabled, the initialization of the three aggregates with a
 network filter looks identical, so the problem shouldn't be there.
 Commenting out the ports file didn't help either.
 
 My configuration:
 
 ! pmacctd configuration
 !
 !
 !
 daemonize: true
 pidfile: /var/run/pmacctd.pid
 syslog: daemon
 !
 aggregate[inbound]: dst_host
 aggregate[outbound]: src_host,proto
 aggregate[TMPflowSRC]: src_host,src_port,proto
 aggregate[TMPflowDST]: dst_host,proto
 aggregate_filter[TMPflowSRC]: udp
 aggregate_filter[TMPflowDST]: udp
 plugins: mysql[inbound], mysql[outbound], mysql[TMPflowSRC], mysql[TMPflowDST]
 sql_table[inbound]: acct_%Y_%m_in
 sql_table[outbound]: acct_%Y_%m_out
 sql_table[TMPflowSRC]: acct_TMPflowSRC
 sql_table[TMPflowDST]: acct_TMPflowDST
 sql_table_schema[inbound]: /etc/pmacct/inbound.schema
 sql_table_schema[outbound]: /etc/pmacct/outbound.schema
 networks_file[inbound]: /etc/pmacct/networks
 networks_file[outbound]: /etc/pmacct/networks
 networks_file[TMPflowSRC]: /etc/pmacct/networks
 ports_file[TMPflowSRC]: /etc/pmacct/portsudp 
 
 networks_file_filter: true
 
 interface: eth0
 
 ! storage methods
 sql_db: pmacct
 sql_table_version: 4 
 sql_passwd: secret
 sql_user: pmacct
 sql_refresh_time: 60
 sql_optimize_clauses: true
 sql_history: 1m 
 sql_history_roundoff: m
 sql_multi_values: 1200
 sql_cache_entries: 64000
 
 pmacctd_flow_buffer_buckets: 4096
 
 sql_dont_try_update: true
 
 plugin_buffer_size: 163840
 plugin_pipe_size: 4096
 
 Any ideas where to look?
 
 greetings
 
 Johannes 

Re: [pmacct-discussion] pmacct performance

2014-04-08 Thread Viacheslav Dubrovskyi

06.04.2014 20:17, Stathis Gkotsis wrote:

Hi all,

I am using pmacctd with libpcap.
I'll add my five cents: using libpcap on a highly loaded system is not a good
idea; you will lose more than 50% of the traffic. IMHO a better approach is
to use http://sourceforge.net/projects/ipt-netflow/ to generate NetFlow and
then use nfacctd to analyze and record it.
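For completeness, collecting the flows ipt-netflow exports with nfacctd might look roughly like this (a minimal sketch; the listen address, port, and aggregation are assumptions to adapt to your setup):

```
! minimal nfacctd sketch for flows exported by ipt-netflow
nfacctd_ip: 0.0.0.0
nfacctd_port: 2055
plugins: print
aggregate: src_host,dst_host,proto
```

Point the ipt-netflow destination at this host and port, then swap the print plugin for mysql or similar once the flows arrive.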


--
WBR,
Viacheslav Dubrovskyi


