Hi Paolo,

The Ubiquiti fork of Vyatta uses an old version of pmacct (0.12.5), so I'm in the process of updating it to 1.5rc1. While testing the new code I've noticed some differences in how the plugin_pipe_size I'm using is handled. With the following config:
vbash-4.1$ cat uacctd-i.conf
!
! autogenerated by /opt/vyatta/sbin/vyatta-netflow.pl
!
daemonize: true
promisc: false
pidfile: /var/run/uacctd-i.pid
imt_path: /tmp/uacctd-i.pipe
imt_mem_pools_number: 169
uacctd_group: 2
uacctd_nl_size: 2097152
snaplen: 32768
refresh_maps: true
pre_tag_map: /etc/pmacct/int_map
aggregate: tag,src_mac,dst_mac,vlan,src_host,dst_host,src_port,dst_port,proto,tos,flows
plugin_pipe_size: 10485760
plugin_buffer_size: 10240
syslog: daemon
plugins: sfprobe[10.1.7.227-6343]
sfprobe_receiver[10.1.7.227-6343]: 10.1.7.227:6343
sfprobe_agentip[10.1.7.227-6343]: 10.1.1.153
sfprobe_direction[10.1.7.227-6343]: in

Using that config I eventually start getting lots of the following log messages:

Nov 20 01:39:19 ubnt-netflow pmacctd[1048]: ERROR ( 10.1.7.227-6343/sfprobe ): We are missing data.
Nov 20 01:39:19 ubnt-netflow pmacctd[1048]: If you see this message once in a while, discard it. Otherwise some solutions follow:
Nov 20 01:39:19 ubnt-netflow pmacctd[1048]: - increase shared memory size, 'plugin_pipe_size'; now: '0'.
Nov 20 01:39:19 ubnt-netflow pmacctd[1048]: - increase buffer size, 'plugin_buffer_size'; now: '0'.
Nov 20 01:39:19 ubnt-netflow pmacctd[1048]: - increase system maximum socket size.

It says my pipe and buffer sizes are 0. So I added some logging in load_plugins() to see what the values were at the beginning and end of the function, and I see:

root@ubnt-netflow:/etc/pmacct# uacctd -f uacctd-i.conf
load_plugins: pipe_size: 0, buffer_size 10485760
load_plugins end: pipe_size: 0, buffer_size 10485760
INFO ( default/core ): Successfully connected Netlink ULOG socket
INFO ( default/core ): Netlink receive buffer size set to 2097152
INFO ( default/core ): Netlink ULOG: binding to group 2
INFO ( default/core ): Trying to (re)load map: /etc/pmacct/int_map
INFO ( default/core ): map '/etc/pmacct/int_map' successfully (re)loaded.
INFO ( 10.1.7.227-6343/sfprobe ): Exporting flows to [10.1.7.227]:6343
INFO ( 10.1.7.227-6343/sfprobe ): Sampling at: 1/1
INFO ( 10.1.7.227-6343/sfprobe ): 'plugin_pipe_size'; now: '0'.
INFO ( 10.1.7.227-6343/sfprobe ): 'plugin_buffer_size'; now: '0'.

So load_plugins() has pipe_size 0 and buffer_size set to the value of plugin_pipe_size from the config file. By the time sfprobe starts, both are 0.

If I do the same with version 0.12.5 I see what I expect:

uacctd -f uacctd-i.conf
load_plugins: pipe_size: 10485760, buffer_size 10240
load_plugins end: pipe_size: 10485760, buffer_size 32888
INFO ( default/core ): Successfully connected Netlink ULOG socket
INFO ( default/core ): Netlink receive buffer size set to 2097152
INFO ( default/core ): Netlink ULOG: binding to group 2
INFO ( default/core ): Trying to (re)load map: /etc/pmacct/int_map
INFO ( default/core ): map '/etc/pmacct/int_map' successfully (re)loaded.
INFO ( 10.1.7.227-6343/sfprobe ): Exporting flows to [10.1.7.227]:6343
INFO ( 10.1.7.227-6343/sfprobe ): Sampling at: 1/1
INFO ( 10.1.7.227-6343/sfprobe ): 'plugin_pipe_size'; now: '10485760'.
INFO ( 10.1.7.227-6343/sfprobe ): 'plugin_buffer_size'; now: '32888'.

So has the behavior of pipe_size/buffer_size changed in the newer version, or could this be a bug?

stig
_______________________________________________
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists
