Hi Paolo,

Based on my understanding of pmacct and the nfacctd configuration
directives in use, I have defined two memory tables (plugins:
memory[ucar_in], memory[ucar_out]) and I'm using the defaults of
imt_mem_pools_number: 16 and imt_mem_pools_size: 8192.  So each table
would be 16 x 8192 = 131072 bytes (~131 kbytes).  A "top" report from
the server shows:
Mem:   4059040k total,  1380852k used,  2678188k free,    49032k buffers
Swap:  7926776k total,   346996k used,  7579780k free,   391112k cached

So I believe there is enough memory available to allocate these tables.

I'm hoping to track peer src/dst AS for each member of our RON.  Based
on the filter in place,  I should only be looking at one member, and
would expect a worst case of 40 entries in each table.   I don't think
that would be enough to exceed the allocated memory. 
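
For reference, here is the sizing math with the defaults written out
explicitly (these are the stock values, not overridden in my config):

imt_mem_pools_number: 16       ! memory pools per table (default)
imt_mem_pools_size: 8192       ! bytes per pool (default)
! addressable space per table: 16 x 8192 = 131072 bytes
! expected worst case: ~40 entries per table

One thing I do notice in the log further down is that each memory
plugin reports trying to allocate a shared memory segment of 3538728
bytes, well above the 131070 bytes it says it can address; I don't
know if that mismatch is relevant.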

nfacctd is running on a Debian 6.0 (squeeze) box.  I'm using the pmacct
0.14.0-1.1 Debian package from "testing", installed via apt-get without
any modifications.
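
Once the daemon stays up, my plan is to query the tables through the
configured pipes with the pmacct memory client, e.g.:

pmacct -s -p /tmp/pmacct_in.pipe     # dump the ucar_in table
pmacct -s -p /tmp/pmacct_out.pipe    # dump the ucar_out table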

Thanks for your response,

--paul


On 2/23/2013 2:22 AM, Paolo Lucente wrote:
> Hi Paul,
>
> From the log it appears the memory plugins bail out, and then the core
> process closes cleanly because it has no more plugins to pass data to.
> The aggregation method is small, so this looks strange, but is it
> possible there is very little memory available for the memory tables?
> The connection to the bgp_agent_map is precisely what you say: without
> it you get a single entry; with it you are properly populating the
> tables. Btw, is it a self-compiled executable?
>
> Cheers,
> Paolo
>
> On Fri, Feb 22, 2013 at 02:32:22PM -0700, paul dial wrote:
>> Hi,
>>
>> I'm running pmacct-0.14.1 and attempting to set up a bgp feed.  The
>> netflow feed is being tee'd from another process on the box, so all the
>> netflow packets have a source IP address of 127.0.0.2.  The bgp feed is
>> coming in from one of our routers.  Here is a snippet from nfacctd.conf:
>>
>> daemonize: true
>> pidfile: /var/run/nfacctd.pid
>> syslog: daemon
>> !
>> debug: true
>> !
>>
>> aggregate[ucar_in]: src_as
>> aggregate_filter[ucar_in]: dst net 128.117.0.0/16
>> aggregate[ucar_out]: dst_as
>> aggregate_filter[ucar_out]: src net 128.117.0.0/16
>>
>> ! plugin_buffer_size: 1024
>> nfacctd_port: 9992
>> ! nfacctd_time_secs: true
>> nfacctd_time_new: true
>> plugins: memory[ucar_in], memory[ucar_out]
>> imt_path[ucar_out]: /tmp/pmacct_out.pipe
>> imt_path[ucar_in]: /tmp/pmacct_in.pipe
>> networks_file: /etc/pmacct/networks.def
>>
>> bgp_daemon: true
>> !bgp_daemon_msglog: true
>> bgp_daemon_ip: 192.XXX.XXX.XXX
>> bgp_daemon_max_peers: 100
>> nfacctd_as_new: bgp
>> bgp_peer_src_as_type: bgp
>> bgp_src_as_path_type: bgp
>> bgp_src_local_pref_type: bgp
>> bgp_src_med_type: bgp
>> bgp_agent_map: /etc/pmacct/agent.map
>>
>> My agent.map looks like this:
>> id=<ip of bgp source>      ip=127.0.0.2
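>>
>> (With a made-up peer address, say 192.0.2.1, that line would read
>> "id=192.0.2.1    ip=127.0.0.2", i.e. the BGP peer's address on the
>> left and the NetFlow source address of the tee'd feed on the right.)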
>>
>> When bgp_agent_map config key is used, nfacctd attempts to start but
>> then dies.  Here are the log messages:
>> ===================================================================================================
>> Feb 22 12:07:02 testflow nfacctd[1437]: INFO ( default/core ): Start
>> logging ...
>> Feb 22 12:07:02 testflow nfacctd[1437]: INFO ( default/core ): Trying to
>> (re)load map: /etc/pmacct/agent.map
>> Feb 22 12:07:02 testflow nfacctd[1437]: INFO ( default/core ): map
>> '/etc/pmacct/agent.map' successfully (re)loaded.
>> Feb 22 12:07:02 testflow nfacctd[1437]: DEBUG ( default/core/BGP ): 1
>> thread(s) initialized
>> Feb 22 12:07:02 testflow nfacctd[1437]: INFO ( default/core/BGP ):
>> maximum BGP peers allowed: 100
>> Feb 22 12:07:02 testflow nfacctd[1437]: INFO ( default/core/BGP ):
>> waiting for BGP data on 1xx.xxx.xxx.xxx:179
>> Feb 22 12:07:07 testflow nfacctd[1437]: INFO ( ucar_in/memory ): 131070
>> bytes are available to address shared memory segment; buffer size is 216
>> bytes.
>> Feb 22 12:07:07 testflow nfacctd[1437]: INFO ( ucar_in/memory ): Trying
>> to allocate a shared memory segment of 3538728 bytes.
>> Feb 22 12:07:07 testflow nfacctd[1437]: INFO ( ucar_out/memory ): 131070
>> bytes are available to address shared memory segment; buffer size is 216
>> bytes.
>> Feb 22 12:07:07 testflow nfacctd[1437]: INFO ( ucar_out/memory ): Trying
>> to allocate a shared memory segment of 3538728 bytes.
>> Feb 22 12:07:07 testflow nfacctd[1441]: DEBUG ( /etc/pmacct/networks.def
>> ): (networks table IPv4) AS: 0, net: 80750000, mask (bit): ffff0000,
>> mask (num): 10
>> Feb 22 12:07:07 testflow nfacctd[1441]: DEBUG ( /etc/pmacct/networks.def
>> ): IPv4 Networks Cache successfully created: 99991 entries.
>> Feb 22 12:07:07 testflow nfacctd[1441]: DEBUG ( /etc/pmacct/networks.def
>> ): (networks table IPv6) AS: 0, net: 0:0:0:0, mask (bit): 0:0:0:0, mask
>> (num): 0
>> Feb 22 12:07:07 testflow nfacctd[1441]: DEBUG ( /etc/pmacct/networks.def
>> ): IPv6 Networks Cache successfully created: 32771 entries.
>> Feb 22 12:07:07 testflow nfacctd[1441]: DEBUG ( ucar_in/memory ):
>> allocating a new memory segment.
>> Feb 22 12:07:07 testflow nfacctd[1437]: DEBUG ( /etc/pmacct/networks.def
>> ): (networks table IPv4) AS: 0, net: 80750000, mask (bit): ffff0000,
>> mask (num): 10
>> Feb 22 12:07:07 testflow nfacctd[1443]: DEBUG ( /etc/pmacct/networks.def
>> ): (networks table IPv4) AS: 0, net: 80750000, mask (bit): ffff0000,
>> mask (num): 10
>> Feb 22 12:07:07 testflow nfacctd[1437]: DEBUG ( /etc/pmacct/networks.def
>> ): IPv4 Networks Cache successfully created: 99991 entries.
>> Feb 22 12:07:07 testflow nfacctd[1443]: DEBUG ( /etc/pmacct/networks.def
>> ): IPv4 Networks Cache successfully created: 99991 entries.
>> Feb 22 12:07:07 testflow nfacctd[1443]: DEBUG ( /etc/pmacct/networks.def
>> ): (networks table IPv6) AS: 0, net: 0:0:0:0, mask (bit): 0:0:0:0, mask
>> (num): 0
>> Feb 22 12:07:07 testflow nfacctd[1437]: DEBUG ( /etc/pmacct/networks.def
>> ): (networks table IPv6) AS: 0, net: 0:0:0:0, mask (bit): 0:0:0:0, mask
>> (num): 0
>> Feb 22 12:07:07 testflow nfacctd[1437]: DEBUG ( /etc/pmacct/networks.def
>> ): IPv6 Networks Cache successfully created: 32771 entries.
>> Feb 22 12:07:07 testflow nfacctd[1443]: DEBUG ( /etc/pmacct/networks.def
>> ): IPv6 Networks Cache successfully created: 32771 entries.
>> Feb 22 12:07:07 testflow nfacctd[1437]: INFO ( default/core ): waiting
>> for NetFlow data on :::9992
>> Feb 22 12:07:07 testflow nfacctd[1443]: DEBUG ( ucar_out/memory ):
>> allocating a new memory segment.
>> Feb 22 12:07:07 testflow nfacctd[1441]: DEBUG ( ucar_in/memory ):
>> allocating a new memory segment.
>> Feb 22 12:07:07 testflow nfacctd[1441]: OK ( ucar_in/memory ): waiting
>> for data on: '/tmp/pmacct_in.pipe'
>> Feb 22 12:07:07 testflow nfacctd[1443]: DEBUG ( ucar_out/memory ):
>> allocating a new memory segment.
>> Feb 22 12:07:07 testflow nfacctd[1443]: OK ( ucar_out/memory ): waiting
>> for data on: '/tmp/pmacct_out.pipe'
>> Feb 22 12:07:07 testflow nfacctd[1443]: DEBUG ( ucar_out/memory ):
>> Selecting bucket 6355.
>> Feb 22 12:07:07 testflow nfacctd[1437]: INFO: connection lost to
>> 'ucar_out-memory'; closing connection.
>> Feb 22 12:07:13 testflow nfacctd[1441]: DEBUG ( ucar_in/memory ):
>> Selecting bucket 6355.
>> Feb 22 12:07:20 testflow nfacctd[1437]: INFO: connection lost to
>> 'ucar_in-memory'; closing connection.
>> Feb 22 12:07:20 testflow nfacctd[1437]: INFO: no more plugins active.
>> Shutting down.
>> ================================================================================================
>>
>> If I comment out the bgp_agent_map config key, then nfacctd will start,
>> establish a bgp session, and process netflow data; however, all the AS
>> numbers are listed as '0' (presumably because the IP address of the bgp
>> feed doesn't match the IP address of the netflow feed).
>>
>> Any thoughts on how to resolve this would be greatly appreciated.
>>
>> Thanks,
>>
>> --paul
>>
>>