Hi Paul,

Unfortunately that does not help much. The last thing I can propose is
for you to run nfacctd under gdb with "set follow-fork-mode child" so as
to inspect what happens to the plugins: from the log you posted it
appears they both crash (and, as I explained previously, such behaviour
can be connected to the presence of a bgp_agent_map statement: it is
what lets tuples populate the memory table). Did you also try any
plugins other than the memory one? If yes, same story?
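In case it helps, here is a possible one-shot invocation (a sketch
only: the binary name and the config path /etc/pmacct/nfacctd.conf are
assumptions, adjust them to your install):

```shell
# Run nfacctd under gdb, following the forked plugin children so that a
# crashing plugin stops in the debugger instead of the core process
# merely logging "connection lost". It may help to set
# "daemonize: false" in the config so nfacctd stays in the foreground.
gdb -q \
    -ex 'set follow-fork-mode child' \
    -ex 'run' \
    --args nfacctd -f /etc/pmacct/nfacctd.conf
```

When a plugin child crashes, typing "bt" at the (gdb) prompt should
show where it happened.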

Since we are moving into the debugging phase, which is not of very
general interest, I suggest we take the thread off-list. As I was
saying, if this does not lead anywhere either, it would be good if I
could have a brief look at the issue myself.

Cheers,
Paolo

On Mon, Mar 11, 2013 at 11:01:08AM -0600, paul dial wrote:
> bump ...
> 
> On 2/26/2013 2:39 PM, paul dial wrote:
> > Hi Paolo,
> >
> > I self-compiled pmacct-0.14.2 with just one flag, --enable-threads.  The
> > same nfacctd.conf file as listed previously in this thread was used, and
> > the bgp feed has a different source IP address than the netflow feed.
> > Here is what I found:
> >
> > 1) nfacctd.conf with bgp_agent_map configured.  nfacctd shuts down as
> > before because of "no more plugins active".  Here is the debug output:
> > ==================================================================
> > Feb 26 12:42:25  nfacctd[2281]: INFO ( default/core ): Start logging ...
> > Feb 26 12:42:25  nfacctd[2281]: INFO ( default/core ): Trying to
> > (re)load map: /etc/pmacct/agent.map
> > Feb 26 12:42:25  nfacctd[2281]: INFO ( default/core ): map
> > '/etc/pmacct/agent.map' successfully (re)loaded.
> > Feb 26 12:42:25  nfacctd[2281]: INFO ( default/core/BGP ): maximum BGP
> > peers allowed: 100
> > Feb 26 12:42:25  nfacctd[2281]: INFO ( default/core/BGP ): waiting for
> > BGP data on 192.43.217.2:179
> > Feb 26 12:42:30  nfacctd[2281]: INFO ( ucar_in/memory ): 112640 bytes
> > are available to address shared memory segment; buffer size is 148 bytes.
> > Feb 26 12:42:30  nfacctd[2281]: INFO ( ucar_in/memory ): Trying to
> > allocate a shared memory segment of 4167680 bytes.
> > Feb 26 12:42:30  nfacctd[2281]: INFO ( ucar_out/memory ): 112640 bytes
> > are available to address shared memory segment; buffer size is 148 bytes.
> > Feb 26 12:42:30  nfacctd[2281]: INFO ( ucar_out/memory ): Trying to
> > allocate a shared memory segment of 4167680 bytes.
> > Feb 26 12:42:30  nfacctd[2283]: OK ( ucar_in/memory ): waiting for data
> > on: '/tmp/pmacct_in.pipe'
> > Feb 26 12:42:30  nfacctd[2284]: OK ( ucar_out/memory ): waiting for data
> > on: '/tmp/pmacct_out.pipe'
> > Feb 26 12:42:30  nfacctd[2281]: INFO ( default/core ): waiting for
> > NetFlow data on 0.0.0.0:9992
> > Feb 26 12:42:34  nfacctd[2281]: INFO: connection lost to
> > 'ucar_in-memory'; closing connection.
> > Feb 26 12:42:34  nfacctd[2281]: INFO: connection lost to
> > 'ucar_out-memory'; closing connection.
> > Feb 26 12:42:34  nfacctd[2281]: INFO: no more plugins active. Shutting down.
> > ============================================================
> >
> > 2) nfacctd.conf without bgp_agent_map.  nfacctd runs and establishes a
> > bgp session.  The memory plugin shows only one entry, with an AS of
> > '0'.  This is expected because the source IP address of the bgp feed and
> > the source IP address of the netflow feed are different.
> >
> > 3) If I front-end pmacct with a program that allows me to spoof the
> > source IP address of the netflow packets before sending them to pmacct
> > on udp port 9992, and I set that IP address to the same as the source IP
> > address of the bgp feed (verified both using tcpdump), nfacctd runs,
> > but no data is ever returned to the memory plugin; only the column
> > titles appear:  <SRC|DST>_AS  PACKETS  BYTES.  Note that the
> > bgp_agent_map configuration directive was NOT active.  Here is the
> > debug output:
> > ========================================================
> > Feb 26 13:11:48  nfacctd[3937]: INFO ( default/core ): Start logging ...
> > Feb 26 13:11:48  nfacctd[3937]: INFO ( default/core/BGP ): maximum BGP
> > peers allowed: 100
> > Feb 26 13:11:48  nfacctd[3937]: INFO ( default/core/BGP ): waiting for
> > BGP data on 192.43.217.2:179
> > Feb 26 13:11:53  nfacctd[3937]: INFO ( ucar_in/memory ): 112640 bytes
> > are available to address shared memory segment; buffer size is 148 bytes.
> > Feb 26 13:11:53  nfacctd[3937]: INFO ( ucar_in/memory ): Trying to
> > allocate a shared memory segment of 4167680 bytes.
> > Feb 26 13:11:53  nfacctd[3937]: INFO ( ucar_out/memory ): 112640 bytes
> > are available to address shared memory segment; buffer size is 148 bytes.
> > Feb 26 13:11:53  nfacctd[3937]: INFO ( ucar_out/memory ): Trying to
> > allocate a shared memory segment of 4167680 bytes.
> > Feb 26 13:11:53  nfacctd[3939]: OK ( ucar_in/memory ): waiting for data
> > on: '/tmp/pmacct_in.pipe'
> > Feb 26 13:11:53  nfacctd[3937]: INFO ( default/core ): waiting for
> > NetFlow data on 0.0.0.0:9992
> > Feb 26 13:11:53  nfacctd[3940]: OK ( ucar_out/memory ): waiting for data
> > on: '/tmp/pmacct_out.pipe'
> > Feb 26 13:12:09  nfacctd[3937]: INFO ( default/core/BGP ): BGP peers
> > usage: 1/100
> > ================================================================
> >
> > Not sure if this information sheds any light on the problem.
> >
> > Thanks!
> >
> > --paul
> >
> >
> > On 2/25/2013 12:39 PM, Paolo Lucente wrote:
> >> Hi Paul,
> >>
> >> I perfectly agree with your thoughts around the aggregation method and
> >> the memory required. Can you please download a tarball from the website,
> >> self-compile, and give that one a try? I can't really say whether the
> >> issue might be with the Debian package. If that does not lead to
> >> anything, then it would be good if I could have a brief look at the
> >> issue myself for some debugging. Let me know.
> >>
> >> Cheers,
> >> Paolo
> >>
> >> On Mon, Feb 25, 2013 at 11:36:21AM -0700, paul dial wrote:
> >>> Hi Paolo,
> >>>
> >>> Based on my understanding of pmacct, and the nfacctd configuration
> >>> directives being used, I have defined two memory tables (plugins:
> >>> memory[ucar_in], memory[ucar_out]) and I'm using the defaults
> >>> imt_mem_pools_number: 16 and imt_mem_pools_size: 8192.  So each table
> >>> would be 16 x 8192 = 131,072 bytes (~131 kbytes).  A "top" report from
> >>> the server shows:
> >>> Mem:   4059040k total,  1380852k used,  2678188k free,    49032k buffers
> >>> Swap:  7926776k total,   346996k used,  7579780k free,   391112k cached
> >>>
> >>> So I believe there is enough memory available to allocate these tables.
> >>>
> >>> I'm hoping to track peer src/dst AS for each member of our RON.  Based
> >>> on the filter in place, I should only be looking at one member, and
> >>> would expect a worst case of 40 entries in each table.  I don't think
> >>> that would be enough to exceed the allocated memory.
> >>>
> >>> nfacctd is running on a Debian 6.0 (squeeze) box.  I'm using the pmacct
> >>> 0.14.0-1.1 Debian package from "testing" and used apt-get to install it
> >>> without any modifications.
> >>>
> >>> Thanks for your response,
> >>>
> >>> --paul
> >>>
> >>>
> >>> On 2/23/2013 2:22 AM, Paolo Lucente wrote:
> >>>> Hi Paul,
> >>>>
> >>>> From the log it appears the memory plugins bail out, and then the core
> >>>> process closes nicely because it has no more plugins to pass data to.
> >>>> The aggregation method is short, so it looks strange; but is it
> >>>> possible there is very little memory available for the memory tables?
> >>>> The connection to the bgp_agent_map is precisely what you say: without
> >>>> it you have a single entry; with it you are properly populating the
> >>>> tables. Btw, is it a self-compiled executable?
> >>>>
> >>>> Cheers,
> >>>> Paolo
> >>>>
> >>>> On Fri, Feb 22, 2013 at 02:32:22PM -0700, paul dial wrote:
> >>>>> Hi,
> >>>>>
> >>>>> I'm running pmacct-0.14.1 and attempting to set up a bgp feed.  The
> >>>>> netflow feed is being tee'd from another process on the box, so all the
> >>>>> netflow packets have a source IP address of 127.0.0.2.  The bgp feed is
> >>>>> coming in from one of our routers.  Here is a snippet from nfacctd.conf:
> >>>>>
> >>>>> daemonize: true
> >>>>> pidfile: /var/run/nfacctd.pid
> >>>>> syslog: daemon
> >>>>> !
> >>>>> debug: true
> >>>>> !
> >>>>>
> >>>>> aggregate[ucar_in]: src_as
> >>>>> aggregate_filter[ucar_in]: dst net 128.117.0.0/16
> >>>>> aggregate[ucar_out]: dst_as
> >>>>> aggregate_filter[ucar_out]: src net 128.117.0.0/16
> >>>>>
> >>>>> ! plugin_buffer_size: 1024
> >>>>> nfacctd_port: 9992
> >>>>> ! nfacctd_time_secs: true
> >>>>> nfacctd_time_new: true
> >>>>> plugins: memory[ucar_in], memory[ucar_out]
> >>>>> imt_path[ucar_out]: /tmp/pmacct_out.pipe
> >>>>> imt_path[ucar_in]: /tmp/pmacct_in.pipe
> >>>>> networks_file: /etc/pmacct/networks.def
> >>>>>
> >>>>> bgp_daemon: true
> >>>>> !bgp_daemon_msglog: true
> >>>>> bgp_daemon_ip: 192.XXX.XXX.XXX
> >>>>> bgp_daemon_max_peers: 100
> >>>>> nfacctd_as_new: bgp
> >>>>> bgp_peer_src_as_type: bgp
> >>>>> bgp_src_as_path_type: bgp
> >>>>> bgp_src_local_pref_type: bgp
> >>>>> bgp_src_med_type: bgp
> >>>>> bgp_agent_map: /etc/pmacct/agent.map
> >>>>>
> >>>>> My agent.map looks like this:
> >>>>> id=<ip of bgp source>      ip=127.0.0.2
> >>>>>
> >>>>> When the bgp_agent_map config key is used, nfacctd attempts to start
> >>>>> but then dies.  Here are the log messages:
> >>>>> ===================================================================================================
> >>>>> Feb 22 12:07:02 testflow nfacctd[1437]: INFO ( default/core ): Start
> >>>>> logging ...
> >>>>> Feb 22 12:07:02 testflow nfacctd[1437]: INFO ( default/core ): Trying to
> >>>>> (re)load map: /etc/pmacct/agent.map
> >>>>> Feb 22 12:07:02 testflow nfacctd[1437]: INFO ( default/core ): map
> >>>>> '/etc/pmacct/agent.map' successfully (re)loaded.
> >>>>> Feb 22 12:07:02 testflow nfacctd[1437]: DEBUG ( default/core/BGP ): 1
> >>>>> thread(s) initialized
> >>>>> Feb 22 12:07:02 testflow nfacctd[1437]: INFO ( default/core/BGP ):
> >>>>> maximum BGP peers allowed: 100
> >>>>> Feb 22 12:07:02 testflow nfacctd[1437]: INFO ( default/core/BGP ):
> >>>>> waiting for BGP data on 1xx.xxx.xxx.xxx:179
> >>>>> Feb 22 12:07:07 testflow nfacctd[1437]: INFO ( ucar_in/memory ): 131070
> >>>>> bytes are available to address shared memory segment; buffer size is 216
> >>>>> bytes.
> >>>>> Feb 22 12:07:07 testflow nfacctd[1437]: INFO ( ucar_in/memory ): Trying
> >>>>> to allocate a shared memory segment of 3538728 bytes.
> >>>>> Feb 22 12:07:07 testflow nfacctd[1437]: INFO ( ucar_out/memory ): 131070
> >>>>> bytes are available to address shared memory segment; buffer size is 216
> >>>>> bytes.
> >>>>> Feb 22 12:07:07 testflow nfacctd[1437]: INFO ( ucar_out/memory ): Trying
> >>>>> to allocate a shared memory segment of 3538728 bytes.
> >>>>> Feb 22 12:07:07 testflow nfacctd[1441]: DEBUG ( /etc/pmacct/networks.def
> >>>>> ): (networks table IPv4) AS: 0, net: 80750000, mask (bit): ffff0000,
> >>>>> mask (num): 10
> >>>>> Feb 22 12:07:07 testflow nfacctd[1441]: DEBUG ( /etc/pmacct/networks.def
> >>>>> ): IPv4 Networks Cache successfully created: 99991 entries.
> >>>>> Feb 22 12:07:07 testflow nfacctd[1441]: DEBUG ( /etc/pmacct/networks.def
> >>>>> ): (networks table IPv6) AS: 0, net: 0:0:0:0, mask (bit): 0:0:0:0, mask
> >>>>> (num): 0
> >>>>> Feb 22 12:07:07 testflow nfacctd[1441]: DEBUG ( /etc/pmacct/networks.def
> >>>>> ): IPv6 Networks Cache successfully created: 32771 entries.
> >>>>> Feb 22 12:07:07 testflow nfacctd[1441]: DEBUG ( ucar_in/memory ):
> >>>>> allocating a new memory segment.
> >>>>> Feb 22 12:07:07 testflow nfacctd[1437]: DEBUG ( /etc/pmacct/networks.def
> >>>>> ): (networks table IPv4) AS: 0, net: 80750000, mask (bit): ffff0000,
> >>>>> mask (num): 10
> >>>>> Feb 22 12:07:07 testflow nfacctd[1443]: DEBUG ( /etc/pmacct/networks.def
> >>>>> ): (networks table IPv4) AS: 0, net: 80750000, mask (bit): ffff0000,
> >>>>> mask (num): 10
> >>>>> Feb 22 12:07:07 testflow nfacctd[1437]: DEBUG ( /etc/pmacct/networks.def
> >>>>> ): IPv4 Networks Cache successfully created: 99991 entries.
> >>>>> Feb 22 12:07:07 testflow nfacctd[1443]: DEBUG ( /etc/pmacct/networks.def
> >>>>> ): IPv4 Networks Cache successfully created: 99991 entries.
> >>>>> Feb 22 12:07:07 testflow nfacctd[1443]: DEBUG ( /etc/pmacct/networks.def
> >>>>> ): (networks table IPv6) AS: 0, net: 0:0:0:0, mask (bit): 0:0:0:0, mask
> >>>>> (num): 0
> >>>>> Feb 22 12:07:07 testflow nfacctd[1437]: DEBUG ( /etc/pmacct/networks.def
> >>>>> ): (networks table IPv6) AS: 0, net: 0:0:0:0, mask (bit): 0:0:0:0, mask
> >>>>> (num): 0
> >>>>> Feb 22 12:07:07 testflow nfacctd[1437]: DEBUG ( /etc/pmacct/networks.def
> >>>>> ): IPv6 Networks Cache successfully created: 32771 entries.
> >>>>> Feb 22 12:07:07 testflow nfacctd[1443]: DEBUG ( /etc/pmacct/networks.def
> >>>>> ): IPv6 Networks Cache successfully created: 32771 entries.
> >>>>> Feb 22 12:07:07 testflow nfacctd[1437]: INFO ( default/core ): waiting
> >>>>> for NetFlow data on :::9992
> >>>>> Feb 22 12:07:07 testflow nfacctd[1443]: DEBUG ( ucar_out/memory ):
> >>>>> allocating a new memory segment.
> >>>>> Feb 22 12:07:07 testflow nfacctd[1441]: DEBUG ( ucar_in/memory ):
> >>>>> allocating a new memory segment.
> >>>>> Feb 22 12:07:07 testflow nfacctd[1441]: OK ( ucar_in/memory ): waiting
> >>>>> for data on: '/tmp/pmacct_in.pipe'
> >>>>> Feb 22 12:07:07 testflow nfacctd[1443]: DEBUG ( ucar_out/memory ):
> >>>>> allocating a new memory segment.
> >>>>> Feb 22 12:07:07 testflow nfacctd[1443]: OK ( ucar_out/memory ): waiting
> >>>>> for data on: '/tmp/pmacct_out.pipe'
> >>>>> Feb 22 12:07:07 testflow nfacctd[1443]: DEBUG ( ucar_out/memory ):
> >>>>> Selecting bucket 6355.
> >>>>> Feb 22 12:07:07 testflow nfacctd[1437]: INFO: connection lost to
> >>>>> 'ucar_out-memory'; closing connection.
> >>>>> Feb 22 12:07:13 testflow nfacctd[1441]: DEBUG ( ucar_in/memory ):
> >>>>> Selecting bucket 6355.
> >>>>> Feb 22 12:07:20 testflow nfacctd[1437]: INFO: connection lost to
> >>>>> 'ucar_in-memory'; closing connection.
> >>>>> Feb 22 12:07:20 testflow nfacctd[1437]: INFO: no more plugins active.
> >>>>> Shutting down.
> >>>>> ================================================================================================
> >>>>>
> >>>>> If I comment out the bgp_agent_map config key, then nfacctd will start,
> >>>>> establish a bgp session, and process netflow data; however, all the AS
> >>>>> numbers are listed as '0' (presumably because the IP address of the bgp
> >>>>> feed doesn't match the IP address of the netflow feed).
> >>>>>
> >>>>> Any thoughts on how to resolve this would be greatly appreciated.
> >>>>>
> >>>>> Thanks,
> >>>>>
> >>>>> --paul
> >>>>>
> >>>>>
> >>>>> _______________________________________________
> >>>>> pmacct-discussion mailing list
> >>>>> http://www.pmacct.net/#mailinglists
> >
> 
> -- 
> ----
> Paul Dial
> Network Engineer
> National Center for Atmospheric Research
> 303-497-1261
> [email protected]
> 
> 
