Re: [pmacct-discussion] Easiest way to ingest nfacctd data into python?

2022-05-03 Thread Tim Jackson
I've done similar with the IMT and Perl years ago:

https://houstongrackles.com/~tjackson/flows_to_es/

Relevant part in Perl:

sub retrieve_flows {
    my $pipe = shift;
    my $primitive = shift;
    my $filter = shift;
    my @flows = `/usr/local/bin/pmacct -p $pipe -l -O json -c "$primitive" -M "$filter"`;
    return @flows;
}

The array there is just JSON, one object per line, and can be decoded
afterwards.. Python should be able to do similar with subprocess.run on 3.5+..
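
A minimal Python sketch of the same approach (the pmacct binary path is
illustrative; adjust to your install):

import json
import subprocess

def retrieve_flows(pipe, primitive, flt):
    # Query the IMT over its pipe and ask the client for JSON output,
    # mirroring the Perl sub above. subprocess.run exists on 3.5+.
    result = subprocess.run(
        ["/usr/local/bin/pmacct", "-p", pipe, "-l", "-O", "json",
         "-c", primitive, "-M", flt],
        stdout=subprocess.PIPE, universal_newlines=True, check=True)
    # One JSON object per line of output; decode each.
    return [json.loads(line)
            for line in result.stdout.splitlines() if line.strip()]

flows = retrieve_flows("/tmp/nfacctd-full.pipe", "tag2", "2;3")

Passing the command as a list also sidesteps shell quoting of filters like
'2;3'.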

--
Tim

On Tue, May 3, 2022 at 2:30 PM Karl O. Pinc  wrote:

> On Tue, 3 May 2022 18:19:50 +
> "Compton, Rich A"  wrote:
>
> > Hi, I’m trying to take the netflow records from nfacctd and process
> > them with a python script.  Can someone suggest how I can do this
> > with python without having nfacctd put them into a database and then
> > have my python script read it?
>
> You could use the in-memory recording and poll and clear
> out the old data with python.   I don't know how well
> that would work, depending on scale.  I tend to favor
> relational databases and would lean toward using
> sqlite if you're looking for "simple".  (And Postgres
> for everything else.)
>
> You are collecting tabular data, which RDBMSs excel at.
>
> Regards,
>
> Karl 
> Free Software:  "You don't pay back, you pay forward."
>  -- Robert A. Heinlein
>
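
For the sqlite route, a rough sketch of the poll-and-clear loop (binary path,
pipe name, and table layout are all illustrative; it assumes the client's -s
full-table query and the -e erase used elsewhere in these threads):

import json
import sqlite3
import subprocess

PMACCT = "/usr/local/bin/pmacct"

def poll_into_sqlite(db_path, pipe):
    # Dump the whole in-memory table as JSON lines.
    out = subprocess.run([PMACCT, "-p", pipe, "-l", "-O", "json", "-s"],
                         stdout=subprocess.PIPE, universal_newlines=True,
                         check=True).stdout
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS flows "
                 "(ip_src TEXT, ip_dst TEXT, packets INTEGER, bytes INTEGER)")
    for line in out.splitlines():
        if line.strip():
            f = json.loads(line)
            conn.execute("INSERT INTO flows VALUES (?, ?, ?, ?)",
                         (f.get("ip_src"), f.get("ip_dst"),
                          f.get("packets"), f.get("bytes")))
    conn.commit()
    conn.close()
    # Clear the IMT so the next poll only sees fresh data.
    subprocess.run([PMACCT, "-p", pipe, "-e"], check=True)
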
___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


[pmacct-discussion] Using large hex values for "label" in pre_tag_map results in strange SQL

2020-02-12 Thread Tim Jackson
I'm using some large hex values as the set_label in our pre_tag_map and
getting some weird behavior..

example map:

https://paste.somuch.fail/?bafc96e84fe95322#j6T+54l/gxN90POeMi3yuhBT9XOPMmEqt3IF5cvHOJk=
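
For readers without the paste, the entries look roughly like this (label,
exporter address, and ifindex invented):

set_label=aabbccddeeff00112233445566778899aabbccdd ip=192.0.2.1 in=542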

When using this w/ pgsql as the output plugin, I see some errors randomly
from postgres (I have dont_try_update+use_copy on):

2020-02-12 23:49:41.148 UTC [11632] postgres@sfacctd ERROR:  invalid byte
sequence for encoding "UTF8": 0xc0 0x2c
2020-02-12 23:49:41.148 UTC [11632] postgres@sfacctd CONTEXT:  COPY
acct_v9, line 1
2020-02-12 23:49:41.148 UTC [11632] postgres@sfacctd STATEMENT:  COPY
acct_v9 (mac_src, mac_dst, vlan, ip_src, ip_dst, as_src, iface_in,
iface_out, as_dst, comms, peer_ip_src, port_src, port_dst, tos, ip_proto,
sampling_rate, timestamp_arrival, tag, tag2, label, packets, bytes) FROM
STDIN DELIMITER ','

While debugging I'm also seeing some strange rows being generated where the
label field is longer than any label I have set in the pre_tag_map file
itself:

DEBUG ( all/pgsql ):
ff:ff:fb:f9:b6:c4,ff:ff:ff:a1:8c:2b,0,1.1.1.1,2.2.2.2,0,547,607,30419,,1.1.1.1,23836,2,6,1024,2020-02-12
23:50:21,0,1,c7515ed894354725bc60160ee48775ce0e3b3924fb730,1,307

Any ideas where that could be coming from?

--
Tim
___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

Re: [pmacct-discussion] Realistic Scaling of pre_tag_map?

2020-02-12 Thread Tim Jackson
That's good news, since maps_index has worked with everything I've tested so
far. Any worries about reloading the map often/quickly?

Also, is there a limit to how large the label can be in the pre_tag_map, and
are there any characters that aren't supported? It seems as if a '-' in any
set_label operation means the whole string gets ignored..

The use-case is just mapping ip+ifIndex -> downstream devices with a label,
but I've got a lot of interfaces to match there..
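
For reference, a sketch of that kind of map plus the indexing knob (names and
values invented):

! nfacctd.conf
pre_tag_map: /etc/pmacct/pretag.map
maps_index: true

! pretag.map: one line per exporter IP + ifIndex
set_label=custA_site1 ip=192.0.2.1 in=542
set_label=custB_site7 ip=192.0.2.1 in=543

Note the underscores rather than '-' in the labels, given the set_label
behavior above.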

--
Tim


On Wed, Feb 12, 2020, 12:47 AM Paolo Lucente  wrote:

>
> Hey Tim,
>
> It really depends on whether you can leverage maps_index (*) or not. If
> yes, then computations are O(1), so you can scale it as much as you like,
> and I can confirm there are people building maps of the same magnitude as
> you have in mind. If not, then it's not going to work, but then again I'd
> be interested in your use-case, how the map would look, etc.
>
> Paolo
>
> (*) https://github.com/pmacct/pmacct/blob/1.7.4/CONFIG-KEYS#L1878-#L1891
>
> On Tue, Feb 11, 2020 at 05:54:27PM -0600, Tim Jackson wrote:
> > Just curious, what's the realistic scaling of pre_tag_map?
> >
> > I'm looking to maybe put 50k+ entries in it and reload it every few
> > minutes..
> >
> > Any real gotchas w/ that approach?
> >
> > --
> > Tim
>
___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

[pmacct-discussion] Realistic Scaling of pre_tag_map?

2020-02-11 Thread Tim Jackson
Just curious, what's the realistic scaling of pre_tag_map?

I'm looking to maybe put 50k+ entries in it and reload it every few
minutes..

Any real gotchas w/ that approach?

--
Tim
___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

Re: [pmacct-discussion] sampling

2016-08-24 Thread Tim Jackson
If the probe is doing sampling, you can have pmacct renormalize the counters
via either a static sampling value, or via the NetFlow/IPFIX information that
tells it how it was sampled..

sampling_rate: 
sampling_map: 
(s|n|p|u)facctd_renormalize: true

If you're using a tap that samples traffic, you can tell the daemon that
upstream traffic is sampled by X:

pmacctd_ext_sampling_rate | uacctd_ext_sampling_rate

http://wiki.pmacct.net/OfficialConfigKeys
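
For example, a minimal sketch for a sampled tap in front of pmacctd (the rate
is invented):

pmacctd_ext_sampling_rate: 1024
pmacctd_renormalize: true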

--
Tim


On Wed, Aug 24, 2016 at 9:37 AM, Stephen Clark  wrote:

> Hi Paolo,
>
> I looked through the CONFIG-KEYS and didn't find the ability to do
> sampling except in the sql_preprocess keys. Is it possible to do the
> sampling at the point the netflow records are first created - in other
> words, by nfprobe?
>
> Thanks,
> Steve
>
> --
>
> "They that give up essential liberty to obtain temporary safety,
> deserve neither liberty nor safety."  (Ben Franklin)
>
> "The course of history shows that as a government grows, liberty
> decreases."  (Thomas Jefferson)
>
___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists

Re: [pmacct-discussion] Native Elasticsearch backend development

2015-02-10 Thread Tim Jackson
I don't output to JSON files and then import, but I use Perl to basically do
the same thing: query the pmacct IMT for how long it's been since it was last
cleared, query it for data, clear it, add more data to the record(s) based on
some imports, and insert them into Elasticsearch..

Using the IMT as a cache for data, inserting 5-minute averages into
Elasticsearch takes very little time (~5-10 seconds for our network, which is
1:2048 sampled NetFlow v5 with a pretty large tuple of aggregates, basically
everything but source/dest IP).. This was first rolled out with 1-minute
data, which was a huge amount of data, but even at 1 minute the
insertion/classification Perl script would take only about 4-5 seconds.

In my opinion, not having the extra data that I can insert into ES
makes things a lot harder, so a native client in pmacct would need the
ability to do some extra stuff:

Correlate in/out ifIndexes with some data (e.g. an ifIndex Map)
Correlate Tags with some data (e.g. Port Type, etc)
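
As a sketch, this is the sort of enrichment done externally today (lookup
tables invented; field names match the sample document below):

# Hypothetical lookup tables, e.g. built from an NMS export.
IFINDEX_NAMES = {555: "ge-5/0/0.0", 542: "xe-0/1/0.0"}
TAG_CLASSES = {1486: "On Net CDN"}

def enrich(flow):
    # Attach human-readable interface names and a traffic class to a
    # decoded pmacct record before indexing it into Elasticsearch.
    flow["inifname"] = IFINDEX_NAMES.get(flow.get("iface_in"), "Unknown")
    flow["outifname"] = IFINDEX_NAMES.get(flow.get("iface_out"), "Unknown")
    flow["class"] = TAG_CLASSES.get(flow.get("tag"), "Unknown")
    return flow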

I think the idea of being able to do this natively in pmacct is great,
but I don't mind the small hit at all for being able to flexibly add
more data from other sources into this.. Expanding pre-tagging would
be one way to do it, but I've also got bits where we actually look at
the source/dest IP and classify it based on our IPAM as well (but
don't ever store the source/dest IP).. IMHO the flexibility of just
using the pmacct client to query is totally worth it.

Some of my early examples of how I did some of the parsing is here:

http://somuch.fail/~tjackson/flows_to_es/

The final document we store in Elasticsearch is:

{
  "_index": "flow-full-2015-02-10-13",
  "_type": "flowdata",
  "_id": "AUtzk0xQ3fLt3GYsjpA1",
  "_score": 0.7958426,
  "_source": {
    "inifname": "ge-5/0/0.0",
    "inifdescr": "[CDN] To  Cluster",
    "@timestamp": 1423573202000,
    "inout": "Output",
    "avg_size": 79,
    "pps": 7,
    "stats": {
      "src_comms": "",
      "tcp_flags": 0,
      "bytes": 161792,
      "as_src": "",
      "port_src": 36552,
      "ip_proto": "udp",
      "port_dst": 53,
      "tag2": 4,
      "iface_in": 555,
      "packets": 2048,
      "as_dst": "X",
      "tos": 0,
      "iface_out": 542,
      "comms": "",
      "tag": 1486
    },
    "region": 1000,
    "outifdescr": "Unknown",
    "router": "gw2.",
    "_timestamp": 1423573202000,
    "bps": 8902,
    "outifname": "Unknown",
    "class": "On Net CDN"
  }
}


--
Tim

On Tue, Feb 10, 2015 at 9:56 AM, Mike Bowie mbo...@rocketspace.com wrote:
> Good morning folks,
>
> First of all, my sincerest thanks to those who contribute, and have
> contributed previously to pmacct. It's a superb tool for us, and has given
> us considerably greater clarity of data than the commercial tools we've
> evaluated in the marketplace. We're a NetBSD shop, and save a minor patch[2]
> to execv, it builds and runs extremely well for us.
>
> Historically, we've dumped our pmacct data into pgsql, and been moderately
> happy with the results... we grok out what we need and all is well in the
> world.
>
> Recently, we've started to look at applying Elasticsearch and Kibana to the
> equation, based currently on the excellent Python based work of Pier Carlo
> Chiodi from https://github.com/pierky/pmacct-to-elasticsearch.
>
> As we look at this in more of a production sense, I'm keen to keep our
> moving parts, and dependencies to a minimum, so am looking at the
> possibility of writing[1] a native pmacct backend to interact with
> Elasticsearch.
>
> Before I get too far down this path, I'm interested to know if:
>  - Anyone is already engaged in a similar effort
>  - There is additional expertise out there which may be available
>  - There is any interest in seeing this sort of addition developed
>
> Any feedback welcome.

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] pmacct-to-elasticsearch

2014-12-22 Thread Tim Jackson
I too use pmacct to insert data into ElasticSearch..

One super helpful thing that Paolo added in CVS a few weeks ago is a
command line option to return how many seconds it's been since the IMT
has been cleared.. This allows you to calculate a BPS/PPS value to
insert (or for DoS detection, etc).

Here we use perl to do it with a few subs like:

# Returns JSON
sub retrieve_flows {
    my $pmacctbin = shift;
    my $pipe = shift;
    my $primitive = shift;
    my $filter = shift;
    my @flows = `$pmacctbin -p $pipe -l -O json -c "$primitive" -M "$filter"`;
    return @flows;
}

sub clear_flows {
    my $pmacctbin = shift;
    my $pipe = shift;
    my $flows = `$pmacctbin -l -p $pipe -e`;
}

sub get_flow_duration {
    my $pmacctbin = shift;
    my $pipe = shift;
    my $duration = `$pmacctbin -p $pipe -i`;
    chomp($duration);
    if ($duration =~ /never/i || $duration > 518400) {
        # if it returns never, clamp high to make everything low bitrate
        $duration = 518400;
    } elsif ($duration < 60) {
        $duration = 60;
    }
    return $duration;
}
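
A Python port of the same clamp, plus the rate math it enables (binary and
pipe paths come from the caller, as in the Perl):

import subprocess

def get_flow_duration(pmacctbin, pipe):
    # '-i' returns seconds since the IMT was last cleared, or 'never';
    # clamp to [60, 518400] as the Perl above does.
    out = subprocess.run([pmacctbin, "-p", pipe, "-i"],
                         stdout=subprocess.PIPE, universal_newlines=True,
                         check=True).stdout.strip()
    if "never" in out.lower():
        return 518400
    return min(max(int(out), 60), 518400)

def rates(flow, duration):
    # Per-record BPS/PPS values to insert alongside the flow.
    return {"bps": int(flow["bytes"]) * 8 // duration,
            "pps": int(flow["packets"]) // duration}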

I also do all of the insertion from Perl, which lets me do things like
cache a copy of all ifindexes of routers that are classified from our
NMS, tag this, etc..

--
Tim

On Mon, Dec 22, 2014 at 12:47 PM, Pier Carlo Chiodi pie...@pierky.com wrote:
> Hello,
>
> I wish to share here a script that I made in the hope that it will be
> helpful to whoever might be involved in pmacct / ElasticSearch / Kibana
> integration.
>
> It's pmacct-to-elasticsearch
> (https://github.com/pierky/pmacct-to-elasticsearch); as you can easily guess
> it reads pmacct output and sends it to ElasticSearch for indexing. More
> details on the GitHub page.
>
> A simple setup guide is available on my blog:
> http://blog.pierky.com/integration-of-pmacct-with-elasticsearch-and-kibana
>
> Please consider it as a beta version; I will be happy to hear feedback from
> anyone who wants to test it.
>
> Regards,
>
> --
> Pier Carlo Chiodi
> http://pierky.com/aboutme

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


[pmacct-discussion] pretag Limitations

2014-11-11 Thread Tim Jackson
Are there any practical limitations to the pretagging file? I slipped up on
filtering the interfaces that generate this file, and it seemed to start but
not actually push any traffic into the IMT..

(The file was ~120k entries)

--
Tim

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] Seconds since IMT had its statistics cleared?

2014-11-10 Thread Tim Jackson
It's not a huge deal, I already deal with it in the scripts that pull
data from the pmacct client, but one less thing to keep track of or
store outside is always better..

Thanks!

--
Tim

On Mon, Nov 10, 2014 at 12:28 PM, Paolo Lucente pa...@pmacct.net wrote:
> Hi Tim,
>
> This info is currently not available; you would have to script something.
> But it's not a big piece of work: 1) timestamping the event and 2)
> introducing a knob in the pmacct client in order to fetch the info. I'm
> putting it on my todo list. Let me know how urgent this is for you,
> including whether you can script something for the shorter-term.
>
> Cheers,
> Paolo
>
> On Mon, Nov 10, 2014 at 10:45:47AM -0800, Tim Jackson wrote:
> > Is there any way to determine the amount of time that an IMT has been
> > collecting data since it last had its stats cleared?
> >
> > --
> > Tim
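
A sketch of the shorter-term scripting Paolo suggests, timestamping each
clear yourself (the stamp path is invented):

import time
import subprocess

STAMP = "/var/run/imt-cleared.stamp"

def clear_and_stamp(pipe):
    # Clear the IMT and record when we did it.
    subprocess.run(["pmacct", "-p", pipe, "-e"], check=True)
    with open(STAMP, "w") as f:
        f.write(str(int(time.time())))

def seconds_since_clear():
    # Elapsed collection time since the last recorded clear.
    with open(STAMP) as f:
        return int(time.time()) - int(f.read())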

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] 1.5.0rc3 nfacctd segfaults

2014-06-24 Thread Tim Jackson
It doesn't actually appear to be the clearing of the statistics that causes
the memory to balloon.. I've started clearing both IMT tables I have set up
every 2 minutes and:

# ps aux | grep -e 'USER\|nfacct'
USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
root  1512  0.0  6.7 208380 129404 ?   Ss   08:10   0:02
nfacctd: Core Process [default]
root  1514  0.0  6.7 210916 130180 ?   S08:10   0:02
nfacctd: Tee Plugin [fanout]
root  1527  0.0  6.8 211172 132232 ?   Ss   08:10   0:02
nfacctd: Core Process [default]
root  1529  0.0  7.4 221128 142364 ?   S08:10   0:03
nfacctd: PostgreSQL Plugin [as]
root  1554  0.1 13.5 340128 261184 ?   Ss   08:10   0:05
nfacctd: Core Process [default]
root  1556  0.3 10.5 282840 203400 ?   S08:10   0:12
nfacctd: IMT Plugin [full]
root  1557  0.2 27.5 608480 529064 ?   S08:10   0:10
nfacctd: IMT Plugin [dst]
root  2740  0.0  0.0 103256   816 pts/0R+   09:17   0:00 grep
-e USER\|nfacct

# ps aux | grep -e 'USER\|nfacct'
USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
root  1512  0.0  6.7 208380 129384 ?   Ss   08:10   0:03
nfacctd: Core Process [default]
root  1514  0.0  6.7 210916 130160 ?   S08:10   0:02
nfacctd: Tee Plugin [fanout]
root  1527  0.0  6.8 211172 132212 ?   Ss   08:10   0:04
nfacctd: Core Process [default]
root  1529  0.0  7.4 221260 142452 ?   S08:10   0:04
nfacctd: PostgreSQL Plugin [as]
root  1554  0.1 13.5 340128 261164 ?   Ss   08:10   0:07
nfacctd: Core Process [default]
root  1556  0.3 10.9 288932 209520 ?   S08:10   0:18
nfacctd: IMT Plugin [full]
root  1557  0.2 35.7 765692 686324 ?   S08:10   0:14
nfacctd: IMT Plugin [dst]
root  3114  0.3  0.8 222716 16044 ?S09:40   0:00
nfacctd: pgsql Plugin -- DB Writer [as]
root  3160  0.0  0.0 103256   816 pts/0R+   09:42   0:00 grep
-e USER\|nfacct


Is it possible my query is causing this?

This runs every 2 minutes:

pmacct -p /tmp/nfacctd-dst.pipe -l -O json -a -c tag2 -M 2;3 -T packets,1000

Configuration:

daemonize: true
nfacctd_port: 5680
plugins: memory[full], memory[dst]

aggregate[full]: tag, tag2, in_iface, out_iface, src_as, dst_as,
src_host, dst_host, proto, src_port, dst_port, tcpflags, ext_comm,
src_ext_comm
aggregate[dst]: tag, tag2, in_iface, dst_as, dst_host

imt_path[full]: /tmp/nfacctd-full.pipe
imt_path[dst]: /tmp/nfacctd-dst.pipe

pre_tag_map: /opt/pmacct/etc/pretag.map

! Not sure if needed
nfacctd_time_new: true
!
nfacctd_renormalize: true

plugin_pipe_size: 131072000
plugin_buffer_size: 6400
imt_buckets: 65537
imt_mem_pools_size: 1024000
imt_mem_pools_number: 160


On Mon, Jun 23, 2014 at 4:13 PM, Paolo Lucente pa...@pmacct.net wrote:
> Hi,
>
> Can you then verify/confirm if it's the clearing of the statistics
> generating the issue? Determining how to reproduce the issue would
> help a lot to quickly solve the bug.
>
> Cheers,
> Paolo

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] 1.5.0rc3 nfacctd segfaults

2014-06-23 Thread Tim Jackson
It looks like the IMT of the one I keep clearing the statistics on is
ballooning.. It starts around 200-300MB then climbs up..

USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
root  4516  0.0  6.4 208372 123172 ?   Ss   Jun19   3:03
nfacctd: Core Process [default]
root  4518  0.0  6.4 210908 123264 ?   SJun19   2:40
nfacctd: Tee Plugin [fanout]
root  4553  0.0  6.5 211168 125608 ?   Ss   Jun19   3:21
nfacctd: Core Process [default]
root  4555  0.0  7.1 221392 137116 ?   SJun19   2:51
nfacctd: PostgreSQL Plugin [as]
root 10522  0.1  5.0 340124 96760 ?Ss   11:04   0:09
nfacctd: Core Process [default]
root 10524  0.2  9.2 302656 176924 ?   S11:04   0:13
nfacctd: IMT Plugin [full]
root 10525  0.3 34.1 854392 656748 ?   S11:04   0:17
nfacctd: IMT Plugin [dst]
root 12282  0.0  0.0 103256   832 pts/1S+   12:38   0:00 grep
-e USER\|nfacct

USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
root  4516  0.0  6.6 208372 128152 ?   Ss   Jun19   3:03
nfacctd: Core Process [default]
root  4518  0.0  6.6 210908 128248 ?   SJun19   2:40
nfacctd: Tee Plugin [fanout]
root  4553  0.0  6.6 211168 128440 ?   Ss   Jun19   3:21
nfacctd: Core Process [default]
root  4555  0.0  7.2 221392 139928 ?   SJun19   2:52
nfacctd: PostgreSQL Plugin [as]
root 10522  0.1 10.5 340124 203344 ?   Ss   11:04   0:10
nfacctd: Core Process [default]
root 10524  0.2 11.6 302656 222992 ?   S11:04   0:13
nfacctd: IMT Plugin [full]
root 10525  0.3 38.9 885676 748416 ?   S11:04   0:18
nfacctd: IMT Plugin [dst]
root 12306  0.2  0.7 222848 13896 ?S12:39   0:00
nfacctd: pgsql Plugin -- DB Writer [as]
root 12362  0.0  0.0 103252   784 pts/1D+   12:42   0:00 grep
-e USER\|nfacct

USER   PID %CPU %MEMVSZ   RSS TTY  STAT START   TIME COMMAND
root  4516  0.0  6.1 208372 119088 ?   Ss   Jun19   3:03
nfacctd: Core Process [default]
root  4518  0.0  6.2 210908 119184 ?   SJun19   2:40
nfacctd: Tee Plugin [fanout]
root  4553  0.0  6.6 211168 127832 ?   Ss   Jun19   3:21
nfacctd: Core Process [default]
root  4555  0.0  7.2 221392 139260 ?   SJun19   2:52
nfacctd: PostgreSQL Plugin [as]
root 10522  0.1 10.8 340124 208884 ?   Ss   11:04   0:10
nfacctd: Core Process [default]
root 10524  0.2 11.0 302656 211920 ?   S11:04   0:13
nfacctd: IMT Plugin [full]
root 10525  0.3 40.0 901516 769124 ?   S11:04   0:19
nfacctd: IMT Plugin [dst]
root 12401  0.0  0.0 103252   652 pts/1D+   12:45   0:00 grep
-e USER\|nfacct


The [full] IMT is never cleared, and doesn't seem to exhibit this
behavior... I'm performing the queries in this instance with a lock
now as well.

On Sat, Jun 21, 2014 at 10:05 AM, Paolo Lucente pa...@pmacct.net wrote:
> Hi Tim,
>
> Can you please track down memory utilization to see if it could
> be something related to that? Also, can you try performing a query
> with lock:
>
> shell> pmacct -l .. parameters ..
>
> If none of this helps, then yes, proceed to capture segfault data
> with gdb.
>
> Cheers,
> Paolo
>
> On Fri, Jun 20, 2014 at 11:45:57AM -0700, Tim Jackson wrote:
> > We're having some issues using nfacctd with IMT.. After running for
> > ~6-8 hours ingesting flow data, we see segfaults and the pmacct client
> > ceases to function properly returning:
> >
> > ERROR: missing EOF from server
> >
> > Querying pmacct client every 2 minutes with:
> >
> > pmacct -p nfacctd-dst.pipe -O json -a -c tag2 -M 2;3 -T packets,1000
> >
> > If that returns data, we then:
> >
> > pmacct -p nfacctd-dst.pipe -e
> >
> > Associated segfault from nfacctd daemon:
> >
> > Jun 20 10:32:02 kernel: nfacctd[21874]: segfault at 21 ip
> > 0047613d sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
> > Jun 20 10:36:02 kernel: nfacctd[21930]: segfault at 21 ip
> > 0047613d sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
> > Jun 20 10:40:02 kernel: nfacctd[21983]: segfault at 21 ip
> > 0047613d sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
> > Jun 20 10:46:02 kernel: nfacctd[22068]: segfault at 21 ip
> > 0047613d sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
> > Jun 20 10:54:02 kernel: nfacctd[22188]: segfault at 21 ip
> > 0047613d sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
> > Jun 20 11:02:02 kernel: nfacctd[22350]: segfault at 21 ip
> > 0047613d sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
> > Jun 20 11:04:02 kernel: nfacctd[22374]: segfault at 21 ip
> > 0047613d sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
> > Jun 20 11:32:02 kernel: nfacctd[22903]: segfault at 4d8e6600 ip
> > 00476103 sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
> >
> > nfacctd Config:
> >
> > !daemonize: true
> > nfacctd_port: 5680
> > plugins: memory[full], memory[dst]
> >
> > aggregate[full]: tag, tag2, in_iface, out_iface, src_as, dst_as,
> > src_host, dst_host, proto, src_port, dst_port, tcpflags, ext_comm

[pmacct-discussion] 1.5.0rc3 nfacctd segfaults

2014-06-20 Thread Tim Jackson
We're having some issues using nfacctd with IMT.. After running for
~6-8 hours ingesting flow data, we see segfaults and the pmacct client
ceases to function properly returning:

ERROR: missing EOF from server

Querying pmacct client every 2 minutes with:

pmacct -p nfacctd-dst.pipe -O json -a -c tag2 -M 2;3 -T packets,1000

If that returns data, we then:

pmacct -p nfacctd-dst.pipe -e

Associated segfault from nfacctd daemon:

Jun 20 10:32:02 kernel: nfacctd[21874]: segfault at 21 ip
0047613d sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
Jun 20 10:36:02 kernel: nfacctd[21930]: segfault at 21 ip
0047613d sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
Jun 20 10:40:02 kernel: nfacctd[21983]: segfault at 21 ip
0047613d sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
Jun 20 10:46:02 kernel: nfacctd[22068]: segfault at 21 ip
0047613d sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
Jun 20 10:54:02 kernel: nfacctd[22188]: segfault at 21 ip
0047613d sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
Jun 20 11:02:02 kernel: nfacctd[22350]: segfault at 21 ip
0047613d sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
Jun 20 11:04:02 kernel: nfacctd[22374]: segfault at 21 ip
0047613d sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]
Jun 20 11:32:02 kernel: nfacctd[22903]: segfault at 4d8e6600 ip
00476103 sp 7fff9ad3e1b0 error 4 in nfacctd[40+de000]

nfacctd Config:

!daemonize: true
nfacctd_port: 5680
plugins: memory[full], memory[dst]

aggregate[full]: tag, tag2, in_iface, out_iface, src_as, dst_as,
src_host, dst_host, proto, src_port, dst_port, tcpflags, ext_comm,
src_ext_comm
aggregate[dst]: tag, tag2, in_iface, dst_as, dst_host

imt_path[full]: /tmp/nfacctd-full.pipe
imt_path[dst]: /tmp/nfacctd-dst.pipe

pre_tag_map: /opt/pmacct/etc/pretag.map

nfacctd_time_new: true
nfacctd_renormalize: true

plugin_pipe_size: 131072000
plugin_buffer_size: 6400
imt_buckets: 65537
imt_mem_pools_size: 1024000

I'm working on capturing the debug output from nfacctd when this
segfault happens, but is there anything else I should capture to help
figure out why this is happening?

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists