Hi Paolo,
Would that account for the really strange duplicate-rebroadcast behaviour?


-Adam

On 11/21/2013 07:04 AM, Paolo Lucente wrote:
Hi Adam,

You are right: there is a bug in 1.5.0rc1 that shows up when no
explicit value is set for plugin_pipe_size and/or plugin_buffer_size.
The issue has already been fixed in the CVS code:

http://www.mail-archive.com/pmacct-commits@pmacct.net/msg00896.html
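
In the meantime, setting both directives explicitly (as Adam did
further down in the thread) works around it. For example, with the
same illustrative values Adam used:

plugin_buffer_size: 102400
plugin_pipe_size[all]: 102400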

Cheers,
Paolo

On Thu, Nov 21, 2013 at 12:58:20AM -0500, Adam Jacob Muller wrote:
To continue,

The instance of nfacctd that I said had no issues and was receiving
netflow data just exhibited the same behavior.
This instance was running without the buffer/pipe sizes set; it
handles proportionally less traffic (probably an order of magnitude
less), so it perhaps just took longer to enter the same looping
state.

The instance configured with buffer/pipe sizes has been running for
about three hours on 1.5 code without issues; previously it would
only run for about 10 minutes.

I think the culprit here lies with pipe/buffer sizes and tee.

-Adam

On 11/20/13, 11:17 PM, Adam Jacob Muller wrote:
Gentoo and 64-bit.

Compiled from sources that I just downloaded.

I just downloaded 0.14.3 to test, and it appears to work correctly so
far (running for about 10 minutes).

With a single odd exception: 0.14.3 initially refused to start,
failing on this mmap() call:

mmap(NULL, 18446744072050714384, PROT_READ|PROT_WRITE,
     MAP_SHARED|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)

It's trying to mmap() 18 exabytes of memory? I am not the NSA; I
don't have that much RAM :)
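
That figure, incidentally, is the classic signature of a signed size
calculation going negative and then being converted to mmap()'s
unsigned size_t length argument. A minimal, hypothetical C sketch --
not pmacct's actual code; -1658837232 is simply the negative value
that reproduces the number in the trace above:

#include <stdio.h>

int main(void)
{
    /* Hypothetical: a buffer/pipe size computation goes negative
       in a signed int... */
    int computed_size = -1658837232;

    /* ...and is then used as mmap()'s length, which is size_t.
       On a 64-bit platform the implicit conversion wraps modulo
       2^64: */
    size_t len = (size_t)computed_size;

    printf("%zu\n", len);  /* prints 18446744072050714384 */
    return 0;
}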

Setting:
plugin_buffer_size: 102400
plugin_pipe_size[all]: 102400

Seems to have resolved that; it's been humming along for about 15
minutes now.

-Adam

On 11/20/13, 11:02 PM, Brent Van Dussen wrote:
Hi Adam,

Just as a point on the curve: we have Juniper MX IPFIX ->
nfacctd 1.5.0rc1 tee replicating out to a few different
destinations (one local, two remote) and we see exactly double the
input traffic on output, as expected. This is on a Debian 7 Linux
box.

What OS/Arch are you running?

-Brent

-----Original Message-----
From: pmacct-discussion
[mailto:pmacct-discussion-boun...@pmacct.net] On Behalf Of Adam
Jacob Muller
Sent: Wednesday, November 20, 2013 7:36 PM
To: pmacct-discussion@pmacct.net
Subject: Re: [pmacct-discussion] nfacctd, ipfix and tee transparent mode

Hi,
That was actually one of my first thoughts, but my topology here is
extremely simple, and I only see the excess traffic on the
replicator server going server->network, not network->server, which
is what I would expect to see in the case of a loop.

The single collector in this case is another nfacctd process that
has no tee configured; it shouldn't even be possible for it to loop
the data back (I think the version of nfacctd there is too old to
even have 'tee', actually).

Also, tcpdump definitely sees a MUCH higher rate of outbound packets
than inbound; something like the pair of captures below makes the
asymmetry obvious.
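
A sketch of the check (placeholders as in my config below; the
packet counts are arbitrary -- timing how long each capture takes to
complete shows the rate difference):

# inbound: exporters -> replicator, on the nfacctd listening port
tcpdump -n -c 1000 dst host a.b.c.d and udp port 2101

# outbound: replicator -> collector, on the tee receiver port
tcpdump -n -c 1000 src host a.b.c.d and udp port 2100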

-Adam

On 11/20/13, 10:23 PM, Nathan Kennedy wrote:
Hi Adam,

I'm guessing you've already thought of this, but is it at all
possible that one of the destinations you're tee-ing the
packets to (e.f.g.h) is then feeding it back (to a.b.c.d)?

Thanks,
Nathan.

-----Original Message-----
From: pmacct-discussion [mailto:pmacct-discussion-boun...@pmacct.net]
On Behalf Of Adam Jacob Muller
Sent: Thursday, 21 November 2013 4:05 p.m.
To: pmacct-discussion@pmacct.net
Subject: [pmacct-discussion] nfacctd, ipfix and tee transparent mode

Hi,
I have an interesting issue that I think results from a
perhaps-unique configuration.

I have a very simple nfacctd setup on one box; its goal is to
receive IPFIX data from two sources (Juniper MX routers) and
replicate it out to a few places.

The configuration is -very- simple:
nfacctd_ip: a.b.c.d
nfacctd_port: 2101
plugins: tee[all]
tee_receiver[all]: e.f.g.h:2100
tee_transparent: true
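
For the "few places", my understanding is that each destination gets
its own tee plugin instance and receiver directive; a sketch, with
i.j.k.l standing in for a hypothetical second collector:

plugins: tee[a], tee[b]
tee_receiver[a]: e.f.g.h:2100
tee_receiver[b]: i.j.k.l:2100
tee_transparent: true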

When I turn up the data feed to this box, everything works fine for
a few minutes, and then nfacctd suddenly gets into what looks like
an internal loop and starts rebroadcasting the same packets [I think
-- I did not specifically confirm this] (not a single packet, but
perhaps the same group) as quickly as possible -- at line rate,
pegging the server's gigabit uplink.

Some (hopefully) useful data points:

I have another nfacctd instance teeing with an almost
identical configuration, except that the source is NetFlow v5
(Cisco).

tee_transparent has no effect on the problem; I prefer it on, but
nfacctd still breaks with it off.

Disabling the netflow source does not stop the packets: nfacctd
continues to rebroadcast the same (again, presumably the same) set
of packets over and over until I kill the process (ctrl-c still
works fine).

This seems very unusual: I assume something this badly broken would
have been obvious in testing/development, but my configuration is
exceedingly simple, and I don't see where I did (or even could) go
wrong.

Thanks in advance for any advice you can offer,

-Adam


_______________________________________________
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists