Re: [pmacct-discussion] 1.7.5 with static ndpi

2020-06-25 Thread Marc Sune
Steve,

Message from Stephen Clark on Thu, 25 June 2020 at 13:56:

> Hi Paolo,
>
> We have pmacct installed on a number of remote systems, and
> it's just more moving parts to keep updated when we also have to
> install/update nDPI.
>

Not sure what your requirements are, but if the concern is the remote
systems' connectivity, docker images can be downloaded as a tar.gz and
imported locally, without the need for external connectivity.

https://docs.docker.com/engine/reference/commandline/save/
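
For example, a minimal sketch (the image and tag are illustrative - pick
the one you actually deploy):

```
# on a machine with connectivity: export the image to a tarball
docker save -o nfacctd.tar pmacct/nfacctd:latest
# transfer nfacctd.tar to the remote system (scp, USB, ...), then import it there
docker load -i nfacctd.tar
```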


>
> Also I have used the following configure line
>
> ./configure '--enable-ndpi' --with-ndpi-static-lib=/usr/local/lib/
> '--enable-l2'
> '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
> '--enable-st-bins'
>
> and still get a dynamically linked pmacctd. Also the dynamic lib and
> static lib
> are both in /usr/local/lib
>

Can you send the resulting config.log?
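
In the meantime, a quick sanity check on the binary you built (plain ldd
usage; if the grep prints nothing, libndpi was not linked dynamically):

```
ldd src/pmacctd | grep -i ndpi
```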

marc


>
> I just removed the dynamic libs and got pmacctd built - I am testing it
> now.
>
> Thanks for your help,
> Steve
>
>
> On 6/24/20 4:30 PM, Paolo Lucente wrote:
> > Hi Steve,
> >
> > Apart from asking the obvious - personal curiosity! - why do you want to
> > link against a static nDPI library? There are a couple of main avenues I
> > can point you to depending on your goal:
> >
> > 1) You can supply configure with a --with-ndpi-static-lib knob; given that
> > the static lib and the dynamic lib are in different places, you should
> > be game. Simplifying even further: should you make the 'shared object'
> > library disappear, things will be forced onto the static library;
> >
> > 2) Did you see the "pmacct & Docker" email that just circulated on
> > the list? If you are searching for a static library, perhaps it is
> > time to look into a container instead? :-D
> >
> > Paolo
> >
> > On Tue, Jun 23, 2020 at 01:44:32PM -0400, Stephen Clark wrote:
> >> Hello,
> >>
> >> Can anyone give me the magic configuration items I need to build using
> >> a static libndpi.a?
> >>
> >> I have spent all day trying to do this without any success. It seems
> >> like I tried every combination that ./configure --help displays.
> >>
> >> Any help would be appreciated.
> >>
> >> Thanks,
> >> Steve
> >>
> >>
>
>
> --
>
> "They that give up essential liberty to obtain temporary safety,
> deserve neither liberty nor safety."  (Ben Franklin)
>
> "The course of history shows that as a government grows, liberty
> decreases."  (Thomas Jefferson)
>
> "Beer is proof God loves us and wants us to be happy!" (Ben Franklin)
>
>


Re: [pmacct-discussion] master - ndpi on 32bit CentOS 6

2020-07-09 Thread Marc Sune
Steve,

Try running it with valgrind and copy & paste the warnings, if any:

valgrind pmacct/src/pmacctd -f ./mypaolo.conf -I v1.7.5_v9_ndpi_class_paolo.pcap
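
If the plain run is not conclusive, --track-origins=yes (a standard
valgrind option, at the cost of a slower run) makes uninitialised-value
warnings point back at their origin - potentially useful for bogus counters:

```
valgrind --track-origins=yes pmacct/src/pmacctd -f ./mypaolo.conf -I v1.7.5_v9_ndpi_class_paolo.pcap
```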

marc

Message from Paolo Lucente on Thu, 9 July 2020 at 21:08:

>
> I did test on a Debian 10:
>
> 4.19.0-8-686-pae #1 SMP Debian 4.19.98-1 (2020-01-26) i686 GNU/Linux
>
> As I was suspecting, passing the pcap you sent me through a daemon
> compiled on this box went fine (that is, I can't reproduce the issue).
> From what I see, by the way, this is not something related to nDPI.
>
> Paolo
>
> On 09/07/2020 18:19, Steve Clark wrote:
> > Thanks for checking, could you tell me what distro and version you
> > tested on?
> >
> > Also, when I compile on 32 bit I get a lot of warnings about redefines
> > between ndpi.h and pmacct.h - do you get those also?
> >
> >
> >
> >
> > On 07/09/2020 11:55 AM, Paolo Lucente wrote:
> >> Hi Steve,
> >>
> >> I do have an i686-based VM available. I can't say everything is tested on
> >> i686 but I tend to check every now and then that nothing fundamental is
> >> broken. I took the example config you used, compiled master code with
> >> the same config switches as you did (essentially --enable-ndpi) and had
> >> no joy reproducing the issue.
> >>
> >> You could send me your capture privately and I may try with that one
> >> (although I am not highly positive it will be a successful test); or you
> >> could arrange me access to your box to read the pcap. Let me know.
> >>
> >> Paolo
> >>
> >> On 09/07/2020 14:54, Steve Clark wrote:
> >>> Hi Paolo,
> >>>
> >>> I have compiled master with nDPI on both 32bit and 64bit CentOS 6
> >>> systems. The 64 bit pmacctd seems
> >>> to work fine. But I get bogus byte counts when I run the 32bit version
> >>> against the same pcap file.
> >>>
> >>> Just wondered if you have done any testing on 32bit intel system with
> >>> the above combination.
> >>>
> >>> below is the output when using 32bit pmacctd - first the pmacctd
> >>> invocation then the nfacctd output
> >>> pmacct/src/pmacctd -f ./mypaolo.conf -I v1.7.5_v9_ndpi_class_paolo.pcap
> >>> INFO ( default/core ): Promiscuous Mode Accounting Daemon, pmacctd
> >>> 1.7.6-git (20200707-01)
> >>> INFO ( default/core ):  '--enable-ndpi'
> >>> '--with-ndpi-static-lib=/usr/local/lib/' '--enable-l2'
> >>> '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
> >>> '--enable-st-bins'
> >>> INFO ( default/core ): Reading configuration file
> >>> '/var/lib/pgsql/sclark/mypaolo.conf'.
> >>> INFO ( p4p1/nfprobe ): NetFlow probe plugin is originally based on
> >>> softflowd 0.9.7 software, Copyright 2002 Damien Miller <
> d...@mindrot.org>
> >>> All rights reserved.
> >>> INFO ( p4p1/nfprobe ):   TCP timeout: 3600s
> >>> INFO ( p4p1/nfprobe ):  TCP post-RST timeout: 120s
> >>> INFO ( p4p1/nfprobe ):  TCP post-FIN timeout: 300s
> >>> INFO ( p4p1/nfprobe ):   UDP timeout: 300s
> >>> INFO ( p4p1/nfprobe ):  ICMP timeout: 300s
> >>> INFO ( p4p1/nfprobe ):   General timeout: 3600s
> >>> INFO ( p4p1/nfprobe ):  Maximum lifetime: 604800s
> >>> INFO ( p4p1/nfprobe ):   Expiry interval: 60s
> >>> INFO ( default/core ): PCAP capture file, sleeping for 2 seconds
> >>> INFO ( p4p1/nfprobe ): Exporting flows to [172.24.109.157]:rrac
> >>> WARN ( p4p1/nfprobe ): Shutting down on user request.
> >>> INFO ( default/core ): OK, Exiting ...
> >>>
> >>> src/nfacctd -f examples/nfacctd-print.conf.example
> >>> INFO ( default/core ): NetFlow Accounting Daemon, nfacctd 1.7.6-git
> >>> (20200623-00)
> >>> INFO ( default/core ):  '--enable-ndpi'
> >>> '--with-ndpi-static-lib=/usr/local/lib/' '--enable-l2'
> >>> '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
> >>> '--enable-st-bins'
> >>> INFO ( default/core ): Reading configuration file
> >>> '/var/lib/pgsql/sclark/pmacct/examples/nfacctd-print.conf.example'.
> >>> INFO ( default/core ): waiting for NetFlow/IPFIX data on :::5678
> >>> INFO ( foo/print ): cache entries=16411 base cache memory=56322552 bytes
> >>> WARN ( foo/print ): no print_output_file and no print_output_lock_file
> >>> defined.
> >>> INFO ( foo/print ): *** Purging cache - START (PID: 21926) ***
> >>> CLASS    SRC_IP          DST_IP          SRC_PORT  DST_PORT  PROTOCOL  PACKETS  BYTES
> >>> NetFlow  172.24.110.104  172.24.109.247  41900     2055      udp       26       1576253010996
> >>> NetFlow  172.24.110.104  172.24.109.247  58131     2055      udp       21       1576253008620
> >>> INFO ( foo/print ): *** Purging cache - END (PID: 21926, QN: 2/2, ET:
> >>> 0) ***
> >>> ^CINFO ( foo/print ): *** Purging cache - START (PID: 21559) ***
> >>> INFO ( foo/print ): *** Purging cache - END (PID: 21559, QN: 0/0, ET:
> >>> X) ***
> >>> INFO ( default/core ): OK, Exiting ...
> >>>
> >>> Now the output when using the same .pcap file on the 64

Re: [pmacct-discussion] Linux Kernel Segfault

2021-04-20 Thread Marc Sune
Kevin,

Looks like this:
https://www.mail-archive.com/pmacct-discussion@pmacct.net/msg04063.html

If it is, then the work-around
(https://www.mail-archive.com/pmacct-discussion@pmacct.net/msg04069.html)
is to set `sql_host`, or to move to the HEAD of the master branch.
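
For example, assuming a MySQL plugin talking to a local server (the
address is only illustrative), the work-around amounts to one extra line
of configuration:

```
! work-around for the crash: set sql_host explicitly
sql_host: 127.0.0.1
```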

Marc

Message from Kevin Battersby on Wed, 21 April 2021 at 0:19:
>
>
>
> Hello,
>
>
>
> After compiling the latest version, pmacctd runs but does not log anything
> to MySQL.
>
>
>
> The following error is being seen in the kernel log:
>
>
>
> Apr 20 15:13:31 zeballos-gateway-new kernel: [ 4816.558020] pmacctd[788]: segfault at 0 ip 7f782ef1c9a5 sp 7ffcbbd54450 error 4 in libc-2.28.so[7f782eeb6000+148000]
> Apr 20 15:13:31 zeballos-gateway-new kernel: [ 4816.558028] Code: 00 00 0f 1f 00 41 57 41 56 41 55 41 54 49 89 fc 55 53 48 83 ec 78 44 0f b6 0e 64 48 8b 04 25 28 00 00 00 48 89 44 24 68 31 c0 <0f> b6 07 84 c0 0f 84 32 04 00 00 45 84 c9 0f 84 3e 04 00 00 48 89
>
> Any ideas how this might be fixed?
>
>
>
> Regards,
>
> Kevin Battersby 
>
> 250-514-2063 Direct
>


Re: [pmacct-discussion] [docker-doctors] docker nfacct ... strange udp source ip !

2021-06-09 Thread Marc Sune
Alessandro,

inline

Message from Alessandro Montano | FIBERTELECOM on Wed, 9 June 2021 at 10:12:
>
> Hi Paolo (and Marc),
>
> this is my first post here ... first of all THANKS FOR YOUR GREAT JOB :)
>
> I'm using the pmacct/nfacctd container from docker-hub
> (+kafka+telegraf+influxdb+grafana) and it's really a powerful tool.
>
> The senders are JUNIPER MX204 routers, using j-flow (extended netflow).
>
> NFACCTD VERSION:
> NetFlow Accounting Daemon, nfacctd 1.7.6-git [20201226-0 (7ad9d1b)]
>  '--enable-mysql' '--enable-pgsql' '--enable-sqlite3' '--enable-kafka' 
> '--enable-geoipv2' '--enable-jansson' '--enable-rabbitmq' '--enable-nflog' 
> '--enable-ndpi' '--enable-zmq' '--enable-avro' '--enable-serdes' 
> '--enable-redis' '--enable-gnutls' 'AVRO_CFLAGS=-I/usr/local/avro/include' 
> 'AVRO_LIBS=-L/usr/local/avro/lib -lavro' '--enable-l2' 
> '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins' 
> '--enable-st-bins'
>
> SYSTEM:
> Linux 76afde386f6f 5.4.0-73-generic #82-Ubuntu SMP Wed Apr 14 17:39:42 UTC 
> 2021 x86_64 GNU/Linux
>
> CONFIG:
> debug: false
> daemonize: false
> pidfile: /var/run/nfacctd.pid
> logfile: /var/log/pmacct/nfacctd.log
> nfacctd_renormalize: true
> nfacctd_port: 20013
> aggregate[k]: peer_src_ip, peer_dst_ip, in_iface, out_iface, vlan, 
> sampling_direction, etype, src_as, dst_as, as_path, proto, src_net, src_mask, 
> dst_net, dst_mask, flows
> nfacctd_time_new: true
> plugins: kafka[k]
> kafka_output[k]: json
> kafka_topic[k]: nfacct
> kafka_broker_host[k]: kafka
> kafka_broker_port[k]: 9092
> kafka_refresh_time[k]: 60
> kafka_history[k]: 1m
> kafka_history_roundoff[k]: m
> kafka_max_writers[k]: 1
> kafka_markers[k]: true
> networks_file_no_lpm: true
> use_ip_next_hop: true
>
> DOCKER-COMPOSE:
> #Docker version 20.10.2, build 20.10.2-0ubuntu1~20.04.2
> #docker-compose version 1.29.2, build 5becea4c
> version: "3.9"
> services:
>   nfacct:
> networks:
>   - ingress
> image: pmacct/nfacctd
> restart: on-failure
> ports:
>   - "20013:20013/udp"
> volumes:
>   - /etc/localtime:/etc/localtime
>   - ./nfacct/etc:/etc/pmacct
>   - ./nfacct/lib:/var/lib/pmacct
>   - ./nfacct/log:/var/log/pmacct
> networks:
>   ingress:
> name: ingress
> ipam:
>   config:
>   - subnet: 192.168.200.0/24
>
> My problem is the value of the PEER_IP_SRC field ... at the start everything is
> correct, and it works well for a (long) while ... hours ... days ...
> I have ten routers, so "peer_ip_src": "151.157.228.xxx", where xxx easily
> identifies the sender. Perfect.
>
> Suddenly ... "peer_ip_src": "192.168.200.1" for all records (and I lose the
> sender info!!!) ...
>
> It seems that docker-proxy decides to do nat/masquerading and translates the
> source ip of the udp stream.
> The only way for me to get the correct behavior again is to stop/start the
> container.
>
> How can I fix it? Or is there an alternative way to obtain the same info
> (router ip) from inside the netflow stream, rather than from the udp packet?

Paolo is definitely the right person to answer how "peer_ip_src" is populated.

However, there is something that I don't fully understand. To the best
of my knowledge, even when binding ports, docker (actually the kernel,
configured by docker) shouldn't masquerade ingress traffic at all - if
masquerading is truly what happens. And that certainly shouldn't happen
"randomly" in the middle of the execution.

My first thought would be that this is something related to pmacct
itself, and that records are incorrectly generated but traffic is ok.

I doubt the linux kernel iptables rules would randomly change the way
traffic is manipulated, unless of course something else on that
machine/server is reloading iptables and the resulting ruleset is
_slightly different_ for the traffic flowing towards the docker
container, effectively modifying the streams that go to pmacct (e.g.
rule priority reordering). That _could_ explain why restarting the
daemon suddenly helps, as the order would be restored.

Some more info would be needed to rule out an iptables/docker issue:

* Dump iptables -L and iptables -t nat -L before and after the
issue and compare.
* Use iptables -vL and iptables -t nat -vL to monitor counters before
and after the issue, especially in the NAT table.
* Get inside the running container
(https://github.com/pmacct/pmacct/blob/master/docs/DOCKER.md#opening-a-shell-on-a-running-container),
install tcpdump, and write the pcap to a file, before and after the
incident.
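
Something along these lines (the port is taken from your config; file
names are just examples):

```
# on the host, before the issue shows up
iptables -L -n > filter_before.txt
iptables -t nat -vL > nat_before.txt
# once the issue shows up
iptables -L -n > filter_after.txt
iptables -t nat -vL > nat_after.txt
diff filter_before.txt filter_after.txt
diff nat_before.txt nat_after.txt
# inside the container: capture what actually reaches nfacctd
tcpdump -n -w /tmp/nfacctd.pcap udp port 20013
```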

Since these dumps might contain sensitive data, you can send them
anonymized or in private.

Hopefully with this info we will see if it's an iptables issue or we
have to look somewhere else.

Regards
marc

>
> Thanks for your support.
>
> Cheers.
>
> --
> AlexIT

Re: [pmacct-discussion] [docker-doctors] docker nfacct ... strange udp source ip !

2021-06-09 Thread Marc Sune
somewhere else though.

marc

>
> What definitely works is not to expose specific ports, but to configure your
> container in docker-compose to attach directly to the host network. In
> that case, there will be no translation rules and no source NAT, and the
> container will be directly connected to all of the host's network interfaces.
> In that case, be aware that Docker DNS will not work, so to export
> information from the pmacct container further to kafka, you would need to send
> it to "localhost" if the kafka container is running on the same host, and not
> to "kafka". This shouldn't be a big problem in your setup.
>
> Btw, I am using docker swarm and not docker-compose; although they both use
> docker-compose files with similar syntax, I don't think there is any
> difference in their behavior.
>
> Hope this helps
>
> Kind regards,
> Dusan
>
> On Wed, Jun 9, 2021 at 3:29 PM Paolo Lucente  wrote:
>>
>>
>> Hi Alessandro,
>>
>> (thanks for the kind words, first and foremost)
>>
>> Indeed, the test that Marc proposes is very sound, ie. check the actual
>> packets coming in "on the wire" with tcpdump: do they really change
>> sender IP address?
>>
>> Let me also confirm that what is used to populate peer_ip_src is the
>> sender IP address coming straight from the socket (Marc's question) and,
>> contrary to sFlow, there is typically no other way to infer
>> such info (Alessandro's question).
>>
>> Paolo
>>
>>
>> On 9/6/21 14:51, Marc Sune wrote:
>> > Alessandro,

Re: [pmacct-discussion] [docker-doctors] docker nfacct ... strange udp source ip !

2021-06-09 Thread Marc Sune
Dusan,

Thanks. I seem to have misunderstood you before. That sounds like it, yes.

After reading through most of them, this might be _the_ issue:

https://github.com/moby/moby/issues/16720#issuecomment-435637740
https://github.com/moby/moby/issues/16720#issuecomment-444862701

Alessandro, can you try the suggested command once the container is in the failed state?

conntrack -D -p udp

Marc

Message from Dusan Pajin on Wed, 9 June 2021 at 21:54:
>
> Hi,
>
> Alessandro, do you use docker-compose or docker swarm (docker stack)?
>
> The behavior I am referring to is described in a number of issues on Github,
> for example:
> https://github.com/moby/moby/issues/16720
> https://github.com/docker/for-linux/issues/182
> https://github.com/moby/moby/issues/18845
> https://github.com/moby/libnetwork/issues/1994
> https://github.com/robcowart/elastiflow/issues/414
> In some of those issues you will find links to other issues and so on.
>
> I don't have an explanation why this works for you in some situations and
> not in others.
> Since that is the case, you might try clearing the conntrack table, which is
> described in some of the issues above.
> Using the host network is certainly not convenient, but it is doable.
>
> Kind regards,
> Dusan
>
>
>
> On Wed, Jun 9, 2021 at 7:37 PM Marc Sune  wrote:
>>
>> Dusan, Alessandro,
>>
>> Let me answer Dusan first.
>>
>> Message from Dusan Pajin on Wed, 9 June 2021 at 18:08:
>> >
>> > Hi Alessandro,
>> >
>> > I would say that this is a "known" issue or behavior in docker which is
>> > experienced by everyone who has ever wanted to receive syslog, netflow,
>> > telemetry or any other similar UDP stream from network devices. When you
>> > expose ports in your docker-compose file, docker will create the iptables
>> > rules to steer the traffic to your container in docker's bridge
>> > network, but unfortunately it also translates the source IP address of the
>> > packets. I am not sure what the reasoning behind such behavior is. If
>> > you try to search for solutions to this issue, you will find some
>> > proposals, but none of them worked in my case.
>>
>> That is not my understanding. I've also double checked with a devops
>> Docker guru in my organization.
>>
>> In docker's default network mode, masquerading only happens for
>> egress traffic, not ingress.
>>
>> I actually tried it locally by running an httpd container (apache2)
>> and redirecting port 8080 on the "host" to port 80 on the container. The
>> container is on the docker range; the LAN address of my laptop is
>> 192.168.1.36, with .33 being another client in my LAN.
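>>
>> For reference, a sketch of that test (stock httpd image; the exact
>> image/tag is an assumption):
>>
>> # publish host port 8080 -> container port 80, detached
>> docker run -d -p 8080:80 httpd
>> # then, from another LAN host: curl http://192.168.1.36:8080/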
>>
>> root@d64c65384e87:/usr/local/apache2# tcpdump -l -n
>> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
>> listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
>> 17:21:49.546067 IP 192.168.1.33.46595 > 172.17.0.3.80: Flags [F.], seq 2777556344, ack 4139714538, win 172, options [nop,nop,TS val 21290101 ecr 3311681356], length 0
>> 17:21:49.546379 IP 192.168.1.33.46591 > 172.17.0.3.80: Flags [F.], seq 3001175791, ack 61192428, win 172, options [nop,nop,TS val 21290101 ecr 3311686360], length 0
>> 17:21:49.546402 IP 172.17.0.3.80 > 192.168.1.33.46591: Flags [.], ack 1, win 236, options [nop,nop,TS val 3311689311 ecr 21290101], length 0
>> 17:21:49.546845 IP 172.17.0.3.80 > 192.168.1.33.46595: Flags [F.], seq 1, ack 1, win 227, options [nop,nop,TS val 3311689311 ecr 21290101], length 0
>> 17:21:49.550993 IP 192.168.1.33.46595 > 172.17.0.3.80: Flags [.], ack 2, win 172, options [nop,nop,TS val 21290110 ecr 3311689311], length 0
>>
>> That works as expected, showing the real 1.33 address.
>>
>> Mind that there is a lot of confusion, because firewall services in
>> the system's OS can interfere with the rules set by the docker daemon
>> itself:
>>
>> https://stackoverflow.com/a/47913950/9321563
>>
>> Alessandro,
>>
>> I need to analyse your rules in detail, but what is clear is that
>> "something" is modifying them (see the first two rules)... whether
>> these two lines in particular are causing the issue, I am not sure:
>>
>> Pre:
>>
>> Chain POSTROUTING (policy ACCEPT)
>> target prot opt source   destination
>> MASQUERADE  all  --  192.168.200.0/24 anywhere
>> MASQUERADE  all  --  172.17.0.0/16anywhere
>> MASQUERADE  tcp  --  192.168.200.3192.168.200.3tcp dpt:80

Re: [pmacct-discussion] [docker-doctors] docker nfacct ... strange udp source ip !

2021-06-10 Thread Marc Sune
Alessandro,

Since `conntrack -D -p udp` does fix the issue, it's clear the conntrack
cache is incorrect.

The conjecture here is that the pmacct docker container is started (or,
more probably, restarted) while the UDP traffic is flowing. Linux's
connection tracker (conntrack) keeps track of connections, and also acts
as a cache in the kernel. Since, when this happens, the docker container
is still in the process of being launched and not all iptables rules are
pushed yet (sort of a race condition), some packets set the conntrack
state incorrectly, and it stays that way until you manually flush it.

Is this happening randomly, or is the container (or some container in
general) started/restarted before this happens?

I see there are some commits in https://github.com/moby/moby that try
to address something like this [1]. I don't see any other commit relevant
to this issue, but it might be worth trying the latest docker CE version
(and a newer kernel).
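
For reference, the quickest way to record the versions in play when
reporting back (standard docker/uname invocations):

```
docker version --format '{{.Server.Version}}'
uname -r
```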

Let me know under which conditions this happens, and if you can
reproduce it with a newer OS/docker version, and we can take it from
there.

As a _very last_ resort, and if this happens randomly (which I
wouldn't understand why..), one could flush the UDP conntrack info
regularly, if a) you can afford the performance penalty of doing so, and
possibly some lost frames, and b) you can afford up to X seconds of
records not being processed, where X is the periodicity of the flush...
ugly.
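
If it came to that, a system-crontab sketch (one minute is cron's
smallest granularity, so X=60 here):

```
# /etc/crontab format (note the user field): flush UDP conntrack entries every minute
* * * * * root conntrack -D -p udp >/dev/null 2>&1
```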

Marc

[1]

```
commit 1c4286bcffcdc6668f84570a2754c78cccbbf7e1
Author: Flavio Crisciani 
Date:   Mon Apr 10 17:12:14 2017 -0700

Adding test for docker/docker#8795

When a container was being destroyed was possible to have
flows in conntrack left behind on the host.
If a flow is present into the conntrack table, the packet
processing will skip the POSTROUTING table of iptables and
will use the information in conntrack to do the translation.
For this reason is possible that long lived flows created
towards a container that is destroyed, will actually affect
new flows incoming to the host, creating erroneous conditions
where traffic cannot reach new containers.
The fix takes care of cleaning them up when a container is
destroyed.

The test of this commit is actually reproducing the condition
where an UDP flow is established towards a container that is then
destroyed. The test verifies that the flow established is gone
after the container is destroyed.

Signed-off-by: Flavio Crisciani 
```

The issue it tries to fix: https://github.com/moby/moby/issues/8795.
But this is a 2017 commit... I doubt your docker version doesn't have
it.

I see kubernetes has had bug reports of a similar problem as recently as
2020, which they are obviously fixing in their own container mgmt:

https://github.com/kubernetes/kubernetes/issues/102559

>
> Marc,
>
> The system is a freshly installed ubuntu20.04, with really nothing installed on
> the host; it's a minimal install + sshd + docker ... nothing else, no crons, no
> tasks running, no daemons.
>
> For the two lines you noticed swapped
>
> MASQUERADE  all  --  192.168.200.0/24 anywhere
> MASQUERADE  all  --  172.17.0.0/16anywhere
>
> I don't think there is any problem in swapping them, because the source nets
> are different: the first is the docker bridge and the second is docker0 (unused).
> Anyway, let's swap them.
>
> !! problem just happened ... let's check with tcpdump
>
> # docker exec -ti open-nti_nfacct_1 apt install tcpdump >/dev/null && docker exec -ti open-nti_nfacct_1 tcpdump -n "udp port 20013"
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
> 22:49:37.294518 IP 192.168.200.1.33956 > 192.168.200.2.20013: UDP, length 380
> 22:49:37.295657 IP 192.168.200.1.33956 > 192.168.200.2.20013: UDP, length 380
> 22:49:37.296836 IP 192.168.200.1.33956 > 192.168.200.2.20013: UDP, length 380
> 22:49:37.298055 IP 192.168.200.1.33956 > 192.168.200.2.20013: UDP, length 380
> 22:49:37.299242 IP 192.168.200.1.33956 > 192.168.200.2.20013: UDP, length 380
> 22:49:37.300450 IP 192.168.200.1.33956 > 192.168.200.2.20013: UDP, length 290
> ^C
> 6 packets captured
> 6 packets received by filter
> 0 packets dropped by kernel
>
> !! what iptables says
>
> # iptables -t nat -vL POSTROUTING --line-numbers
> Chain POSTROUTING (policy ACCEPT 776 packets, 157K bytes)
> num  pkts bytes target     prot opt in  out              source            destination
> 1    9    540   MASQUERADE all  --  any !br-0b2348db16f3 192.168.200.0/24  anywhere
> 2    0    0     MASQUERADE all  --  any !docker0         172.17.0.0/16     anywhere
> 3    0    0     MASQUERADE udp  --  any any              192.168.200.2     192.168.200.2  udp dpt:20013
> 4    0    0     MASQUERADE tcp  --  any any              192.168.200.3     192.168.200.3  tcp dpt:8086
> 5    0    0     MASQUERADE tcp  --  any any              192.168.200.4     192.168.200.4  tcp dpt:3000
> 6    0

Re: [pmacct-discussion] docker

2021-10-02 Thread Marc Sune
Steven, John,

John, thank you for jumping in. I agree it's the proper solution.

I believe the reason why the first container immediately stops is
that, in docker, a container stays alive only as long as its main
process (the entrypoint) is alive. For the pmacctd container, the
entry point is:

https://github.com/pmacct/pmacct/blob/master/docker/pmacctd/Dockerfile#L11

When using the daemonize option (in pmacctd, not in docker), the main
process forks and the child process detaches from the parent, so that
the main process can exit and leave the daemon process - pmacctd in
this case - running in the background ([1]). Of course this makes
docker realise the entrypoint process has exited, and it therefore
stops the container.

John's explanation of docker's -d option is spot on (reference:
https://docs.docker.com/engine/reference/run/#detached-vs-foreground).
Btw, something you might want to look into when using -d: the docker
daemon can restart the container automatically, based on the reason
why the container stopped, via so-called restart policies:

https://docs.docker.com/engine/reference/run/#restart-policies---restart
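
Putting the pieces together for Steve's case, something like this should
keep the container running (paths and tag from his original command;
'unless-stopped' is just one of the available policies, and daemonize
stays false in pmacctd.conf, as he already found):

```
docker run -d --restart unless-stopped --privileged --network host \
  -v /etc/pmacct_netwolves/pmacctd.conf:/etc/pmacct/pmacctd.conf \
  pmacct/pmacctd:bleeding-edge
```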

Regards
Marc

[1]
https://github.com/pmacct/pmacct/blob/master/src/pmacctd.c#L613
https://github.com/pmacct/pmacct/blob/master/src/util.c#L95

Message from John Jensen on Fri, 1 October 2021 at 19:28:
>
> Hey Steve,
>
> It is the proper solution.
>
> To add some context, if you don't pass '-it' (-i keeps STDIN open on the 
> container and -t allocates a pseudo-TTY and attaches it to the STDIN of the 
> container) or '-d' (which spawns the container in 'detached' mode, which will 
> return your running container ID) to 'docker run', it defaults to running 
> your container in 'foreground mode'. When you ran your first docker command, 
> did you get presented with essentially nothing until you killed the container 
> with ctrl+c? I believe the default in foreground mode is to attach the host's 
> STDIN/STDOUT/STDERR to that of the container, so if you essentially "saw 
> nothing" then I would have expected you to see pmacct running in 'ps' output 
> in a different shell on the same box.
>
> The second 'docker run' command works because you're overriding the 
> entrypoint of the container at runtime to /bin/bash (as well as specifying 
> -it to 'docker run'), which would drop you to a bash shell inside the 
> container, where you can manually invoke pmacct.
>
> You'll almost always see processes run inside of containers (ie pmacct, 
> webservers, etc) configured to run in the foreground by convention, because 
> you're already daemonizing/detaching "up a level" when you pass the -d flag 
> to 'docker run' - this allows the process running inside of the container to 
> send logs to STDOUT/STDERR, which you can then look at by running the
> 'docker logs <container>' command.
>
> HTH
>
> -JJ
>
> On Fri, Oct 1, 2021 at 12:29 PM Steve Clark  wrote:
>>
>> Hi,
>>
>> I found if I set daemonize: false in my pmacctd.conf file and use the -d 
>> flag on the docker run line it seems to work.
>>
>> Don't know if this is the proper solution though.
>>
>> Thanks,
>> Steve
>>
>> On 10/1/21 7:21 AM, Steve Clark wrote:
>> > Hello,
>> >
>> >
>> > I am having trouble getting the "latest" or "bleeding-edge" docker image 
>> > to run by using the following command:
>> > docker run --privileged --network host -v 
>> > /etc/pmacct_netwolves/pmacctd.conf:/etc/pmacct/pmacctd.conf 
>> > pmacct/pmacctd:bleeding-edge
>> > $ docker ps
> >> > CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
>> > Fri Oct  1 07:15:37 EDT 2021
>> >
> >> > but if I run the following command and then inside the container I run
> >> > pmacctd -f /etc/pmacct/pmacctd - it works
>> > docker run -it --privileged --network host -v 
>> > /etc/pmacct_netwolves/pmacctd.conf:/etc/pmacct/pmacctd.conf --entrypoint 
>> > /bin/bash  pmacct/pmacctd:bleeding-edge
>> >
>> > from another login on the same system
>> > V990002:~
>> > $ docker ps
> >> > CONTAINER ID   IMAGE                          COMMAND       CREATED          STATUS          PORTS     NAMES
> >> > d4b0beab1b0b   pmacct/pmacctd:bleeding-edge   "/bin/bash"   46 seconds ago   Up 45 seconds             silly_volhard
>> > Fri Oct  1 07:17:53 EDT 2021
>> > V990002:~
>> > $ ps awx|grep pmacct
>> > 18718 pts/1Sl+0:00 docker run -it --privileged --network host -v 
>> > /etc/pmacct_netwolves/pmacctd.conf:/etc/pmacct/pmacctd.conf --entrypoint 
>> > /bin/bash pmacct/pmacctd:bleeding-edge
>> > 18853 ?Ss 0:02 pmacctd: Core Process [default]
>> > 18856 ?S  0:00 pmacctd: Netflow Probe Plugin [eth1]
>> > 19348 pts/2S+ 0:00 grep --color=auto pmacct
>> >
>> > My system is CentOS 7.
>> > docker-ce-20.10.8-3.el7.x86_64
>> >
>> > Also I must add I am docker noobie.