Re: [pmacct-discussion] [docker-doctors] pmacctd in docker

2022-05-12 Thread Thomas Eckert
Great, thanks!
That was fast :-)

Regards,
  Thomas

On Tue, May 10, 2022 at 4:29 PM Paolo Lucente  wrote:

>
> Hi Thomas,
>
> I think some confusion may derive from the docs (to be improved) and
> the fact that 1.7.6 is old.
>
> Nevertheless, from the interface indexes in your last output (ie.
> 1872541466, 3698069186, etc.) I can tell that you configured
> pcap_ifindex to 'hash' (which is being honored, as you can see) in
> conjunction with pcap_interfaces_map.
>
> One issue in the code is certainly that it always requires an ifindex
> to be defined, even if pcap_ifindex is not set to 'map'. Another issue
> was that pcap_interfaces_map was silently discarded without a warning
> to notify you. Both of these issues have been addressed in the commit
> I just pushed:
>
>
> https://github.com/pmacct/pmacct/commit/02080179aef3e87527e4d1158700eee729f1a5c3
>
> Paolo
>
>
> On 9/5/22 14:31, Thomas Eckert wrote:
> > Hi Paolo,
> >
> > Thanks for the hint, I gave it a try. I'm observing the exact same
> > behavior between running pmacct in a container & directly on my host in
> > all cases. Tested with
> > * official docker image: 281904b7afd6
> > * official ubuntu 21.10 package: pmacct/impish,now 1.7.6-2 amd64
> >
> > I *think* the problem is with the interfaces' ifindex parameter when
> > using the pcap_interfaces_map config key - everything works fine
> > (capture files are printed) when instead using the pcap_interface key.
> > Whenever I do not specify the 'ifindex' in the file specified as value
> > for the pcap_interfaces_map config key, I do not observe capture files
> > being printed. Vice versa, if I do specify the 'ifindex' parameter, then
> > capture files are printed.
> >
> > In fact, if I do specify 'ifindex' for all interfaces listed when I run
> > "netstat -i", then pmacctd throws errors for my br-* & enx interfaces -
> > which it does not do when I omit 'ifindex' - almost as if it only then
> > realizes that it is supposed to access those interfaces at all. This
> > assumption is also based on the fact that I do see log lines such as
> these
> >  INFO ( default/core ): Reading configuration file
> > '/etc/pmacct/pmacctd.conf'.
> >  INFO ( default/core ): [/etc/pmacct/pcap-itf.conf] (re)loading map.
> >  INFO ( default/core ): [/etc/pmacct/pcap-itf.conf] map successfully
> > (re)loaded.
> >  INFO ( default/core ): [docker0,1872541466] link type is: 1   <=
> >  INFO ( default/core ): [eno2,3698069186] link type is: 1      <=
> >  INFO ( default/core ): [lo,2529615826] link type is: 1        <=
> >  INFO ( default/core ): [tun0,3990258693] link type is: 12     <=
> > when specifying 'ifindex', whereas the marked (<=) lines are missing
> > whenever I do not.
> >
> > Reading through the config key documentation some more, I found the
> > config key pcap_ifindex. Interestingly enough, using it does not yield
> > any difference in results - neither for value "sys" nor for value "hash"
> > - irrespective of all other settings I played around with.
> >
> > Assuming in pmacctd.conf the config key pcap_interfaces_map is used,
> > then this is what I speculate is effectively happening:
> > * pmacctd ignores config key pcap_ifindex
> > * instead, it expects 'ifindex' to be set in the interface mapping file
> > for each line
> > * each line where 'ifindex' is not set is ignored
> > * if 'ifindex' is missing on all lines, this results in a
> > "no-interface-being-listened-on" case without any warning/error
> > Summary: it seems like 'ifindex' is a mandatory parameter in the interface
> > mapping file, whereas the documentation says "pmacctd: mandatory keys:
> > ifname."
> >
> > My understanding of the documentation for the above-mentioned config keys
> > is that the behavior I'm observing is not as intended (e.g. 'ifindex'
> > effectively being required, pcap_ifindex effectively being ignored). So
> > I'm either making a mistake (e.g. in my config files), misunderstanding
> > the documentation, or encountering a bug - which I find difficult to
> > believe given how trivial my setup is.
> >
> > Any suggestions?
> >
> > Regards & Thanks,
> >Thomas
> >
> > On Sun, May 8, 2022 at 1:43 PM Paolo Lucente wrote:
> >
> >
> > Hi Thomas,
> >
> > The simplest thing I may recommend is to check that it all works outside a
> > container - this way you can easily isolate whether the issue is somehow
> > related to the container (config or interaction of pmacctd with the
> > container) or with the pmacct config itself.
> >
> > Paolo
> >
> >
> > On 6/5/22 06:05, Thomas Eckert wrote:
> >  > Hi everyone,
> >  >
> >  > pmacct starter here, trying to get pmacctd working inside of a
> > container
> >  > to listen to the (container's) host's traffic. I suppose this is
> > a, if
> >  > not the, standard use case for pmacctd in a container. So I'm
> > sure it
> >  > works in principle but I'm doing something wrong.

Re: [pmacct-discussion] [docker-doctors] pmacctd in docker

2022-05-10 Thread Paolo Lucente


Hi Thomas,

I think some confusion may derive from the docs (to be improved) and
the fact that 1.7.6 is old.


Nevertheless, from the interface indexes in your last output (ie.
1872541466, 3698069186, etc.) I can tell that you configured
pcap_ifindex to 'hash' (which is being honored, as you can see) in
conjunction with pcap_interfaces_map.


One issue in the code is certainly that it always requires an ifindex
to be defined, even if pcap_ifindex is not set to 'map'. Another issue
was that pcap_interfaces_map was silently discarded without a warning
to notify you. Both of these issues have been addressed in the commit
I just pushed:


https://github.com/pmacct/pmacct/commit/02080179aef3e87527e4d1158700eee729f1a5c3
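
In the meantime, on 1.7.6 a workaround consistent with Thomas' findings below
is to declare an ifindex explicitly for every entry in the file pointed to by
pcap_interfaces_map. A minimal sketch, assuming the usual key=value map syntax
(the ifindex values are arbitrary examples and only need to be unique per
interface; with pcap_ifindex set to 'hash' they are recomputed anyway):

    ifname=eno2     ifindex=10
    ifname=docker0  ifindex=20
    ifname=tun0     ifindex=30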

Paolo


On 9/5/22 14:31, Thomas Eckert wrote:

Hi Paolo,

Thanks for the hint, I gave it a try. I'm observing the exact same 
behavior between running pmacct in a container & directly on my host in 
all cases. Tested with

* official docker image: 281904b7afd6
* official ubuntu 21.10 package: pmacct/impish,now 1.7.6-2 amd64

I *think* the problem is with the interfaces' ifindex parameter when 
using the pcap_interfaces_map config key - everything works fine 
(capture files are printed) when instead using the pcap_interface key. 
Whenever I do not specify the 'ifindex' in the file specified as value 
for the pcap_interfaces_map config key, I do not observe capture files 
being printed. Vice versa, if I do specify the 'ifindex' parameter, then 
capture files are printed.


In fact, if I do specify 'ifindex' for all interfaces listed when I run 
"netstat -i", then pmacctd throws errors for my br-* & enx interfaces - 
which it does not do when I omit 'ifindex' - almost as if it only then 
realizes that it is supposed to access those interfaces at all. This 
assumption is also based on the fact that I do see log lines such as these
     INFO ( default/core ): Reading configuration file 
'/etc/pmacct/pmacctd.conf'.

     INFO ( default/core ): [/etc/pmacct/pcap-itf.conf] (re)loading map.
     INFO ( default/core ): [/etc/pmacct/pcap-itf.conf] map successfully 
(re)loaded.

     INFO ( default/core ): [docker0,1872541466] link type is: 1      <=
     INFO ( default/core ): [eno2,3698069186] link type is: 1         <=
     INFO ( default/core ): [lo,2529615826] link type is: 1           <=
     INFO ( default/core ): [tun0,3990258693] link type is: 12        <=
when specifying 'ifindex', whereas the marked (<=) lines are missing
whenever I do not.


Reading through the config key documentation some more, I found the 
config key pcap_ifindex. Interestingly enough, using it does not yield 
any difference in results - neither for value "sys" nor for value "hash" 
- irrespective of all other settings I played around with.


Assuming in pmacctd.conf the config key pcap_interfaces_map is used, 
then this is what I speculate is effectively happening:

* pmacctd ignores config key pcap_ifindex
* instead, it expects 'ifindex' to be set in the interface mapping file 
for each line

* each line where 'ifindex' is not set is ignored
* if 'ifindex' is missing on all lines, this results in a 
"no-interface-being-listened-on" case without any warning/error
Summary: it seems like 'ifindex' is a mandatory parameter in the interface
mapping file, whereas the documentation says "pmacctd: mandatory keys:
ifname."


My understanding of the documentation for the above-mentioned config keys is
that the behavior I'm observing is not as intended (e.g. 'ifindex'
effectively being required, pcap_ifindex effectively being ignored). So
I'm either making a mistake (e.g. in my config files), misunderstanding
the documentation, or encountering a bug - which I find difficult to
believe given how trivial my setup is.


Any suggestions?

Regards & Thanks,
   Thomas

On Sun, May 8, 2022 at 1:43 PM Paolo Lucente wrote:



Hi Thomas,

The simplest thing I may recommend is to check that it all works outside a
container - this way you can easily isolate whether the issue is somehow
related to the container (config or interaction of pmacctd with the
container) or with the pmacct config itself.

Paolo


On 6/5/22 06:05, Thomas Eckert wrote:
 > Hi everyone,
 >
 > pmacct starter here, trying to get pmacctd working inside of a
container
 > to listen to the (container's) host's traffic. I suppose this is
a, if
 > not the, standard use case for pmacctd in a container. So I'm
sure it
 > works in principle but I'm doing something wrong.
 >
 > Command for starting the container:
 >      docker run \
 >          --privileged --network=host \
 >          --name pmacctd \
 >          -v /tmp/pmacctd.conf:/etc/pmacct/pmacctd.conf:ro \
 >          -v /tmp/pcap-itf.conf:/etc/pmacct/pcap-itf.conf:ro \
 >          -v /tmp//captures:/var/pmacct/captures:rw pmacctd-debug \
 >          pmacct/pmacctd:latest
  

Re: [pmacct-discussion] [docker-doctors] pmacctd in docker

2022-05-09 Thread Thomas Eckert
Hi Paolo,

Thanks for the hint, I gave it a try. I'm observing the exact same behavior
between running pmacct in a container & directly on my host in all cases.
Tested with
* official docker image: 281904b7afd6
* official ubuntu 21.10 package: pmacct/impish,now 1.7.6-2 amd64

I *think* the problem is with the interfaces' ifindex parameter when using
the pcap_interfaces_map config key - everything works fine (capture files
are printed) when instead using the pcap_interface key. Whenever I do not
specify the 'ifindex' in the file specified as value for the
pcap_interfaces_map config key, I do not observe capture files being
printed. Vice versa, if I do specify the 'ifindex' parameter, then capture
files are printed.

In fact, if I do specify 'ifindex' for all interfaces listed when I run
"netstat -i", then pmacctd throws errors for my br-* & enx interfaces -
which it does not do when I omit 'ifindex' - almost as if it only then
realizes that it is supposed to access those interfaces at all. This
assumption is also based on the fact that I do see log lines such as these
INFO ( default/core ): Reading configuration file
'/etc/pmacct/pmacctd.conf'.
INFO ( default/core ): [/etc/pmacct/pcap-itf.conf] (re)loading map.
INFO ( default/core ): [/etc/pmacct/pcap-itf.conf] map successfully
(re)loaded.
INFO ( default/core ): [docker0,1872541466] link type is: 1  <=
INFO ( default/core ): [eno2,3698069186] link type is: 1   <=
INFO ( default/core ): [lo,2529615826] link type is: 1<=
INFO ( default/core ): [tun0,3990258693] link type is: 12  <=
when specifying 'ifindex', whereas the marked (<=) lines are missing whenever
I do not.

Reading through the config key documentation some more, I found the config
key pcap_ifindex. Interestingly enough, using it does not yield any
difference in results - neither for value "sys" nor for value "hash" -
irrespective of all other settings I played around with.

Assuming in pmacctd.conf the config key pcap_interfaces_map is used, then
this is what I speculate is effectively happening:
* pmacctd ignores config key pcap_ifindex
* instead, it expects 'ifindex' to be set in the interface mapping file for
each line
* each line where 'ifindex' is not set is ignored
* if 'ifindex' is missing on all lines, this results in a
"no-interface-being-listened-on" case without any warning/error
Summary: it seems like 'ifindex' is a mandatory parameter in the interface
mapping file, whereas the documentation says "pmacctd: mandatory keys:
ifname."

My understanding of the documentation for the above-mentioned config keys is
that the behavior I'm observing is not as intended (e.g. 'ifindex'
effectively being required, pcap_ifindex effectively being ignored). So
I'm either making a mistake (e.g. in my config files), misunderstanding the
documentation, or encountering a bug - which I find difficult to believe
given how trivial my setup is.

Any suggestions?

Regards & Thanks,
  Thomas

On Sun, May 8, 2022 at 1:43 PM Paolo Lucente  wrote:

>
> Hi Thomas,
>
> The simplest thing I may recommend is to check that it all works outside a
> container - this way you can easily isolate whether the issue is somehow
> related to the container (config or interaction of pmacctd with the
> container) or with the pmacct config itself.
>
> Paolo
>
>
> On 6/5/22 06:05, Thomas Eckert wrote:
> > Hi everyone,
> >
> > pmacct starter here, trying to get pmacctd working inside of a container
> > to listen to the (container's) host's traffic. I suppose this is a, if
> > not the, standard use case for pmacctd in a container. So I'm sure it
> > works in principle but I'm doing something wrong.
> >
> > Command for starting the container:
> >  docker run \
> >  --privileged --network=host \
> >  --name pmacctd \
> >  -v /tmp/pmacctd.conf:/etc/pmacct/pmacctd.conf:ro \
> >  -v /tmp/pcap-itf.conf:/etc/pmacct/pcap-itf.conf:ro \
> >  -v /tmp//captures:/var/pmacct/captures:rw pmacctd-debug \
> >  pmacct/pmacctd:latest
> >
> > Contents of pmacctd.conf:
> >  daemonize: false
> >  snaplen: 1000
> >  pcap_interfaces_map: /etc/pmacct/pcap-itf.conf
> >  aggregate: src_host, dst_host, src_port, dst_port, proto, class
> >  plugins: print
> >  print_output: json
> >  print_output_file: /var/pmacct/captures/capture-%Y%m%d_%H%M.txt
> >  print_output_file_append: true
> >  print_history: 1m
> >  print_history_roundoff: m
> >  print_refresh_time: 5
> >
> > pcap-itf.conf contains all interfaces of the host (as per netstat -i) in
> > the form
> >  ifname=eno2
> > One line each, no other keys/values other than ifname.
> > Possibly important note: There's a VPN (openconnect) constantly running
> > on the host. The VPN's interface is listed in netstat -i and, as such,
> > included in pcap-itf.conf.
> >
> > Starting the container yields this output:
> >  INFO ( default/core ): Promiscuous Mode 

Re: [pmacct-discussion] [docker-doctors] pmacctd in docker

2022-05-08 Thread Paolo Lucente


Hi Thomas,

The simplest thing I may recommend is to check that it all works outside a
container - this way you can easily isolate whether the issue is somehow
related to the container (config or interaction of pmacctd with the
container) or with the pmacct config itself.


Paolo


On 6/5/22 06:05, Thomas Eckert wrote:

Hi everyone,

pmacct starter here, trying to get pmacctd working inside of a container 
to listen to the (container's) host's traffic. I suppose this is a, if 
not the, standard use case for pmacctd in a container. So I'm sure it 
works in principle but I'm doing something wrong.


Command for starting the container:
     docker run \
         --privileged --network=host \
         --name pmacctd \
         -v /tmp/pmacctd.conf:/etc/pmacct/pmacctd.conf:ro \
         -v /tmp/pcap-itf.conf:/etc/pmacct/pcap-itf.conf:ro \
         -v /tmp//captures:/var/pmacct/captures:rw pmacctd-debug \
         pmacct/pmacctd:latest

Contents of pmacctd.conf:
     daemonize: false
     snaplen: 1000
     pcap_interfaces_map: /etc/pmacct/pcap-itf.conf
     aggregate: src_host, dst_host, src_port, dst_port, proto, class
     plugins: print
     print_output: json
     print_output_file: /var/pmacct/captures/capture-%Y%m%d_%H%M.txt
     print_output_file_append: true
     print_history: 1m
     print_history_roundoff: m
     print_refresh_time: 5

pcap-itf.conf contains all interfaces of the host (as per netstat -i) in 
the form

     ifname=eno2
One line each, no other keys/values other than ifname.
Possibly important note: There's a VPN (openconnect) constantly running 
on the host. The VPN's interface is listed in netstat -i and, as such, 
included in pcap-itf.conf.
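
So, roughly like this (illustrative only; the interface names below are the
ones that show up in the log lines later in this thread):

    ifname=eno2
    ifname=docker0
    ifname=tun0
    ifname=lo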


Starting the container yields this output:
     INFO ( default/core ): Promiscuous Mode Accounting Daemon, pmacctd 
1.7.7-git (20211107-0 (ef37a415))
     INFO ( default/core ):  '--enable-mysql' '--enable-pgsql' 
'--enable-sqlite3' '--enable-kafka' '--enable-geoipv2' 
'--enable-jansson' '--enable-rabbitmq' '--enable-nflog' '--enable-ndpi' 
'--enable-zmq' '--enable-avro' '--enable-serdes' '--enable-redis' 
'--enable-gnutls' 'AVRO_CFLAGS=-I/usr/local/avro/include' 
'AVRO_LIBS=-L/usr/local/avro/lib -lavro' '--enable-l2' 
'--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins' 
'--enable-st-bins'
     INFO ( default/core ): Reading configuration file 
'/etc/pmacct/pmacctd.conf'.

     INFO ( default/core ): [/etc/pmacct/pcap-itf.conf] (re)loading map.
     INFO ( default/core ): [/etc/pmacct/pcap-itf.conf] map successfully 
(re)loaded.
     INFO ( default_print/print ): cache entries=16411 base cache 
memory=67875896 bytes

     INFO ( default_print/print ): JSON: setting object handlers.
     INFO ( default_print/print ): *** Purging cache - START (PID: 7) ***
     INFO ( default_print/print ): *** Purging cache - END (PID: 7, QN: 
0/0, ET: X) ***


Now, the problem is there are no files showing up in the 'captures' 
directory at all.


I tried these things  (as well as combinations thereof) to try to 
understand what's going on:
* change the time related settings in pmacct.conf: to dump data 
more/less often - also waited (increasingly) long, at times up to 20 minutes
* change 'snaplen' in pmacct.conf up & down - just to make sure I'm not 
running into buffering problems (just guessing, haven't read pmacct/d 
sources)
* change pcap-itf.conf to contain all interfaces or only the (host's) 
LAN + VPN interfaces (removing all others like docker's internal 'docker0')
* check permission settings of the 'captures' directory - this should be 
fine because a simple "touch /var/pmacct/captures/foobar" works and the 
file does exist as observed in the directory on the host itself
* run the container _not_ in host-sniffing mode, so just inside its own 
network-bubble, then cause traffic against it and observe it writing 
data to the 'captures' directory - works!


Because I started to doubt my own sanity I asked one of our Docker/K8S
experts to check my docker setup and he found no problem looking over
it, including via "docker inspect pmacct". So I'm fairly sure my mistake
is somewhere in the configuration of pmacctd but I cannot figure out
what it is. Would someone please point it out to me?


Regards & Thanks,
   Thomas

PS: It's been almost 10 years since I've posted to a mailing list. 
Please forgive any conventions/best-practices missteps.



___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] docker

2021-10-02 Thread Marc Sune
Steven, John,

John, thank you for jumping in. I agree it's the proper solution.

I believe the reason why the first container immediately stops is
that, in docker, a container stays alive only as long as its main process
(or entrypoint) is alive. For the pmacctd container, the entry point
is:

https://github.com/pmacct/pmacct/blob/master/docker/pmacctd/Dockerfile#L11

When using the daemonize option (in pmacctd, not in docker), the main
process will fork and the child process will detach from the parent,
so that the main process can exit and leave the daemon process - pmacctd
in this case - running in the background ([1]). Of course this
makes docker realise the entrypoint process has finished, and
therefore it stops the container.

John's explanation on docker's -d option is spot on (reference:
https://docs.docker.com/engine/reference/run/#detached-vs-foreground).
Btw, something you might want to look into when using -d: the docker
daemon can automatically restart the container, based on the reason
why the container stopped - the so-called restart policies:

https://docs.docker.com/engine/reference/run/#restart-policies---restart
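
Putting the two together, a minimal sketch (reusing the image and config path
from Steve's command below; the restart policy value is just one possible
choice):

    # keep 'daemonize: false' in pmacctd.conf and let Docker do the detaching
    docker run -d --restart unless-stopped --privileged --network host \
        -v /etc/pmacct_netwolves/pmacctd.conf:/etc/pmacct/pmacctd.conf \
        pmacct/pmacctd:bleeding-edge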

Regards
Marc

[1]
https://github.com/pmacct/pmacct/blob/master/src/pmacctd.c#L613
https://github.com/pmacct/pmacct/blob/master/src/util.c#L95

Message from John Jensen on Fri, Oct 1,
2021 at 19:28:
>
> Hey Steve,
>
> It is the proper solution.
>
> To add some context, if you don't pass '-it' (-i keeps STDIN open on the 
> container and -t allocates a pseudo-TTY and attaches it to the STDIN of the 
> container) or '-d' (which spawns the container in 'detached' mode, which will 
> return your running container ID) to 'docker run', it defaults to running 
> your container in 'foreground mode'. When you ran your first docker command, 
> did you get presented with essentially nothing until you killed the container 
> with ctrl+c? I believe the default in foreground mode is to attach the host's 
> STDIN/STDOUT/STDERR to that of the container, so if you essentially "saw 
> nothing" then I would have expected you to see pmacct running in 'ps' output 
> in a different shell on the same box.
>
> The second 'docker run' command works because you're overriding the 
> entrypoint of the container at runtime to /bin/bash (as well as specifying 
> -it to 'docker run'), which would drop you to a bash shell inside the 
> container, where you can manually invoke pmacct.
>
> You'll almost always see processes run inside of containers (ie pmacct, 
> webservers, etc) configured to run in the foreground by convention, because 
> you're already daemonizing/detaching "up a level" when you pass the -d flag 
> to 'docker run' - this allows the process running inside of the container to 
> send logs to STDOUT/STDERR which you can then look at by running the 'docker 
> logs ' command.
>
> HTH
>
> -JJ
>
> On Fri, Oct 1, 2021 at 12:29 PM Steve Clark  wrote:
>>
>> Hi,
>>
>> I found if I set daemonize: false in my pmacctd.conf file and use the -d 
>> flag on the docker run line it seems to work.
>>
>> Don't know if this is the proper solution though.
>>
>> Thanks,
>> Steve
>>
>> On 10/1/21 7:21 AM, Steve Clark wrote:
>> > Hello,
>> >
>> >
>> > I am having trouble getting the "latest" or "bleeding-edge" docker image 
>> > to run by using the following command:
>> > docker run --privileged --network host -v 
>> > /etc/pmacct_netwolves/pmacctd.conf:/etc/pmacct/pmacctd.conf 
>> > pmacct/pmacctd:bleeding-edge
>> > $ docker ps
>> > CONTAINER ID   IMAGE COMMAND   CREATED   STATUSPORTS NAMES
>> > Fri Oct  1 07:15:37 EDT 2021
>> >
>> > but if I run the following command and then inside the container I run
>> > pmacctd -f /etc/pmacct/pmacctd - it works
>> > docker run -it --privileged --network host -v 
>> > /etc/pmacct_netwolves/pmacctd.conf:/etc/pmacct/pmacctd.conf --entrypoint 
>> > /bin/bash  pmacct/pmacctd:bleeding-edge
>> >
>> > from another login on the same system
>> > V990002:~
>> > $ docker ps
>> > CONTAINER ID   IMAGE  COMMAND CREATED  
>> > STATUS  PORTS NAMES
>> > d4b0beab1b0b   pmacct/pmacctd:bleeding-edge   "/bin/bash"   46 seconds ago 
>> >   Up 45 seconds silly_volhard
>> > Fri Oct  1 07:17:53 EDT 2021
>> > V990002:~
>> > $ ps awx|grep pmacct
>> > 18718 pts/1Sl+0:00 docker run -it --privileged --network host -v 
>> > /etc/pmacct_netwolves/pmacctd.conf:/etc/pmacct/pmacctd.conf --entrypoint 
>> > /bin/bash pmacct/pmacctd:bleeding-edge
>> > 18853 ?Ss 0:02 pmacctd: Core Process [default]
>> > 18856 ?S  0:00 pmacctd: Netflow Probe Plugin [eth1]
>> > 19348 pts/2S+ 0:00 grep --color=auto pmacct
>> >
>> > My system is CentOS 7.
>> > docker-ce-20.10.8-3.el7.x86_64
>> >
>> > Also I must add I am docker noobie.

Re: [pmacct-discussion] docker

2021-10-01 Thread Steve Clark

Hi,

I found that if I set daemonize: false in my pmacctd.conf file and use the -d flag
on the docker run line, it seems to work.

Don't know if this is the proper solution though.

Thanks,
Steve

On 10/1/21 7:21 AM, Steve Clark wrote:

Hello,


I am having trouble getting the "latest" or "bleeding-edge" docker image to run 
by using the following command:
docker run --privileged --network host -v 
/etc/pmacct_netwolves/pmacctd.conf:/etc/pmacct/pmacctd.conf 
pmacct/pmacctd:bleeding-edge
$ docker ps
CONTAINER ID   IMAGE COMMAND   CREATED   STATUSPORTS NAMES
Fri Oct  1 07:15:37 EDT 2021

but if I run the following command and then inside the container I run pmacctd
-f /etc/pmacct/pmacctd - it works
docker run -it --privileged --network host -v 
/etc/pmacct_netwolves/pmacctd.conf:/etc/pmacct/pmacctd.conf --entrypoint 
/bin/bash  pmacct/pmacctd:bleeding-edge

from another login on the same system
V990002:~
$ docker ps
CONTAINER ID   IMAGE                          COMMAND       CREATED          STATUS          PORTS     NAMES
d4b0beab1b0b   pmacct/pmacctd:bleeding-edge   "/bin/bash"   46 seconds ago   Up 45 seconds             silly_volhard
Fri Oct  1 07:17:53 EDT 2021
V990002:~
$ ps awx|grep pmacct
18718 pts/1Sl+0:00 docker run -it --privileged --network host -v 
/etc/pmacct_netwolves/pmacctd.conf:/etc/pmacct/pmacctd.conf --entrypoint 
/bin/bash pmacct/pmacctd:bleeding-edge
18853 ?Ss 0:02 pmacctd: Core Process [default]
18856 ?S  0:00 pmacctd: Netflow Probe Plugin [eth1]
19348 pts/2S+ 0:00 grep --color=auto pmacct

My system is CentOS 7.
docker-ce-20.10.8-3.el7.x86_64

Also I must add I am a Docker noobie.

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists



___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] docker

2021-10-01 Thread John Jensen
Hey Steve,

It is the proper solution.

To add some context, if you don't pass '-it' (-i keeps STDIN open on the
container and -t allocates a pseudo-TTY and attaches it to the STDIN of the
container) or '-d' (which spawns the container in 'detached' mode, which
will return your running container ID) to 'docker run', it defaults to
running your container in 'foreground mode'. When you ran your first docker
command, did you get presented with essentially nothing until you killed
the container with ctrl+c? I believe the default in foreground mode is to
attach the host's STDIN/STDOUT/STDERR to that of the container, so if you
essentially "saw nothing" then I would have expected you to see pmacct
running in 'ps' output in a different shell on the same box.

The second 'docker run' command works because you're overriding the
entrypoint of the container at runtime to /bin/bash (as well as specifying
-it to 'docker run'), which would drop you to a bash shell inside the
container, where you can manually invoke pmacct.

You'll almost always see processes run inside of containers (ie pmacct,
webservers, etc) configured to run in the foreground by convention, because
you're already daemonizing/detaching "up a level" when you pass the -d flag
to 'docker run' - this allows the process running inside of the container
to send logs to STDOUT/STDERR which you can then look at by running the
'docker logs ' command.
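
For example, assuming a container started with -d and named 'pmacctd',
something like:

    docker logs -f pmacctd          # follow the container's STDOUT/STDERR
    docker logs --tail 100 pmacctd  # show only the last 100 lines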

HTH

-JJ

On Fri, Oct 1, 2021 at 12:29 PM Steve Clark  wrote:

> Hi,
>
> I found if I set daemonize: false in my pmacctd.conf file and use the -d
> flag on the docker run line it seems to work.
>
> Don't know if this is the proper solution though.
>
> Thanks,
> Steve
>
> On 10/1/21 7:21 AM, Steve Clark wrote:
> > Hello,
> >
> >
> > I am having trouble getting the "latest" or "bleeding-edge" docker image
> to run by using the following command:
> > docker run --privileged --network host -v
> /etc/pmacct_netwolves/pmacctd.conf:/etc/pmacct/pmacctd.conf
> pmacct/pmacctd:bleeding-edge
> > $ docker ps
> > CONTAINER ID   IMAGE COMMAND   CREATED   STATUSPORTS NAMES
> > Fri Oct  1 07:15:37 EDT 2021
> >
> > but if I run the following command and then inside the container I run
> pmacctd -f /etc/pmacct/pmacctd - it works
> > docker run -it --privileged --network host -v
> /etc/pmacct_netwolves/pmacctd.conf:/etc/pmacct/pmacctd.conf --entrypoint
> /bin/bash  pmacct/pmacctd:bleeding-edge
> >
> > from another login on the same system
> > V990002:~
> > $ docker ps
> > CONTAINER ID   IMAGE                          COMMAND       CREATED          STATUS          PORTS     NAMES
> > d4b0beab1b0b   pmacct/pmacctd:bleeding-edge   "/bin/bash"   46 seconds ago   Up 45 seconds             silly_volhard
> > Fri Oct  1 07:17:53 EDT 2021
> > V990002:~
> > $ ps awx|grep pmacct
> > 18718 pts/1Sl+0:00 docker run -it --privileged --network host -v
> /etc/pmacct_netwolves/pmacctd.conf:/etc/pmacct/pmacctd.conf --entrypoint
> /bin/bash pmacct/pmacctd:bleeding-edge
> > 18853 ?Ss 0:02 pmacctd: Core Process [default]
> > 18856 ?S  0:00 pmacctd: Netflow Probe Plugin [eth1]
> > 19348 pts/2S+ 0:00 grep --color=auto pmacct
> >
> > My system is CentOS 7.
> > docker-ce-20.10.8-3.el7.x86_64
> >
> > Also I must add I am docker noobie.
> >
> > ___
> > pmacct-discussion mailing list
> > http://www.pmacct.net/#mailinglists
>

Re: [pmacct-discussion] [docker-doctors] docker nfacct ... strange udp source ip !

2021-06-10 Thread Marc Sune
Alessandro,

Since conntrack -D -p udp does fix the issue, it's clear the conntrack
cache is incorrect.

The conjecture here is that the pmacct docker container is started (or,
more likely, restarted) while the UDP traffic is flowing. Linux's connection
tracker (conntrack) keeps track of connections, and also acts as a
cache in the kernel. Because at that point the docker container is still
in the process of being launched, and not all iptables rules are
pushed yet (sort of a race condition), some packets set the conntrack
state incorrectly, and it stays that way until you manually flush the entries.

Is this happening randomly, or is the container (or some container in
general) started/restarted before this happens?

I see there are some commits in https://github.com/moby/moby that try
to address something like this [1]. I don't see any other commit relevant to
this issue, but it might be worth trying the latest docker CE version (and
a newer kernel).

Let me know under which conditions this happens, and if you can
reproduce it with a newer OS/docker version, and we can take it from
there.

As a _very last_ resort, and if this happens randomly (which I
wouldn't understand why...), one could flush the UDP conntrack info
regularly, if a) you can afford the performance penalty of doing so, and
possibly some lost frames, and b) you can afford up to X seconds of records
not being processed, where X is the periodicity of the flush... ugly.
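
If one really had to go that route, it could be as simple as a periodic flush
(sketch only; assumes conntrack-tools is installed on the host and that the
caveats above are acceptable), e.g. an /etc/cron.d entry:

    # last-resort workaround: flush UDP conntrack entries every 5 minutes
    */5 * * * * root /usr/sbin/conntrack -D -p udp >/dev/null 2>&1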

Marc

[1]

```
commit 1c4286bcffcdc6668f84570a2754c78cccbbf7e1
Author: Flavio Crisciani 
Date:   Mon Apr 10 17:12:14 2017 -0700

Adding test for docker/docker#8795

When a container was being destroyed was possible to have
flows in conntrack left behind on the host.
If a flow is present into the conntrack table, the packet
processing will skip the POSTROUTING table of iptables and
will use the information in conntrack to do the translation.
For this reason is possible that long lived flows created
towards a container that is destroyed, will actually affect
new flows incoming to the host, creating erroneous conditions
where traffic cannot reach new containers.
The fix takes care of cleaning them up when a container is
destroyed.

The test of this commit is actually reproducing the condition
where an UDP flow is established towards a container that is then
destroyed. The test verifies that the flow established is gone
after the container is destroyed.

Signed-off-by: Flavio Crisciani 
```

Issue trying to be fixed: https://github.com/moby/moby/issues/8795.
But this is a 2017 commit... I doubt your docker version doesn't have
it.

I see kubernetes has bug reports of a similar problem as of 2020, which
they are obviously fixing in their container mgmt:

https://github.com/kubernetes/kubernetes/issues/102559

>
> Marc,
>
> The system is a freshly installed Ubuntu 20.04, really nothing installed on the
> host, it's a minimal install + sshd + docker ... nothing else, no crons, no tasks
> running, no daemons.
>
> For the two lines you noticed swapped
>
> MASQUERADE  all  --  192.168.200.0/24 anywhere
> MASQUERADE  all  --  172.17.0.0/16anywhere
>
> I don't think there is any problem in swapping them, because the source nets are
> different: the first is the docker bridge and the second is docker0 (unused).
> Anyway, let's swap them.
>
> !! problem just happened ... let's check with tcpdump
>
> # docker exec -ti open-nti_nfacct_1 apt install tcpdump>/dev/null && docker 
> exec -ti open-nti_nfacct_1 tcpdump -n "udp port 20013"
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
> 22:49:37.294518 IP 192.168.200.1.33956 > 192.168.200.2.20013: UDP, length 380
> 22:49:37.295657 IP 192.168.200.1.33956 > 192.168.200.2.20013: UDP, length 380
> 22:49:37.296836 IP 192.168.200.1.33956 > 192.168.200.2.20013: UDP, length 380
> 22:49:37.298055 IP 192.168.200.1.33956 > 192.168.200.2.20013: UDP, length 380
> 22:49:37.299242 IP 192.168.200.1.33956 > 192.168.200.2.20013: UDP, length 380
> 22:49:37.300450 IP 192.168.200.1.33956 > 192.168.200.2.20013: UDP, length 290
> ^C
> 6 packets captured
> 6 packets received by filter
> 0 packets dropped by kernel
>
> !! what iptables says
>
> # iptables -t nat -vL POSTROUTING --line-numbers
> Chain POSTROUTING (policy ACCEPT 776 packets, 157K bytes)
> num  pkts bytes target      prot opt in  out               source            destination
> 1    9    540   MASQUERADE  all  --  any !br-0b2348db16f3  192.168.200.0/24  anywhere
> 2    0    0     MASQUERADE  all  --  any !docker0          172.17.0.0/16     anywhere
> 3    0    0     MASQUERADE  udp  --  any any               192.168.200.2     192.168.200.2    udp dpt:20013
> 4    0    0     MASQUERADE  tcp  --  any any               192.168.200.3     192.168.200.3    tcp dpt:8086
> 5    0    0     MASQUERADE  tcp  --  any any               192.168.200.4     192.168.200.4    tcp dpt:3000
> 60 

Re: [pmacct-discussion] [docker-doctors] docker nfacct ... strange udp source ip !

2021-06-09 Thread Alessandro Montano | Fiber Telecom

Dusan,

I'm new to this docker world, I don't know swarm.

I think it's a normal docker-compose, version 1.29.2, build 5becea4c.

--

AlexIT

___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] [docker-doctors] docker nfacct ... strange udp source ip !

2021-06-09 Thread Marc Sune
Dusan,

Thanks. I seem to have misunderstood you before. That sounds like it, yes.

After reading through most, this might be _the_ issue:

https://github.com/moby/moby/issues/16720#issuecomment-435637740
https://github.com/moby/moby/issues/16720#issuecomment-444862701

Alessandro, can you try the suggested command once the container is in the failed state?

conntrack -D -p udp

Marc

Message from Dusan Pajin on Wed, Jun 9,
2021 at 21:54:
>
> Hi,
>
> Alessandro, do you use docker-compose or docker swarm (docker stack)?
>
> The behavior I am referring to is described in number of issues on Github, 
> for example:
> https://github.com/moby/moby/issues/16720
> https://github.com/docker/for-linux/issues/182
> https://github.com/moby/moby/issues/18845
> https://github.com/moby/libnetwork/issues/1994
> https://github.com/robcowart/elastiflow/issues/414
> In some of those issues you will find links to other issues and so on.
>
> I don't have an explanation why this works for you in some situations and 
> some not.
> Since that is the case, you might try clearing the conntrack table, which is
> described in some of the issues above.
> Using the host network is certainly not convenient, but it is doable.
>
> Kind regards,
> Dusan
>
>
>
> On Wed, Jun 9, 2021 at 7:37 PM Marc Sune  wrote:
>>
>> Dusan, Alessandro,
>>
>> Let me answer Dusan first.
>>
>> Message from Dusan Pajin on Wed, Jun 9,
>> 2021 at 18:08:
>> >
>> > Hi Alessandro,
>> >
>> > I would say that this is a "known" issue or behavior in docker which is 
>> > experienced by everyone who ever wanted to receive syslog, netflow, 
>> > telemetry or any other similar UDP stream from network devices. When you 
>> > expose ports in your docker-compose file, the docker will create the IP 
>> > tables rules to steer the traffic to your container in docker's bridge 
>> > network, but unfortunately also translate the source IP address of the 
>> > packets. I am not sure what is the reasoning behind such a behavior. If 
>> > you try to search for solutions for this issue, you will find some 
>> > proposals, but none of them used to work in my case.
>>
>> That is not my understanding. I've also double checked with a devops
>> Docker guru in my organization.
>>
>> In the default network docker mode, masquerading only happens for
>> egress traffic not ingress.
>>
>> I actually tried it locally by running an httpd container (apache2)
>> and redirect 8080 on the "host" to port 80 on the container. Container
>> is on the docker range, LAN on my laptop is 192.168.1.36, .33 being
>> another client in my LAN.
>>
>> root@d64c65384e87:/usr/local/apache2# tcpdump -l -n
>> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
>> listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
>> 17:21:49.546067 IP 192.168.1.33.46595 > 172.17.0.3.80: Flags [F.], seq
>> 2777556344, ack 4139714538, win 172, options [nop,nop,TS val 21290101
>> ecr 3311681356], length 0
>> 17:21:49.546379 IP 192.168.1.33.46591 > 172.17.0.3.80: Flags [F.], seq
>> 3001175791, ack 61192428, win 172, options [nop,nop,TS val 21290101
>> ecr 3311686360], length 0
>> 17:21:49.546402 IP 172.17.0.3.80 > 192.168.1.33.46591: Flags [.], ack
>> 1, win 236, options [nop,nop,TS val 3311689311 ecr 21290101], length 0
>> 17:21:49.546845 IP 172.17.0.3.80 > 192.168.1.33.46595: Flags [F.], seq
>> 1, ack 1, win 227, options [nop,nop,TS val 3311689311 ecr 21290101],
>> length 0
>> 17:21:49.550993 IP 192.168.1.33.46595 > 172.17.0.3.80: Flags [.], ack
>> 2, win 172, options [nop,nop,TS val 21290110 ecr 3311689311], length 0
>>
>> That works as expected, showing the real 1.33 address.
>>
>> Mind that there is a lot of confusion, because firewall services in
>> the system's OS can interfere with the rules set by the docker daemon
>> itself:
>>
>> https://stackoverflow.com/a/47913950/9321563
>>
>> Alessandro,
>>
>> I need to analyse in detail your rules, but what is clear is that
>> "something" is modifying them (see the two first rules)... whether
>> these two lines in particular are causing the issue, I am not sure:
>>
>> Pre:
>>
>> Chain POSTROUTING (policy ACCEPT)
>> target prot opt source   destination
>> MASQUERADE  all  --  192.168.200.0/24 anywhere
>> MASQUERADE  all  --  172.17.0.0/16anywhere
>> MASQUERADE  tcp  --  192.168.200.3192.168.200.3tcp dpt:8086
>> MASQUERADE  tcp  --  192.168.200.5192.168.200.5tcp dpt:3000
>> MASQUERADE  udp  --  192.168.200.9192.168.200.9udp dpt:5
>> MASQUERADE  tcp  --  192.168.200.11   192.168.200.11   tcp dpt:9092
>> MASQUERADE  udp  --  192.168.200.4192.168.200.4udp dpt:50005
>> MASQUERADE  udp  --  192.168.200.8192.168.200.8udp dpt:5600
>> MASQUERADE  tcp  --  192.168.200.8192.168.200.8tcp dpt:bgp
>> MASQUERADE  udp  --  192.168.200.2192.168.200.2udp dpt:20013
>>
>> Post:
>>
>> Chain POSTROUTING 

Re: [pmacct-discussion] [docker-doctors] docker nfacct ... strange udp source ip !

2021-06-09 Thread Marc Sune
Dusan, Alessandro,

Let me answer Dusan first.

Message from Dusan Pajin on Wed, Jun 9,
2021 at 18:08:
>
> Hi Alessandro,
>
> I would say that this is a "known" issue or behavior in docker which is 
> experienced by everyone who ever wanted to receive syslog, netflow, telemetry 
> or any other similar UDP stream from network devices. When you expose ports 
> in your docker-compose file, the docker will create the IP tables rules to 
> steer the traffic to your container in docker's bridge network, but 
> unfortunately also translate the source IP address of the packets. I am not 
> sure what is the reasoning behind such a behavior. If you try to search for 
> solutions for this issue, you will find some proposals, but none of them used 
> to work in my case.

That is not my understanding. I've also double checked with a devops
Docker guru in my organization.

In the default network docker mode, masquerading only happens for
egress traffic not ingress.

I actually tried it locally by running an httpd container (apache2)
and redirect 8080 on the "host" to port 80 on the container. Container
is on the docker range, LAN on my laptop is 192.168.1.36, .33 being
another client in my LAN.
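
A test like that can be reproduced with roughly the following (sketch only;
the image tag and container name are arbitrary):

    docker run -d --name httpd-test -p 8080:80 httpd
    # then fetch http://192.168.1.36:8080/ from 192.168.1.33 and run tcpdump
    # inside the container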

root@d64c65384e87:/usr/local/apache2# tcpdump -l -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
17:21:49.546067 IP 192.168.1.33.46595 > 172.17.0.3.80: Flags [F.], seq
2777556344, ack 4139714538, win 172, options [nop,nop,TS val 21290101
ecr 3311681356], length 0
17:21:49.546379 IP 192.168.1.33.46591 > 172.17.0.3.80: Flags [F.], seq
3001175791, ack 61192428, win 172, options [nop,nop,TS val 21290101
ecr 3311686360], length 0
17:21:49.546402 IP 172.17.0.3.80 > 192.168.1.33.46591: Flags [.], ack
1, win 236, options [nop,nop,TS val 3311689311 ecr 21290101], length 0
17:21:49.546845 IP 172.17.0.3.80 > 192.168.1.33.46595: Flags [F.], seq
1, ack 1, win 227, options [nop,nop,TS val 3311689311 ecr 21290101],
length 0
17:21:49.550993 IP 192.168.1.33.46595 > 172.17.0.3.80: Flags [.], ack
2, win 172, options [nop,nop,TS val 21290110 ecr 3311689311], length 0

That works as expected, showing the real 1.33 address.

Mind that there is a lot of confusion, because firewall services in
the system's OS can interfere with the rules set by the docker daemon
itself:

https://stackoverflow.com/a/47913950/9321563

Alessandro,

I need to analyse your rules in detail, but what is clear is that
"something" is modifying them (see the first two rules)... whether
these two lines in particular are causing the issue, I am not sure:

Pre:

Chain POSTROUTING (policy ACCEPT)
target prot opt source   destination
MASQUERADE  all  --  192.168.200.0/24 anywhere
MASQUERADE  all  --  172.17.0.0/16anywhere
MASQUERADE  tcp  --  192.168.200.3192.168.200.3tcp dpt:8086
MASQUERADE  tcp  --  192.168.200.5192.168.200.5tcp dpt:3000
MASQUERADE  udp  --  192.168.200.9192.168.200.9udp dpt:5
MASQUERADE  tcp  --  192.168.200.11   192.168.200.11   tcp dpt:9092
MASQUERADE  udp  --  192.168.200.4192.168.200.4udp dpt:50005
MASQUERADE  udp  --  192.168.200.8192.168.200.8udp dpt:5600
MASQUERADE  tcp  --  192.168.200.8192.168.200.8tcp dpt:bgp
MASQUERADE  udp  --  192.168.200.2192.168.200.2udp dpt:20013

Post:

Chain POSTROUTING (policy ACCEPT 4799 packets, 1170K bytes)
 pkts bytes target      prot opt in  out               source            destination
  340 20392 MASQUERADE  all  --  any !br-d662f1cf56fa  192.168.200.0/24  anywhere          <--
  453 28712 MASQUERADE  all  --  any !docker0          172.17.0.0/16     anywhere          <--
    0     0 MASQUERADE  tcp  --  any any               192.168.200.3     192.168.200.3     tcp dpt:8086
    0     0 MASQUERADE  tcp  --  any any               192.168.200.5     192.168.200.5     tcp dpt:3000
    0     0 MASQUERADE  udp  --  any any               192.168.200.9     192.168.200.9     udp dpt:5
    0     0 MASQUERADE  tcp  --  any any               192.168.200.11    192.168.200.11    tcp dpt:9092
    0     0 MASQUERADE  udp  --  any any               192.168.200.4     192.168.200.4     udp dpt:50005
    0     0 MASQUERADE  udp  --  any any               192.168.200.8     192.168.200.8     udp dpt:5600
    0     0 MASQUERADE  tcp  --  any any               192.168.200.8     192.168.200.8     tcp dpt:bgp
    0     0 MASQUERADE  udp  --  any any               192.168.200.2     192.168.200.2     udp dpt:20013

Which OS are you using in the host?

A bit of a moonshot: when the problem occurs, can you try manually
(using iptables) to remove the first two rules and set them exactly as
in the PRE scenario. Use

iptables -t nat -I <chain> <position> <rule-spec>

which allows you to add a rule at a specific position. I think the problem
might be somewhere else though.
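
A hypothetical example of what that could look like, using the bridge name and
subnet from the dump above (double-check the exact rule spec with iptables-save
on your host before touching anything):

    # move the per-bridge MASQUERADE rule back to the top of the NAT POSTROUTING chain
    iptables -t nat -D POSTROUTING -s 192.168.200.0/24 ! -o br-d662f1cf56fa -j MASQUERADE
    iptables -t nat -I POSTROUTING 1 -s 192.168.200.0/24 ! -o br-d662f1cf56fa -j MASQUERADE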

marc

>
> What definitely works is not to expose specific ports, but to configure 

Re: [pmacct-discussion] [docker-doctors] docker nfacct ... strange udp source ip !

2021-06-09 Thread Alessandro Montano | Fiber Telecom

Hi Dusan,

A known issue? And nobody can solve it! With UDP packets it's a real problem.

And in many situations it's not possible to directly attach to the host network.
For scalability I was thinking of running many instances of the same collector (with docker --scale), and an nginx as a UDP load balancer, which uses the internal Docker DNS to distribute incoming UDP packets
(in round-robin) to the instances.

I'm already doing this with sFlow and telemetry streams, and there it's
working fine, because the source IP is not used.
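
For illustration, such an nginx UDP front-end would look roughly like this
(sketch only; the upstream name and the idea of letting Docker's internal DNS
resolve the scaled service are assumptions):

    stream {
        upstream nfacct_pool {
            server nfacct:20013;   # resolved via Docker's internal DNS
        }
        server {
            listen 20013 udp;
            proxy_pass nfacct_pool;
        }
    }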

Anyway, why is it not constant and not predictable? For example:

15:21:59.646861 IP xxx.157.228.yyy.50101 > 192.168.200.2.20013: UDP, length 380
15:21:59.648041 IP xxx.157.228.yyy.50101 > 192.168.200.2.20013: UDP, length 380
15:21:59.649240 IP xxx.157.228.yyy.50101 > 192.168.200.2.20013: UDP, length 380
15:21:59.650439 IP xxx.157.228.yyy.50101 > 192.168.200.2.20013: UDP, length 380
15:21:59.651653 IP xxx.157.228.yyy.50101 > 192.168.200.2.20013: UDP, length 380
15:21:59.652839 IP xxx.157.228.yyy.50101 > 192.168.200.2.20013: UDP, length 380
15:21:59.654055 IP xxx.157.228.yyy.50101 > 192.168.200.2.20013: UDP, length 380
15:21:59.655251 IP xxx.157.228.yyy.50101 > 192.168.200.2.20013: UDP, length 380

15:46:12.818232 IP 192.168.200.1.24229 > 192.168.200.2.20013: UDP, length 380
15:46:12.819363 IP 192.168.200.1.24229 > 192.168.200.2.20013: UDP, length 380
15:46:12.820573 IP 192.168.200.1.24229 > 192.168.200.2.20013: UDP, length 380
15:46:12.821750 IP 192.168.200.1.24229 > 192.168.200.2.20013: UDP, length 380
15:46:12.823020 IP 192.168.200.1.24229 > 192.168.200.2.20013: UDP, length 380
15:46:12.824177 IP 192.168.200.1.24229 > 192.168.200.2.20013: UDP, length 380
15:46:12.825387 IP 192.168.200.1.24229 > 192.168.200.2.20013: UDP, length 380
15:46:12.826567 IP 192.168.200.1.24229 > 192.168.200.2.20013: UDP, length 380

What causes these two different behaviors?


Cheers.

--
AlexIT



___
pmacct-discussion mailing list
http://www.pmacct.net/#mailinglists


Re: [pmacct-discussion] [docker-doctors] docker nfacct ... strange udp source ip !

2021-06-09 Thread Dusan Pajin
 Hi Alessandro,

I would say that this is a "known" issue or behavior in docker which is
experienced by everyone who ever wanted to receive syslog, netflow,
telemetry or any other similar UDP stream from network devices. When you
expose ports in your docker-compose file, the docker will create the IP
tables rules to steer the traffic to your container in docker's bridge
network, but unfortunately also translate the source IP address of the
packets. I am not sure what is the reasoning behind such a behavior. If you
try to search for solutions for this issue, you will find some proposals,
but none of them used to work in my case.

What definitely works is not to expose specific ports, but to configure
your container in docker-compose to be attached directly to the host
network. In that case, there will be no translation rules and no source NAT,
and the container will be directly connected to all of the host's network
interfaces. In that case, be aware that Docker DNS will not work, so to export
information from the pmacct container further to kafka, you would need to send
it to "localhost" (if the kafka container is running on the same host) and
not to "kafka". This shouldn't be a big problem in your setup.

Btw, I am using docker swarm and not docker-compose; they both use
docker-compose files with similar syntax, and I don't think there is a
difference in their behavior.

Hope this helps

Kind regards,
Dusan

On Wed, Jun 9, 2021 at 3:29 PM Paolo Lucente  wrote:

>
> Hi Alessandro,
>
> (thanks for the kind words, first and foremost)
>
> Indeed, the test that Marc proposes is very sound, ie. check the actual
> packets coming in "on the wire" with tcpdump: do they really change
> sender IP address?
>
> Let me also confirm that what is used to populate peer_ip_src is the
> sender IP address coming straight from the socket (Marc's question) and,
> contrary to sFlow, there is typically no other way to infer
> such info (Alessandro's question).
>
> Paolo
>
>
> On 9/6/21 14:51, Marc Sune wrote:
> > Alessandro,
> >
> > inline
> >
> > Message from Alessandro Montano | FIBERTELECOM
> > on Wed, Jun 9, 2021 at 10:12:
> >>
> >> Hi Paolo (and Marc),
> >>
> >> this is my first post here ... first of all THANKS FOR YOUR GREAT JOB :)
> >>
> >> I'm using pmacct/nfacctd container from docker-hub
> (+kafka+telegraf+influxdb+grafana) and it's really a powerful tool
> >>
> >> The senders are JUNIPER MX204 routers, using j-flow (extended netflow)
> >>
> >> NFACCTD VERSION:
> >> NetFlow Accounting Daemon, nfacctd 1.7.6-git [20201226-0 (7ad9d1b)]
> >>   '--enable-mysql' '--enable-pgsql' '--enable-sqlite3' '--enable-kafka'
> '--enable-geoipv2' '--enable-jansson' '--enable-rabbitmq' '--enable-nflog'
> '--enable-ndpi' '--enable-zmq' '--enable-avro' '--enable-serdes'
> '--enable-redis' '--enable-gnutls' 'AVRO_CFLAGS=-I/usr/local/avro/include'
> 'AVRO_LIBS=-L/usr/local/avro/lib -lavro' '--enable-l2'
> '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins'
> '--enable-st-bins'
> >>
> >> SYSTEM:
> >> Linux 76afde386f6f 5.4.0-73-generic #82-Ubuntu SMP Wed Apr 14 17:39:42
> UTC 2021 x86_64 GNU/Linux
> >>
> >> CONFIG:
> >> debug: false
> >> daemonize: false
> >> pidfile: /var/run/nfacctd.pid
> >> logfile: /var/log/pmacct/nfacctd.log
> >> nfacctd_renormalize: true
> >> nfacctd_port: 20013
> >> aggregate[k]: peer_src_ip, peer_dst_ip, in_iface, out_iface, vlan,
> sampling_direction, etype, src_as, dst_as, as_path, proto, src_net,
> src_mask, dst_net, dst_mask, flows
> >> nfacctd_time_new: true
> >> plugins: kafka[k]
> >> kafka_output[k]: json
> >> kafka_topic[k]: nfacct
> >> kafka_broker_host[k]: kafka
> >> kafka_broker_port[k]: 9092
> >> kafka_refresh_time[k]: 60
> >> kafka_history[k]: 1m
> >> kafka_history_roundoff[k]: m
> >> kafka_max_writers[k]: 1
> >> kafka_markers[k]: true
> >> networks_file_no_lpm: true
> >> use_ip_next_hop: true
> >>
> >> DOCKER-COMPOSE:
> >> #Docker version 20.10.2, build 20.10.2-0ubuntu1~20.04.2
> >> #docker-compose version 1.29.2, build 5becea4c
> >> version: "3.9"
> >> services:
> >>nfacct:
> >>  networks:
> >>- ingress
> >>  image: pmacct/nfacctd
> >>  restart: on-failure
> >>  ports:
> >>- "20013:20013/udp"
> >>  volumes:
> >>- /etc/localtime:/etc/localtime
> >>- ./nfacct/etc:/etc/pmacct
> >>- ./nfacct/lib:/var/lib/pmacct
> >>- ./nfacct/log:/var/log/pmacct
> >> networks:
> >>ingress:
> >>  name: ingress
> >>  ipam:
> >>config:
> >>- subnet: 192.168.200.0/24
> >>
> >> My problem is the  value of field PEER_IP_SRC ... at start everything
> is correct, and it works well for a (long) while ... hours ... days ...
> >> I have ten routers so  "peer_ip_src": "151.157.228.xxx"  where xxx can
> easily identify the sender. Perfect.
> >>
> >> Suddenly ... "peer_ip_src": "192.168.200.1" for all records (and I
> loose the sender info!!!) ...
> >>
> >> It seems that docker-proxy decide to do nat/masquerading and 

Re: [pmacct-discussion] [docker-doctors] docker nfacct ... strange udp source ip !

2021-06-09 Thread Paolo Lucente



Hi Alessandro,

(thanks for the kind words, first and foremost)

Indeed, the test that Marc proposes is very sound, ie. check the actual 
packets coming in "on the wire" with tcpdump: do they really change 
sender IP address?


Let me also confirm that what is used to populate peer_ip_src is the 
sender IP address coming straight from the socket (Marc's question) and, 
contrary to sFlow, there is typically no other way to infer
such info (Alessandro's question).


Paolo


On 9/6/21 14:51, Marc Sune wrote:

Alessandro,

inline

Message from Alessandro Montano | FIBERTELECOM
on Wed, Jun 9, 2021 at 10:12:


Hi Paolo (and Marc),

this is my first post here ... first of all THANKS FOR YOUR GREAT JOB :)

I'm using pmacct/nfacctd container from docker-hub 
(+kafka+telegraf+influxdb+grafana) and it's really a powerful tool

The senders are JUNIPER MX204 routers, using j-flow (extended netflow)

NFACCTD VERSION:
NetFlow Accounting Daemon, nfacctd 1.7.6-git [20201226-0 (7ad9d1b)]
  '--enable-mysql' '--enable-pgsql' '--enable-sqlite3' '--enable-kafka' 
'--enable-geoipv2' '--enable-jansson' '--enable-rabbitmq' '--enable-nflog' 
'--enable-ndpi' '--enable-zmq' '--enable-avro' '--enable-serdes' 
'--enable-redis' '--enable-gnutls' 'AVRO_CFLAGS=-I/usr/local/avro/include' 
'AVRO_LIBS=-L/usr/local/avro/lib -lavro' '--enable-l2' '--enable-traffic-bins' 
'--enable-bgp-bins' '--enable-bmp-bins' '--enable-st-bins'

SYSTEM:
Linux 76afde386f6f 5.4.0-73-generic #82-Ubuntu SMP Wed Apr 14 17:39:42 UTC 2021 
x86_64 GNU/Linux

CONFIG:
debug: false
daemonize: false
pidfile: /var/run/nfacctd.pid
logfile: /var/log/pmacct/nfacctd.log
nfacctd_renormalize: true
nfacctd_port: 20013
aggregate[k]: peer_src_ip, peer_dst_ip, in_iface, out_iface, vlan, 
sampling_direction, etype, src_as, dst_as, as_path, proto, src_net, src_mask, 
dst_net, dst_mask, flows
nfacctd_time_new: true
plugins: kafka[k]
kafka_output[k]: json
kafka_topic[k]: nfacct
kafka_broker_host[k]: kafka
kafka_broker_port[k]: 9092
kafka_refresh_time[k]: 60
kafka_history[k]: 1m
kafka_history_roundoff[k]: m
kafka_max_writers[k]: 1
kafka_markers[k]: true
networks_file_no_lpm: true
use_ip_next_hop: true

DOCKER-COMPOSE:
#Docker version 20.10.2, build 20.10.2-0ubuntu1~20.04.2
#docker-compose version 1.29.2, build 5becea4c
version: "3.9"
services:
  nfacct:
    networks:
      - ingress
    image: pmacct/nfacctd
    restart: on-failure
    ports:
      - "20013:20013/udp"
    volumes:
      - /etc/localtime:/etc/localtime
      - ./nfacct/etc:/etc/pmacct
      - ./nfacct/lib:/var/lib/pmacct
      - ./nfacct/log:/var/log/pmacct
networks:
  ingress:
    name: ingress
    ipam:
      config:
        - subnet: 192.168.200.0/24

My problem is the value of the field PEER_IP_SRC ... at start everything is 
correct, and it works well for a (long) while ... hours ... days ...
I have ten routers, so "peer_ip_src": "151.157.228.xxx", where xxx easily 
identifies the sender. Perfect.

Suddenly ... "peer_ip_src": "192.168.200.1" for all records (and I lose the 
sender info!!!) ...

It seems that docker-proxy decides to do NAT/masquerading and translates the 
source IP of the UDP stream.
The only way for me to have the correct behavior again is to stop/start the 
container.

How can I fix it? Or is there an alternative way to obtain the same info 
(router IP) from inside the NetFlow stream rather than from the UDP packet?


Paolo is definitely the right person to answer how "peer_ip_src" is populated.

However, there is something that I don't fully understand. To the best
of my knowledge, even when binding ports, Docker (actually the kernel,
configured by Docker) shouldn't masquerade traffic at all - if
masquerading is truly what is happening. And that certainly wouldn't
happen "randomly" in the middle of the execution.

My first thought would be that this is something related to pmacct
itself, and that records are incorrectly generated but the traffic is OK.

I doubt the Linux kernel iptables rules would randomly change the way
traffic is manipulated, unless, of course, something else on that
machine/server is reloading iptables and the resulting ruleset is
_slightly different_ for the traffic flowing towards the Docker
container, effectively modifying the streams that go to pmacct (e.g.
rule priority reordering). That _could_ explain why restarting the
daemon suddenly works, as the rule order would be fixed.

Some more info would be needed to rule out an iptables/docker issue:

* Dump the iptables -L and iptables -t nat -L before and after the
issue and compare.
* Use iptables -vL and iptables -t nat -vL to monitor counters, before
and after the issue, especially in the NAT table.
* Get inside the running container
(https://github.com/pmacct/pmacct/blob/master/docs/DOCKER.md#opening-a-shell-on-a-running-container),
install tcpdump, and write the pcap to a file, before and after the
incident.

Since these dumps might contain sensitive data, you can send them
anonymized or in private.

Hopefully with this info we will see if it's an iptables issue or we
have to look somewhere else.

Re: [pmacct-discussion] [docker-doctors] docker nfacct ... strange udp source ip !

2021-06-09 Thread Marc Sune
Alessandro,

inline

Message from Alessandro Montano | FIBERTELECOM
 of Wednesday, 9 June 2021 at 10:12:
>
> Hi Paolo (and Marc),
>
> this is my first post here ... first of all, THANKS FOR YOUR GREAT JOB :)
>
> I'm using the pmacct/nfacctd container from docker-hub 
> (+kafka+telegraf+influxdb+grafana) and it's really a powerful tool.
>
> The senders are JUNIPER MX204 routers, using j-flow (extended NetFlow).
>
> NFACCTD VERSION:
> NetFlow Accounting Daemon, nfacctd 1.7.6-git [20201226-0 (7ad9d1b)]
>  '--enable-mysql' '--enable-pgsql' '--enable-sqlite3' '--enable-kafka' 
> '--enable-geoipv2' '--enable-jansson' '--enable-rabbitmq' '--enable-nflog' 
> '--enable-ndpi' '--enable-zmq' '--enable-avro' '--enable-serdes' 
> '--enable-redis' '--enable-gnutls' 'AVRO_CFLAGS=-I/usr/local/avro/include' 
> 'AVRO_LIBS=-L/usr/local/avro/lib -lavro' '--enable-l2' 
> '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins' 
> '--enable-st-bins'
>
> SYSTEM:
> Linux 76afde386f6f 5.4.0-73-generic #82-Ubuntu SMP Wed Apr 14 17:39:42 UTC 
> 2021 x86_64 GNU/Linux
>
> CONFIG:
> debug: false
> daemonize: false
> pidfile: /var/run/nfacctd.pid
> logfile: /var/log/pmacct/nfacctd.log
> nfacctd_renormalize: true
> nfacctd_port: 20013
> aggregate[k]: peer_src_ip, peer_dst_ip, in_iface, out_iface, vlan, 
> sampling_direction, etype, src_as, dst_as, as_path, proto, src_net, src_mask, 
> dst_net, dst_mask, flows
> nfacctd_time_new: true
> plugins: kafka[k]
> kafka_output[k]: json
> kafka_topic[k]: nfacct
> kafka_broker_host[k]: kafka
> kafka_broker_port[k]: 9092
> kafka_refresh_time[k]: 60
> kafka_history[k]: 1m
> kafka_history_roundoff[k]: m
> kafka_max_writers[k]: 1
> kafka_markers[k]: true
> networks_file_no_lpm: true
> use_ip_next_hop: true
>
> DOCKER-COMPOSE:
> #Docker version 20.10.2, build 20.10.2-0ubuntu1~20.04.2
> #docker-compose version 1.29.2, build 5becea4c
> version: "3.9"
> services:
>   nfacct:
>     networks:
>       - ingress
>     image: pmacct/nfacctd
>     restart: on-failure
>     ports:
>       - "20013:20013/udp"
>     volumes:
>       - /etc/localtime:/etc/localtime
>       - ./nfacct/etc:/etc/pmacct
>       - ./nfacct/lib:/var/lib/pmacct
>       - ./nfacct/log:/var/log/pmacct
> networks:
>   ingress:
>     name: ingress
>     ipam:
>       config:
>         - subnet: 192.168.200.0/24
>
> My problem is the value of the field PEER_IP_SRC ... at start everything is 
> correct, and it works well for a (long) while ... hours ... days ...
> I have ten routers, so "peer_ip_src": "151.157.228.xxx", where xxx easily 
> identifies the sender. Perfect.
>
> Suddenly ... "peer_ip_src": "192.168.200.1" for all records (and I lose the 
> sender info!!!) ...
>
> It seems that docker-proxy decides to do NAT/masquerading and translates the 
> source IP of the UDP stream.
> The only way for me to have the correct behavior again is to stop/start the 
> container.
>
> How can I fix it? Or is there an alternative way to obtain the same info 
> (router IP) from inside the NetFlow stream rather than from the UDP packet?

Paolo is definitely the right person to answer how "peer_ip_src" is populated.

However, there is something that I don't fully understand. To the best
of my knowledge, even when binding ports, Docker (actually the kernel,
configured by Docker) shouldn't masquerade traffic at all - if
masquerading is truly what is happening. And that certainly wouldn't
happen "randomly" in the middle of the execution.

My first thought would be that this is something related to pmacct
itself, and that records are incorrectly generated but the traffic is OK.

I doubt the Linux kernel iptables rules would randomly change the way
traffic is manipulated, unless, of course, something else on that
machine/server is reloading iptables and the resulting ruleset is
_slightly different_ for the traffic flowing towards the Docker
container, effectively modifying the streams that go to pmacct (e.g.
rule priority reordering). That _could_ explain why restarting the
daemon suddenly works, as the rule order would be fixed.

Some more info would be needed to rule out an iptables/docker issue:

* Dump the iptables -L and iptables -t nat -L before and after the
issue and compare.
* Use iptables -vL and iptables -t nat -vL to monitor counters, before
and after the issue, especially in the NAT table.
* Get inside the running container
(https://github.com/pmacct/pmacct/blob/master/docs/DOCKER.md#opening-a-shell-on-a-running-container),
install tcpdump, and write the pcap to a file, before and after the
incident.
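
A rough sketch of these checks (the service name "nfacct" and the collector 
port 20013 are taken from the compose file and config above; adjust names and 
paths to your setup):

  # 1) snapshot filter and NAT rules, with counters, before and after the issue
  iptables -vL > filter-before.txt ; iptables -t nat -vL > nat-before.txt
  # ... wait until peer_ip_src flips to 192.168.200.1 ...
  iptables -vL > filter-after.txt ; iptables -t nat -vL > nat-after.txt
  diff -u nat-before.txt nat-after.txt

  # 2) capture the NetFlow stream inside the running container
  docker-compose exec nfacct sh
  # inside the container: install tcpdump with the image's package manager, then
  tcpdump -nn -w /var/log/pmacct/netflow.pcap udp port 20013
  # /var/log/pmacct is bind-mounted in the compose file, so the pcap is also visible on the host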

Since these dumps might contain sensitive data, you can send them
anonymized or in private.

Hopefully with this info we will see if it's an iptables issue or we
have to look somewhere else.

Regards
marc

>
> Thanks for your support.
>
> Cheers.
>
> --
> AlexIT
> --
> docker-doctors mailing list
> docker-doct...@pmacct.net
> http://acaraje.pmacct.net/cgi-bin/mailman/listinfo/docker-doctors

___
pmacct-discussion mailing list