Thanks for determining that port-forwarding directly to the pods doesn't
work either. I'm glad to hear that a NodePort service works as expected
when accessed directly, but my goal in asking you to test that
configuration was to see whether the problem lies with the kubectl
port-forward process itself, so would you mind finishing the test by
checking whether port-forwarding to the NodePort service succeeds?
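
To be concrete, I have in mind something along these lines (assuming the
service you converted to NodePort is still named ex-aao-wconsj-0-svc;
adjust the name and namespace to match your setup):

# forward local port 8161 to the (now NodePort) per-broker console service
kubectl port-forward service/ex-aao-wconsj-0-svc 8161:8161 -n myproject

and then hit http://localhost:8161 from your desktop, exactly as before.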

I've got two possible theories for what's going on. One is that kubectl
port-forward doesn't work correctly (or that you're invoking it
incorrectly, though everything you've shown so far looks fine), in which
case Artemis is fine and your use of kubectl port-forward is where this is
breaking. If that's the case, you'll want to take it up with the Kubernetes
community (e.g. file a bug in the Kubernetes GitHub project
<https://github.com/kubernetes/kubernetes>). The other possibility is that
kubectl port-forward is working fine, but the Artemis process is binding to
the TCP port in a way that keeps it from accepting incoming connections
over whatever network path kubectl port-forward uses, even though it's fine
when accessed via the Kubernetes service. In that case, you'll want to
tweak the bootstrap.xml to make the web server listen on 0.0.0.0 (i.e. all
IP addresses associated with the pod) rather than the specific hostname you
listed in your last message. I suspect the latter, but both are possible.
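
To make that second theory concrete, the change I have in mind would look
roughly like this in bootstrap.xml (a sketch based on the bind attribute
you quoted earlier; note that I haven't checked whether the operator
regenerates that file, so the edit may not survive a restart):

<!-- listen on all pod interfaces rather than the pod's hostname -->
<web bind="http://0.0.0.0:8161" path="web">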

If modifying the bootstrap.xml to make it listen on 0.0.0.0 is easy to do,
please do that and see where it gets you. If that's non-trivial, then
please deploy Nginx with a ClusterIP service in front of it (
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
is one of many resources online that describe how to do this), confirm that
you can hit the Nginx service from within the cluster, and then test
whether you can use kubectl port-forward to the Nginx ClusterIP service.
(Ideally, use local port 8161 when doing the kubectl port-forward, just to
keep things as equivalent as possible.) If you can get through to the Nginx
service via kubectl port-forward but can't do the same thing to the Artemis
web console, then that to me would be a pretty clear indication that the
issue is within the Artemis config and not a bug in kubectl, and at that
point you'd want to pursue getting Artemis to bind to 0.0.0.0 even if
that's non-trivial to do. Once you do that, I'd expect you to be able to
port-forward to either the service or to the individual pods as needed.
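
If it helps, here's a minimal sketch of that Nginx test (the nginx-test
name is arbitrary, and I haven't run this exact manifest, so treat it as a
starting point rather than a finished recipe):

# nginx-test.yaml: one-replica Nginx deployment plus a ClusterIP service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-test
spec:
  type: ClusterIP
  selector:
    app: nginx-test
  ports:
    - port: 80
      targetPort: 80

kubectl apply -f nginx-test.yaml -n myproject
# confirm in-cluster access first, e.g. from your debug pod:
#   curl http://nginx-test:80
# then forward local port 8161 to the service's port 80:
kubectl port-forward service/nginx-test 8161:80 -n myproject
# and hit http://localhost:8161 from your desktop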

As for why it didn't work when you added a hosts entry mapping that name
to 127.0.0.1: that hostname doesn't refer to your Windows machine, it
refers to the container running on your Windows machine, which has its own
network stack and addresses. Networking is the most complicated part of
working with Kubernetes, and it's easy to get confused about exactly which
host is "localhost" in a given context, but if you think of each container
as a separate machine rather than part of the physical machine you're
actually running on, you'll avoid at least some of the places where
confusion could occur.

One other thing: I'd encourage you to think about whether kubectl
port-forward is really the right tool for the job here. It's a great way to
do ad hoc troubleshooting, but if you're going to be using it frequently
and/or if you might want to use it for automated monitoring in production,
I still think you might be better off with one or more always-available
ingress routes that expose these things on a permanent basis (whether that
means exposing the service that load-balances across all brokers, or
setting up a service for each individual pod and exposing each of those
single-broker services so you can reach the web console of any broker). And
yes, ingress is a feature by which incoming HTTP traffic is routed to
resources (certainly services, but I'm not sure what other resource types
can be exposed) within Kubernetes.
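
For illustration, a per-broker ingress might look roughly like this (a
sketch only; it assumes an ingress controller such as ingress-nginx is
installed in your cluster, and the example.com hostnames are placeholders
you'd replace with names resolvable from your desktop):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: artemis-consoles
spec:
  rules:
    - host: broker-0.example.com          # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ex-aao-wconsj-0-svc   # existing per-broker console service
                port:
                  number: 8161
    - host: broker-1.example.com          # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ex-aao-wconsj-1-svc
                port:
                  number: 8161

With something like that in place, each broker's console stays reachable at
its own hostname, which addresses your concern about needing to reach the
consoles of multiple brokers at the same time.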

Tim


On Fri, Aug 20, 2021 at 9:43 AM Thai Le <lnthai2...@gmail.com> wrote:

> I tried port-forwarding directly to the pod; it didn't work:
> C:\Users\nle>kubectl port-forward pod/ex-aao-ss-0 8161:8161 -n myproject
> Forwarding from 127.0.0.1:8161 -> 8161
> Forwarding from [::1]:8161 -> 8161
> Handling connection for 8161
> Handling connection for 8161
> E0820 09:06:52.508157   13064 portforward.go:400] an error occurred
> forwarding 8161 -> 8161: error forwarding port 8161 to pod
> ff6c35be7d3ace906d726b6abd38dbe122ddf2530647a44c799ca9a8a15ab245, uid :
> exit status 1: 2021/08/20 13:06:46 socat[31487] E connect(17, AF=2
> 127.0.0.1:8161, 16): Connection refused
> E0820 09:06:52.522192   13064 portforward.go:400] an error occurred
> forwarding 8161 -> 8161: error forwarding port 8161 to pod
> ff6c35be7d3ace906d726b6abd38dbe122ddf2530647a44c799ca9a8a15ab245, uid :
> exit status 1: 2021/08/20 13:06:46 socat[31488] E connect(17, AF=2
> 127.0.0.1:8161, 16): Connection refused
>
> Since the bootstrap.xml indicates the web server is binding to
> http://ex-aao-ss-0.ex-aao-hdls-svc.myproject.svc.cluster.local:8161, I
> also tried adding "ex-aao-ss-0.ex-aao-hdls-svc.myproject.svc.cluster.local"
> to my Windows hosts file, mapping it to 127.0.0.1, then tried to access it
> from my desktop browser, but it's still not working.
>
> If I change the 2 ClusterIP services (ex-aao-wconsj-0-svc and
> ex-aao-wconsj-1-svc) that expose 8161 for the 2 brokers to NodePort, then
> I can access the web console directly from outside the cluster without
> port forwarding. I don't understand why this works but direct port
> forwarding to the pod does not.
>
> The purpose of this exercise is to make sure that when developing the
> application locally we can point to Artemis running on a cluster and
> observe/debug message distribution between multiple Artemis brokers. In
> production, we also need access to the console of each broker at the same
> time for troubleshooting; my original thought was just to port-forward to
> each pod when needed. Forgive me for my limited knowledge of Kubernetes,
> but as I understand it, ingress is for load balancing HTTP traffic, so at
> any one point in time only the console of a particular broker can be
> accessed.
>
> Thai Le
>
>
> On Fri, Aug 20, 2021 at 12:14 AM Tim Bain <tb...@alumni.duke.edu> wrote:
>
> > Can you port-forward directly to the individual pods successfully? If
> > that doesn't work, then going through the service won't, so make sure
> > that building block is working.
> >
> > Also, if you switch the service to be a NodePort service, can you hit
> > the web console from outside the K8s cluster without the port-forward?
> > And assuming that works, can you port-forward against that service
> > successfully? I'm not proposing you make that a permanent change, just
> > suggesting you try these variations to attempt to characterize the
> > problem.
> >
> > One other question: long-term, do you plan to expose the web console port
> > outside of the cluster? If so, you won't (shouldn't) be using kubectl
> > port-forward for that, and you should probably be using an ingress proxy,
> > so maybe just set that up and don't worry about getting the port-forward
> > approach to work.
> >
> > Tim
> >
> > On Wed, Aug 18, 2021, 1:24 PM Thai Le <lnthai2...@gmail.com> wrote:
> >
> > > Thank you Justin for your suggestion.
> > >
> > > I looked at the bootstrap.xml of both broker nodes and the binding
> > > is set to the hostname of the pod:
> > > <web bind="http://ex-aao-ss-0.ex-aao-hdls-svc.myproject.svc.cluster.local:8161" path="web">
> > > <web bind="http://ex-aao-ss-1.ex-aao-hdls-svc.myproject.svc.cluster.local:8161" path="web">
> > > So it makes sense that I got a connection refused when accessing the
> > > pod from my desktop using localhost through port forwarding to the
> > > pod.
> > >
> > > I also see that there are 3 Kubernetes services running: one for
> > > both 8161 and 61616 (I think this is the main service that I can hit
> > > from the JMS consumer) and two others for 8161 only, one per broker
> > > node (I believe this is to allow clients from outside Kubernetes to
> > > access the web console using an IP, given that routing from outside
> > > the cluster to the service IP is present):
> > > kubectl get services -n myproject
> > > NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)              AGE
> > > activemq-artemis-operator   ClusterIP   10.100.240.205   <none>        8383/TCP             46h
> > > ex-aao-hdls-svc             ClusterIP   None             <none>        8161/TCP,61616/TCP   134m
> > > ex-aao-ping-svc             ClusterIP   None             <none>        8888/TCP             134m
> > > ex-aao-wconsj-0-svc         ClusterIP   10.96.183.20     <none>        8161/TCP             134m
> > > ex-aao-wconsj-1-svc         ClusterIP   10.98.233.91     <none>        8161/TCP             134m
> > >
> > > Here is the description of the main service:
> > > kubectl describe service ex-aao-hdls-svc -n myproject
> > > Name:              ex-aao-hdls-svc
> > > Namespace:         myproject
> > > Labels:            ActiveMQArtemis=ex-aao
> > >                    application=ex-aao-app
> > > Annotations:       <none>
> > > Selector:          ActiveMQArtemis=ex-aao,application=ex-aao-app
> > > Type:              ClusterIP
> > > IP Family Policy:  SingleStack
> > > IP Families:       IPv4
> > > IP:                None
> > > IPs:               None
> > > Port:              console-jolokia  8161/TCP
> > > TargetPort:        8161/TCP
> > > Endpoints:         10.1.0.30:8161,10.1.0.31:8161
> > > Port:              all  61616/TCP
> > > TargetPort:        61616/TCP
> > > Endpoints:         10.1.0.30:61616,10.1.0.31:61616
> > > Session Affinity:  None
> > > Events:            <none>
> > >
> > > And here is the description the other 2 services:
> > > kubectl describe service ex-aao-wconsj-0-svc -n myproject
> > > Name:              ex-aao-wconsj-0-svc
> > > Namespace:         myproject
> > > Labels:            ActiveMQArtemis=ex-aao
> > >                    application=ex-aao-app
> > > Annotations:       <none>
> > > Selector:          ActiveMQArtemis=ex-aao,application=ex-aao-app,statefulset.kubernetes.io/pod-name=ex-aao-ss-0
> > > Type:              ClusterIP
> > > IP Family Policy:  SingleStack
> > > IP Families:       IPv4
> > > IP:                10.96.183.20
> > > IPs:               10.96.183.20
> > > Port:              wconsj-0  8161/TCP
> > > TargetPort:        8161/TCP
> > > Endpoints:         10.1.0.30:8161
> > > Session Affinity:  None
> > > Events:            <none>
> > >
> > > kubectl describe service ex-aao-wconsj-1-svc -n myproject
> > > Name:              ex-aao-wconsj-1-svc
> > > Namespace:         myproject
> > > Labels:            ActiveMQArtemis=ex-aao
> > >                    application=ex-aao-app
> > > Annotations:       <none>
> > > Selector:          ActiveMQArtemis=ex-aao,application=ex-aao-app,statefulset.kubernetes.io/pod-name=ex-aao-ss-1
> > > Type:              ClusterIP
> > > IP Family Policy:  SingleStack
> > > IP Families:       IPv4
> > > IP:                10.98.233.91
> > > IPs:               10.98.233.91
> > > Port:              wconsj-1  8161/TCP
> > > TargetPort:        8161/TCP
> > > Endpoints:         10.1.0.31:8161
> > > Session Affinity:  None
> > > Events:            <none>
> > >
> > > The 2 pods hosting the broker nodes are ex-aao-ss-0 and ex-aao-ss-1:
> > > kubectl get all -o wide -n myproject
> > > NAME                                          READY   STATUS    RESTARTS   AGE    IP
> > > pod/activemq-artemis-operator-bb9cf6567-qjdzs 1/1     Running   0          46h    10.1.0.6
> > > pod/debug                                     1/1     Running   0          162m   10.1.0.29
> > > pod/ex-aao-ss-0                               1/1     Running   0          155m   10.1.0.30
> > > pod/ex-aao-ss-1                               1/1     Running   0          154m   10.1.0.31
> > >
> > > Hence, from another pod in the same cluster I can access the web
> > > console: curl -L http://ex-aao-hdls-svc:8161. So I should be able to
> > > port forward using this service instead of the pod:
> > > C:\Users\nle>kubectl port-forward service/ex-aao-hdls-svc 8161:8161 -n myproject
> > > Forwarding from 127.0.0.1:8161 -> 8161
> > > Forwarding from [::1]:8161 -> 8161
> > >
> > > However, hitting http://localhost:8161 from my desktop still gives
> > > the same error:
> > >
> > > Handling connection for 8161
> > > Handling connection for 8161
> > > E0818 14:51:30.135226   18024 portforward.go:400] an error occurred
> > > forwarding 8161 -> 8161: error forwarding port 8161 to pod
> > > ff6c35be7d3ace906d726b6abd38dbe122ddf2530647a44c799ca9a8a15ab245, uid :
> > > exit status 1: 2021/08/18 18:51:26 socat[1906] E connect(17, AF=2
> > > 127.0.0.1:8161, 16): Connection refused
> > > E0818 14:51:30.136855   18024 portforward.go:400] an error occurred
> > > forwarding 8161 -> 8161: error forwarding port 8161 to pod
> > > ff6c35be7d3ace906d726b6abd38dbe122ddf2530647a44c799ca9a8a15ab245, uid :
> > > exit status 1: 2021/08/18 18:51:26 socat[1907] E connect(17, AF=2
> > > 127.0.0.1:8161, 16): Connection refused
> > >
> > > Do you have any other suggestion?
> > >
> > > Thai Le
> > >
> > >
> > > On Wed, Aug 18, 2021 at 2:10 PM Justin Bertram <jbert...@apache.org>
> > > wrote:
> > >
> > > > If the embedded web server which serves the console (as configured
> > > > in bootstrap.xml) is bound to localhost then it will never be
> > > > accessible from a remote machine. You need to bind it to an IP or
> > > > hostname which is externally accessible.
> > > >
> > > >
> > > > Justin
> > > >
> > > > On Tue, Aug 17, 2021 at 2:58 PM Thai Le <lnthai2...@gmail.com> wrote:
> > > >
> > > > > Hello,
> > > > >
> > > > > I am not sure if questions regarding Artemis Cloud can be asked
> > > > > here, but since I found no mailing list for Artemis Cloud and the
> > > > > Slack channel needs an invitation to join, I'm gonna try my luck
> > > > > here.
> > > > >
> > > > > I installed the Artemis operator and an ActiveMQArtemis with a
> > > > > deployment plan of 2 brokers on my single-node Kubernetes
> > > > > (docker-desktop); here is the deployment:
> > > > >
> > > > > apiVersion: broker.amq.io/v2alpha5
> > > > > kind: ActiveMQArtemis
> > > > > metadata:
> > > > >   name: ex-aao
> > > > > spec:
> > > > >   adminUser: brokerAdmin
> > > > >   adminPassword: verySecret
> > > > >   deploymentPlan:
> > > > >     size: 2
> > > > >     image: placeholder
> > > > >     podSecurity:
> > > > >       runAsUser: 0
> > > > >   console:
> > > > >     expose: true
> > > > >     sslEnabled: false
> > > > >
> > > > > The 2 brokers are running and I can curl the web console from
> > > > > another pod in the same Kubernetes cluster. However, I cannot
> > > > > access the web console from my desktop
> > > > > (http://localhost:8161/console). I also tried to port-forward
> > > > > requests to port 8161 from my desktop to one of the 2 Artemis
> > > > > pods, but it does not work either.
> > > > >
> > > > > I would appreciate it if anyone could give me a hint as to what
> > > > > may be wrong, or a pointer to the Artemis Cloud mailing list.
> > > > >
> > > > > Thai Le
> > > > >
> > > >
> > >
> > >
> > > --
> > > Where there is will, there is a way
> > >
> >
>
>
> --
> Where there is will, there is a way
>
