Re: Artemis cloud: cannot access web console

2021-08-25 Thread Thai Le
Tim,

I also think the binding in bootstrap.xml is the culprit here, since I did a
test of port-forwarding to both the NodePort service and the pod of a sample
web app on the exact same cluster and namespace, and it works just fine. I
opened a bug on the ArtemisCloud GitHub project because I think it may be
useful for others working on a POC who need a quick way to check the broker.
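
In case it helps anyone reproduce the comparison, the sanity test was along
these lines (the hello-web name and nginx image are just examples):

kubectl create deployment hello-web --image=nginx -n myproject
kubectl expose deployment hello-web --type=NodePort --port=8161 --target-port=80 -n myproject

# forwarding to the pod worked:
kubectl port-forward deployment/hello-web 8161:80 -n myproject
# and forwarding to the NodePort service worked too:
kubectl port-forward service/hello-web 8161:8161 -n myproject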

Thai Le

Re: Artemis cloud: cannot access web console

2021-08-24 Thread Tim Bain
Thai,

The fact that kubectl port-forward doesn't work with even a NodePort
service (which has a TCP socket open on every node, so the easy way for it
to do port forwarding would be to just open a TCP connection to that port)
makes me suspect that there's something wrong with kubectl. If that's
something you're up for pursuing via a bug on the Kubernetes GitHub
project, it might help avoid problems for someone else and/or result in a
fix that would let you use this workflow in the future.

With that said, I do think that the fact that Artemis isn't bound to
0.0.0.0 means it's not 100% conclusive that kubectl is to blame here, so if
you'd have the appetite to try getting that config swapped, that might also
yield value. I'm not sure who writes/maintains Artemis Cloud, but it's not
me and the lack of a response from anyone else on this list makes it clear
to me that those people aren't here. If there's a GitHub project that
accepts bugs, it might be worth raising one there about the bind IP, to see
what support you can get from the authors. Alternatively, if you just
wanted to try swapping out the IP, maybe one option would be to
modify the K8s pod/deployment definition to provide an init-container that
does the config file manipulation before handing off to the main container
that runs the web console? Without knowing how that pod/deployment is
defined, I'm speculating on whether that would work, but it's at least a
possibility if you wanted to pursue testing with Artemis bound to all IPs.
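
As a very rough sketch of the init-container idea (everything here is
hypothetical: the source path, the volume layout, and how the operator's
StatefulSet would tolerate the change are all assumptions):

initContainers:
- name: rebind-web-console
  image: busybox
  command:
  - sh
  - -c
  - |
    # copy the stock config into the shared volume first (the source path
    # is a placeholder), then rewrite the web bind address to 0.0.0.0
    cp /path/to/stock/etc/bootstrap.xml /broker-etc/
    sed -i 's|bind="http://[^"]*:8161"|bind="http://0.0.0.0:8161"|' /broker-etc/bootstrap.xml
  volumeMounts:
  - name: broker-etc
    mountPath: /broker-etc

with broker-etc being an emptyDir that the main broker container also
mounts over its etc directory.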

And although I think all of that trying-things-out stuff is important and
worth doing, I do still think that an ingress is probably the right
long-term solution for actual production use, so I'm supportive of your
statement that you're going to investigate that option.

Tim

Re: Artemis cloud: cannot access web console

2021-08-23 Thread Thai Le
Thank you Tim,

After I changed the Kubernetes service that exposes 8161 from ClusterIP to
NodePort, I set up port forwarding to the exposed port (8161), but I am not
able to reach the console. Here are the config and the command I used:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-08-23T15:32:37Z"
  labels:
    ActiveMQArtemis: ex-aao
    application: ex-aao-app
  name: ex-aao-wconsj-1-svc
  namespace: myproject
  ownerReferences:
  - apiVersion: broker.amq.io/v2alpha5
    blockOwnerDeletion: true
    controller: true
    kind: ActiveMQArtemis
    name: ex-aao
    uid: d35e1dd0-37ed-4b8e-8db4-cd7081d3502f
  resourceVersion: "323731"
  uid: f4866b76-6af8-4196-9d68-f90d535eb3dc
spec:
  clusterIP: 10.107.150.161
  clusterIPs:
  - 10.107.150.161
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: wconsj-1
    nodePort: 32061
    port: 8161
    protocol: TCP
    targetPort: 8161
  publishNotReadyAddresses: true
  selector:
    ActiveMQArtemis: ex-aao
    application: ex-aao-app
    statefulset.kubernetes.io/pod-name: ex-aao-ss-1
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer:
    ingress:
    - hostname: localhost

Forward cmd:
kubectl port-forward pod/ex-aao-ss-0 8161:8161 -n myproject
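
For the service-level test you suggested, I believe the equivalent command
would be:

kubectl port-forward service/ex-aao-wconsj-1-svc 8161:8161 -n myproject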

Per your suggestion, I attempted to bind the web server to 0.0.0.0, but I
have not yet found where the code swaps out the default values
in
/tmp/remote_source/app/src/yacfg/profiles/artemis/2.18.0/_modules/bootstrap_xml/*
of the init image quay.io/artemiscloud/activemq-artemis-broker-init:0.2.6.
I am going to look into ingress next.
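
In the meantime I can at least inspect the rendered config inside a running
pod with something like this (the etc path is a guess and may differ by
image):

kubectl exec -n myproject ex-aao-ss-0 -- cat /home/jboss/amq-broker/etc/bootstrap.xml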

Thai Le

Re: Artemis cloud: cannot access web console

2021-08-22 Thread Tim Bain
Thanks for determining that port-forwarding directly to the pods doesn't
work either. Would you mind checking whether port-forwarding to a NodePort
service succeeds? I'm glad to hear that a NodePort service works as
expected when accessed directly, but my goal in asking you to test that
configuration was to try to see whether the problem is with the kubectl
port-forward process, so doing that final step of the test would help
assess that.

I've got two possible theories for what's going on. One is that kubectl
port-forward doesn't work correctly (or that you're doing something wrong
when invoking it, though everything you've shown so far looks fine), in
which case Artemis is fine but your attempt to use kubectl port-forward is
where this is breaking. If that's the case, you'll want to take that up
with the Kubernetes community (e.g. file a bug in the Kubernetes GitHub
project). The other possibility
is that kubectl port-forward is working fine, but the Artemis process is
binding to the TCP port in a way that doesn't result in it accepting
incoming connections from whatever network path gets used for kubectl
port-forward even though it's fine when accessed via the Kubernetes
service. In that case, you'll want to tweak the bootstrap.xml to make it
listen on 0.0.0.0 (i.e. all IP addresses associated with the pod) rather
than the specific address you listed in your last message. I suspect the
latter, but both are possible.
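
For reference, the changed line in bootstrap.xml would look roughly like
this (the exact element layout varies a bit across Artemis versions):

<web bind="http://0.0.0.0:8161" path="web">
    <app url="console" war="console.war"/>
</web>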

If modifying the bootstrap.xml to make it listen on 0.0.0.0 is easy to do,
please do that and see where it gets you. If that's non-trivial, then
please deploy an Nginx deployment with a ClusterIP service in front of it (
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
is one of many resources online that describe how to do this), confirm that
you can hit the Nginx service from within the cluster, and then test
whether you can use kubectl port-forward to the Nginx ClusterIP service.
(Ideally, use local port 8161 when doing the kubectl port-forward, just to
keep things as equivalent as possible.) If you can get through to the Nginx
service via kubectl port-forward but can't do the same thing to the Artemis
web console, then that to me would be a pretty clear indication that the
issue is within the Artemis config and not a bug in kubectl, and at that
point you'd want to pursue how to get Artemis to bind to 0.0.0.0 even if
that's non-trivial to do. Once you do that, I'd expect you to be able to
port-forward to either the service or to each individual pod as needed.
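
Concretely, the Nginx test could look something like this (names are
arbitrary):

kubectl create deployment nginx-test --image=nginx -n myproject
# ClusterIP is the default service type
kubectl expose deployment nginx-test --port=8161 --target-port=80 -n myproject
# confirm it's reachable from inside the cluster, e.g. from your debug pod:
#   curl http://nginx-test:8161
kubectl port-forward service/nginx-test 8161:8161 -n myproject
# then browse http://localhost:8161 from your desktop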

As for why it didn't work when you added a host entry to map to 127.0.0.1,
that's because that hostname isn't your Windows machine; it's the container
running on your Windows machine, which has its own network stack and
address. Networking is the most complicated part of working with
Kubernetes, and it's easy to get confused about exactly which host is
"localhost" in a given context, but if you think of each container as being
on a different machine rather than the physical machine you're actually
running on, that will help avoid at least some of the places where
confusion could occur.

One other thing: I'd encourage you to think about whether kubectl
port-forward is really the right tool for the job here. It's a great way to
do ad hoc troubleshooting, but if you're going to be using it frequently
and/or if you might want to use it for automated monitoring in production,
I still think you might be better off with an always-available ingress
route(s) that lets you expose these things on a permanent basis (whether
that's exposing the service that load-balances across all brokers or
whether you set up individual services for each individual pod and then
expose each of those single-broker services so you can access the web
console of any broker). And yes, ingress is a feature by which incoming
HTTP traffic is routed to resources (certainly services, but I'm not sure
what other resource types can be exposed) within Kubernetes.
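
As a sketch of what per-broker ingress rules might look like (the hostnames
are placeholders, and the cluster needs an ingress controller installed for
this to do anything):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ex-aao-console
  namespace: myproject
spec:
  rules:
  - host: broker-0.example.com   # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ex-aao-wconsj-0-svc
            port:
              number: 8161
  - host: broker-1.example.com   # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ex-aao-wconsj-1-svc
            port:
              number: 8161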

Tim


Re: Artemis cloud: cannot access web console

2021-08-20 Thread Thai Le
I tried port-forwarding directly to the pod; it didn't work:
C:\Users\nle>kubectl port-forward pod/ex-aao-ss-0 8161:8161 -n myproject
Forwarding from 127.0.0.1:8161 -> 8161
Forwarding from [::1]:8161 -> 8161
Handling connection for 8161
Handling connection for 8161
E0820 09:06:52.508157   13064 portforward.go:400] an error occurred
forwarding 8161 -> 8161: error forwarding port 8161 to pod
ff6c35be7d3ace906d726b6abd38dbe122ddf2530647a44c799ca9a8a15ab245, uid :
exit status 1: 2021/08/20 13:06:46 socat[31487] E connect(17, AF=2
127.0.0.1:8161, 16): Connection refused
E0820 09:06:52.522192   13064 portforward.go:400] an error occurred
forwarding 8161 -> 8161: error forwarding port 8161 to pod
ff6c35be7d3ace906d726b6abd38dbe122ddf2530647a44c799ca9a8a15ab245, uid :
exit status 1: 2021/08/20 13:06:46 socat[31488] E connect(17, AF=2
127.0.0.1:8161, 16): Connection refused

Since the bootstrap.xml indicates the web server is binding to
http://ex-aao-ss-0.ex-aao-hdls-svc.myproject.svc.cluster.local:8161, I also
tried adding "ex-aao-ss-0.ex-aao-hdls-svc.myproject.svc.cluster.local"
to my Windows hosts file, mapping it to 127.0.0.1, and then accessing it
from my desktop browser, but it still doesn't work.

If I change the 2 ClusterIP services (*ex-aao-wconsj-0-svc* and
*ex-aao-wconsj-1-svc*) that expose 8161 for the 2 brokers to NodePort, then
I can access the web console directly from outside the cluster without port
forwarding. I don't understand why this works but direct port forwarding to
the pod does not.
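
For the record, direct access in that case means hitting the assigned node
port from the desktop, e.g. with a nodePort of 32061 (docker-desktop
publishes node ports on localhost):

curl http://localhost:32061/console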

The purpose of this exercise is to make sure that when developing the
application locally we can point to Artemis running on a cluster and
observe/debug message distribution between multiple Artemis brokers. In
production, we also need access to the console of each broker at the same
time for troubleshooting; my original thought was to just port-forward to
each pod when needed. Forgive me for my limited knowledge of Kubernetes,
but as I understand it, ingress load-balances HTTP traffic, so at any one
time only the console of a particular broker can be accessed.

Thai Le


Re: Artemis cloud: cannot access web console

2021-08-19 Thread Tim Bain
Can you port-forward directly to the individual pods successfully? If that
doesn't work, then going through the service won't, so make sure that
building block is working.

Also, if you switch the service to be a NodePort service, can you hit the
web console from outside the K8s cluster without the port-forward? And
assuming that works, can you port-forward against that service
successfully? I'm not proposing you make that a permanent change, just
suggesting you try these variations to attempt to characterize the problem.
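
If you want a quick, easily reverted way to flip a service over for that
test, a patch like this should do it (switch it back to ClusterIP the same
way afterwards):

kubectl patch service ex-aao-wconsj-0-svc -n myproject -p '{"spec":{"type":"NodePort"}}'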

One other question: long-term, do you plan to expose the web console port
outside of the cluster? If so, you won't (shouldn't) be using kubectl
port-forward for that, and you should probably be using an ingress proxy,
so maybe just set that up and don't worry about getting the port-forward
approach to work.

Tim

Re: Artemis cloud: cannot access web console

2021-08-18 Thread Thai Le
Thank you Justin for your suggestion.

I looked at the bootstrap.xml of both broker nodes and the binding is set
to the hostname of the pod:
<web bind="http://ex-aao-ss-0.ex-aao-hdls-svc.myproject.svc.cluster.local:8161" path="web">
<web bind="http://ex-aao-ss-1.ex-aao-hdls-svc.myproject.svc.cluster.local:8161" path="web">
So it makes sense that I got a connection refused when accessing the pod
from my desktop using localhost through port forwarding to the pod.

I also see that there are 3 Kubernetes services running: one for both 8161
and 61616 (I think this is the main service that I can hit from the JMS
consumer) and 2 others for 8161 only, one per broker node (I believe this
is to allow clients from outside Kubernetes to access the web console by
IP, given that routing from outside the cluster to the service IP is
present):
kubectl get services -n myproject
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)              AGE
activemq-artemis-operator   ClusterIP   10.100.240.205   <none>        8383/TCP             46h
ex-aao-hdls-svc             ClusterIP   None             <none>        8161/TCP,61616/TCP   134m
ex-aao-ping-svc             ClusterIP   None             <none>        /TCP                 134m
ex-aao-wconsj-0-svc         ClusterIP   *10.96.183.20*   <none>        8161/TCP             134m
ex-aao-wconsj-1-svc         ClusterIP   *10.98.233.91*   <none>        8161/TCP             134m

Here is the description of the main service:
kubectl describe service ex-aao-hdls-svc -n myproject
Name:              *ex-aao-hdls-svc*
Namespace:         myproject
Labels:            ActiveMQArtemis=ex-aao
                   application=ex-aao-app
Annotations:       <none>
Selector:          ActiveMQArtemis=ex-aao,application=ex-aao-app
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                None
IPs:               None
Port:              console-jolokia  8161/TCP
TargetPort:        8161/TCP
Endpoints:         *10.1.0.30*:8161,*10.1.0.31*:8161
Port:              all  61616/TCP
TargetPort:        61616/TCP
Endpoints:         *10.1.0.30*:61616,*10.1.0.31*:61616
Session Affinity:  None
Events:            <none>

And here is the description of the other 2 services:
kubectl describe service ex-aao-wconsj-0-svc -n myproject
Name:              ex-aao-wconsj-0-svc
Namespace:         myproject
Labels:            ActiveMQArtemis=ex-aao
                   application=ex-aao-app
Annotations:       <none>
Selector:          ActiveMQArtemis=ex-aao,application=ex-aao-app,statefulset.kubernetes.io/pod-name=ex-aao-ss-0
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                *10.96.183.20*
IPs:               *10.96.183.20*
Port:              wconsj-0  8161/TCP
TargetPort:        8161/TCP
Endpoints:         *10.1.0.30*:8161
Session Affinity:  None
Events:            <none>

kubectl describe service ex-aao-wconsj-1-svc -n myproject
Name:              ex-aao-wconsj-1-svc
Namespace:         myproject
Labels:            ActiveMQArtemis=ex-aao
                   application=ex-aao-app
Annotations:       <none>
Selector:          ActiveMQArtemis=ex-aao,application=ex-aao-app,statefulset.kubernetes.io/pod-name=ex-aao-ss-1
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                *10.98.233.91*
IPs:               *10.98.233.91*
Port:              wconsj-1  8161/TCP
TargetPort:        8161/TCP
Endpoints:         *10.1.0.31*:8161
Session Affinity:  None
Events:            <none>

The 2 pods hosting the broker nodes are ex-aao-ss-0 and ex-aao-ss-1:
kubectl get all -o wide -n myproject
NAME                                          READY   STATUS    RESTARTS   AGE    IP
pod/activemq-artemis-operator-bb9cf6567-qjdzs 1/1     Running   0          46h    10.1.0.6
pod/debug                                     1/1     Running   0          162m   10.1.0.29
pod/ex-aao-ss-0                               1/1     Running   0          155m   *10.1.0.30*
pod/ex-aao-ss-1                               1/1     Running   0          154m   *10.1.0.31*

Hence, from another pod in the same cluster I can access the web console:
curl -L http://*ex-aao-hdls-svc*:8161. So I should be able to port-forward
using this service instead of the pod:
C:\Users\nle>kubectl port-forward service/ex-aao-hdls-svc 8161:8161 -n
myproject
Forwarding from 127.0.0.1:8161 -> 8161
Forwarding from [::1]:8161 -> 8161

However, hitting http://localhost:8161 from my desktop still gives the same
error:

Handling connection for 8161
Handling connection for 8161
E0818 14:51:30.135226   18024 portforward.go:400] an error occurred
forwarding 8161 -> 8161: error forwarding port 8161 to pod
ff6c35be7d3ace906d726b6abd38dbe122ddf2530647a44c799ca9a8a15ab245, uid :
exit status 1: 2021/08/18 18:51:26 socat[1906] E connect(17, AF=2
127.0.0.1:8161, 16): Connection refused
E0818 14:51:30.136855   18024 portforward.go:400] an error occurred
forwarding 8161 -> 8161: error forwarding port 8161 to pod
ff6c35be7d3ace906d726b6abd38dbe122ddf2530647a44c799ca9a8a15ab245, uid 

Re: Artemis cloud: cannot access web console

2021-08-18 Thread Justin Bertram
If the embedded web server which serves the console (as configured in
bootstrap.xml) is bound to localhost then it will never be accessible from
a remote machine. You need to bind it to an IP or hostname which is
externally accessible.


Justin

On Tue, Aug 17, 2021 at 2:58 PM Thai Le  wrote:

> Hello,
>
> I am not sure if questions regarding Artemis Cloud can be asked here, but
> since I found no mailing list for Artemis Cloud and the Slack channel needs
> an invitation to join, I'm going to try my luck here.
>
> I installed the Artemis operator and an ActiveMQArtemis with a deployment
> plan of 2 brokers on my single-node Kubernetes (docker-desktop); here is
> the deployment:
>
> apiVersion: broker.amq.io/v2alpha5
> kind: ActiveMQArtemis
> metadata:
>   name: ex-aao
> spec:
>   adminUser: brokerAdmin
>   adminPassword: verySecret
>   deploymentPlan:
>     size: 2
>     image: placeholder
>     podSecurity:
>       runAsUser: 0
>   console:
>     expose: true
>     sslEnabled: false
>
> The 2 brokers are running and I can curl the web console from another pod
> in the same Kubernetes cluster. However, I cannot access the web console
> from my desktop (http://localhost:8161/console). I also tried to port
> forward requests to port 8161 from my desktop to one of the 2 Artemis pods,
> but it does not work either.
>
> I would appreciate it if anyone could give me a hint as to what may be
> wrong, or a pointer to the Artemis Cloud mailing list.
>
> Thai Le
>