You have to configure a few things:

- determine the set of names you will use everywhere: in the public CNAMEs,
  the hosts-file entries (or local DNS cache entries), and the virtual-host
  configs on the proxy. Example: supervisor01.mycompany.com.
- a reverse-proxy front end, such as nginx, that routes requests back to the
  appropriate backends based on its virtual-host config (a minimal sketch
  follows this list)
- manage /etc/hosts entries on each node, or a local DNS cache (see the
  hosts-file sketch below)
- `hostname -f` on each node must return the matching name, e.g.
  supervisor01.mycompany.com
- public DNS entries (CNAMEs) that all point to the reverse-proxy server
  (see the zone-file sketch below)
- on AWS, you can configure an ELB to point at the reverse-proxy endpoint
  for security
- an AWS ALB might also work as the front-end reverse-proxy endpoint, but I
  have not tried it personally
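
To illustrate, here is a minimal sketch of one such virtual host. It assumes
the name supervisor01.mycompany.com from above, a placeholder private address
of 10.0.0.11 for that node, and port 8000 (the logviewer port visible in the
URL quoted below); adjust all three to your environment:

```
# hypothetical file: /etc/nginx/conf.d/supervisor01.conf
# Route requests for the public name back to the private supervisor node.
server {
    listen 80;
    server_name supervisor01.mycompany.com;

    location / {
        # 10.0.0.11 is a placeholder for the supervisor's private address;
        # 8000 is the logviewer port that serves the debug/dump pages.
        proxy_pass http://10.0.0.11:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

You would repeat one server block per public name (the Nimbus UI plus each
supervisor's logviewer).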
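
For the per-node side, here is a sketch of the hosts-file and hostname setup
(addresses and names are placeholders; hostnamectl assumes a systemd-based
distro):

```
# /etc/hosts on every node: map each public name to that node's
# private address so the nodes resolve each other consistently
10.0.0.11   supervisor01.mycompany.com supervisor01
10.0.0.12   supervisor02.mycompany.com supervisor02

# on supervisor01, set and verify the FQDN
sudo hostnamectl set-hostname supervisor01.mycompany.com
hostname -f    # should print supervisor01.mycompany.com
```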
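
On the public DNS side, every name is just an alias for the proxy, e.g. in
BIND zone-file form (proxy.mycompany.com is a placeholder for wherever your
reverse proxy actually lives):

```
; all public names resolve to the single reverse-proxy host
nimbus.mycompany.com.        IN CNAME proxy.mycompany.com.
supervisor01.mycompany.com.  IN CNAME proxy.mycompany.com.
supervisor02.mycompany.com.  IN CNAME proxy.mycompany.com.
```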

I tried this on AWS with nginx running on one of my ZooKeeper servers. It
works well.

I have some shell scripts online that demonstrate how to set this up:

- nginx config lib:
  https://github.com/darkn3rd/storm-vagrant/blob/master/scripts/lib/nginx.src
- how the lib is used in setup_proxy():
  https://github.com/darkn3rd/storm-vagrant/blob/master/scripts/lib/vagrant_driver.src


On Jan 22, 2017 1:32 PM, "Alexandre Vermeerbergen" <[email protected]>
wrote:

> Hello,
>
> I deployed our application to an Apache Storm cluster (currently version
> 1.0.1) on Amazon EC2 instances.
>
> Nimbus UI works like a charm, except for the "debug" links, whose URLs
> are based on the private FQDNs of the VMs instead of their public FQDNs.
>
> For example, when I click the debug link of one of the executors, I get
> a URL like this one, which isn't accessible from outside the VM:
>
> http://ip-161-12-410-48.eu-west-1.compute.internal:8000/dumps/<name of my
> topology>
>
> Please note that the "Host" on the same line is
> ip-161-12-410-48.eu-west-1.compute.internal,
> so I guess that's the root of my issue; it should probably be a public
> FQDN like ec2-12-132-11-1.eu-west-1.compute.amazonaws.com instead.
>
> Any idea how to make Storm / Nimbus / Nimbus UI use public FQDNs instead
> of private ones?
>
> Thanks,
> Alexandre
