On 03/09/2018 11:26 AM, Sebastian Laskawiec wrote:
>
>
> On Thu, Mar 8, 2018 at 11:47 AM Bela Ban <bela...@mailbox.org> wrote:
>
>
>
>     On 08/03/18 10:49, Sebastian Laskawiec wrote:
>     > Hey Bela,
>     >
>     > I've just stumbled upon this:
>     > https://coreos.com/os/docs/latest/cluster-discovery.html
>     >
>     > The Etcd folks created a public discovery service. You need to use a
>     > token and get a discovery string back. I believe that's super, super
>     > useful for demos across multiple public clouds.
>
>
>     Why? This is conceptually the same as running a GossipRouter on a
>     public, DNS-mapped IP address...
>
>
>     The real challenge with cross-cloud clusters is (as you and I
>     discovered) to bridge the non-public addresses of local cloud members
>     with members running in different clouds.
>
>
> I totally agree with you here. It's pretty bad that there is no way 
> for a Pod to learn the external Load Balancer address that exposes it.
>
> The only way I can see to fix this is to write a very small 
> application which does this mapping. The app could then use 
> PodInjectionPolicy [1] (or a similar Admission Controller [2]).
>
> So back to the publicly available GossipRouter - I still believe there 
> is potential in this solution and we should create a small tutorial 
> telling users how to do it (maybe a template for OpenShift?). But 
> granted - the Admission Controller work (the mapper I mentioned above) 
> is by far the more important.
>
> [1] https://kubernetes.io/docs/tasks/inject-data-application/podpreset/
> [2] https://kubernetes.io/docs/admin/admission-controllers/
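
For illustration, the injection could look something like the sketch 
below - a PodPreset that the mapper app would create/update once it 
knows the external address. The label and env var name are made up:

    apiVersion: settings.k8s.io/v1alpha1
    kind: PodPreset
    metadata:
      name: external-address
    spec:
      # hypothetical label; whatever selects the Infinispan pods
      selector:
        matchLabels:
          app: infinispan
      env:
        # filled in by the mapper once the LB address is known
        - name: EXTERNAL_ADDR
          value: "203.0.113.10:7800"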

I think that the question of mapping to public IPs is almost orthogonal 
to the existence of the service. Nodes can publish whatever address/data 
they want; the IPs may be relevant only within the internal network. The 
purpose, as I see it, is to get a cluster going ASAP - without even 
needing to turn on a GossipRouter.
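
For reference, etcd's flow is just two HTTP calls (per their docs; the 
size parameter is the expected initial cluster size):

    # ask the public service for a new discovery URL (token)
    curl 'https://discovery.etcd.io/new?size=3'
    # -> https://discovery.etcd.io/6a28e078895c5ec737174db2419bb2f3

    # each member then registers itself against that URL and reads
    # the list of peers back
    etcd --discovery https://discovery.etcd.io/6a28e078895c5ec737174db2419bb2f3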


>
>     Unless you make all members use public IP addresses, but that's not
>     something that's typically advised in a cloud env.
>
>
>     > What do you think about that? Perhaps we could implement an
>     ETCD_PING
>     > and just reuse their service or write our own?
>
>     Sure, should be simple. But - again - what's the goal? If
>     discovery.etcd.io can be used as a public *permanent* discovery
>     service, yes, cool.
>
>
> You convinced me - GossipRouter is the right way to go here.
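
For completeness, pointing members at a public GossipRouter should be 
just a transport change in the JGroups stack - roughly something like 
this (host and port are placeholders; the router itself is started 
separately, e.g. via org.jgroups.stacks.GossipRouter):

    <config xmlns="urn:org:jgroups">
        <!-- TUNNEL replaces UDP/TCP and relays all traffic
             through the public router -->
        <TUNNEL gossip_router_hosts="router.example.com[12001]"/>
        <PING/>
        <!-- ...rest of the usual stack... -->
    </config>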

That said, I'd personally prefer an HTTP-based service exchanging some 
JSON - it's easy to inspect and see what it does, so I'd trust it a bit 
more. It's also unlikely that HTTP traffic would be blocked from any 
node, and it's easy to debug which nodes have connected and which have 
not - simply peek at the JSON list.
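
A hypothetical exchange (all endpoint names made up) could be as simple 
as:

    # a node announces itself under a cluster token
    curl -X POST https://discovery.infinispan.org/clusters/<token> \
         -d '{"node":"node-a","addr":"10.0.0.5:7800"}'

    # anyone can read the current member list back
    curl https://discovery.infinispan.org/clusters/<token>
    # -> [{"node":"node-a","addr":"10.0.0.5:7800"}, ...]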

I wouldn't parasitize etcd's servers; rather, we should spawn our own 
discovery.infinispan.org. Besides looking better, we could also gather 
some interesting data (what cluster sizes people are using, how often 
they restart their servers...).

Radim

>
>     > Thanks,
>     > Seb
>
>     --
>     Bela Ban | http://www.jgroups.org
>


-- 
Radim Vansa <rva...@redhat.com>
JBoss Performance Team
