Hi. I'm having some issues migrating an (admittedly somewhat unconventional) existing system to a containerized environment (k8s) and was hoping someone might have some pointers on how I might be able to work around them.

A major portion of the system is implemented as what are conceptually micro-services. I'm in the process of porting each micro-service to a pod (eventually to be replicated), and then exposing each micro-service to processes outside the Kubernetes overlay network. Although it's been quite easy to create containers/pods out of each micro-service and get them running successfully on Kubernetes, I'm running into issues with the networking configuration, specifically with how to expose these services properly to the outside world.


The problem is that, the way the system is currently built (out of my control - it depends on idiosyncrasies of a piece of 3rd-party software), these micro-services have to operate more like "pets" than "cattle". That is, even if there are multiple instances of a particular micro-service running, client code needs to be able to reach a specific instance (pod), rather than just any instance. This is obviously different from the way most micro-service systems work, where each instance is interchangeable with any other, so you can expose the whole service to the external network through a load balancer. Because of this constraint, making the migration work correctly has proven non-trivial.


What I've been trying to do is find a way for *each* individual instance of a micro-service to be assigned its own public IP/port, rather than assigning one single public IP/port that points to a load balancer in front of them all. But I don't see any way to do this properly in Kubernetes.

* I tried exposing the pods externally using NodePort. However, that doesn't accomplish what I'm looking for. Although it does open a public port on each host, each of those ports just routes through the Service's built-in load balancing, which sends traffic to any of the service's pods rather than to one individual pod.
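  For reference, this is roughly the kind of Service definition I tried (names and ports are placeholders). Since the selector matches *all* pods carrying the `app: service-a` label, a connection to the node port can land on any replica:

  ```yaml
  # Hypothetical NodePort Service for one of the micro-services.
  # The selector matches every pod labeled app=service-a, so traffic
  # arriving on nodePort 30001 is balanced across all replicas --
  # there's no way to address one specific pod through it.
  apiVersion: v1
  kind: Service
  metadata:
    name: service-a
  spec:
    type: NodePort
    selector:
      app: service-a
    ports:
      - port: 8080
        targetPort: 8080
        nodePort: 30001
  ```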

* I tried exposing the pods externally using HostPort. That does work, and comes closest to what I'm looking for. But it has the major drawback that you can't run more than one instance of the same pod on the same host machine (since each instance wants to claim the same host port). As a result, if I want to run N instances of the same pod, I need N host machines. That's not ideal from a scalability / hardware-utilization (and cost) perspective.
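  Again just for reference, here's a sketch of the HostPort approach (image name and ports made up). Each instance claims the same port on whatever node it lands on, which is what forces the one-instance-per-host limit:

  ```yaml
  # Hypothetical pod using hostPort. Two copies of this pod can't
  # be scheduled to the same node, because both would try to bind
  # port 8080 on that node.
  apiVersion: v1
  kind: Pod
  metadata:
    name: service-a-pod1
    labels:
      app: service-a
  spec:
    containers:
      - name: service-a
        image: example/service-a:latest   # placeholder image
        ports:
          - containerPort: 8080
            hostPort: 8080
  ```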


I guess what I'd ideally be looking for is some way for each pod, as it's launched, to automatically be assigned a unique external hostname/port combination, with multiple instances of the same pod able to run on a single host and no port conflicts. E.g.:

service_A_pod1 exposed at 192.168.0.10:30001
service_A_pod2 exposed at 192.168.0.10:30005
service_A_pod3 exposed at 192.168.0.20:31007
etc.
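In case it helps clarify, one shape I could imagine this taking (not something I've actually found support for) is one NodePort-style Service per pod instance, selecting on a label unique to that instance, so each pod gets its own stable external port:

```yaml
# Purely hypothetical: a per-instance Service, keyed on a
# made-up pod-unique label, so service-a-pod1 gets its own
# externally reachable port independent of the other replicas.
apiVersion: v1
kind: Service
metadata:
  name: service-a-pod1
spec:
  type: NodePort
  selector:
    app: service-a
    instance: pod1      # hypothetical per-pod label
  ports:
    - port: 8080
      targetPort: 8080  # nodePort left for k8s to allocate
```

The catch is that something would have to create and garbage-collect one of these Services per pod automatically, which is exactly the machinery I haven't been able to find.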

I've read through the docs pretty thoroughly, though, and Kubernetes doesn't seem to provide anything like this.


Has anyone run into a similar problem before and/or have any ideas on how to solve it? Are there any 3rd-party add-ons to k8s that might help address a situation like this? (On a related note, we're using k8s in conjunction with Rancher. Might Rancher provide some capabilities here? I didn't see anything in the Rancher docs, but it's possible I missed something.) Or does Kubernetes have any hooks that would let you "roll your own" service deployment plugin, in order to customize the way external port exposure is done?


Another possible workaround: I could probably eliminate the "pets" constraint I'm bumping up against if I could run the pods behind a customized Service/load balancer that was a bit smarter about which specific pod instance it routed traffic to. So, same question about Kubernetes Services: are there any hooks to "roll your own" service? From what I can glean from the documentation, k8s Services only provide two kinds of routing to pods: round-robin, or client-IP-based SessionAffinity. Is there any way to plug in a custom routing algorithm?
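To be concrete, the only routing knob I can find on a Service is the one below, which pins a given client to *some* backend pod for a while, but the pod it picks is still arbitrary - I can't steer a client to a *specific* pod:

```yaml
# Client-IP session affinity: repeated connections from the same
# client IP go to the same pod, but which pod gets chosen in the
# first place is still out of my control.
apiVersion: v1
kind: Service
metadata:
  name: service-a
spec:
  selector:
    app: service-a
  sessionAffinity: ClientIP
  ports:
    - port: 8080
      targetPort: 8080
```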


On a side note, it's occurred to me that if I'm running into this much difficulty fitting the system into Kubernetes' architecture, there's an argument to be made that it doesn't make sense to run these services on k8s at all. However, I think many aspects of containerization do apply well to this system (it already has a micro-service design), and there are many features of Kubernetes it would do well to take advantage of (e.g., running multiple copies of services without caring which host they land on, services automatically moving to other hosts on failure, etc.). So I'd very much like to find a way to make the migration to k8s successful.


Apologies for being long-winded, and thanks in advance for any assistance anyone can offer.

DR

--
You received this message because you are subscribed to the Google Groups "Kubernetes 
user discussion and Q&A" group.