I’ve actually read through the code and documentation for the past couple of 
days, and I’ve come to a pretty elegant solution (IMO):
Starting from 1.4, k8s provides inter-pod anti-affinity (still alpha, but the 
functionality is there).
So, I create node labels for node sizes, let’s say S, M, L, XL, … with a tag 
such as:
s: size
so that a node can carry more than one size label, which means an L-size node 
will have the S, M, and L labels.
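A minimal sketch of that labeling scheme (the `size-*` label keys and the node 
name are my own illustration, not an established convention):

```yaml
# Hypothetical node labels: an L-size node carries the S, M, and L
# size labels, so pods selecting any of those sizes can land on it.
apiVersion: v1
kind: Node
metadata:
  name: node-l-example
  labels:
    size-s: "true"
    size-m: "true"
    size-l: "true"
```

Equivalently: kubectl label node node-l-example size-s=true size-m=true size-l=true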

Then, I create a ReplicaSet with pod anti-affinity, which will make sure no 
more than one instance of the pod is located on a node, and I also have a node 
selector for size. So let’s say S has 1 core; then we have only 1 instance of 
the IN app. An M node has, let’s say, 2 cores; then we get one instance for the 
S node selector and one for the M node selector.
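Putting the two pieces together, a sketch of such a ReplicaSet (names, labels, 
and image are hypothetical; also note that in 1.4 the anti-affinity rule is 
still alpha and is written via the scheduler.alpha.kubernetes.io/affinity 
annotation rather than the affinity field shown here):

```yaml
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: in-app-s
spec:
  replicas: 3
  selector:
    matchLabels:
      app: in-app
      slot: s
  template:
    metadata:
      labels:
        app: in-app
        slot: s
    spec:
      # Only schedule onto nodes carrying the (hypothetical) S size label.
      nodeSelector:
        size-s: "true"
      # Hard anti-affinity: at most one slot=s pod per hostname.
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                slot: s
            topologyKey: kubernetes.io/hostname
      containers:
      - name: in-app
        image: example/in-app:latest
```

An analogous ReplicaSet selecting size-m with slot: m would then place exactly 
one more instance on every M-or-larger node.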

As a result, I’ll have exactly what I need: one instance of the app 
configuration per node (due to host networking), and rolling updates out of the 
box. So basically I was going to implement the same functionality myself, but 
it seems we are all set now.

I did not believe this functionality was present, but now I love k8s even 
more :)

> On Oct 27, 2016, at 6:35 AM, Rodrigo Campos <[email protected]> wrote:
> 
> 
> 
> On Sunday, October 23, 2016, Yaroslav Molochko <[email protected]> wrote:
> Thank you for your time and valuable suggestions, please find my comments 
> below:
> 
>> On Oct 23, 2016, at 3:37 PM, Rodrigo Campos <[email protected]> wrote:
>> 
>> But, to have a solution today, what you say makes sense. But I'm not sure 
>> how you will communicate between the IN and OUT pods if they are different 
>> pods and you need unix sockets and it is SO sensitive to performance.
> 
> I was thinking of a shared folder from the host machine; we could make a 
> dedicated volume for that shared folder, which can even be in tmpfs just to 
> avoid the inode-crawling attack vector.
> 
>> I would consider doing: 1 pod that has several containers, several IN 
>> containers
>> that each reserve the CPU usage you want and one OUT container (that also
>> reserves the mem usage you want). All in one pod.
>> 
>> This way, you can communicate via unix sockets using an emptyDir volume or
>> HostPath if that is more performant. Also, the OUT container may need to use
>> hostNetwork to do the outgoing IP thing you need.
>> 
>> And if a logical host consists of several IN instances and one OUT instance, 
>> then you really want them all in the same pod. That's what a pod tries to 
>> abstract, really.
> 
> Thanks for the suggestions. What bothers me, though, is that this may lead to 
> extra work building dedicated pod configurations (meaning an extra 
> ReplicaSet) for each type of node we have. During years of evolution, we’ve 
> accumulated plenty of system types, from 1 core / 1 GB RAM to 32 cores / 
> 64 GB RAM and everything in between :) That is around a dozen configurations, 
> so it is doable in general, but I would love to abstract the HW level 
> completely.
> 
> Why would it lead to that? You can't put more than one instance of an app in 
> one node? Is this because of the IP address of each node?
> 
> 
> 
> Thanks a lot,
> Rodrigo
> 
> -- 
> You received this message because you are subscribed to a topic in the Google 
> Groups "Kubernetes user discussion and Q&A" group.
> To unsubscribe from this topic, visit 
> https://groups.google.com/d/topic/kubernetes-users/DRzHmrjmiAs/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to 
> [email protected].
> To post to this group, send email to [email protected].
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.
