These are not hard limits of a cluster; they are the limits to which the community tests clusters. Scalability testing is performed against specific control plane configurations, and you are of course not limited to those.
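For instance, the pods-per-node figure corresponds to the kubelet's max-pods setting (110 by default in recent releases); the scheduler will not place pods on a node beyond that, and excess pods stay Pending. If you have qualified your nodes for more, the limit is configurable. A hedged sketch, assuming a release that supports kubelet configuration files (older kubelets take the equivalent `--max-pods` flag instead; the value 200 is just an illustration):

```yaml
# Sketch of a KubeletConfiguration raising the per-node pod limit.
# Pass the file to the kubelet via --config; on older releases use
# the --max-pods flag instead.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 200
```

Raising this only moves the static ceiling; as noted below, the dynamic behavior of those pods matters more.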
As you start to deviate from what the community tests, you take on more responsibility for qualifying your cluster configuration -- in general, the more you have to know what you are doing.

In real life, cluster scaling limits are driven more by dynamic behaviors than by static configuration. More than 100 containers per node is probably OK. The more important question is: what is the rate of container starts? A cluster with 50 highly ephemeral containers per node puts more stress on the system than one with 200 containers per node that just sit there statically. Using controllers like Jobs adds more system stress than adding more nodes does.

Again, at these scales you just need to take more care and make sure you have your monitoring/metrics story sorted out, so you get feedback on your cluster operations.

-Bob

PS: Come join us at SIG Scalability! :-)

On Monday, January 29, 2018 at 5:00:48 PM UTC-8, Luis Pabon wrote:
> Hi all, I have a question on Kubernetes limits:
> https://kubernetes.io/docs/admin/cluster-large/ - What happens if, for
> example, more than 100 pods are scheduled to a node?