> Sorry, I don't follow the price argument. You are only charged for the nodes you use on a Kubernetes cluster (no masters, no matter the cluster size).

I don't quite understand "no matter the cluster size", since nobody ever talked about creating nodes that would go unused. In my example every node will be used, and of course I will be charged for it, so the cluster size is very important in determining total spending.


> So, I really don't see why the number of clusters makes a difference.
What I mean is very simple:
If I have to use a single cluster, the smallest node must be able to meet the db's requirements. My db needs 60 GB of RAM, so every node in this cluster will have 60 GB.
I can spend $1000/month, so I can afford two such nodes: one node for the db, the other for some number (6, 8, 10, I don't know) of web pods.

So I'm asking:

In terms of performance, scalability and stability, which is the better solution:

a single cluster with 2 nodes, where 1 node runs the db and the other runs the n web pods,

or

(considering that the requirements of the db machine are very different from those of the web machines) two clusters, one for the db (a single n1-standard-16 node) and another for the web machines (with several n1-standard-2 nodes)?
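If the single-cluster route is viable, I suppose I could also mix machine types with GKE node pools and then pin the db with the node labels / node selectors that David suggested. A rough sketch of what I have in mind (cluster and pool names are placeholders, and I haven't actually tried this, so please correct me if the flags or the gke-nodepool label are wrong):

  # one pool of small nodes for the web pods, one single big node for the db
  gcloud container node-pools create web-pool --cluster my-cluster \
      --machine-type n1-standard-2 --num-nodes 9
  gcloud container node-pools create db-pool --cluster my-cluster \
      --machine-type n1-standard-16 --num-nodes 1

  # db pod pinned to the big node via a nodeSelector
  apiVersion: v1
  kind: Pod
  metadata:
    name: db
  spec:
    nodeSelector:
      cloud.google.com/gke-nodepool: db-pool   # label GKE should put on every node of a pool
    containers:
    - name: db
      image: my-db-image                       # placeholder image
      resources:
        requests:
          memory: 50Gi                         # reserve most of the 60 GB node for the db

Would something like that give me, in practice, the same isolation as two separate clusters?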



> Can't you use an internal load balancer to communicate?

I noticed that if I create a LoadBalancer Service or an Ingress, Kubernetes creates a public IP address. So when you say *internal* load balancer, what are you referring to? I tried to use a NodePort Service to communicate between the clusters and it didn't work.
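Just to check whether this is what you mean by "internal": I found references to an annotation for GCP internal load balancing, something roughly like the sketch below (I haven't verified the annotation on my clusters, so treat it as a guess; the selector and port are placeholders):

  apiVersion: v1
  kind: Service
  metadata:
    name: db
    annotations:
      cloud.google.com/load-balancer-type: "Internal"   # internal (VPC) IP instead of a public one
  spec:
    type: LoadBalancer
    selector:
      app: db          # placeholder selector
    ports:
    - port: 5432       # placeholder port
      targetPort: 5432

If that gives the Service an internal IP, would the pods in the other cluster (same VPC/network) be able to reach it?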



On 13/12/2017 13:56, Rodrigo Campos wrote:
Sorry, I don't follow the price argument. You are only charged for the nodes you use on a Kubernetes cluster (no masters, no matter the cluster size).

So, I really don't see why the number of clusters makes a difference.

On Wednesday, December 13, 2017, <[email protected]> wrote:

    I think the situation is more complicated once we start looking
    at machine prices.
    Let me use some real data:
    1) I need a db machine like an n1-standard-16 ---> a Kubernetes
    cluster with 1 node, for $500/month
    2) I need 9 web servers like n1-standard-2 ---> a Kubernetes
    cluster with 9 nodes, for $480/month

    So for about $1000/month I have the configuration that currently
    supports my company's web traffic.

    If I wanted to use a single cluster, I would have to choose nodes
    like n1-standard-16.
    Not wanting to exceed the $1000 limit, I could create a cluster
    with 2 nodes.
    So I'd have: one node for the db and one node for the 9 (web) pods.

    So the real question could be: in terms of performance,
    scalability and stability, which is the better solution: (9 nodes
    with 1 pod each) vs (1 node with 9 pods)?

    If the two alternatives are comparable, I could use a single cluster :)

    On Tuesday, 12 December 2017 at 23:00:10 UTC+1, David Rosenstrauch wrote:
    > On 2017-12-12 4:38 pm, Marco De Rosa wrote:
    > > The main reason is that the "web" cluster has hardware features
    > > different from the "db" cluster, and I didn't find a way to have a
    > > cluster with, for example, one node better in cpu and/or ram than
    > > the others.
    > > So: 2 clusters to put in communication, with the doubt that I have
    > > described above.
    > > The alternative could be to create a single cluster with n nodes,
    > > sized in such a way as to support both the web traffic and the
    > > database work.
    > > So a situation where I have, for example, 4 nodes: 3 nodes with 6
    > > web-pods, plus the last node as a pure db machine.
    > > But this solution is quite complicated in terms of how precisely to
    > > size the web pods, the db and the overall characteristics of the
    > > cluster..
    > > Hence the idea to create two different clusters.
    >
    >
    > FYI, this could probably be easily accomplished on a single cluster,
    > using node labels and node selectors.
    >
    > Let's say you had 2 types of nodes: machines with big disks, and
    > machines with lots of memory. Then let's say that you have 2 different
    > types of containers - one that runs a memory cache, and one that runs a
    > log file processing system. What you could do is label the nodes as,
    > say, either "type=hidisk" or "type=himem", as appropriate. And then you
    > could set a node selector on the caches to only run on nodes with
    > "type=himem", and a node selector on the log processors to only run on
    > nodes with "type=hidisk".
    >
    > HTH,
    >
    > DR

--
You received this message because you are subscribed to the Google Groups "Kubernetes user discussion and Q&A" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.
