Hello Luke and Erik,

Please find my reactions inline.



From: Erik Jacobs [mailto:ejac...@redhat.com]
Sent: Wednesday, May 4, 2016 5:41 PM
To: Luke Meyer
Cc: ABDALA Olga; dev@lists.openshift.redhat.com
Subject: Re: Three-tier application deployment on OpenShift origin

Hi Luke,

I'll have to disagree, but only semantically.

For a small environment and without changing the scheduler config, the concept 
of "zone" can be used. Yes, I would agree with you that in a real production 
environment the Red Hat concept of a "zone" is as you described.

Ø  From what I understand, the Red Hat concept of a "zone" is meant to improve 
HA? And what is the 'other' concept of "zone" that you are mentioning, Erik?

You could additionally label nodes with something like "env=appserver" and use 
nodeselectors on that. This is probably a more realistic production expectation.
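For example (a sketch only; the node names come from the inventory at the 
bottom of this thread, and "myapp" is a hypothetical deployment config name):

  oc label node sv5306.selfdeploy.loc env=appserver
  oc label node sv5307.selfdeploy.loc env=db

  # point the deployment config's pod template at the label
  # (oc edit dc/myapp and adding a nodeSelector by hand works too):
  oc patch dc/myapp -p '{"spec":{"template":{"spec":{"nodeSelector":{"env":"appserver"}}}}}'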

Ø  Thanks for this info, I guess I will be doing that.

For the purposes of getting Abdala's small environment going, I guess it 
doesn't much "matter"...


Erik M Jacobs, RHCA
Principal Technical Marketing Manager, OpenShift Enterprise
Red Hat, Inc.
Phone: 646.462.3745
Email: ejac...@redhat.com
AOL Instant Messenger: ejacobsatredhat
Twitter: @ErikonOpen
Freenode: thoraxe

On Wed, May 4, 2016 at 11:36 AM, Luke Meyer <lme...@redhat.com> wrote:


On Tue, May 3, 2016 at 10:57 AM, Erik Jacobs <ejac...@redhat.com> wrote:
Hi Olga,

Some responses inline.


Erik M Jacobs, RHCA
Principal Technical Marketing Manager, OpenShift Enterprise
Red Hat, Inc.
Phone: 646.462.3745
Email: ejac...@redhat.com
AOL Instant Messenger: ejacobsatredhat
Twitter: @ErikonOpen
Freenode: thoraxe

On Mon, Apr 25, 2016 at 9:34 AM, ABDALA Olga <olga.abd...@solucom.fr> wrote:
Hello all,

I am done with my origin advanced installation (thanks to your helpful 
guidance), whose architecture is composed of 4 virtualized servers (all on the 
same network):

- 1 Master
- 2 Nodes
- 1 VM hosting Ansible

My next steps are to implement/test some use cases with a three-tier App (each 
of the App's tiers being hosted on a different VM):

- Horizontal scalability;
- Load-balancing of the Nodes: keep the system running even if one of the VMs 
goes down;
- App monitoring using the Origin API: allow the Origin API to "tell" the App 
on which VM each tier is hosted. (I still don't know how to test that though…)

There are some notions that are still not clear to me:

- From my web console, how can I know on which Node my App has been deployed?

If you go to Browse -> Pods and select a pod, you should see the node where 
the pod is running.
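The same information is available from the CLI, e.g. (a sketch; use a real pod 
name from your project):

  oc get pods -o wide          # the NODE column shows where each pod is scheduled
  oc describe pod <pod-name>   # the "Node:" field near the top shows the same thing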


- How can I put each component of my App on a separate Node?

- How does the "zones" concept in origin work?

These two are closely related.

1) In your case it sounds like you would want a zone for each tier: appserver, 
web server, db
2) This would require a node with a label of, for example, zone=appserver
3) When you create your pod (or replication controller, or deployment config) 
you would want to specify, via a nodeselector, which zone you want the pod(s) 
to land in (see the sketch below)
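A minimal sketch of steps 2 and 3 for the appserver tier, reusing a node name 
from the inventory at the bottom of this thread ("appserver-test" and the 
image are illustrative only; --overwrite is needed because the inventory 
already assigns each node a zone):

  oc label node sv5306.selfdeploy.loc zone=appserver --overwrite

Then a pod definition like the following (saved as pod.yaml and created with 
"oc create -f pod.yaml"):

  apiVersion: v1
  kind: Pod
  metadata:
    name: appserver-test        # hypothetical name, for illustration only
  spec:
    nodeSelector:
      zone: appserver           # schedule only onto nodes labeled zone=appserver
    containers:
    - name: app
      image: openshift/hello-openshift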


This is not the concept of zones. The point of zones is to spread replicas 
between different zones in order to improve HA (for instance, define a zone per 
rack, thereby ensuring that taking down a rack doesn't take down your app 
that's scaled across multiple zones).
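To make that concrete with the east/west zones from the inventory at the 
bottom of this thread (a sketch; "frontend" is a hypothetical replication 
controller name):

  oc scale rc/frontend --replicas=2
  oc get pods -o wide   # with the default scheduler config the two replicas
                        # should end up on nodes in different zones (east/west)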

This isn't what you want though. And you'd certainly never put a zone in a 
nodeselector for an RC if you're trying to scale it to multiple zones.
For the purpose of separating the tiers of your app, you would still want to 
use a nodeselector per DC or RC and corresponding node labels. There's no other 
way to designate where you want the pods from different RCs to land. You just 
don't want "zones".

Ø  That is exactly one of the things I would like to test: what happens if a 
pod goes down? I want my App to run all the time.

Ø  I've read that the RC is the one that ensures that another pod gets 
recreated after one has gone down. How is that done? Is there another version 
of the App that is always 'present' to take over? (I am really new to 
OpenShift and I am trying to understand all these concepts.)
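For what it's worth: there is no standby copy of the App. The RC runs a 
reconciliation loop; it counts the pods matching its label selector, and 
whenever the count drops below spec.replicas it creates a fresh pod from its 
template. A minimal sketch ("myapp" is a hypothetical label; saved as rc.yaml 
and created with "oc create -f rc.yaml"):

  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: myapp-rc
  spec:
    replicas: 2            # desired state: keep 2 pods matching the selector alive
    selector:
      app: myapp
    template:              # replacement pods are stamped out from this template
      metadata:
        labels:
          app: myapp
      spec:
        containers:
        - name: app
          image: openshift/hello-openshift

You can watch the recovery by deleting the pods and observing new ones appear:

  oc delete pod -l app=myapp
  oc get pods -w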

Thank you


This stuff is scattered throughout the docs:

https://docs.openshift.org/latest/admin_guide/manage_nodes.html#updating-labels-on-nodes
https://docs.openshift.org/latest/dev_guide/deployments.html#assigning-pods-to-specific-nodes

I hope this helps.


Content of /etc/ansible/hosts on my Ansible hosting VM:

[masters]
sv5305.selfdeploy.loc

# host group for nodes, includes region info
[nodes]
sv5305.selfdeploy.loc openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=false
sv5306.selfdeploy.loc openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
sv5307.selfdeploy.loc openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
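After the playbook runs, those openshift_node_labels should show up on the 
nodes themselves; a quick way to check (assuming your oc client supports 
--show-labels):

  oc get nodes --show-labels   # should list region=...,zone=... for each node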

Thank you in advance.

Regards,

Olga


_______________________________________________
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev