It's just that the "zone=" label is used for service spreading in our example
scheduler configs
<https://docs.openshift.com/enterprise/3.1/admin_guide/scheduler.html#use-cases>,
so it has a technical significance there. Using "env=" would be fine.
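
For reference, this is roughly the shape of the policy those docs describe: "region" used as a hard affinity predicate and "zone" used as a spreading priority. (A sketch of the 3.x policy format; the rule names and weight are illustrative.)

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "RegionAffinity",
     "argument": {"serviceAffinity": {"labels": ["region"]}}}
  ],
  "priorities": [
    {"name": "ZoneSpread", "weight": 2,
     "argument": {"serviceAntiAffinity": {"label": "zone"}}}
  ]
}
```

Repurposing "zone=" as a tier marker would interact with that spreading rule, which is why a neutral key like "env=" is safer.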

On Wed, May 4, 2016 at 11:41 AM, Erik Jacobs <[email protected]> wrote:

> Hi Luke,
>
> I'll have to disagree but only semantically.
>
> For a small environment and without changing the scheduler config, the
> concept of "zone" can be used. Yes, I would agree with you that in a real
> production environment the Red Hat concept of a "zone" is as you described.
>
> You could additionally label nodes with something like "env=appserver" and
> use nodeselectors on that. This is probably a more realistic production
> expectation.
>
> For the purposes of getting Abdala's small environment going, I guess it
> doesn't much "matter"...
>
>
> Erik M Jacobs, RHCA
> Principal Technical Marketing Manager, OpenShift Enterprise
> Red Hat, Inc.
> Phone: 646.462.3745
> Email: [email protected]
> AOL Instant Messenger: ejacobsatredhat
> Twitter: @ErikonOpen
> Freenode: thoraxe
>
> On Wed, May 4, 2016 at 11:36 AM, Luke Meyer <[email protected]> wrote:
>
>>
>>
>> On Tue, May 3, 2016 at 10:57 AM, Erik Jacobs <[email protected]> wrote:
>>
>>> Hi Olga,
>>>
>>> Some responses inline.
>>>
>>>
>>> Erik M Jacobs, RHCA
>>> Principal Technical Marketing Manager, OpenShift Enterprise
>>> Red Hat, Inc.
>>> Phone: 646.462.3745
>>> Email: [email protected]
>>> AOL Instant Messenger: ejacobsatredhat
>>> Twitter: @ErikonOpen
>>> Freenode: thoraxe
>>>
>>> On Mon, Apr 25, 2016 at 9:34 AM, ABDALA Olga <[email protected]>
>>> wrote:
>>>
>>>> Hello all,
>>>>
>>>>
>>>>
>>>> I am done with my *origin advanced installation* (thanks to your
>>>> useful help), whose architecture is composed of *4 virtualized servers*
>>>> (on the same network):
>>>>
>>>> -       1  Master
>>>>
>>>> -       2 Nodes
>>>>
>>>> -       1 VM hosting Ansible
>>>>
>>>>
>>>>
>>>> My next steps are to implement/test some use cases with a *three-tier
>>>> App* (each App’s tier being hosted on a different VM):
>>>>
>>>> -       *Horizontal scalability*;
>>>>
>>>> -       *Load-balancing* of the Nodes: keep the system running
>>>> even if one of the VMs goes down;
>>>>
>>>> -       App monitoring using the *Origin API*: allow the Origin API to
>>>> “tell” the App on which VM each tier is hosted. (I still don’t know how to
>>>> test that though…)
>>>>
>>>>
>>>>
>>>> There are some *notions* that are still not clear to me:
>>>>
>>>> -       From my web console, how can I know *on which Node my App
>>>> has been deployed*?
>>>>
>>>
>>> If you look in the Browse -> Pods -> select a pod, you should see the
>>> node where the pod is running.
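>>>
>>> The same information is available from the CLI; `oc get pods -o wide`
>>> adds a NODE column to the listing, e.g. (pod names illustrative):
>>>
>>> ```
>>> oc get pods -o wide
>>> oc describe pod myapp-1-abcde | grep -i node
>>> ```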
>>>
>>>
>>>> -       How can I put *each component of my App* on a *separate Node*?
>>>>
>>>> -       How does the “*zones*” concept in origin work?
>>>>
>>>
>>> These two are closely related.
>>>
>>> 1) In your case it sounds like you would want a zone for each tier:
>>> appserver, web server, db
>>> 2) This would require a node with a label of, for example, zone=appserver
>>> 3) When you create your pod (or replication controller, or deployment
>>> config) you would want to specify, via a nodeselector, which zone you want
>>> the pod(s) to land in
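>>>
>>> A sketch of steps 2 and 3 from the CLI (hostname, label value, and DC
>>> name are illustrative):
>>>
>>> ```
>>> # 2) label a node for the app server tier
>>> oc label node node2.example.com zone=appserver
>>>
>>> # 3) pin the deployment config's pods to that label via a nodeselector
>>> oc patch dc/myapp \
>>>   -p '{"spec":{"template":{"spec":{"nodeSelector":{"zone":"appserver"}}}}}'
>>> ```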
>>>
>>>
>> This is not the concept of zones. The point of zones is to spread
>> replicas between different zones in order to improve HA (for instance,
>> define a zone per rack, thereby ensuring that taking down a rack doesn't
>> take down your app that's scaled across multiple zones).
>>
>> This isn't what you want though. And you'd certainly never put a zone in
>> a nodeselector for an RC if you're trying to scale it to multiple zones.
>>
>> For the purpose of separating the tiers of your app, you would still want
>> to use a nodeselector per DC or RC and corresponding node labels. There's
>> no other way to designate where you want the pods from different RCs to
>> land. You just don't want "zones".
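>>
>> In other words, the mechanism is the same, only the label key changes,
>> leaving "zone" free for HA spreading. For example, a nodeSelector like
>> this in each DC's pod template (label name and value illustrative):
>>
>> ```yaml
>> # fragment of a deployment config: pods will only be scheduled
>> # onto nodes carrying the label env=appserver
>> spec:
>>   template:
>>     spec:
>>       nodeSelector:
>>         env: appserver
>> ```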
>>
>>
>>
>>> This stuff is scattered throughout the docs:
>>>
>>>
>>> https://docs.openshift.org/latest/admin_guide/manage_nodes.html#updating-labels-on-nodes
>>>
>>> https://docs.openshift.org/latest/dev_guide/deployments.html#assigning-pods-to-specific-nodes
>>>
>>> I hope this helps.
>>>
>>>
>>>>
>>>>
>>>> Content of /etc/ansible/hosts of my Ansible hosting VM:
>>>>
>>>> [masters]
>>>>
>>>> sv5305.selfdeploy.loc
>>>>
>>>> # host group for nodes, includes region info
>>>>
>>>> [nodes]
>>>>
>>>> sv5305.selfdeploy.loc openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=false
>>>>
>>>> sv5306.selfdeploy.loc openshift_node_labels="{'region': 'primary', 'zone': 'east'}"
>>>>
>>>> sv5307.selfdeploy.loc openshift_node_labels="{'region': 'primary', 'zone': 'west'}"
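>>>>
>>>> (After installation, the labels this inventory applies can be checked
>>>> from the master with:)
>>>>
>>>> ```
>>>> oc get nodes --show-labels
>>>> ```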
>>>>
>>>>
>>>>
>>>> Thank you in advance.
>>>>
>>>>
>>>>
>>>> Regards,
>>>>
>>>>
>>>>
>>>> Olga
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> dev mailing list
>>>> [email protected]
>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>>
>>>>
>>>
>>
>