Re: Three-tier application deployment on OpenShift origin

2016-05-09 Thread Erik Jacobs
On Mon, May 9, 2016 at 9:02 AM, ABDALA Olga <olga.abd...@solucom.fr> wrote:

>
>
>
>
> *From:* Erik Jacobs [mailto:ejac...@redhat.com]
> *Sent:* Monday, May 9, 2016 14:31
>
> *To:* ABDALA Olga
> *Cc:* dev@lists.openshift.redhat.com
> *Subject:* Re: Three-tier application deployment on OpenShift origin
>
>
>
> On Mon, May 9, 2016 at 4:56 AM, ABDALA Olga <olga.abd...@solucom.fr>
> wrote:
>
> Hello Erik,
>
>
>
> Please find my comments inline
>
>
>
> *From:* Erik Jacobs [mailto:ejac...@redhat.com]
> *Sent:* Wednesday, May 4, 2016 17:32
> *To:* ABDALA Olga
> *Cc:* dev@lists.openshift.redhat.com
> *Subject:* Re: Three-tier application deployment on OpenShift origin
>
>
>
>
>
> On Wed, May 4, 2016 at 8:30 AM, ABDALA Olga <olga.abd...@solucom.fr>
> wrote:
>
> Hello Erik,
>
>
>
> Thank you for your inputs.
>
> However, while trying to update the label for my Nodes, here is what I
> get:
>
>
>
>
>
> Labels are single key/value pairs. You are trying to add an additional
> zone label without specifying --overwrite; you cannot have multiple values
> for the same key.
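>
> For example, a minimal sketch (node name and label value hypothetical) of
> overwriting an existing zone label from the CLI:
>
>     oc label node node1.example.com zone=appserver --overwrite
>
> Without --overwrite, oc label refuses to change a key that already has a
> value.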
>
>
>
> Same thing if I try to update my pods’ labels.
>
>
>
> Changing a pod label is not what you want to do. You want to change the
> pod nodeselector.
>
> » Yes, I guess that is what I will have to change.
>
>
>
> Yes.
>
>
>
> For the nodeSelector, where can I find the pod configuration file in which
> to specify the Node, please?
>
> Is it in the *master-config.yaml* file?
>
>
>
> master-config.yaml is the master configuration, not a "pod configuration".
> "pod configuration" is kind of a strange statement. You probably mean "pod
> definition".
>
> » By "pod definition", do you mean the pod YAML file?
>
>
>
> That is one example, yes.
>
>
>
>
>
> We'll ignore the nodeSelector in master-config because, while it's a thing,
> it won't do what you want. If you're interested, docs here:
> https://docs.openshift.org/latest/admin_guide/managing_projects.html#setting-the-cluster-wide-default-node-selector
> .
>
> » After checking the docs, my question is: if the defaultNodeSelector
> in the master config file is set for a specific region, does that mean that
> pods will never be placed on the Nodes of that specific region?
>
>
>
> If the defaultNodeSelector is set, and you didn't somehow change it in the
> project, then the default node selector will *always* be applied, in
> addition to any pod-specific node selector. Whether that default
> nodeSelector is for "region", "zone", or any other arbitrary key/value pair
> is not relevant. The default is the default.
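>
> For reference, a minimal sketch of where that default lives in
> master-config.yaml (the selector value is only an example):
>
>     projectConfig:
>       defaultNodeSelector: "region=primary"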
>
>
>
> I think you meant to ask "if the default... is set for a region... does
> that mean the pods will always be placed". Not "never". Why would the
> selector mean never? That sounds more like an anti-selector...
>
>
>
> » Always… yes, sorry, my bad.
>
>
>
> What you want to change is the pod nodeselector. I linked to the docs:
>
>
>
>
> https://docs.openshift.org/latest/dev_guide/deployments.html#assigning-pods-to-specific-nodes
>
> » Just to make sure: by setting a value for the nodeSelector, will
> that put my pod on the specified Node?
>
>
>
> If you set a value for the nodeSelector, your pod will attempt to be
> scheduled on nodes whose labels match.
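>
> A minimal sketch of what that looks like in a pod definition (names, image,
> and label value hypothetical):
>
>     apiVersion: v1
>     kind: Pod
>     metadata:
>       name: myapp-pod
>     spec:
>       nodeSelector:
>         zone: appserver
>       containers:
>       - name: myapp
>         image: openshift/hello-openshift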
>
>
>
> If you want to run a pod on a specific node I believe there is also a way
> to select a specific node by its hostname. It's in the docs somewhere.
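>
> One such way: nodes are automatically labeled with their hostname under the
> kubernetes.io/hostname key, so a nodeSelector can pin a pod to a single node
> (hostname hypothetical):
>
>     nodeSelector:
>       kubernetes.io/hostname: node1.example.com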
>
> Ok thanks
>
>
>
> I don't know how you created your pods, so how you change/add the
> nodeSelector depends on that.
>
> » Actually, I did not really 'create' the pods. What I did is, after
> creating a project and adding my application to the project, 1 pod was
> automatically created. From there, I simply increased the number of pods
> (from the web console) to as many as I wanted.
>
>
>
> Yes, so you have a deployment config that causes a replication controller
> to be created, which in turn causes a pod to be created. As noted below,
> "new-app" / "add to project" are basically the same thing: one is the CLI
> and one is the UI.
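>
> That chain is visible from the CLI; a quick sketch (the dc name "myapp" is
> hypothetical):
>
>     oc get dc,rc,pods
>     oc scale dc/myapp --replicas=3
>
> Scaling the deployment config updates the replication controller's desired
> replica count, and the controller creates or removes pods to match.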
>
> Oh ok I see.
>
> » By the way, I wanted to get something clear in my head regarding the
> pods. Does the number of pods mean the number of the application's
> 'versions'?
>
> I don't understand your question. The number of pods is the number of
> pods. What do you mean by "the application's 'versions'"?
>
> What I meant by application's versions is a sort of 'A/B testing'. I was
> wondering how the HA works: when a pod goes down, how is another pod
> regenerated by the replication controller so that the App keeps running?

RE: Three-tier application deployment on OpenShift origin

2016-05-09 Thread ABDALA Olga


From: Erik Jacobs [mailto:ejac...@redhat.com]
Sent: Monday, May 9, 2016 14:31
To: ABDALA Olga
Cc: dev@lists.openshift.redhat.com
Subject: Re: Three-tier application deployment on OpenShift origin

On Mon, May 9, 2016 at 4:56 AM, ABDALA Olga <olga.abd...@solucom.fr> wrote:
Hello Erik,

Please find my comments inline

From: Erik Jacobs [mailto:ejac...@redhat.com]
Sent: Wednesday, May 4, 2016 17:32
To: ABDALA Olga
Cc: dev@lists.openshift.redhat.com
Subject: Re: Three-tier application deployment on OpenShift origin


On Wed, May 4, 2016 at 8:30 AM, ABDALA Olga <olga.abd...@solucom.fr> wrote:
Hello Erik,

Thank you for your inputs.
However, while trying to update the label for my Nodes, here is what I get:

[inline screenshot: error output from the node label update]

Labels are single key/value pairs. You are trying to add an additional zone
label without specifying --overwrite; you cannot have multiple values for the
same key.

Same thing if I try to update my pods’ labels.
[inline screenshot: error output from the pod label update]

Changing a pod label is not what you want to do. You want to change the pod 
nodeselector.

» Yes, I guess that is what I will have to change.

Yes.

For the nodeSelector, where can I find the pod configuration file in which to
specify the Node, please?
Is it in the master-config.yaml file?

master-config.yaml is the master configuration, not a "pod configuration". "pod 
configuration" is kind of a strange statement. You probably mean "pod 
definition".

» By "pod definition", do you mean the pod YAML file?

That is one example, yes.


We'll ignore the nodeSelector in master-config because, while it's a thing, it
won't do what you want. If you're interested, docs here:
https://docs.openshift.org/latest/admin_guide/managing_projects.html#setting-the-cluster-wide-default-node-selector.

» After checking the docs, my question is: if the defaultNodeSelector in the
master config file is set for a specific region, does that mean that pods
will never be placed on the Nodes of that specific region?

If the defaultNodeSelector is set, and you didn't somehow change it in the 
project, then the default node selector will *always* be applied, in addition 
to any pod-specific node selector. Whether that default nodeSelector is for 
"region", "zone", or any other arbitrary key/value pair is not relevant. The 
default is the default.

I think you meant to ask "if the default... is set for a region... does that 
mean the pods will always be placed". Not "never". Why would the selector mean 
never? That sounds more like an anti-selector...

» Always… yes, sorry, my bad.

What you want to change is the pod nodeselector. I linked to the docs:

https://docs.openshift.org/latest/dev_guide/deployments.html#assigning-pods-to-specific-nodes

» Just to make sure: by setting a value for the nodeSelector, will that put
my pod on the specified Node?

If you set a value for the nodeSelector, your pod will attempt to be scheduled
on nodes whose labels match.

If you want to run a pod on a specific node I believe there is also a way to 
select a specific node by its hostname. It's in the docs somewhere.
Ok thanks

I don't know how you created your pods, so how you change/add the nodeSelector
depends on that.

» Actually, I did not really 'create' the pods. What I did is, after creating a
project and adding my application to the project, 1 pod was automatically
created. From there, I simply increased the number of pods (from the web
console) to as many as I wanted.

Yes, so you have a deployment config that causes a replication controller to be
created, which in turn causes a pod to be created. As noted below, "new-app" /
"add to project" are basically the same thing: one is the CLI and one is the UI.
Oh ok I see.

» By the way, I wanted to get something clear in my head regarding the pods.
Does the number of pods mean the number of the application's 'versions'?

I don't understand your question. The number of pods is the number of pods.
What do you mean by "the application's 'versions'"?

What I meant by application's versions is a sort of 'A/B testing'. I was
wondering how the HA works: when a pod goes down, how is another pod
regenerated by the replication controller so that the App keeps running?

Since you have builds, I am guessing that you used something like "new-app".
new-app will have created a deploymentconfig. You would want to edit the
deploymentconfig, find the pod template, and then add the nodeSelector as shown
in the docs above.
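
A sketch of that edit (dc name and label value hypothetical): open the dc and
add the selector to the pod template, which lives under spec.template:

    oc edit dc/myapp

    spec:
      template:
        spec:
          nodeSelector:
            zone: appserver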


Thank you!

Olga

From: Erik Jacobs [mailto:ejac...@redhat.com]
Sent: Tuesday, May 3, 2016 16:57
To: ABDALA Olga
Cc:

RE: Three-tier application deployment on OpenShift origin

2016-05-09 Thread ABDALA Olga
Hello Luke and Erik,

Please find my reaction inline.



From: Erik Jacobs [mailto:ejac...@redhat.com]
Sent: Wednesday, May 4, 2016 17:41
To: Luke Meyer
Cc: ABDALA Olga; dev@lists.openshift.redhat.com
Subject: Re: Three-tier application deployment on OpenShift origin

Hi Luke,

I'll have to disagree but only semantically.

For a small environment and without changing the scheduler config, the concept 
of "zone" can be used. Yes, I would agree with you that in a real production 
environment the Red Hat concept of a "zone" is as you described.

» From what I understand, the Red Hat concept of a "zone" is meant to improve
HA? And what is the 'other' concept of "zone" that you are mentioning, Erik?

You could additionally label nodes with something like "env=appserver" and use 
nodeselectors on that. This is probably a more realistic production expectation.
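
A sketch of that approach (node name hypothetical): label the node, then use
the same key/value pair as the nodeSelector on that tier's deployment config:

    oc label node node1.example.com env=appserver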

» Thanks for this info, I guess I will be doing that.

For the purposes of getting Abdala's small environment going, I guess it 
doesn't much "matter"...


Erik M Jacobs, RHCA
Principal Technical Marketing Manager, OpenShift Enterprise
Red Hat, Inc.
Phone: 646.462.3745
Email: ejac...@redhat.com
AOL Instant Messenger: ejacobsatredhat
Twitter: @ErikonOpen
Freenode: thoraxe

On Wed, May 4, 2016 at 11:36 AM, Luke Meyer <lme...@redhat.com> wrote:


On Tue, May 3, 2016 at 10:57 AM, Erik Jacobs <ejac...@redhat.com> wrote:
Hi Olga,

Some responses inline/


Erik M Jacobs, RHCA
Principal Technical Marketing Manager, OpenShift Enterprise
Red Hat, Inc.
Phone: 646.462.3745
Email: ejac...@redhat.com
AOL Instant Messenger: ejacobsatredhat
Twitter: @ErikonOpen
Freenode: thoraxe

On Mon, Apr 25, 2016 at 9:34 AM, ABDALA Olga <olga.abd...@solucom.fr> wrote:
Hello all,

I am done with my origin advanced installation (thanks to your helpful input);
its architecture is composed of 4 virtualized servers (on the same network):

-   1 Master

-   2 Nodes

-   1 VM hosting Ansible

My next steps are to implement/test some use cases with a three-tier App (each
App's tier being hosted on a different VM):

-   Horizontal scalability;

-   Load-balancing of the Nodes: keep the system running even if one
of the VMs goes down;

-   App monitoring using the Origin API: allow the Origin API to "tell" the
App on which VM each tier is hosted. (I still don't know how to test that
though…)

There are some notions that are still not clear to me:

-   From my web console, how can I know on which Node my App has been
deployed?

If you look in the Browse -> Pods -> select a pod, you should see the node 
where the pod is running.
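
From the CLI, the same information appears in the NODE column of a wide
listing:

    oc get pods -o wide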


-   How can I put each component of my App on a separate Node?

-   How does the "zones" concept in Origin work?

These two are closely related.

1) In your case it sounds like you would want a zone for each tier: appserver,
web server, db
2) This would require a node with a label of, for example, zone=appserver
3) When you create your pod (or replication controller, or deployment config)
you would want to specify, via a nodeSelector, which zone you want the pod(s)
to land in (a combined sketch follows below)
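
Putting those steps together, a sketch (node names and zone values
hypothetical): label one node per tier,

    oc label node node1.example.com zone=appserver --overwrite
    oc label node node2.example.com zone=webserver --overwrite
    oc label node node3.example.com zone=db --overwrite

then give each tier's deployment config the matching selector under
spec.template.spec, e.g. nodeSelector: {zone: appserver} for the app tier.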


This is not the concept of zones. The point of zones is to spread replicas 
between different zones in order to improve HA (for instance, define a zone per 
rack, thereby ensuring that taking down a rack doesn't take down your app 
that's scaled across multiple zones).

This isn't what you want though. And you'd certainly never put a zone in a 
nodeselector for an RC if you're trying to scale it to multiple zones.
For the purpose of separating the tiers of your app, you would still want to 
use a nodeselector per DC or RC and corresponding node labels. There's no other 
way to designate where you want the pods from different RCs to land. You just 
don't want "zones".

» That is exactly one of the things I would like to test. What happens if a
pod goes down? Because I want my App to run all the time.

» I've read that the RC is the one that ensures that another pod gets
recreated after one has gone down. How is that done? Is there another version
of the App that is always 'present' to take over? (I am really new to OpenShift
and I am trying to understand all these concepts.)
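
A quick way to observe that behavior (pod name hypothetical): delete a pod and
watch the replication controller start a replacement from the same pod
template. The replacement is a fresh copy of the same image, not a standby
'version':

    oc delete pod myapp-1-abcde
    oc get pods -w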

Thank you


This stuff is scattered throughout the docs:

https://docs.openshift.org/latest/admin_guide/manage_nodes.html#updating-labels-on-nodes
https://docs.openshift.org/latest/dev_guide/deployments.html#assigning-pods-to-specific-nodes

I hope this helps.


Content of /etc/ansible/hosts of my Ansible hosting VM:
[masters]
sv5305.selfdeploy.loc
# host group for nodes, includes region info
[nodes]
sv5305.selfdeploy.loc openshift_node_labels="{'region': 'infra', 'zone': 
'default'}" openshift_

Re: Three-tier application deployment on OpenShift origin

2016-05-04 Thread Erik Jacobs
Hi Luke,

I'll have to disagree but only semantically.

For a small environment and without changing the scheduler config, the
concept of "zone" can be used. Yes, I would agree with you that in a real
production environment the Red Hat concept of a "zone" is as you described.

You could additionally label nodes with something like "env=appserver" and
use nodeselectors on that. This is probably a more realistic production
expectation.

For the purposes of getting Abdala's small environment going, I guess it
doesn't much "matter"...


Erik M Jacobs, RHCA
Principal Technical Marketing Manager, OpenShift Enterprise
Red Hat, Inc.
Phone: 646.462.3745
Email: ejac...@redhat.com
AOL Instant Messenger: ejacobsatredhat
Twitter: @ErikonOpen
Freenode: thoraxe

On Wed, May 4, 2016 at 11:36 AM, Luke Meyer  wrote:

>
>
> On Tue, May 3, 2016 at 10:57 AM, Erik Jacobs  wrote:
>
>> Hi Olga,
>>
>> Some responses inline/
>>
>>
>> Erik M Jacobs, RHCA
>> Principal Technical Marketing Manager, OpenShift Enterprise
>> Red Hat, Inc.
>> Phone: 646.462.3745
>> Email: ejac...@redhat.com
>> AOL Instant Messenger: ejacobsatredhat
>> Twitter: @ErikonOpen
>> Freenode: thoraxe
>>
>> On Mon, Apr 25, 2016 at 9:34 AM, ABDALA Olga 
>> wrote:
>>
>>> Hello all,
>>>
>>>
>>>
>>> I am done with my *origin advanced installation* (thanks to your helpful
>>> input); its architecture is composed of *4 virtualized servers* (on the
>>> same network):
>>>
>>> -   1 Master
>>>
>>> -   2 Nodes
>>>
>>> -   1 VM hosting Ansible
>>>
>>>
>>>
>>> My next steps are to implement/test some use cases with a *three-tier
>>> App* (each App's tier being hosted on a different VM):
>>>
>>> -   *Horizontal scalability*;
>>>
>>> -   *Load-balancing* of the Nodes: keep the system running
>>> even if one of the VMs goes down;
>>>
>>> -   App monitoring using the *Origin API*: allow the Origin API to
>>> "tell" the App on which VM each tier is hosted. (I still don't know how to
>>> test that though…)
>>>
>>>
>>>
>>> There are some *notions* that are still not clear to me:
>>>
>>> -   From my web console, how can I know *on which Node my App
>>> has been deployed*?
>>>
>>
>> If you look in the Browse -> Pods -> select a pod, you should see the
>> node where the pod is running.
>>
>>
>>> -   How can I put *each component of my App* on a *separate Node*?
>>>
>>> -   How does the "*zones*" concept in Origin work?
>>>
>>
>> These two are closely related.
>>
>> 1) In your case it sounds like you would want a zone for each tier:
>> appserver, web server, db
>> 2) This would require a node with a label of, for example, zone=appserver
>> 3) When you create your pod (or replication controller, or deployment
>> config) you would want to specify, via a nodeselector, which zone you want
>> the pod(s) to land in
>>
>>
> This is not the concept of zones. The point of zones is to spread replicas
> between different zones in order to improve HA (for instance, define a zone
> per rack, thereby ensuring that taking down a rack doesn't take down your
> app that's scaled across multiple zones).
>
> This isn't what you want though. And you'd certainly never put a zone in a
> nodeselector for an RC if you're trying to scale it to multiple zones.
>
> For the purpose of separating the tiers of your app, you would still want
> to use a nodeselector per DC or RC and corresponding node labels. There's
> no other way to designate where you want the pods from different RCs to
> land. You just don't want "zones".
>
>
>
>> This stuff is scattered throughout the docs:
>>
>>
>> https://docs.openshift.org/latest/admin_guide/manage_nodes.html#updating-labels-on-nodes
>>
>> https://docs.openshift.org/latest/dev_guide/deployments.html#assigning-pods-to-specific-nodes
>>
>> I hope this helps.
>>
>>
>>>
>>>
>>> Content of /etc/ansible/hosts of my Ansible hosting VM:
>>>
>>> [masters]
>>>
>>> sv5305.selfdeploy.loc
>>>
>>> # host group for nodes, includes region info
>>>
>>> [nodes]
>>>
>>> sv5305.selfdeploy.loc openshift_node_labels="{'region': 'infra', 'zone':
>>> 'default'}" openshift_schedulable=false
>>>
>>> sv5306.selfdeploy.loc openshift_node_labels="{'region': 'primary',
>>> 'zone': 'east'}"
>>>
>>> sv5307.selfdeploy.loc openshift_node_labels="{'region': 'primary',
>>> 'zone': 'west'}"
>>>
>>>
>>>
>>> Thank you in advance.
>>>
>>>
>>>
>>> Regards,
>>>
>>>
>>>
>>> Olga
>>>
>>>
>>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Three-tier application deployment on OpenShift origin

2016-05-04 Thread Luke Meyer
On Tue, May 3, 2016 at 10:57 AM, Erik Jacobs  wrote:

> Hi Olga,
>
> Some responses inline/
>
>
> Erik M Jacobs, RHCA
> Principal Technical Marketing Manager, OpenShift Enterprise
> Red Hat, Inc.
> Phone: 646.462.3745
> Email: ejac...@redhat.com
> AOL Instant Messenger: ejacobsatredhat
> Twitter: @ErikonOpen
> Freenode: thoraxe
>
> On Mon, Apr 25, 2016 at 9:34 AM, ABDALA Olga 
> wrote:
>
>> Hello all,
>>
>>
>>
>> I am done with my *origin advanced installation* (thanks to your helpful
>> input); its architecture is composed of *4 virtualized servers* (on the
>> same network):
>>
>> -   1 Master
>>
>> -   2 Nodes
>>
>> -   1 VM hosting Ansible
>>
>>
>>
>> My next steps are to implement/test some use cases with a *three-tier
>> App* (each App's tier being hosted on a different VM):
>>
>> -   *Horizontal scalability*;
>>
>> -   *Load-balancing* of the Nodes: keep the system running
>> even if one of the VMs goes down;
>>
>> -   App monitoring using the *Origin API*: allow the Origin API to
>> "tell" the App on which VM each tier is hosted. (I still don't know how to
>> test that though…)
>>
>>
>>
>> There are some *notions* that are still not clear to me:
>>
>> -   From my web console, how can I know *on which Node my App
>> has been deployed*?
>>
>
> If you look in the Browse -> Pods -> select a pod, you should see the node
> where the pod is running.
>
>
>> -   How can I put *each component of my App* on a *separate Node*?
>>
>> -   How does the "*zones*" concept in Origin work?
>>
>
> These two are closely related.
>
> 1) In your case it sounds like you would want a zone for each tier:
> appserver, web server, db
> 2) This would require a node with a label of, for example, zone=appserver
> 3) When you create your pod (or replication controller, or deployment
> config) you would want to specify, via a nodeselector, which zone you want
> the pod(s) to land in
>
>
This is not the concept of zones. The point of zones is to spread replicas
between different zones in order to improve HA (for instance, define a zone
per rack, thereby ensuring that taking down a rack doesn't take down your
app that's scaled across multiple zones).

This isn't what you want though. And you'd certainly never put a zone in a
nodeselector for an RC if you're trying to scale it to multiple zones.

For the purpose of separating the tiers of your app, you would still want
to use a nodeselector per DC or RC and corresponding node labels. There's
no other way to designate where you want the pods from different RCs to
land. You just don't want "zones".



> This stuff is scattered throughout the docs:
>
>
> https://docs.openshift.org/latest/admin_guide/manage_nodes.html#updating-labels-on-nodes
>
> https://docs.openshift.org/latest/dev_guide/deployments.html#assigning-pods-to-specific-nodes
>
> I hope this helps.
>
>
>>
>>
>> Content of /etc/ansible/hosts of my Ansible hosting VM:
>>
>> [masters]
>>
>> sv5305.selfdeploy.loc
>>
>> # host group for nodes, includes region info
>>
>> [nodes]
>>
>> sv5305.selfdeploy.loc openshift_node_labels="{'region': 'infra', 'zone':
>> 'default'}" openshift_schedulable=false
>>
>> sv5306.selfdeploy.loc openshift_node_labels="{'region': 'primary',
>> 'zone': 'east'}"
>>
>> sv5307.selfdeploy.loc openshift_node_labels="{'region': 'primary',
>> 'zone': 'west'}"
>>
>>
>>
>> Thank you in advance.
>>
>>
>>
>> Regards,
>>
>>
>>
>> Olga
>>
>>
>>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev