TR: [OpenShift Origin] CLI Installation within Jenkins Server

2016-07-13 Thread ABDALA Olga
Hello,

I am currently trying to deploy my application to OpenShift 3 using Jenkins.
While following the steps described in this link:
https://blog.openshift.com/deploy-applications-openshift-3-using-jenkins/,
I reached the step of installing the "oc" CLI client on the Jenkins server.
The tutorial sends me to the CLI reference guide, which requires a Red Hat
account for the CLI installation.

Is there an OpenShift CLI plugin that can be downloaded from Jenkins, or any 
other way of installing the CLI on the Jenkins server without necessarily 
creating an account (I am using a VM), please?
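
(For the archive: the Origin client tools can be downloaded directly from the
openshift/origin GitHub releases page, with no Red Hat account. A minimal
sketch for a Linux Jenkins host — the release version and tarball name below
are illustrative, so check https://github.com/openshift/origin/releases for
the current ones:)

  # fetch and unpack the Origin client tools (version/file name illustrative)
  wget https://github.com/openshift/origin/releases/download/v1.2.0/openshift-origin-client-tools-v1.2.0-2e62fab-linux-64bit.tar.gz
  tar -xzf openshift-origin-client-tools-v1.2.0-2e62fab-linux-64bit.tar.gz
  # copy oc somewhere on the Jenkins user's PATH
  cp openshift-origin-client-tools-*/oc /usr/local/bin/
  oc version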

Thank you in advance for your answers.

Regards,

Olga


RE: [Origin] Fail to build from remote Git repository

2016-05-19 Thread ABDALA Olga
Hi Ben,

So I deployed the Jenkins image as asked, but I got an error.
Here is the output of the CLI:

[inline screenshot: CLI output]


From: Ben Parees [mailto:bpar...@redhat.com]
Sent: Thursday, May 19, 2016 1:01 PM
To: ABDALA Olga
Cc: dev
Subject: Re: [Origin] Fail to build from remote Git repository


On May 19, 2016 5:26 AM, "ABDALA Olga" <olga.abd...@solucom.fr> wrote:
>
> Hello,
>
> I have been trying to deploy my application that already exists on GitHub, on
> openShift, using the “oc new-app” command, but I have been receiving a build
> error and I don’t know where it might be coming from.
>
> Here is what I get after running the command:
>
> [inline screenshot not preserved]
>
> But when I go through the logs, here is what I get:
>
> [inline screenshot not preserved]
>
> Does anybody know what might be the cause of the failure?

Basically the git clone is failing. Can you deploy our jenkins image to a pod, 
rsh into it, and attempt a git clone from there? That will hopefully give us 
more information about the nature of the failure.
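
(A sketch of what Ben suggests — the image name is the upstream Origin Jenkins
image of that era, and the repo URL is a placeholder:)

  oc new-app openshift/jenkins-1-centos7
  oc get pods                  # note the jenkins pod name once it is Running
  oc rsh <jenkins-pod-name>    # open a shell inside the pod
  # inside the pod: test whether the repo is reachable at all
  git clone https://github.com/<user>/<repo>.git /tmp/clone-test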

>
> Ps: The git repo is public…
>
> Thank you!
>
> Olga A.
>

Ben Parees | OpenShift


RE: Three-tier application deployment on OpenShift origin

2016-05-09 Thread ABDALA Olga


From: Erik Jacobs [mailto:ejac...@redhat.com]
Sent: Monday, May 9, 2016 2:31 PM
To: ABDALA Olga
Cc: dev@lists.openshift.redhat.com
Subject: Re: Three-tier application deployment on OpenShift origin

On Mon, May 9, 2016 at 4:56 AM, ABDALA Olga <olga.abd...@solucom.fr> wrote:
Hello Erik,

Please find my comments inline

From: Erik Jacobs [mailto:ejac...@redhat.com]
Sent: Wednesday, May 4, 2016 5:32 PM
To: ABDALA Olga
Cc: dev@lists.openshift.redhat.com
Subject: Re: Three-tier application deployment on OpenShift origin


On Wed, May 4, 2016 at 8:30 AM, ABDALA Olga <olga.abd...@solucom.fr> wrote:
Hello Erik,

Thank you for your inputs.
However, while trying to update the label for my Nodes, here is what I get:

[inline screenshot: oc label error]

labels are single key/value pairs. You are trying to add an additional zone 
label without specifying --overwrite. You cannot have multiple values for the 
same key.
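
(In practice, that looks something like this — node name taken from the
inventory later in this digest:)

  # replace the existing zone label instead of trying to add a second value
  oc label node sv5306.selfdeploy.loc zone=appserver --overwrite
  oc get nodes --show-labels    # verify the result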

Same thing if I try to update my pods’ labels.
[inline screenshot: oc label error]

Changing a pod label is not what you want to do. You want to change the pod 
nodeselector.

>  Yes I guess that is what I will have to change

Yes.

For the NodeSelector, where can I find the pod configuration file, for me to 
specify the Node,  please?
Is it in the master-config.yaml file?

master-config.yaml is the master configuration, not a "pod configuration". "pod 
configuration" is kind of a strange statement. You probably mean "pod 
definition".

>  By « pod definition », do you mean the pod yaml file?

That is one example, yes.


We'll ignore nodeselector and master-config because while it's a thing, it 
won't do what you want. If you're interested, docs here: 
https://docs.openshift.org/latest/admin_guide/managing_projects.html#setting-the-cluster-wide-default-node-selector.

>  After checking the docs, my question is: if the defaultNodeSelector in the 
> master config file is set for a specific region, does that mean that pods 
> will never be placed on the Nodes of that specific region?

If the defaultNodeSelector is set, and you didn't somehow change it in the 
project, then the default node selector will *always* be applied, in addition 
to any pod-specific node selector. Whether that default nodeSelector is for 
"region", "zone", or any other arbitrary key/value pair is not relevant. The 
default is the default.
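
(For reference, the cluster-wide default lives in master-config.yaml under
projectConfig; a quick way to check it — the path assumes a default advanced
install, and the value shown is illustrative:)

  grep defaultNodeSelector /etc/origin/master/master-config.yaml
  #   defaultNodeSelector: "region=primary"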

I think you meant to ask "if the default... is set for a region... does that 
mean the pods will always be placed". Not "never". Why would the selector mean 
never? That sounds more like an anti-selector...

>  Always… yes, sorry, my bad

What you want to change is the pod nodeselector. I linked to the docs:

https://docs.openshift.org/latest/dev_guide/deployments.html#assigning-pods-to-specific-nodes

>  Just to make sure: by setting a value for the « nodeSelector », will that 
> put my pod on the specified Node?

If you set a value for the nodeSelector your pod will attempt to be scheduled 
on nodes who have labels that match.

If you want to run a pod on a specific node I believe there is also a way to 
select a specific node by its hostname. It's in the docs somewhere.
Ok thanks

I don't know how you created your pods, so how you change/add nodeselector 
depends.

>  Actually, I did not really ‘create’ the pods. What I did is, after creating a 
> project and adding my application to the project, 1 pod was automatically 
> created. From there, I simply increased the number of pods (from the web 
> console) to as many as I wanted.

Yes, so you have a deployment config that causes a replication controller to be 
created that then causes a pod to be created. As per below, "new-app" / "add to 
project" are basically the same thing. One is the UI and one is the CLI.
Oh ok I see.

>  By the way, I wanted to set something clear in my head regarding the pods. 
> Does the number of pods mean the number of the application’s ‘versions’?
I don't understand your question. The number of pods is the number of pods. 
What do you mean by "the application's 'versions'"?
What I meant by application’s versions is a sort of ‘A/B testing’. That is 
because I was wondering how the HA works. As in, when a pod goes down, how 
is another pod regenerated by the replication controller to keep the App 
running?

Since you have builds, I am guessing that you used something like "new-app". 
new-app will have created a deploymentconfig. You would want to edit the 
deploymentconfig, find the pod template, and then add the nodeselector as shown 
in the docs above.
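
(A sketch of that edit, assuming the deployment config is named "myapp" —
adjust the name and label to your environment:)

  oc edit dc/myapp   # find spec.template.spec and add a nodeSelector there
  # or non-interactively:
  oc patch dc/myapp -p '{"spec":{"template":{"spec":{"nodeSelector":{"zone":"appserver"}}}}}'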


Thank you!

Olga

From: Erik Jacobs [mailto:ejac...@redhat.com]
Sent: Tuesday, May 3, 2016 4:57 PM
To: ABDALA Olga
Cc:

RE: Three-tier application deployment on OpenShift origin

2016-05-09 Thread ABDALA Olga
Hello Luke and Erik,

Please find my reactions inline



From: Erik Jacobs [mailto:ejac...@redhat.com]
Sent: Wednesday, May 4, 2016 5:41 PM
To: Luke Meyer
Cc: ABDALA Olga; dev@lists.openshift.redhat.com
Subject: Re: Three-tier application deployment on OpenShift origin

Hi Luke,

I'll have to disagree but only semantically.

For a small environment and without changing the scheduler config, the concept 
of "zone" can be used. Yes, I would agree with you that in a real production 
environment the Red Hat concept of a "zone" is as you described.

>  From what I understand, the Red Hat concept of a "zone" is meant to improve 
> HA? And what is the ‘other’ concept of “zone” that you are mentioning, Erik?

You could additionally label nodes with something like "env=appserver" and use 
nodeselectors on that. This is probably a more realistic production expectation.
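
(Along those lines, labeling one node per tier might look like this —
hostnames from the inventory in this thread, label values illustrative:)

  oc label node sv5306.selfdeploy.loc env=appserver
  oc label node sv5307.selfdeploy.loc env=db
  # then point each tier's deployment config at the matching label via its
  # pod template nodeSelector (env=appserver, env=db, ...)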

>  Thanks for this info, I guess I will be doing that.

For the purposes of getting Abdala's small environment going, I guess it 
doesn't much "matter"...


Erik M Jacobs, RHCA
Principal Technical Marketing Manager, OpenShift Enterprise
Red Hat, Inc.
Phone: 646.462.3745
Email: ejac...@redhat.com
AOL Instant Messenger: ejacobsatredhat
Twitter: @ErikonOpen
Freenode: thoraxe

On Wed, May 4, 2016 at 11:36 AM, Luke Meyer <lme...@redhat.com> wrote:


On Tue, May 3, 2016 at 10:57 AM, Erik Jacobs <ejac...@redhat.com> wrote:
Hi Olga,

Some responses inline.



On Mon, Apr 25, 2016 at 9:34 AM, ABDALA Olga <olga.abd...@solucom.fr> wrote:
Hello all,

I am done with my origin advanced installation (thanks to your useful help), 
whose architecture is composed of 4 virtualized servers (on the same network):

-   1 Master

-   2 Nodes

-   1 VM hosting Ansible

My next steps are to implement/test some use cases with a three-tier App (each 
App’s tier being hosted on a different VM):

-   The horizontal scalability;

-   The load-balancing of the Nodes: keep the system running even if one 
of the VMs goes down;

-   App’s monitoring using the Origin API: allow the Origin API to “tell” the 
App on which VM each tier is hosted. (I still don’t know how to test that 
though…)

There are some notions that are still not clear to me:

-   From my web console, how can I know on which Node my App has been 
deployed?

If you look in the Browse -> Pods -> select a pod, you should see the node 
where the pod is running.


-   How can I put each component of my App on a separate Node?

-   How does the “zones” concept in origin work?

These two are closely related.

1) In your case it sounds like you would want a zone for each tier: appserver, 
web server, db
2) This would require a node with a label of, for example, zone=appserver
3) When you create your pod (or replication controller, or deployment config) 
you would want to specify, via a nodeselector, which zone you want the pod(s) 
to land in


This is not the concept of zones. The point of zones is to spread replicas 
between different zones in order to improve HA (for instance, define a zone per 
rack, thereby ensuring that taking down a rack doesn't take down your app 
that's scaled across multiple zones).

This isn't what you want though. And you'd certainly never put a zone in a 
nodeselector for an RC if you're trying to scale it to multiple zones.
For the purpose of separating the tiers of your app, you would still want to 
use a nodeselector per DC or RC and corresponding node labels. There's no other 
way to designate where you want the pods from different RCs to land. You just 
don't want "zones".

>  That is exactly one of the things I would like to test. What happens if a 
> pod goes down? Because I want my App to run all the time.

>  I’ve read that the RC is the one that ensures that another pod gets 
> recreated after one has gone down. How is that done? Is there another version 
> of the App that is always ‘present’ to take over? (I am really new to OpenShift 
> and I am trying to understand all these concepts.)
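
(For the archive: there is no standby copy of the App waiting. The replication
controller simply notices that the observed pod count has dropped below
spec.replicas and starts a fresh pod from the same pod template. This is easy
to watch — the pod name below is a placeholder:)

  oc get pods              # note the name of a pod managed by the RC
  oc delete pod <pod-name>
  oc get pods -w           # watch a replacement pod appear almost immediately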

Thank you


This stuff is scattered throughout the docs:

https://docs.openshift.org/latest/admin_guide/manage_nodes.html#updating-labels-on-nodes
https://docs.openshift.org/latest/dev_guide/deployments.html#assigning-pods-to-specific-nodes

I hope this helps.


Content of /etc/ansible/hosts of my Ansible hosting VM:
[masters]
sv5305.selfdeploy.loc
# host group for nodes, includes region info
[nodes]
sv5305.selfdeploy.loc openshift_node_labels="{'region': 'infra', 'zone': 
'default'}" openshift_

Three-tier application deployment on OpenShift origin

2016-04-25 Thread ABDALA Olga
Hello all,

I am done with my origin advanced installation (thanks to your useful help), 
whose architecture is composed of 4 virtualized servers (on the same network):

-   1 Master

-   2 Nodes

-   1 VM hosting Ansible

My next steps are to implement/test some use cases with a three-tier App (each 
App's tier being hosted on a different VM):

-   The horizontal scalability;

-   The load-balancing of the Nodes: keep the system running even if one 
of the VMs goes down;

-   App's monitoring using the Origin API: allow the Origin API to "tell" the 
App on which VM each tier is hosted. (I still don't know how to test that 
though...)

There are some notions that are still not clear to me:

-   From my web console, how can I know on which Node my App has been 
deployed?

-   How can I put each component of my App on a separate Node?

-   How does the "zones" concept in origin work?

Content of /etc/ansible/hosts of my Ansible hosting VM:
[masters]
sv5305.selfdeploy.loc
# host group for nodes, includes region info
[nodes]
sv5305.selfdeploy.loc openshift_node_labels="{'region': 'infra', 'zone': 
'default'}" openshift_schedulable=false
sv5306.selfdeploy.loc openshift_node_labels="{'region': 'primary', 'zone': 
'east'}"
sv5307.selfdeploy.loc openshift_node_labels="{'region': 'primary', 'zone': 
'west'}"

Thank you in advance.

Regards,

Olga



Problem deploying the Registry

2016-03-30 Thread ABDALA Olga
Hello everyone,

I am currently encountering a problem while trying to deploy the Docker 
registry after the advanced installation of OpenShift.
Following the doc, I ran the following command and got an error:

# oadm registry --config=admin.kubeconfig
error: error getting client: Get https://10.110.1.95:8443/api: x509: 
certificate signed by unknown authority

Does anybody know what the error means and how I can fix it, please?
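
(For the archive: this x509 error typically means the kubeconfig being passed
does not carry the cluster CA. Pointing --config at the installer-generated
admin kubeconfig usually resolves it — the path assumes a default advanced
install:)

  # oadm registry --config=/etc/origin/master/admin.kubeconfig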

Thank you.

Regards,


Olga ABDALA



RE: OpenShift use cases

2016-03-29 Thread ABDALA Olga
Hi Akram,

Thanks for your inputs.
However, I am not sure I understand what you mean by the first use case: 
‘Continuous integration and continuous delivery pipeline’.

Regards,


Olga ABDALA



From: Akram Ben Aissi [mailto:akram.benai...@gmail.com]
Sent: Monday, March 28, 2016 3:00 PM
To: Jérôme Fenal
Cc: ABDALA Olga; dev@lists.openshift.redhat.com
Subject: Re: OpenShift use cases

One can also add:

- Continuous integration and continuous delivery pipeline
- Application version promotion (from dev to qa to prod with the same image)
- Micro-Service enablement
- Micro-service update and rollback scenario



On 28 March 2016 at 14:18, Jérôme Fenal <jfe...@gmail.com> wrote:


2016-03-21 11:40 GMT+01:00 ABDALA Olga <olga.abd...@solucom.fr>:
Hello,

I am currently looking for use cases to later implement in order to test 
different OpenShift functionalities after deploying an application.

So far, I only have scalability in mind. Do you have more suggestions?

Bonjour Olga,

I'd see a few others:
- application rolling update
- application decommissioning

What are you trying to achieve?

Regards,

J.
--
Jérôme Fenal



RE: [OpenShift 3 deployment] Error response from daemon: Error setting devmapper transaction ID: Error running SetTransactionID dm_task_run failed

2016-03-19 Thread ABDALA Olga
Hi Jason,

Thank you for your inputs. Please find below my comments.

From: Jason DeTiberus [mailto:jdeti...@redhat.com]
Sent: Wednesday, March 16, 2016 10:25 PM
To: ABDALA Olga
Cc: dev@lists.openshift.redhat.com
Subject: Re: [OpenShift 3 deployment] Error response from daemon: Error setting 
devmapper transaction ID: Error running SetTransactionID dm_task_run failed



On Tue, Mar 15, 2016 at 10:00 AM, ABDALA Olga <olga.abd...@solucom.fr> wrote:
Hello,

It’s me again with my OpenShift deployment issues. Last time, I mentioned that 
I had managed to deploy a non-distributed OpenShift with just 1 Master and 1 
Node.
From that basic configuration, I wanted to increase the number of Nodes in 
order to have 1 Master and 2 Nodes.

So what I did is, I followed the steps as indicated 
here <https://docs.openshift.org/latest/install_config/install/advanced_install.html>. 
What changed from my previous configuration is:

-   I enabled SELinux

-   I installed ansible

-   I installed the following packages

 yum install wget git net-tools bind-utils iptables-services bridge-utils 
bash-completion

-   I did what is asked for the EPEL repository

-   I cloned the openshift-ansible repository

-   I generated an SSH key on the master, then copied the key to the other 
Node hosts

-   I then configured my /etc/ansible/hosts file as follows:

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
#etcd
#lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]

#SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

# If ansible_ssh_user is not root, ansible_sudo must be set to true
#ansible_sudo=false

deployment_type=origin

# uncomment the following to enable htpasswd authentication; defaults to 
DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 
'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': 
'/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
sv5256.selfdeploy.loc openshift_hostname=sv5256.selfdeploy.loc 
containerized=true

Any reason you are choosing a containerized install for the master and not the nodes?

>  I thought that by setting “containerized” to true, the containerized 
> services were run on both the Master and the Nodes (please correct me if I’m 
> wrong, since I am really new to OpenShift).


# host group for nodes, includes region info
[nodes]
sv5256.selfdeploy.loc openshift_node_labels="{'region': 'infra', 'zone': 
'default'}"
SV5257.selfdeploy.loc openshift_node_labels="{'region': 'primary', 'zone': 
'east'}"
SV5258.selfdeploy.loc openshift_node_labels="{'region': 'primary', 'zone': 
'west'}"

Your master should be listed as a node as well; there are some features that 
require the master to be part of the SDN, and currently that is only possible 
with the master also being a node.

>  The master is actually listed as a node. The Master hostname is 
> sv5256.selfdeploy.loc. Or is there another place it should be mentioned as a 
> node?



-   I then ran the advanced installation as follows:

# ansible-playbook ~/openshift-ansible/playbooks/byo/config.yml
`oc get nodes` at first showed me that all the hosts were ready (cf. the 
attached screenshot).


-   I also configured a master file

-   I could not set up the Node configuration file because I was getting a 
message saying that the ‘ca.crt’ file in 
`/var/lib/origin/openshift.local.config/master/` needed to be correct. The 
thing is, I cannot find `/openshift.local.config/master` under /var/lib; it 
is located somewhere else.
This is because your master is not currently a node; once it is added to the 
node inventory, you should see a node directory containing configuration under 
/etc/origin/.

>  So is that still the cause of the error message?


I tried to re-run the `oc get nodes` command (and any others), but I am shown 
this error:


Error response from daemon: Error setting devmapper transaction ID: Error 
running SetTransactionID dm_task_run failed.

Sounds like docker is hung on the host. You can try to restart the docker 
daemon, and then attempt to run oc again (On a containerized installation oc is 
actually a shell script that runs the oc client within a container rather than 
natively).
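
(A minimal version of what Jason suggests:)

  systemctl restart docker
  systemctl status docker    # confirm the daemon came back cleanly
  oc get nodes               # retry the client once docker is healthy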


Can anybody please help me with this matter?


Thanks



Olga



[OpenShift 3 deployment] Error response from daemon: Error setting devmapper transaction ID: Error running SetTransactionID dm_task_run failed

2016-03-15 Thread ABDALA Olga
Hello,

It's me again with my OpenShift deployment issues. Last time, I mentioned that 
I had managed to deploy a non-distributed OpenShift with just 1 Master and 1 
Node.
From that basic configuration, I wanted to increase the number of Nodes in 
order to have 1 Master and 2 Nodes.

So what I did is, I followed the steps as indicated here. What changed from my 
previous configuration is:

-   I enabled SELinux

-   I installed ansible

-   I installed the following packages

 yum install wget git net-tools bind-utils iptables-services bridge-utils 
bash-completion

-   I did what is asked for the EPEL repository

-   I cloned the openshift-ansible repository

-   I generated an SSH key on the master, then copied the key to the other 
Node hosts

-   I then configured my /etc/ansible/hosts file as follows:

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
#etcd
#lb

# Set variables common for all OSEv3 hosts
[OSEv3:vars]

#SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

# If ansible_ssh_user is not root, ansible_sudo must be set to true
#ansible_sudo=false

deployment_type=origin

# uncomment the following to enable htpasswd authentication; defaults to 
DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 
'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': 
'/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
sv5256.selfdeploy.loc openshift_hostname=sv5256.selfdeploy.loc 
containerized=true

# host group for nodes, includes region info
[nodes]
sv5256.selfdeploy.loc openshift_node_labels="{'region': 'infra', 'zone': 
'default'}"
SV5257.selfdeploy.loc openshift_node_labels="{'region': 'primary', 'zone': 
'east'}"
SV5258.selfdeploy.loc openshift_node_labels="{'region': 'primary', 'zone': 
'west'}"


-   I then ran the advanced installation as follows:

# ansible-playbook ~/openshift-ansible/playbooks/byo/config.yml
`oc get nodes` at first showed me that all the hosts were ready (cf. the 
attached screenshot).


-   I also configured a master file

-   I could not set up the Node configuration file because I was getting a 
message saying that the 'ca.crt' file in 
`/var/lib/origin/openshift.local.config/master/` needed to be correct. The 
thing is, I cannot find `/openshift.local.config/master` under /var/lib; it 
is located somewhere else.

I tried to re-run the `oc get nodes` command (and any others), but I am shown 
this error:


Error response from daemon: Error setting devmapper transaction ID: Error 
running SetTransactionID dm_task_run failed.

Can anybody please help me with this matter?


Thanks



Olga

[inline screenshot: oc get nodes output]



RE: [Openshift deployment] Problem with my Openshift web console

2016-03-14 Thread ABDALA Olga
Thank you very much Scott.
I am going to try this and hope that it works this time :)


Olga ABDALA

From: Scott Dodson [mailto:sdod...@redhat.com]
Sent: Monday, March 14, 2016 2:09 PM
To: ABDALA Olga
Cc: Georg Franke | stephanbartl GmbH; dev@lists.openshift.redhat.com
Subject: Re: [Openshift deployment] Problem with my Openshift web console

And yes, one master with two nodes should work just fine.

On Mon, Mar 14, 2016 at 9:08 AM, Scott Dodson <sdod...@redhat.com> wrote:
Yes, you can add nodes by adding a group named [new_nodes] to your inventory 
and running playbooks/byo/openshift-node/scaleup.yml
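
(A sketch of that scale-up, reusing the inventory format from earlier in this
digest — the new hostname is illustrative:)

  # additions to /etc/ansible/hosts
  [OSEv3:children]
  masters
  nodes
  new_nodes

  [new_nodes]
  sv5307.selfdeploy.loc openshift_node_labels="{'region': 'primary', 'zone': 'west'}"

  # then run the scale-up playbook
  ansible-playbook ~/openshift-ansible/playbooks/byo/openshift-node/scaleup.yml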

On Mon, Mar 14, 2016 at 4:54 AM, ABDALA Olga <olga.abd...@solucom.fr> wrote:
Hello,

I’ve run the `lvs -a` command and the result shows that I still have space on 
my thinpool.

What I did this weekend is that I uninstalled OpenShift and Docker and tried 
to do something different.
So what I did is, I re-installed OpenShift and Docker, but this time with only 1 
Master and 1 Node. After I did that, I have an OpenShift web console (cf. the 
attached screenshot) when logging in with a user that I created using `oc login`.
However, I would like to know if, from that configuration, I can move to the 
configuration I wanted in the first place (1 Master and 2 Nodes).

Thanks,


Olga ABDALA

From: Scott Dodson [mailto:sdod...@redhat.com]
Sent: Friday, March 11, 2016 6:04 PM
To: ABDALA Olga
Cc: Georg Franke | stephanbartl GmbH; dev@lists.openshift.redhat.com
Subject: Re: [Openshift deployment] Problem with my Openshift web console

The ATTENTION message is because you've performed a containerized install, 
meaning that the normal /usr/bin/oc and /usr/bin/openshift binaries aren't 
installed into the local filesystem; they are instead executed inside a 
container using the wrapper script in /usr/local/bin/openshift.
If you've manually installed the oc binary somewhere, you can symlink oadm to 
that path and use that instead.
The device mapper error is pretty concerning, though I can't understand how 
it'd be related to the console problems. Have you run out of space on the 
filesystem that provides /var/lib/docker, or, if you've configured 
docker-storage-setup, are you out of space on your thinpool? `lvs -a` should 
show you that.
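
(For the symlink workaround Scott mentions — assuming the client binary was
dropped in /usr/local/bin, paths illustrative:)

  # let oadm resolve to the manually-installed client binary
  ln -s /usr/local/bin/oc /usr/local/bin/oadm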

On Fri, Mar 11, 2016 at 11:40 AM, ABDALA Olga <olga.abd...@solucom.fr> wrote:
Hi,

Please find attached a screenshot of the command’s result.

Ps: I have been having that “ATTENTION” message for the past 2 days and I don’t 
understand what it means, although I have the client tools already installed.
If you have any idea of what that can be, I would love to hear it.

Thanks,


Olga ABDALA

From: Georg Franke | stephanbartl GmbH [mailto:g...@stephanbartl.at]
Sent: Friday, March 11, 2016 5:13 PM
To: ABDALA Olga
Cc: Akram Ben Aissi; dev@lists.openshift.redhat.com

Objet : Re: [Openshift deployment] Problem with my Openshift web console

Hi,

Did you assign the login user enough privileges?
On the command line, e.g.:
oadm policy add-cluster-role-to-user cluster-admin user

Georg


On 11 Mar 2016, at 16:50, ABDALA Olga <olga.abd...@solucom.fr> wrote:
Hi Akram,

Yes I am trying with https.


Olga ABDALA


RE: [Openshift deployment] Problem with my Openshift web console

2016-03-14 Thread ABDALA Olga
Hello,

I’ve run the `lvs -a` command and the result shows that I still have space on 
my thinpool.

What I did this weekend is that I uninstalled OpenShift and Docker and tried 
to do something different.
So what I did is, I re-installed OpenShift and Docker, but this time with only 1 
Master and 1 Node. After I did that, I have an OpenShift web console (cf. the 
attached screenshot) when logging in with a user that I created using `oc login`.
However, I would like to know if, from that configuration, I can move to the 
configuration I wanted in the first place (1 Master and 2 Nodes).

Thanks,


Olga ABDALA

From: Scott Dodson [mailto:sdod...@redhat.com]
Sent: Friday, March 11, 2016 6:04 PM
To: ABDALA Olga
Cc: Georg Franke | stephanbartl GmbH; dev@lists.openshift.redhat.com
Subject: Re: [Openshift deployment] Problem with my Openshift web console

The ATTENTION message is because you've performed a containerized install, 
meaning that the normal /usr/bin/oc and /usr/bin/openshift binaries aren't 
installed into the local filesystem; they are instead executed inside a 
container using the wrapper script in /usr/local/bin/openshift.
If you've manually installed the oc binary somewhere, you can symlink oadm to 
that path and use that instead.
The device mapper error is pretty concerning, though I can't understand how 
it'd be related to the console problems. Have you run out of space on the 
filesystem that provides /var/lib/docker, or, if you've configured 
docker-storage-setup, are you out of space on your thinpool? `lvs -a` should 
show you that.

On Fri, Mar 11, 2016 at 11:40 AM, ABDALA Olga <olga.abd...@solucom.fr> wrote:
Hi,

Please find attached a screenshot of the command’s result.

Ps: I have been having that “ATTENTION” message for the past 2 days and I don’t 
understand what it means, although I have the client tools already installed.
If you have any idea of what that can be, I would love to hear it.

Thanks,


Olga ABDALA

From: Georg Franke | stephanbartl GmbH [mailto:g...@stephanbartl.at]
Sent: Friday, March 11, 2016 5:13 PM
To: ABDALA Olga
Cc: Akram Ben Aissi; dev@lists.openshift.redhat.com

Objet : Re: [Openshift deployment] Problem with my Openshift web console

Hi,

Did you assign the login user enough privileges?
On the command line, e.g.:
oadm policy add-cluster-role-to-user cluster-admin user

Georg


On 11 Mar 2016, at 16:50, ABDALA Olga <olga.abd...@solucom.fr> wrote:
Hi Akram,

Yes I am trying with https.


Olga ABDALA


From: Akram Ben Aissi [mailto:akram.benai...@gmail.com]
Sent: Friday, March 11, 2016 4:47 PM
To: ABDALA Olga
Cc: dev@lists.openshift.redhat.com
Subject: Re: [Openshift deployment] Problem with my Openshift web console

Hi Olga
Are you trying with https?

On Friday, 11 March 2016, ABDALA Olga <olga.abd...@solucom.fr> wrote:
Hi,

I am currently trying to deploy OpenShift Origin 3 on my VMs in order to later 
deploy an application. The distribution release is CentOS 7.2.1511.
The architecture I am willing to deploy is composed of one Master host and two 
Node hosts. This architecture will later be used for an HA deployment.

However, after following the 
documentation <https://docs.openshift.org/latest/install_config/install/advanced_install.html>, 
I am having a problem trying to display my OpenShift web console using the 
master IP address in my browser.
All I can see after logging in with the credentials of a user I previously 
created (using HTPasswd) is a blank page, which is not normal.

I am coming to you to see if you can help me sort out that issue. Any advice 
or recommendation?

Thank you in advance.

Best regards,



Olga ABDALA