Re: How to refer to the external resource

2018-05-25 Thread Sergey Beryozkin
Hi Dan, All

Combining the 'minishift hostfolder' (to link the minishift folder with the
demo folder) and following the advice from
https://developers.redhat.com/blog/2017/04/05/adding-persistent-storage-to-minishift-cdk-3-in-minutes/

Let me remove the inlined resource from the yaml config...
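For reference, roughly the steps I'm combining (the PV name and size are just
placeholders, and the PV part follows the hostPath approach from the blog post
rather than anything specific to this demo):

# make sure the share is mounted inside the minishift VM
minishift hostfolder mount SSO_DEMO
minishift ssh "ls -l /opt/sso-demo"

# expose that VM directory to pods as a hostPath PV
# (needs cluster-admin, e.g. oc login -u system:admin on minishift)
oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sso-demo-pv
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /opt/sso-demo
EOF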

Thanks, Sergey


On Thu, May 24, 2018 at 10:59 PM, Sergey Beryozkin 
wrote:

> Hi Dan
>
> Thanks for trying to help, I will check tomorrow; specifically:
> oc rsh pod_name_from_above ls -l /opt/sso-demo
>
> I've already had some initial experience with deploying into Minishift.
> It just happens that Keycloak is involved in this case, but I asked on
> this list because I thought the issue was generic.
>
> Yes, my basic understanding is that the 'minishift hostfolder' is for
> syncing with the actual host. I'm not even sure a similar option exists for
> the multi-cluster case. But I'd just like to start with something simple
> enough :-).
>
> I'll update when I get the check done.
>
> Cheers, Sergey
>
>
>
>
>
> On Thu, May 24, 2018 at 5:57 PM, Dan Pungă  wrote:
>
>> Hi!
>>
>> Does the new deployment run successfully? In the running pod, can you
>> check whether the json file is actually there (mounted)? I haven't used
>> the minishift hostfolder option before, but I thought it has to do with the
>> docker-iso VM <-> host interaction and not with the actual pods/containers
>> inside the VM.
>> oc get pods
>> will give you the running pods inside the project
>> oc rsh pod_name_from_above ls -l /opt/sso-demo
>> to have a look inside the pod's /opt/sso-demo dir and see whether the
>> sso-demo.json file is there
>>
>> In order for oc to work you have to have the binary exported on your
>> path, or alternatively run it from the minishift folder where it is
>> located; it should be in the cache/oc directory.
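>>
>> If it helps, minishift can also set this up for the current shell; assuming
>> a reasonably recent minishift, 'minishift oc-env' prints the export needed
>> for the cached oc binary:
>>
>> # put the cached oc binary on the PATH for this shell session
>> eval $(minishift oc-env)
>> oc version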
>>
>> I'm not sure what you're trying to do. When you say "refer to the
>> non-encoded Keycloak realm on the disk instead", do you mean you'd like to
>> edit/update that json so that Keycloak uses your version? If this is the
>> case, with the current configuration, this could be done by editing the
>> secret (which should be base64 encoded) and rerolling the deployment to
>> restart the pod:
>> https://docs.openshift.org/latest/dev_guide/secrets.html#secrets-and-the-pod-lifecycle
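>>
>> A minimal sketch of that flow; the secret and dc names below are guesses
>> based on the demo, so check them first with 'oc get secret,dc':
>>
>> # rebuild the secret from the edited realm file and replace it in place
>> oc create secret generic sso-demo-secret \
>>     --from-file=sso-demo.json --dry-run -o yaml | oc replace -f -
>> # reroll the deployment so the pod picks up the new content
>> oc rollout latest dc/sso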
>>
>> If the Keycloak server that is running inside the pod (of which I have 0
>> knowledge..:) ) is able to reread this file without the need to restart and
>> you want to modify this file on the fly, then I guess you can't do that if
>> it's mounted as a secret volume, and you'd need to add some configuration to
>> that list of resources from GitHub (add a build configuration that customizes
>> the image used so that the json config is placed and looked for in "pod
>> space", add an imagestream for it and reference this custom imagestream in
>> the deployment configuration).
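>>
>> A rough sketch of that approach (the base image and paths are guesses, so
>> treat this as an outline rather than the exact steps for this booster):
>>
>> # Dockerfile layering the realm on top of whatever image the sso dc uses
>> cat > Dockerfile <<'EOF'
>> FROM jboss/keycloak-openshift:latest
>> COPY sso-demo.json /opt/sso-demo/sso-demo.json
>> EOF
>>
>> oc new-build --name=sso-custom --binary --strategy=docker
>> oc start-build sso-custom --from-dir=. --follow
>> # then point the sso deployment configuration at the sso-custom imagestream,
>> # e.g. via an image change trigger or by editing its image field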
>>
>> Hope you'll also get a response from someone that is more familiar with
>> the environment.
>>
>> Best of luck,
>> Dan
>>
>> On 24.05.2018 13:31, Sergey Beryozkin wrote:
>>
>> Hi,
>>
>> I'm new to OpenShift, so apologies for what looks like a fairly basic
>> query; I did do some archive checks but could not find a simple answer.
>>
>> I'm experimenting with this configuration:
>> https://github.com/wildfly-swarm-openshiftio-boosters/wfswarm-rest-http-secured/blob/master/service.sso.yaml
>>
>> It is part of the demo which shows how a Keycloak server can be easily
>> deployed and it has been optimized to make the deployment very easy to do.
>> I'm currently trying it with Minishift 1.17.0.
>>
>> This configuration inlines several resources. For example, [1], which is
>> a Base64 encoded Keycloak realm
>>
>> which is then copied to the volume as a secret [2] and is made visible to
>> Keycloak [3].
>>
>> I'd like to try to refer to the non-encoded Keycloak realm on the disk
>> instead.
>>
>> I've tried a Minishift hostfolder command to mount a demo folder where
>> the non-encoded realm exists:
>>
>> SSO_DEMO   sshfs   .../boosters/wfswarm-rest-http-secured/minishift
>> /opt/sso-demo
>>
>> where in the local wfswarm-rest-http-secured/minishift folder I have an
>> unencoded sso-demo.json file.
>>
>> Next I removed the [1] block and [2] as well. I managed to import the
>> updated config, but the realm file is not visible to Keycloak.
>>
>> I'd appreciate any advice/guidance. I've seen the docs about persistent
>> volumes, but I'm not sure it is the right way to go.
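>>
>> In case it helps anyone reading later, what I was hoping to end up with is
>> roughly this (the dc name, volume name and mount path are guesses from the
>> template, and hostPath volumes may be rejected by the restricted SCC, so
>> this is only a sketch):
>>
>> # swap the secret volume for a hostPath volume pointing at the hostfolder
>> # target inside the minishift VM
>> oc set volume dc/sso --remove --name=sso-demo-secret-volume
>> oc set volume dc/sso --add --type=hostPath --name=sso-demo \
>>     --path=/opt/sso-demo --mount-path=/etc/secrets/sso-demo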
>>
>> Thanks, Sergey
>>
>>
>> [1] https://github.com/wildfly-swarm-openshiftio-boosters/wfswarm-rest-http-secured/blob/master/service.sso.yaml#L11
>> [2] https://github.com/wildfly-swarm-openshiftio-boosters/wfswarm-rest-http-secured/blob/master/service.sso.yaml#L147
>> [3] https://github.com/wildfly-swarm-openshiftio-boosters/wfswarm-rest-http-secured/blob/master/service.sso.yaml#L120
>>
>>
>>
>>
>>

Re: Origin 3.9 Installation Issue

2018-05-25 Thread Scott Dodson
Yeah, the master branch won't work with 3.9. Please stick to the release-3.9
branch for managing 3.9 environments.
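
For example, assuming a fresh checkout (an existing clone can simply switch
branches):

git clone -b release-3.9 https://github.com/openshift/openshift-ansible.git
cd openshift-ansible
git rev-parse --abbrev-ref HEAD   # should print release-3.9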

On Fri, May 25, 2018 at 11:02 AM, Jason Marshall 
wrote:

> Thank you, Dan! That got me a LOT further.
>
> Now off to the next hurdle. ;)
>
> Jason
>
> On Fri, May 25, 2018 at 5:06 AM, Dan Pungă  wrote:
>
>> Hi,
>>
>> Not sure about the error, but I've noticed a task that I haven't seen
>> during my installation attempts (also Origin 3.9 on a cluster).
>> From what I see, "origin_control_plane" is an ansible role that's present
>> on the master branch of the openshift-ansible repo, but not on the
>> release-3.9 branch.
>> https://docs.openshift.org/latest/install_config/install/host_preparation.html#preparing-for-advanced-installations-origin states
>> that we should use the release-3.9 branch and that the master is intended
>> for the currently developed version of OShift Origin.
>>
>>  Hope it helps!
>>
>> On 24.05.2018 21:52, Jason Marshall wrote:
>>
>> Good afternoon,
>>
>> I am attempting to do an advanced installation of Origin 3.9 in a cluster
>> with a single master and 2 nodes, with a node role on the master server.
>>
>> I am able to run the prerequisites.yml playbook with no issue. The
>> deploy_cluster.yml  fails at the point where the origin.node service
>> attempts to start on the master server. The error that comes up is:
>>
>> TASK [openshift_control_plane : Start and enable self-hosting node]
>> 
>> 
>> fatal: [openshift-master.expdev.local]: FAILED! => {"changed": false,
>> "msg": "Unable to restart service origin-node: Job for origin-node.service
>> failed because the control process exited with error code. See \"systemctl
>> status origin-node.service\" and \"journalctl -xe\" for details.\n"}
>> ...ignoring
>> 
>> "May 24 14:40:53 cmhldshftlab01.expdev.local origin-node[2657]:
>> /usr/local/bin/openshift-node: line 17: /usr/bin/openshift-node-config:
>> No such file or directory",
>> "May 24 14:40:53 cmhldshftlab01.expdev.local systemd[1]:
>> origin-node.service: main process exited, code=exited, status=1/FAILURE",
>> "May 24 14:40:53 cmhldshftlab01.expdev.local systemd[1]: Failed
>> to start OpenShift Node.",
>> "May 24 14:40:53 cmhldshftlab01.expdev.local systemd[1]: Unit
>> origin-node.service entered failed state.",
>> "May 24 14:40:53 cmhldshftlab01.expdev.local systemd[1]:
>> origin-node.service failed."
>> 
>> INSTALLER STATUS **
>> 
>> 
>> *
>> Initialization : Complete (0:00:33)
>> Health Check   : Complete (0:00:24)
>> Node Preparation   : Complete (0:00:01)
>> etcd Install   : Complete (0:00:41)
>> Load Balancer Install  : Complete (0:00:18)
>> Master Install : In Progress (0:01:47)
>> This phase can be restarted by running:
>> playbooks/openshift-master/config.yml
>>
>>
>> Failure summary:
>>
>>
>>   1. Hosts:openshift-master.expdev.local
>>  Play: Configure masters
>>  Task: openshift_control_plane : fail
>>  Message:  Node start failed.
>>
>>
>>
>>
>> I go looking for openshift-node-config, and can't find it anywhere. Nor
>> can I find where this file comes from, even when using "yum whatprovides"
>> or a find command in the openshift-ansible directory I am installing from.
>>
>> Am I running into a potential configuration issue, or a bug with the
>> version of origin I am running? My openshift-ansible folder was pulled down
>> at around 2PM Eastern today, as I refreshed it to see if there was any
>> difference in behavior.
>>
>> Any suggestions or troubleshooting tips would be most appreciated.
>>
>> Thank you,
>>
>> Jason
>>
>>
>>
>>
>>
>>
>>
>
>
>


RE: Posting to REST on same cluster?

2018-05-25 Thread Karl Nicholas
Hi Frédéric,

Thank you. I had a stupid bug which I found and fixed. I just couldn’t believe 
I had written code that didn’t work the first time! You’re correct, I proved 
there was no problem with permissions by using curl.

Thanks again.

Karl.

From: Frederic Giloux 
Sent: Thursday, May 24, 2018 10:36 PM
To: Karl Nicholas 
Cc: users@lists.openshift.redhat.com
Subject: Re: Posting to REST on same cluster?

Hi Karl,
OpenShift does not differentiate between POST and GET. Also, 405 means "the 
request method is known by the server but has been disabled and cannot be 
used", so the issue is likely to be at the level of your application 
providing the REST API. To validate it you could log into a container serving 
the REST API (oc rsh <pod_name>) and use curl locally.
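For instance, something along these lines (the pod name and payload are just
placeholders):

oc get pods
oc rsh <rest_api_pod>
# inside the container, call the API directly to rule out routing/permissions
curl -i -X POST -H "Content-Type: application/json" -d '{}' \
    http://localhost:8080/statutesrs/rs/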
Regards,
Frédéric

On Thu, May 24, 2018 at 4:35 PM, Karl Nicholas wrote:

In OpenShift I have one application in a cluster attempting to access another 
REST application, but I'm getting an HTTP 405 Method Not Allowed for a POST 
request. The GET requests seem to be okay, so I'm thinking it's a permission 
problem. This should be pretty typical micro-services architecture on 
OpenShift. So far I cannot figure out where to look for permissions or how to 
fix this error. The URL for the service is set to 
http://rs.opca.svc.cluster.local:8080/statutesrs/rs/.
The application attempting to do the POST is named op. Is this a permissions 
problem or do I need a different URL? If permissions, how do I fix it?




--
Frédéric Giloux
Principal App Dev Consultant
Red Hat Germany

fgil...@redhat.com M: 
+49-174-172-4661



Re: Origin 3.9 Installation Issue

2018-05-25 Thread Jason Marshall
Thank you, Dan! That got me a LOT further.

Now off to the next hurdle. ;)

Jason

On Fri, May 25, 2018 at 5:06 AM, Dan Pungă  wrote:

> Hi,
>
> Not sure about the error, but I've noticed a task that I haven't seen
> during my installation attempts (also Origin 3.9 on a cluster).
> From what I see, "origin_control_plane" is an ansible role that's present
> on the master branch of the openshift-ansible repo, but not on the
> release-3.9 branch.
> https://docs.openshift.org/latest/install_config/install/host_preparation.html#preparing-for-advanced-installations-origin states
> that we should use the release-3.9 branch and that the master is intended
> for the currently developed version of OShift Origin.
>
>  Hope it helps!
>
> On 24.05.2018 21:52, Jason Marshall wrote:
>
> Good afternoon,
>
> I am attempting to do an advanced installation of Origin 3.9 in a cluster
> with a single master and 2 nodes, with a node role on the master server.
>
> I am able to run the prerequisites.yml playbook with no issue. The
> deploy_cluster.yml  fails at the point where the origin.node service
> attempts to start on the master server. The error that comes up is:
>
> TASK [openshift_control_plane : Start and enable self-hosting node]
> 
> 
> fatal: [openshift-master.expdev.local]: FAILED! => {"changed": false,
> "msg": "Unable to restart service origin-node: Job for origin-node.service
> failed because the control process exited with error code. See \"systemctl
> status origin-node.service\" and \"journalctl -xe\" for details.\n"}
> ...ignoring
> 
> "May 24 14:40:53 cmhldshftlab01.expdev.local origin-node[2657]:
> /usr/local/bin/openshift-node: line 17: /usr/bin/openshift-node-config:
> No such file or directory",
> "May 24 14:40:53 cmhldshftlab01.expdev.local systemd[1]:
> origin-node.service: main process exited, code=exited, status=1/FAILURE",
> "May 24 14:40:53 cmhldshftlab01.expdev.local systemd[1]: Failed to
> start OpenShift Node.",
> "May 24 14:40:53 cmhldshftlab01.expdev.local systemd[1]: Unit
> origin-node.service entered failed state.",
> "May 24 14:40:53 cmhldshftlab01.expdev.local systemd[1]:
> origin-node.service failed."
> 
> INSTALLER STATUS **
> 
> 
> *
> Initialization : Complete (0:00:33)
> Health Check   : Complete (0:00:24)
> Node Preparation   : Complete (0:00:01)
> etcd Install   : Complete (0:00:41)
> Load Balancer Install  : Complete (0:00:18)
> Master Install : In Progress (0:01:47)
> This phase can be restarted by running: playbooks/openshift-master/
> config.yml
>
>
> Failure summary:
>
>
>   1. Hosts:openshift-master.expdev.local
>  Play: Configure masters
>  Task: openshift_control_plane : fail
>  Message:  Node start failed.
>
>
>
>
> I go looking for openshift-node-config, and can't find it anywhere. Nor
> can I find where this file comes from, even when using "yum whatprovides"
> or a find command in the openshift-ansible directory I am installing from.
>
> Am I running into a potential configuration issue, or a bug with the
> version of origin I am running? My openshift-ansible folder was pulled down
> at around 2PM Eastern today, as I refreshed it to see if there was any
> difference in behavior.
>
> Any suggestions or troubleshooting tips would be most appreciated.
>
> Thank you,
>
> Jason
>
>
>
>
>
>
>


Re: hawkular-cassandra failed to startup on openshift origin 3.9

2018-05-25 Thread Tim Dudgeon

I don't see why that shouldn't work as it's using an ephemeral volume.
When using NFS I did find that if I tried to redeploy metrics using a 
volume that had already been deployed to, I hit permission problems 
that were solved by wiping the data from the NFS mount.
But I can't see how that could apply to an ephemeral volume. That's 
always worked fine for me.
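
If it were me I'd start by checking what the failing container actually sees;
something along these lines (the pod name is whatever 'oc get pods -n
openshift-infra' reports for cassandra):

oc get pods -n openshift-infra | grep cassandra
oc describe pod <hawkular-cassandra-pod> -n openshift-infra
# the pod crash-loops, so use a debug copy to inspect the mounted volume
oc debug <hawkular-cassandra-pod> -n openshift-infra -- ls -ld /cassandra_data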



On 25/05/18 11:29, Yu Wei wrote:

configuration as below,

openshift_metrics_install_metrics=true
openshift_metrics_image_version=v3.9
openshift_master_default_subdomain=paas-dev.dataos.io
#openshift_hosted_logging_deploy=true
openshift_logging_install_logging=true
openshift_logging_image_version=v3.9
openshift_disable_check=disk_availability,docker_image_availability,docker_storage
osm_etcd_image=registry.access.redhat.com/rhel7/etcd

openshift_enable_service_catalog=true
openshift_service_catalog_image_prefix=openshift/origin-
openshift_service_catalog_image_version=v3.9.0

From: users-boun...@lists.openshift.redhat.com on behalf of Tim Dudgeon
Sent: Friday, May 25, 2018 6:21 PM
To: users@lists.openshift.redhat.com
Subject: Re: hawkular-cassandra failed to startup on openshift origin 3.9


So what was the configuration for metrics in the inventory file.


On 25/05/18 11:04, Yu Wei wrote:

Yes, I deployed that via ansible-playbooks.

From: users-boun...@lists.openshift.redhat.com on behalf of Tim Dudgeon
Sent: Friday, May 25, 2018 5:51 PM
To: users@lists.openshift.redhat.com
Subject: Re: hawkular-cassandra failed to startup on openshift origin 3.9


How are you deploying this? Using the ansible playbooks?


On 25/05/18 10:25, Yu Wei wrote:

Hi,
I tried to deploy hawkular-cassandra on openshift origin 3.9 cluster.
However, pod failed to start up with error as below,
WARN [main] 2018-05-25 09:17:43,277 StartupChecks.java:267 - 
Directory /cassandra_data/data doesn't exist

ERROR [main] 2018-05-25 09:17:43,279 CassandraDaemon.java:710 - Has 
no permission to create directory /cassandra_data/data


I tried emptyDir and persistent volume as cassandra-data, both failed.

Any advice for this issue?

Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux











Re: hawkular-cassandra failed to startup on openshift origin 3.9

2018-05-25 Thread Dan Pungă

Hi,

I've installed a similar configuration and it works: Origin 3.9 with 
metrics installed and ephemeral storage (emptyDir/default).

What I have specified in my inventory file is

openshift_metrics_image_prefix=docker.io/openshift/origin-
openshift_metrics_image_version=v3.9

so I also have the var for openshift_metrics_image_prefix, but I think 
the value there is actually the default one, so the config should be 
identical.


I've attached the replication controller for the hawkular-cassandra pod 
on my cluster (I've removed some annotations and state info). You could 
compare it to yours and see if there are differences; to see yours, run:

oc get rc/hawkular-cassandra-1 -n openshift-infra -o yaml
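
The parts most likely to matter for the permission error are the volumes and
the securityContext, so a quick way to compare just those bits (adjust the rc
name if yours differs):

oc get rc/hawkular-cassandra-1 -n openshift-infra -o yaml \
    | grep -E -A6 'securityContext|volumes:|volumeMounts:'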

Hope it helps!

On 25.05.2018 13:29, Yu Wei wrote:

configuration as below,

openshift_metrics_install_metrics=true
openshift_metrics_image_version=v3.9
openshift_master_default_subdomain=paas-dev.dataos.io
#openshift_hosted_logging_deploy=true
openshift_logging_install_logging=true
openshift_logging_image_version=v3.9
openshift_disable_check=disk_availability,docker_image_availability,docker_storage
osm_etcd_image=registry.access.redhat.com/rhel7/etcd

openshift_enable_service_catalog=true
openshift_service_catalog_image_prefix=openshift/origin-
openshift_service_catalog_image_version=v3.9.0

From: users-boun...@lists.openshift.redhat.com on behalf of Tim Dudgeon
Sent: Friday, May 25, 2018 6:21 PM
To: users@lists.openshift.redhat.com
Subject: Re: hawkular-cassandra failed to startup on openshift origin 3.9


So what was the configuration for metrics in the inventory file.


On 25/05/18 11:04, Yu Wei wrote:

Yes, I deployed that via ansible-playbooks.

From: users-boun...@lists.openshift.redhat.com on behalf of Tim Dudgeon
Sent: Friday, May 25, 2018 5:51 PM
To: users@lists.openshift.redhat.com
Subject: Re: hawkular-cassandra failed to startup on openshift origin 3.9


How are you deploying this? Using the ansible playbooks?


On 25/05/18 10:25, Yu Wei wrote:

Hi,
I tried to deploy hawkular-cassandra on openshift origin 3.9 cluster.
However, pod failed to start up with error as below,
WARN [main] 2018-05-25 09:17:43,277 StartupChecks.java:267 - 
Directory /cassandra_data/data doesn't exist

ERROR [main] 2018-05-25 09:17:43,279 CassandraDaemon.java:710 - Has 
no permission to create directory /cassandra_data/data


I tried emptyDir and persistent volume as cassandra-data, both failed.

Any advice for this issue?

Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux













hawk_cass.yaml
Description: application/yaml


Origin 3.9 master-api error

2018-05-25 Thread Dan Pungă

Hello all!

Yet another question/problem from yours truly ...:)

I'm trying to access the cluster with oc login, which returns

Error from server (InternalError): Internal error occurred: unexpected 
response: 400


I've tried both the lb entry point and connecting directly to a master.
Don't know if this is the reason, but the origin-master-api service 
shows some errors:



May 25 12:45:06 master1 atomic-openshift-master-api: E0525 
12:45:06.089926    1418 watcher.go:208] watch chan error: etcdserver: 
mvcc: required revision has been compacted
May 25 12:45:08 master1 atomic-openshift-master-api: E0525 
12:45:08.093085    1418 watcher.go:208] watch chan error: etcdserver: 
mvcc: required revision has been compacted
May 25 12:45:08 master1 atomic-openshift-master-api: E0525 
12:45:08.828681    1418 watcher.go:208] watch chan error: etcdserver: 
mvcc: required revision has been compacted
May 25 12:45:10 master1 atomic-openshift-master-api: E0525 
12:45:10.184361    1418 osinserver.go:111] internal error: urls don't 
validate: https://master2.oshift-pinfold.intra:8443/oauth/token/implicit 
/ https://master1.oshift-pinfold.intra:8443/oauth/token/implicit
May 25 12:45:10 master1 atomic-openshift-master-api: E0525 
12:45:10.797415    1418 watcher.go:208] watch chan error: etcdserver: 
mvcc: required revision has been compacted
May 25 12:45:24 master1 atomic-openshift-master-api: E0525 
12:45:24.120997    1418 watcher.go:208] watch chan error: etcdserver: 
mvcc: required revision has been compacted
May 25 12:45:26 master1 atomic-openshift-master-api: E0525 
12:45:26.168915    1418 watcher.go:208] watch chan error: etcdserver: 
mvcc: required revision has been compacted
May 25 12:45:26 master1 atomic-openshift-master-api: E0525 
12:45:26.625063    1418 watcher.go:208] watch chan error: etcdserver: 
mvcc: required revision has been compacted
May 25 12:45:26 master1 atomic-openshift-master-api: E0525 
12:45:26.871406    1418 watcher.go:208] watch chan error: etcdserver: 
mvcc: required revision has been compacted


I've run into this issue before and a restart of the origin-master-api 
service solved the connection problem, but this is not an option for 
long-term use.
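
In case it is relevant, this is how I've been comparing the OAuth URLs on the
masters (assuming the default config location for an RPM-based install):

grep -B2 -A6 'oauthConfig' /etc/origin/master/master-config.yaml
# masterPublicURL/assetPublicURL should normally match what the lb presents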


Re: hawkular-cassandra failed to startup on openshift origin 3.9

2018-05-25 Thread Yu Wei
configuration as below,

openshift_metrics_install_metrics=true
openshift_metrics_image_version=v3.9
openshift_master_default_subdomain=paas-dev.dataos.io
#openshift_hosted_logging_deploy=true
openshift_logging_install_logging=true
openshift_logging_image_version=v3.9
openshift_disable_check=disk_availability,docker_image_availability,docker_storage
osm_etcd_image=registry.access.redhat.com/rhel7/etcd

openshift_enable_service_catalog=true
openshift_service_catalog_image_prefix=openshift/origin-
openshift_service_catalog_image_version=v3.9.0

From: users-boun...@lists.openshift.redhat.com 
 on behalf of Tim Dudgeon 

Sent: Friday, May 25, 2018 6:21 PM
To: users@lists.openshift.redhat.com
Subject: Re: hawkular-cassandra failed to startup on openshift origin 3.9


So what was the configuration for metrics in the inventory file.


On 25/05/18 11:04, Yu Wei wrote:
Yes, I deployed that via ansible-playbooks.

From: users-boun...@lists.openshift.redhat.com on behalf of Tim Dudgeon
Sent: Friday, May 25, 2018 5:51 PM
To: users@lists.openshift.redhat.com
Subject: Re: hawkular-cassandra failed to startup on openshift origin 3.9


How are you deploying this? Using the ansible playbooks?

On 25/05/18 10:25, Yu Wei wrote:
Hi,
I tried to deploy hawkular-cassandra on openshift origin 3.9 cluster.
However, pod failed to start up with error as below,
WARN [main] 2018-05-25 09:17:43,277 StartupChecks.java:267 - Directory 
/cassandra_data/data doesn't exist
ERROR [main] 2018-05-25 09:17:43,279 CassandraDaemon.java:710 - Has no 
permission to create directory /cassandra_data/data

I tried emptyDir and persistent volume as cassandra-data, both failed.

Any advice for this issue?


Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux








Re: hawkular-cassandra failed to startup on openshift origin 3.9

2018-05-25 Thread Tim Dudgeon

So what was the configuration for metrics in the inventory file.


On 25/05/18 11:04, Yu Wei wrote:

Yes, I deployed that via ansible-playbooks.

From: users-boun...@lists.openshift.redhat.com on behalf of Tim Dudgeon
Sent: Friday, May 25, 2018 5:51 PM
To: users@lists.openshift.redhat.com
Subject: Re: hawkular-cassandra failed to startup on openshift origin 3.9


How are you deploying this? Using the ansible playbooks?


On 25/05/18 10:25, Yu Wei wrote:

Hi,
I tried to deploy hawkular-cassandra on openshift origin 3.9 cluster.
However, pod failed to start up with error as below,
WARN [main] 2018-05-25 09:17:43,277 StartupChecks.java:267 - 
Directory /cassandra_data/data doesn't exist

ERROR [main] 2018-05-25 09:17:43,279 CassandraDaemon.java:710 - Has 
no permission to create directory /cassandra_data/data


I tried emptyDir and persistent volume as cassandra-data, both failed.

Any advice for this issue?

Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux









Re: hawkular-cassandra failed to startup on openshift origin 3.9

2018-05-25 Thread Yu Wei
Yes, I deployed that via ansible-playbooks.

From: users-boun...@lists.openshift.redhat.com 
 on behalf of Tim Dudgeon 

Sent: Friday, May 25, 2018 5:51 PM
To: users@lists.openshift.redhat.com
Subject: Re: hawkular-cassandra failed to startup on openshift origin 3.9


How are you deploying this? Using the ansible playbooks?

On 25/05/18 10:25, Yu Wei wrote:
Hi,
I tried to deploy hawkular-cassandra on openshift origin 3.9 cluster.
However, pod failed to start up with error as below,
WARN [main] 2018-05-25 09:17:43,277 StartupChecks.java:267 - Directory 
/cassandra_data/data doesn't exist
ERROR [main] 2018-05-25 09:17:43,279 CassandraDaemon.java:710 - Has no 
permission to create directory /cassandra_data/data

I tried emptyDir and persistent volume as cassandra-data, both failed.

Any advice for this issue?


Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux







Re: hawkular-cassandra failed to startup on openshift origin 3.9

2018-05-25 Thread Tim Dudgeon

How are you deploying this? Using the ansible playbooks?


On 25/05/18 10:25, Yu Wei wrote:

Hi,
I tried to deploy hawkular-cassandra on openshift origin 3.9 cluster.
However, pod failed to start up with error as below,
WARN [main] 2018-05-25 09:17:43,277 StartupChecks.java:267 - 
Directory /cassandra_data/data doesn't exist

ERROR [main] 2018-05-25 09:17:43,279 CassandraDaemon.java:710 - Has 
no permission to create directory /cassandra_data/data


I tried emptyDir and persistent volume as cassandra-data, both failed.

Any advice for this issue?

Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux







hawkular-cassandra failed to startup on openshift origin 3.9

2018-05-25 Thread Yu Wei
Hi,
I tried to deploy hawkular-cassandra on openshift origin 3.9 cluster.
However, pod failed to start up with error as below,
WARN [main] 2018-05-25 09:17:43,277 StartupChecks.java:267 - Directory 
/cassandra_data/data doesn't exist
ERROR [main] 2018-05-25 09:17:43,279 CassandraDaemon.java:710 - Has no 
permission to create directory /cassandra_data/data

I tried emptyDir and persistent volume as cassandra-data, both failed.

Any advice for this issue?


Thanks,

Jared, (韦煜)
Software developer
Interested in open source software, big data, Linux


Re: Origin 3.9 Installation Issue

2018-05-25 Thread Dan Pungă

Hi,

Not sure about the error, but I've noticed a task that I haven't seen 
during my installation attempts (also Origin 3.9 on a cluster).
From what I see, "origin_control_plane" is an ansible role that's 
present on the master branch of the openshift-ansible repo, but not on 
the release-3.9 branch.
https://docs.openshift.org/latest/install_config/install/host_preparation.html#preparing-for-advanced-installations-origin 
states that we should use the release-3.9 branch and that the master is 
intended for the currently developed version of OShift Origin.


 Hope it helps!

On 24.05.2018 21:52, Jason Marshall wrote:

Good afternoon,

I am attempting to do an advanced installation of Origin 3.9 in a 
cluster with a single master and 2 nodes, with a node role on the 
master server.


I am able to run the prerequisites.yml playbook with no issue. The 
deploy_cluster.yml  fails at the point where the origin.node service 
attempts to start on the master server. The error that comes up is:


TASK [openshift_control_plane : Start and enable self-hosting node] 

fatal: [openshift-master.expdev.local]: FAILED! => {"changed": false, 
"msg": "Unable to restart service origin-node: Job for 
origin-node.service failed because the control process exited with 
error code. See \"systemctl status origin-node.service\" and 
\"journalctl -xe\" for details.\n"}

...ignoring

    "May 24 14:40:53 cmhldshftlab01.expdev.local 
origin-node[2657]: /usr/local/bin/openshift-node: line 17: 
/usr/bin/openshift-node-config: No such file or directory",
    "May 24 14:40:53 cmhldshftlab01.expdev.local systemd[1]: 
origin-node.service: main process exited, code=exited, status=1/FAILURE",
    "May 24 14:40:53 cmhldshftlab01.expdev.local systemd[1]: 
Failed to start OpenShift Node.",
    "May 24 14:40:53 cmhldshftlab01.expdev.local systemd[1]: Unit 
origin-node.service entered failed state.",
    "May 24 14:40:53 cmhldshftlab01.expdev.local systemd[1]: 
origin-node.service failed."


INSTALLER STATUS 
***

Initialization : Complete (0:00:33)
Health Check   : Complete (0:00:24)
Node Preparation   : Complete (0:00:01)
etcd Install   : Complete (0:00:41)
Load Balancer Install  : Complete (0:00:18)
Master Install : In Progress (0:01:47)
    This phase can be restarted by running: 
playbooks/openshift-master/config.yml



Failure summary:


  1. Hosts:    openshift-master.expdev.local
 Play: Configure masters
 Task: openshift_control_plane : fail
 Message:  Node start failed.




I go looking for openshift-node-config, and can't find it anywhere. 
Nor can I find where this file comes from, even when using "yum 
whatprovides" or a find command in the openshift-ansible directory I 
am installing from.


Am I running into a potential configuration issue, or a bug with the 
version of origin I am running? My openshift-ansible folder was pulled 
down at around 2PM Eastern today, as I refreshed it to see if there 
was any difference in behavior.


Any suggestions or troubleshooting tips would be most appreciated.

Thank you,

Jason





