OpenShift Jenkins Pipeline (DSL) Plugin : erroneous 'incorrect namespace' error from create()?

2017-12-19 Thread Alan Christie
Hi guys,

I have a template that can be successfully processed and the objects created 
using oc from the command-line. The template is supposed to run in one 
namespace (let’s call it Y) but it creates secrets that are placed in another 
namespace/project (let’s call that X). The namespaces are managed by the same 
user. Both namespaces exist, and the following command, when run on the 
command-line, is valid and successful:

oc process -f  | oc create -f -

The act of processing and creating templates works in the Jenkins pipeline 
except when the template creates objects in different namespaces. When I try 
to reproduce these actions from within a Jenkins pipeline job that uses 
the OpenShift Jenkins Pipeline (DSL) Plugin, i.e. when I do something like this…

openshift.withCluster("${CLUSTER}") {
    openshift.withProject("${Y}") {
        def objs = openshift.process('--filename=')
        openshift.create(objs)
    }
}

I get the following error reported in the Jenkins Job output:

err=error: the namespace from the provided object "X" does not match 
the namespace "Y". You must pass '--namespace=X' to perform this operation., 
verb=create

How do I replicate actions that appear to be legitimate on the command-line 
but using the Pipeline Plugin? Its error does not make sense. Instead, the 
plugin appears to assume that the objects created from the template must reside 
in the namespace in which I am running, and therefore insists on it.

Should I raise an issue on the Plugin project?

https://github.com/openshift/jenkins-client-plugin

Thank you in advance for any help; in the meantime I will continue to search 
for a solution.

Alan Christie
Informatics Matters


___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: OpenShift Jenkins Pipeline (DSL) Plugin : erroneous 'incorrect namespace' error from create()?

2017-12-19 Thread Alan Christie
I appear to be able to work around the problem by iterating through the objects 
created by the call to process() and conditionally setting the namespace, i.e. 
by doing this…

def objs = openshift.process('--filename=')
for (obj in objs) {
    if (obj.metadata.namespace == "X") {
        openshift.create(obj, "--namespace=X")
    } else {
        openshift.create(obj)
    }
}
 

But this is not ideal and just creates noise. Ideally I simply want create() in 
the pipeline to be able to reproduce create() on the command-line.





Re: OpenShift Jenkins Pipeline (DSL) Plugin : erroneous 'incorrect namespace' error from create()?

2017-12-19 Thread Alan Christie
Thanks Justin,

Is it of interest that the plugin’s create() behaves differently from the 
command-line version? It just doesn't _feel_ right. I could create an issue in 
the client plugin GitHub project if the consensus is that the behaviour is 
wrong. After all, why does create() care? Its command-line cousin doesn’t.
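For reference, my reading of the plugin README suggests the raw-based route would look something like this (an untested sketch; I'm assuming raw() passes its arguments straight to oc, that its result exposes stdout as .out, and 'template.yaml' stands in for the real template file):

```groovy
// Untested sketch: raw() is assumed to pass its arguments through to oc
// verbatim, so flags like --namespace behave exactly as on the command line.
// 'template.yaml' is a stand-in for the real template file.
openshift.withCluster("${CLUSTER}") {
    openshift.withProject("${Y}") {
        def result = openshift.raw('process', '-f', 'template.yaml', '-o', 'json')
        // Persist the processed objects, then create them with a raw oc call.
        writeFile file: 'objects.json', text: result.out
        openshift.raw('create', '-f', 'objects.json')
    }
}
```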

At the moment my work-around (inspecting the namespace and iterating through 
the objects) feels more pleasant than using raw.

Alan.

> On 19 Dec 2017, at 14:09, Justin Pierce <jupie...@redhat.com> wrote:
> 
> Alan - This might be a use case for the openshift.raw API [1]. It will simply 
> pass through any arguments you give it. 
> 
> Best Regards,
> Justin
> [1] https://github.com/openshift/jenkins-client-plugin#i-need-more



Re: OpenShift Jenkins Pipeline (DSL) Plugin : erroneous 'incorrect namespace' error from create()?

2017-12-19 Thread Alan Christie
Thanks, issue created…

https://github.com/openshift/jenkins-client-plugin/issues/96

For now the work-around gets me out of the hole. Cheers.

Alan.

> On 19 Dec 2017, at 14:26, Justin Pierce <jupie...@redhat.com> wrote:
> 
> The plugin tries to simplify namespace handling for the majority of cases. In 
> your particular case, that attempt to help is detrimental. 
> 
> An issue would probably be appreciated. I've copied Gabe, the active 
> maintainer. 
> 



Re: Docker builds in an OpenShift Jenkins slave?

2017-12-05 Thread Alan Christie
Thanks again, Ben.

I run into two problems with your _alternative_ suggestion… It looked really 
promising because at least you have access to the pod configuration (in the 
Jenkins "Configure System -> Cloud -> Kubernetes" page), which is cool, but I 
encounter the following in the Jenkins log as it attempts to launch my slave Pod...

Invalid value: true: Privileged containers are not allowed

That’s annoying, especially as there’s a checkbox for it. But also…

Invalid value: "hostPath": hostPath volumes are not allowed to be used
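
For the record, my understanding of the DOCKER_HOST route you mention below is something like this in the slave's shell (the host name and certificate directory here are placeholders, not real values):

```shell
# Point the docker client in the Jenkins slave at an external docker host.
# The host name and certificate directory are placeholders, not real values.
export DOCKER_HOST=tcp://docker-host.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="${HOME}/.docker/certs"
echo "docker commands will now be sent to ${DOCKER_HOST}"
```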


> On 5 Dec 2017, at 15:48, Ben Parees <bpar...@redhat.com> wrote:
> 
> 
> 
> Is there an _unsafe_ route I might be able to use now?
> 
> use DOCKER_HOST env variable and point to a host w/ a public docker.
> 
> The alternative is to try to use a hostpath volume definition in your slave 
> pod template but then you also need to run the slave pod as privileged.
> 
> 
> 
> 
> 
> -- 
> Ben Parees | OpenShift



Docker builds in an OpenShift Jenkins slave?

2017-12-05 Thread Alan Christie
I’m using Jenkins from the CI/CD catalogue and am able to spin up slaves and 
use an `ImageStream` to identify my own slave image. That’s useful, but what I 
want to be able to do is build and run Docker images, primarily for 
unit/functional-test purposes. The _sticking point_, it seems, is the ability 
to mount the host's `docker.sock`; without this I’m unable to run any Docker 
commands in my Docker containers.

Q. Is there a way to mount the Jenkins/OpenShift host’s /var/run/docker.sock in 
my slave so that I can run Docker commands? If not, what is the 
recommended/best practice for building/running/pushing Docker images from a 
slave agent?

Alan



Re: Docker builds in an OpenShift Jenkins slave?

2017-12-05 Thread Alan Christie
Thanks Ben. It does seem sensible to use build strategies, but prior to a 
wholesale migration to OpenShift, and for existing workflows that may contain 
docker and docker-compose commands, is there any reasonable option other than 
an external (cloud/proprietary/dedicated) docker-enabled slave? I can, for 
example, just have a Docker slave available (outside the OpenShift cluster) but 
that’s not ideal.

Is there an _unsafe_ route I might be able to use now?

I understand the issues around sharing a docker.sock but it seems to be an 
acceptable strategy for many. And, for a controlled environment, just mounting 
docker.sock is a rather neat (quick-n-dirty) solution.
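
For completeness, my understanding is that your build-strategy suggestion reduces to something like this in the pipeline (an untested sketch; 'my-docker-build' is a made-up BuildConfig name, and I'm assuming the client plugin's selector/startBuild API):

```groovy
// Untested sketch: trigger a pre-defined docker-strategy BuildConfig from
// the pipeline and follow its logs. 'my-docker-build' is a made-up name.
openshift.withCluster() {
    openshift.withProject() {
        def build = openshift.selector('bc', 'my-docker-build').startBuild()
        build.logs('-f')  // stream the build output into the Jenkins log
    }
}
```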

It may be that, as you say, there’s no sensible route down the OpenShift/CI-CD 
road other than build strategies. It’s just that for existing/legacy projects 
not having docker.sock is quite a hill to climb.

Thanks for your advice though, that has been gratefully received.

Alan.

> On 5 Dec 2017, at 13:41, Ben Parees <bpar...@redhat.com> wrote:
> 
> 
> 
> 
> 
> On Dec 5, 2017 07:57, "Alan Christie" <achris...@informaticsmatters.com 
> <mailto:achris...@informaticsmatters.com>> wrote:
> I’m using Jenkins from the CI/CD catalogue and am able to spin up slaves and 
> use an `ImageStream` to identify my own slave image. That’s useful, but what 
> I want to be able to do is build and run Docker images, primarily for 
> unit/functional test purposes. The _sticking point_, it seems, is the ability 
> to mount the host's `docker.sock`, without this I’m unable to run any Docker 
> commands in my Docker containers.
> 
> Q. Is there a way to mount the Jenkins/OpenShift host’s /var/run/docker.sock 
> in my slave so that I can run Docker commands?
> 
> Not safely. (mounting the host docker socket is giving out root access to 
> your host). 
> 
> You could use a remote docker host with a certificate for access I believe. 
> (that's still handing out root access on the docker host but at least it's a 
> little protected) 
> 
> If not, what is the recommended/best practice for building/running/pushing 
> Docker images from a slave agent?
> 
> Define docker build strategies in openshift and trigger them from your 
> jenkins job. 
> 
> 
> Alan
> 
> 



Re: OKD/3.11 - Metrics playbook not creating the PV for NFS

2019-10-09 Thread Alan Christie
I see there is a known problem, described in 
https://github.com/openshift/openshift-ansible/issues/5750, and I’ve added a 
comment to that.

Alan Christie
achris...@informaticsmatters.com






OKD/3.11 - Metrics playbook not creating the PV for NFS

2019-10-09 Thread Alan Christie
Hi,

OpenShift ansible tag: openshift-ansible-3.11.152-1
Ansible: 2.7.13

I’m trying to install metrics using NFS as the storage back-end and, although 
the NFS export and PVC are created, the underlying PV is not, which prevents 
the hawkular-cassandra pod from starting.

Incidentally, everything’s fine with an NFS-based registry. Its export gets 
created along with the PV and PVC.

I see a registry-volume PV but no metrics-volume.

This may be a red herring but, when I look at the playbooks, I can see the 
registry variable being used (openshift_hosted_registry_storage_volume_size) 
but I cannot see the metrics variable being used anywhere 
(openshift_metrics_storage_volume_size)…

$ grep openshift_metrics_storage_volume_size `find . -name "*.yml"`
./roles/openshift_facts/defaults/main.yml:openshift_metrics_storage_volume_size: '10Gi'

In my inventory my registry variables (which work) look like this…

openshift_hosted_registry_storage_kind: nfs
openshift_hosted_registry_storage_access_modes: ['ReadWriteMany']
openshift_hosted_registry_storage_nfs_directory: /nfs1
openshift_hosted_registry_storage_nfs_options: '*(rw,root_squash)'
openshift_hosted_registry_storage_volume_name: registry
openshift_hosted_registry_storage_volume_size: 200Gi

…and the metrics variables look like this...

openshift_metrics_install_metrics: yes
openshift_metrics_storage_kind: nfs
openshift_metrics_storage_access_modes: ['ReadWriteOnce']
openshift_metrics_storage_nfs_directory: /exports
openshift_metrics_storage_nfs_options: '*(rw,root_squash)'
openshift_metrics_storage_volume_name: metrics
openshift_metrics_storage_volume_size: 40Gi
openshift_metrics_storage_labels: [{'storage': 'metrics'}]

Have I missed something for this to work?

If metrics is broken for NFS I’ll create an issue on the openshift-ansible repo.
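
In the meantime I suspect a manually created PV along these lines would satisfy the pending claim (the NFS server address and export path below are placeholders for my own values):

```yaml
# Hypothetical manual PV to back the metrics PVC while the playbook issue
# stands; nfs.server and nfs.path must match the actual NFS export.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: metrics-volume
  labels:
    storage: metrics
spec:
  capacity:
    storage: 40Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com
    path: /exports/metrics
```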

Alan Christie
achris...@informaticsmatters.com


