Re: default StorageClass

2018-04-17 Thread Hemant Kumar
The properties I mentioned apply to the default StorageClass that is
automatically created for the detected cloud provider (OpenStack in this case).
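
For reference, a minimal inventory sketch that pulls together the variables
mentioned in this thread (the class name below is only an illustrative value,
not something taken from these messages):

# keep the auto-created cloud-provider (Cinder) class from becoming the default
openshift_storageclass_default=False
# optionally rename the auto-created class
openshift_storageclass_name=cinder-standard
# make the GlusterFS class the default instead
openshift_storage_glusterfs_storageclass_default=True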



On Tue, Apr 17, 2018 at 12:16 PM, Tim Dudgeon  wrote:

> Sorry, which StorageClass do those variables apply to? There could be
> multiple ones deployed.
> For instance, this property obviously applies to the StorageClass created
> for GlusterFS:
>
> openshift_storage_glusterfs_storageclass_default=True
>
>
> On 17/04/18 17:11, Hemant Kumar wrote:
>
> To make the StorageClass not the default:
>
> openshift_storageclass_default=False
>
> You can also change the default class name with:
>
> openshift_storageclass_name=something_else
>
>
>


Re: default StorageClass

2018-04-17 Thread Tim Dudgeon
Sorry, which StorageClass do those variables apply to? There could be 
multiple ones deployed.
For instance, this property obviously applies to the StorageClass 
created for GlusterFS:


openshift_storage_glusterfs_storageclass_default=True


On 17/04/18 17:11, Hemant Kumar wrote:

To make the StorageClass not the default:

openshift_storageclass_default=False

You can also change the default class name with:

openshift_storageclass_name=something_else





Re: default StorageClass

2018-04-17 Thread Hemant Kumar
To make the StorageClass not the default:

openshift_storageclass_default=False

You can also change the default class name with:

openshift_storageclass_name=something_else

PS: Sorry Tim - forgot to reply-all and resending



On Tue, Apr 17, 2018 at 11:15 AM, Tim Dudgeon  wrote:

> When deploying glusterfs you can specify that this is to be the default
> StorageClass for dynamic provisioning using this variable
>
> openshift_storage_glusterfs_storageclass_default=True
>
> However if you also have another dynamic provisioner (e.g. OpenStack
> Cinder) then that is also declared as the default StorageClass and you end
> up with two defaults, which inevitably leads to trouble.
>
> How can you specify that Cinder is not to be the default StorageClass?
> Also, is it possible to specify the names for these StorageClasses?
>
>
>


Re: Can we use 'Run in privileged mode' in the Jenkins Kubernetes Pod Template?

2018-04-17 Thread Clayton Coleman
The privileged SCC allows everything that anyuid allows.
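
For example, filling in the command quoted below (the project and
service-account names here are placeholders; the Jenkins Kubernetes plugin is
typically configured to run agents as the "jenkins" service account, but check
your own pod template):

oc adm policy add-scc-to-user privileged -z jenkins -n ci

# confirm the grant shows up on the SCC
oc get scc privileged -o yaml | grep -A5 users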

On Apr 17, 2018, at 11:20 AM, Alan Christie <
achris...@informaticsmatters.com> wrote:

Thanks Clayton. That’s worked.

I’m not sure whether I also need to do an "oc adm policy add-scc-to-user
anyuid -z ${SERVICE_ACCOUNT}" (which I have done) but I am now able to
build Docker container images in a Jenkins pipeline using a buildah
slave-agent! That’s neat.

The Dockerfile/image source that builds the Jenkins slave-agent and the
(rather fat) resultant agent image are public...

https://github.com/alanbchristie/openshift-jenkins-buildah-slave
https://hub.docker.com/r/alanbchristie/jenkins-slave-buildah-centos7/


On 17 Apr 2018, at 00:39, Clayton Coleman  wrote:

Like any other user, to run privileged an administrator must grant the
Jenkins service account access to launch privileged pods.  That’s done by
granting the privileged SCC to the service account the slave pod runs as:

oc adm policy add-scc-to-user -z SERVICE_ACCT privileged

On Apr 16, 2018, at 2:46 PM, Alan Christie 
wrote:

I’m trying to get around building Docker containers in a Jenkins
slave-agent (because the Docker socket is not available). Along comes
`buildah` claiming to be a lightweight OCI builder so I’ve built a
`buildah` Jenkins slave agent based on the
`openshift/jenkins-slave-maven-centos7` image (
https://github.com/alanbchristie/openshift-jenkins-buildah-slave.git).

Nice.

Sadly…

…the agent appears useless because buildah needs to be run as root!!!

So I walk from one problem into another.

The wonderfully named option in Jenkins -> Manage Jenkins -> Configure
System -> Kubernetes Pod Template -> "Run in privileged mode" was so
appealing I just had to click it!

But … sigh ... I still can’t run as root; instead I get the "Privileged
containers are not allowed provider restricted" error.

This has probably been asked before but...

   1. Is there anything that can be done to run slave-agents as root? (I
   don't want a BuildConfig, I want to run my existing complex pipelines which
   also build docker images in a Jenkins agent)
   2. If not, is someone thinking about supporting this?

Alan Christie




Re: Can we use 'Run in privileged mode' in the Jenkins Kubernetes Pod Template?

2018-04-17 Thread Alan Christie
Thanks Clayton. That’s worked.

I’m not sure whether I also need to do an "oc adm policy add-scc-to-user anyuid 
-z ${SERVICE_ACCOUNT}" (which I have done) but I am now able to build Docker 
container images in a Jenkins pipeline using a buildah slave-agent! That’s neat.
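
(A quick way to check whether the anyuid grant is actually being used — this
is only a sketch, and the pod/project names are placeholders: OpenShift records
the SCC a pod was admitted under in the openshift.io/scc annotation.)

oc get pod <jenkins-slave-pod> -n <project> -o yaml | grep 'openshift.io/scc'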

The Dockerfile/image source that builds the Jenkins slave-agent and the (rather 
fat) resultant agent image are public...

https://github.com/alanbchristie/openshift-jenkins-buildah-slave 

https://hub.docker.com/r/alanbchristie/jenkins-slave-buildah-centos7/ 



> On 17 Apr 2018, at 00:39, Clayton Coleman  wrote:
> 
> Like any other user, to run privileged an administrator must grant the 
> Jenkins service account access to launch privileged pods.  That’s done by 
> granting the privileged SCC to the service account the slave pod runs as:
> 
> oc adm policy add-scc-to-user -z SERVICE_ACCT privileged 
> 
> On Apr 16, 2018, at 2:46 PM, Alan Christie wrote:
> 
>> I’m trying to get around building Docker containers in a Jenkins slave-agent 
>> (because the Docker socket is not available). Along comes `buildah` claiming 
>> to be a lightweight OCI builder so I’ve built a `buildah` Jenkins slave 
>> agent based on the `openshift/jenkins-slave-maven-centos7` image 
>> (https://github.com/alanbchristie/openshift-jenkins-buildah-slave.git).
>> 
>> Nice.
>> 
>> Sadly…
>> 
>> …the agent appears useless because buildah needs to be run as root!!!
>> 
>> So I walk from one problem into another.
>> 
>> The wonderfully named option in Jenkins -> Manage Jenkins -> Configure 
>> System -> Kubernetes Pod Template -> "Run in privileged mode" was so 
>> appealing I just had to click it!
>> 
>> But … sigh ... I still can’t run as root; instead I get the "Privileged 
>> containers are not allowed provider restricted" error.
>> 
>> This has probably been asked before but...
>> 1. Is there anything that can be done to run slave-agents as root? (I don't 
>>    want a BuildConfig, I want to run my existing complex pipelines which also 
>>    build docker images in a Jenkins agent)
>> 2. If not, is someone thinking about supporting this?
>> Alan Christie
>> 
>> 


default StorageClass

2018-04-17 Thread Tim Dudgeon
When deploying glusterfs you can specify that this is to be the default 
StorageClass for dynamic provisioning using this variable


openshift_storage_glusterfs_storageclass_default=True

However if you also have another dynamic provisioner (e.g. OpenStack 
Cinder) then that is also declared as the default StorageClass and you 
end up with two defaults, which inevitably leads to trouble.


How can you specify that Cinder is not to be the default StorageClass?
Also, is it possible to specify the names for these StorageClasses?





Help with FlexVolumes in 3.9

2018-04-17 Thread Marc Boorshtein
I'm trying to get a CIFS flex volume integrated with origin 3.9 based on
the following instructions:

https://docs.openshift.com/container-platform/3.9/install_config/persistent_storage/persistent_storage_flex_volume.html

I'm specifically trying to get a cifs flex volume working -
https://github.com/andyzhangx/kubernetes-drivers/tree/master/flexvolume/cifs

Here's the steps I took on all nodes and masters:

1.  mkdir -p /usr/libexec/kubernetes/kubelet-plugins/volume/exec/azure-cifs
2.  cd /usr/libexec/kubernetes/kubelet-plugins/volume/exec/azure-cifs/
3.  sudo wget -O cifs
https://raw.githubusercontent.com/andyzhangx/kubernetes-drivers/master/flexvolume/cifs/cifs
4.  sudo chmod a+x cifs
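
For comparison, the flexvolume convention is that a driver named
"<vendor>/<driver>" lives in a directory named "<vendor>~<driver>" (with a
tilde) under the plugin path, so a PV that references driver "azure/cifs"
would be looked up under azure~cifs/cifs. This is only a sketch of that
layout, not a confirmed diagnosis of the error below:

sudo mkdir -p /usr/libexec/kubernetes/kubelet-plugins/volume/exec/azure~cifs
sudo wget -O /usr/libexec/kubernetes/kubelet-plugins/volume/exec/azure~cifs/cifs \
  https://raw.githubusercontent.com/andyzhangx/kubernetes-drivers/master/flexvolume/cifs/cifs
sudo chmod a+x /usr/libexec/kubernetes/kubelet-plugins/volume/exec/azure~cifs/cifs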

Then I created a PV and PVC based on the instructions, then created the pod
from the README.md.  The flex volume fails to mount, with this in
/var/log/messages:

Apr 17 08:29:13 node origin-node: I0417 08:29:13.116665    1620
kubelet.go:1854] SyncLoop (ADD, "api"):
"nginx-flex-cifs_test-volumes(efa44126-423a-11e8-a4f3-525400887c40)"
Apr 17 08:29:13 node systemd: Created slice libcontainer container
kubepods-besteffort-podefa44126_423a_11e8_a4f3_525400887c40.slice.
Apr 17 08:29:13 node systemd: Starting libcontainer container
kubepods-besteffort-podefa44126_423a_11e8_a4f3_525400887c40.slice.
Apr 17 08:29:13 node origin-node: E0417 08:29:13.156628    1620
desired_state_of_world_populator.go:280] Failed to add volume
"flexvol-mount" (specName: "pv-cifs-flexvol") for pod
"efa44126-423a-11e8-a4f3-525400887c40" to desiredStateOfWorld. err=failed
to get Plugin from volumeSpec for volume "pv-cifs-flexvol" err=no volume
plugin matched
Apr 17 08:29:13 node origin-node: I0417 08:29:13.231661    1620
reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started
for volume "default-token-5dnnh" (UniqueName: "
kubernetes.io/secret/efa44126-423a-11e8-a4f3-525400887c40-default-token-5dnnh")
pod "nginx-flex-cifs" (UID: "efa44126-423a-11e8-a4f3-525400887c40")
Apr 17 08:29:13 node origin-node: I0417 08:29:13.331874    1620
reconciler.go:257] operationExecutor.MountVolume started for volume
"default-token-5dnnh" (UniqueName: "
kubernetes.io/secret/efa44126-423a-11e8-a4f3-525400887c40-default-token-5dnnh")
pod "nginx-flex-cifs" (UID: "efa44126-423a-11e8-a4f3-525400887c40")
Apr 17 08:29:13 node systemd: Started Kubernetes transient mount for
/var/lib/origin/openshift.local.volumes/pods/efa44126-423a-11e8-a4f3-525400887c40/volumes/
kubernetes.io~secret/default-token-5dnnh.
Apr 17 08:29:13 node systemd: Starting Kubernetes transient mount for
/var/lib/origin/openshift.local.volumes/pods/efa44126-423a-11e8-a4f3-525400887c40/volumes/
kubernetes.io~secret/default-token-5dnnh.
Apr 17 08:29:13 node origin-node: I0417 08:29:13.358426    1620
operation_generator.go:481] MountVolume.SetUp succeeded for volume
"default-token-5dnnh" (UniqueName: "
kubernetes.io/secret/efa44126-423a-11e8-a4f3-525400887c40-default-token-5dnnh")
pod "nginx-flex-cifs" (UID: "efa44126-423a-11e8-a4f3-525400887c40")

Are there special permissions that I need?

thanks
Marc


Re: specifying storage class for metrics and logging

2018-04-17 Thread Jeff Cantrill
openshift_logging_elasticsearch_pvc_dynamic is a deprecated variable that
defined the alpha feature of PV->PVC associations prior to the introduction
of storage classes.
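
Pulling the variables from this thread together, a 3.7-era inventory sketch
(the class name is illustrative, and behavior may differ between
openshift-ansible releases):

# use a named StorageClass for the Elasticsearch PVCs; per the discussion
# quoted below, leave the deprecated dynamic flag off so the class name is used
openshift_logging_elasticsearch_pvc_dynamic=false
openshift_logging_elasticsearch_pvc_storage_class_name=glusterfs-storage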

On Tue, Apr 17, 2018 at 6:26 AM, Per Carlson  wrote:

> Hi.
>
> On 17 April 2018 at 12:17, Tim Dudgeon  wrote:
>
>> So if you are using dynamic provisioning the only option for logging is
>> for the default StorageClass to be set to what is needed?
>>
>> On 17/04/18 11:12, Per Carlson wrote:
>>
>> This holds at least for 3.7:
>>
>> For metrics you can use "openshift_metrics_cassandra_pvc_storage_class_name"
>> (https://github.com/openshift/openshift-ansible/blob/release-3.7/roles/openshift_metrics/tasks/generate_cassandra_pvcs.yaml#L44).
>>
>> Using a StorageClass for logging (ElasticSearch) is more confusing. The
>> variable is "openshift_logging_elasticsearch_pvc_storage_class_name"
>> (https://github.com/openshift/openshift-ansible/blob/release-3.7/roles/openshift_logging_elasticsearch/defaults/main.yml#L34).
>> But, it is only used for non-dynamic PVCs
>> (https://github.com/openshift/openshift-ansible/blob/release-3.7/roles/openshift_logging_elasticsearch/tasks/main.yaml#L368-L370).
>>
>>
>> --
>> Pelle
>>
>> Research is what I'm doing when I don't know what I'm doing.
>> - Wernher von Braun
>>
>>
>>
>
> No, I think you can use a StorageClass by keeping
> "openshift_logging_elasticsearch_pvc_dynamic" set to false. Not sure if that
> has any side effects though.
>
> --
> Pelle
>
>


-- 
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
Office: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com
http://www.redhat.com


Re: specifying storage class for metrics and logging

2018-04-17 Thread Per Carlson
Hi.

On 17 April 2018 at 12:17, Tim Dudgeon  wrote:

> So if you are using dynamic provisioning the only option for logging is
> for the default StorageClass to be set to what is needed?
>
> On 17/04/18 11:12, Per Carlson wrote:
>
> This holds at least for 3.7:
>
> For metrics you can use "openshift_metrics_cassandra_pvc_storage_class_name"
> (https://github.com/openshift/openshift-ansible/blob/release-3.7/roles/openshift_metrics/tasks/generate_cassandra_pvcs.yaml#L44).
>
> Using a StorageClass for logging (ElasticSearch) is more confusing. The
> variable is "openshift_logging_elasticsearch_pvc_storage_class_name"
> (https://github.com/openshift/openshift-ansible/blob/release-3.7/roles/openshift_logging_elasticsearch/defaults/main.yml#L34).
> But, it is only used for non-dynamic PVCs
> (https://github.com/openshift/openshift-ansible/blob/release-3.7/roles/openshift_logging_elasticsearch/tasks/main.yaml#L368-L370).
>
>
> --
> Pelle
>
> Research is what I'm doing when I don't know what I'm doing.
> - Wernher von Braun
>
>
>

No, I think you can use a StorageClass by keeping
"openshift_logging_elasticsearch_pvc_dynamic" set to false. Not sure if that
has any side effects though.

-- 
Pelle


Re: specifying storage class for metrics and logging

2018-04-17 Thread Tim Dudgeon
So if you are using dynamic provisioning, the only option for logging is
for the default StorageClass to be set to what is needed?



On 17/04/18 11:12, Per Carlson wrote:

This holds at least for 3.7:

For metrics you can use
"openshift_metrics_cassandra_pvc_storage_class_name"
(https://github.com/openshift/openshift-ansible/blob/release-3.7/roles/openshift_metrics/tasks/generate_cassandra_pvcs.yaml#L44).


Using a StorageClass for logging (ElasticSearch) is more confusing. 
The variable is 
"openshift_logging_elasticsearch_pvc_storage_class_name" 
(https://github.com/openshift/openshift-ansible/blob/release-3.7/roles/openshift_logging_elasticsearch/defaults/main.yml#L34). 
But, it is only used for non-dynamic PVCs 
(https://github.com/openshift/openshift-ansible/blob/release-3.7/roles/openshift_logging_elasticsearch/tasks/main.yaml#L368-L370).



--
Pelle

Research is what I'm doing when I don't know what I'm doing.
- Wernher von Braun




specifying storage class for metrics and logging

2018-04-17 Thread Tim Dudgeon
If using dynamic provisioning for metrics and logging, e.g. your
inventory file contains:


openshift_metrics_cassandra_storage_type=dynamic

How does one go about specifying the StorageClass to use?
Without this the default StorageClass would be used, which might not be
what you want.
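
(For what it is worth, the variable that comes up in the replies earlier in
this digest looks like the following for metrics; the class name is an
illustrative placeholder, not a value from these messages:)

openshift_metrics_cassandra_storage_type=dynamic
openshift_metrics_cassandra_pvc_storage_class_name=glusterfs-storage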


Tim





Re: Best way to get installed?

2018-04-17 Thread Tracy Reed
I think I've finally got a basic cluster up using the following:

Setup prereqs:
https://docs.openshift.com/container-platform/3.5/install_config/install/host_preparation.html#setting-path
Then oc cluster up:
https://docs.openshift.org/latest/getting_started/administrators.html

However, when I point my browser at ip:8443 I first get a certificate
error (understandable; I'll look up how to put in my own cert later).
Then I click past the browser warning and it just spins until the
browser times out. The install process went seemingly perfectly. The oc
cluster up run completed without error and said:

-- Server Information ...
   OpenShift server started.

   The server is accessible via web console at:
       https://10.240.0.132:8443

   You are logged in as:
       User:     developer
       Password: developer

   To login as administrator:
       oc login -u system:admin
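
(One hedged sketch, not a diagnosis: if the browser is reaching the server
from outside its network, oc cluster up is often restarted with an externally
resolvable name via --public-hostname; the hostname below is a placeholder.)

oc cluster down
oc cluster up --public-hostname=console.example.com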

so that all looks good. But why does the web console never appear? 

Thanks in advance for any pointers. I've been having a heck of a time
just getting my openshift up and running.

On Fri, Apr 13, 2018 at 12:22:26AM PDT, Tim Dudgeon spake thusly:
> Depends on what you are wanting to do.
> To get some basic experience with using OpenShift you could try Minishift:
> 
> https://docs.openshift.org/latest/minishift/index.html
> 
> Tim
> 
> 
> On 12/04/18 22:26, Tracy Reed wrote:
> > So I've been tasked with setting up an OpenShift cluster for some light
> > testing. Not prod. I was originally given
> > https://github.com/RedHatWorkshops/openshiftv3-ops-workshop/blob/master/setting_up_nonha_ocp_cluster.md
> > as the install guide.
> > 
> > This tutorial takes quite a while to manually setup the 4 nodes (in
> > GCE), plus storage, etc. and then launches into an hour long ansible
> > run.  I've been through it 4 times now and each time run into various
> > odd problems (which I could document for you if necessary).
> > 
> > Is there currently any other simpler and faster way to install
> > a basic OpenShift setup?
> > 
> > Googling produces a number of other OpenShift tutorials, many of which
> > now have comments on them about bugs or being out of date etc.
> > 
> > What's the current state of the art in simple openshift install
> > guides?
> > 
> > Thanks!
> > 
> > 
> > 


-- 
Tracy Reed
http://tracyreed.org
Digital signature attached for your safety.

