Re: simple hello world in python keeps crashing how to see why?

2018-05-21 Thread Graham Dumpleton
If that is really your whole application, then as soon as the loop completes
the container will exit and the pod will be restarted. If that happens quickly
enough and keeps happening, the pod will go into a failed state
(CrashLoopBackOff). For a normal deployment you need an application that runs
permanently, such as a WSGI application running on a WSGI server. You wouldn't
use a normal deployment for a short-lived program that exits straight away.
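
For illustration, a minimal long-lived variant of such a script (a sketch only,
assuming nothing beyond the Python standard library):

# sketch of a process that runs permanently, suitable for a normal deployment
import time

while True:
    print("hello python", flush=True)  # flush so the output shows up in the pod logs promptly
    time.sleep(10)                     # idle between messages instead of exiting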

What is it that you are ultimately wanting to do?

Graham

> On 22 May 2018, at 7:04 am, Brian Keyes  wrote:
> 
> I have a very, very simple hello world in Python:
> 
> 
> #start loop
> for x in range(0, 30):
>     print("hello python")
> 
> 
> but every time I run this on OpenShift it keeps crashing. Why? Would it be
> best to scale this up so it is on all worker nodes, let it crash, and SSH into
> the worker node to look at the Docker logs?
> 
> It has 2GB of RAM allocated, so I am not thinking that this is a memory issue.
> 
> Any advice?
> -- 
> thanks 
> 



Re: simple hello world in python keeps crashing how to see why?

2018-05-21 Thread Ben Parees
On Mon, May 21, 2018 at 5:04 PM, Brian Keyes  wrote:

> I have a very, very simple hello world in Python:
>
>
> #start loop
> for x in range(0, 30):
>     print("hello python")
>
> but every time I run this on OpenShift it keeps crashing. Why? Would it
> be best to scale this up so it is on all worker nodes, let it crash, and SSH
> into the worker node to look at the Docker logs?
>

I assume it's not crashing; it's exiting after finishing the loop.
OpenShift expects your pod containers to run a long-lived process (if you
don't want it to be long-lived, use Jobs), so if it exits, OpenShift restarts
it for you.

If you do want to see the logs for a "crashing" (exiting) container, you can
use "oc logs -p podname" to see the "previous" logs for the pod, which will
show you the output from the previous run that "crashed" (exited).
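
For a run-to-completion workload like this loop, a Job fits; here is a hedged
sketch (the image and names are illustrative, not taken from this thread):

apiVersion: batch/v1
kind: Job
metadata:
  name: hello-python            # illustrative name
spec:
  template:
    spec:
      containers:
      - name: hello
        image: python:3         # placeholder; use the image actually built for the app
        command: ["python", "-c", "for x in range(30): print('hello python')"]
      restartPolicy: Never      # a Job runs to completion instead of being restarted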




> It has 2GB of RAM allocated, so I am not thinking that this is a memory
> issue.
>
> Any advice?
> --
> thanks 
>
>


-- 
Ben Parees | OpenShift


Re: simple hello world in python keeps crashing how to see why?

2018-05-21 Thread Clayton Coleman
Find the name of one of your crashing pods and run:

$ oc debug POD_NAME

That'll put you into a copy of that pod at a shell and you can debug
further from there.
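
A hedged example of that flow (the pod name and script path are placeholders):

$ oc get pods                         # spot the pod stuck in CrashLoopBackOff
$ oc debug pod/hello-python-1-xyz12   # placeholder pod name
sh-4.2$ python app.py                 # re-run the command by hand and watch its output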


On Mon, May 21, 2018 at 5:04 PM, Brian Keyes  wrote:

> I have a very, very simple hello world in Python:
>
>
> #start loop
> for x in range(0, 30):
>     print("hello python")
>
> but every time I run this on OpenShift it keeps crashing. Why? Would it
> be best to scale this up so it is on all worker nodes, let it crash, and SSH
> into the worker node to look at the Docker logs?
>
> It has 2GB of RAM allocated, so I am not thinking that this is a memory
> issue.
>
> Any advice?
> --
> thanks 
>
>


simple hello world in python keeps crashing how to see why?

2018-05-21 Thread Brian Keyes
I have a very, very simple hello world in Python:


#start loop
for x in range(0, 30):
    print("hello python")

but every time I run this on OpenShift it keeps crashing. Why? Would it
be best to scale this up so it is on all worker nodes, let it crash, and SSH
into the worker node to look at the Docker logs?

It has 2GB of RAM allocated, so I am not thinking that this is a memory
issue.

Any advice?
-- 
thanks 


Re: Provisioning persistence for metrics with GlusterFS

2018-05-21 Thread Rodrigo Bersa
Hi Dan!

> Now what I also don't understand is how the initial volume group for
> the registry got created with just 26GB of storage if the default is
> 100GB. Is there a rule such as: "create a block-hosting volume of the
> default size=100GB, or the max available"?
> The integrated registry's persistence is set to 5GB. This is, I believe, a
> default value, as I haven't set anything related to it in my inventory file
> when installing OpenShift Origin. How can I use the remaining storage in my
> VG with GlusterFS and OpenShift?
>

The registry-storage volume is not a block volume; it is a file volume, which
has no "minimum" size. So you can create many other small file volumes with
no problem. The only restriction is on creating block volumes, which need at
least 100GB.
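
For reference, the default being discussed lives in the heketi configuration;
a hedged sketch of the relevant heketi.json fragment (option names as in
upstream heketi, values illustrative rather than taken from this cluster):

{
  "glusterfs": {
    "auto_create_block_hosting_volume": true,
    "block_hosting_volume_size": 100
  }
}

Growing capacity, as suggested earlier in the thread, would then be a matter
of registering additional raw devices with heketi, e.g.
heketi-cli device add --name=/dev/sdX --node=<node-id>.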

Best regards,


Rodrigo Bersa

Cloud Consultant, RHCVA, RHCE

Red Hat Brasil 

rbe...@redhat.com   M: +55-11-99557-5841

TRIED. TESTED. TRUSTED.
Red Hat is recognized among the best companies to work for in Brazil by
*Great Place to Work*.

On Mon, May 21, 2018 at 7:49 AM, Dan Pungă  wrote:

> Hello Rodrigo, I appreciate your answer!
>
> In the meantime I had reached out to the heketi-cli support channel (chat) and
> got the same reference. There's a config map, generated by the installer
> for the heketi-registry pod, that has the default block-hosting volume
> size set at 100GB.
> What I thought was that the "block hosting volume" would be the equivalent
> of a logical volume, and that it (heketi-cli) tries to create an LV of size
> 100GB inside the already created vg_bd61a1e6f317bb9decade964449c12e8 (which
> has 26GB).
>
> I've actually modified the encrypted JSON config and tried to restart the
> heketi-registry pod, which failed. So I ended up with some unmanaged
> GlusterFS storage, but since I'm on a test environment, it's fine.
> Otherwise, good to know for the future.
>
> Now what I also don't understand is how the initial volume group for
> the registry got created with just 26GB of storage if the default is
> 100GB. Is there a rule such as: "create a block-hosting volume of the
> default size=100GB, or the max available"?
> The integrated registry's persistence is set to 5GB. This is, I believe, a
> default value, as I haven't set anything related to it in my inventory file
> when installing OpenShift Origin. How can I use the remaining storage in my
> VG with GlusterFS and OpenShift?
>
> Thank you!
>
> On 19.05.2018 02:43, Rodrigo Bersa wrote:
>
> Hi Dan,
>
> The Gluster block volumes work with the concept of a block-hosting volume,
> and these are created with 100GB by default.
>
> To clarify, the block volumes are provisioned on top of the block-hosting
> volumes.
>
> Let's say you need a 10GB block volume: it will create a block-hosting
> volume of 100GB and then the 10GB block volume on it, as with the next block
> volumes requested, until it reaches the 100GB. After that a new block-hosting
> volume will be created, and so on.
>
> So, if you have just 26GB available on each server, it's not enough to
> create the block-hosting volume. You may need to add more devices to your
> CNS cluster to grow your free space.
>
>
> Kind regards,
>
>
> Rodrigo Bersa
>
> Cloud Consultant, RHCVA, RHCE
>
> Red Hat Brasil 
>
> rbe...@redhat.com   M: +55-11-99557-5841
>
> TRIED. TESTED. TRUSTED.
> Red Hat is recognized among the best companies to work for in Brazil by
> *Great Place to Work*.
>
> On Wed, May 16, 2018 at 10:35 PM, Dan Pungă  wrote:
>
>> Hello all!
>>
>> I have set up a cluster with 3 GlusterFS nodes for disk persistence just
>> as specified in the docs. I have configured the inventory file to install
>> the containerized version to be used by OpenShift's integrated registry.
>> This works fine.
>>
>> Now I wanted to install the metrics component and I followed the
>> procedure described here:
>> https://docs.openshift.org/latest/install_config/persistent_storage/persistent_storage_glusterfs.html#install-example-infra
>>
>> I end up with the openshift-infra project set up, but with 3 pods failing to
>> start, and I think this has to do with the cassandra PVC that fails to
>> create.
>>
>> oc get pvc metrics-cassandra-1 -o yaml
>>
>> apiVersion: v1
>> kind: PersistentVolumeClaim
>> metadata:
>>   annotations:
>>     control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"8ef584d1-5923-11e8-8730-0a580a830040","leaseDurationSeconds":15,"acquireTime":"2018-05-17T00:38:34Z","renewTime":"2018-05-17T00:55:33Z","leaderTransitions":0}'
>>     kubectl.kubernetes.io/last-applied-configuration: |
>>       {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-provisioner":"gluster.org/glusterblock"},"labels":{"metrics-infra":"hawkular-cassandra

Re: [logging]

2018-05-21 Thread Rich Megginson

On 05/20/2018 12:31 PM, Himmat Singh wrote:


Hi Team,

I am using the OpenShift logging image below, which provides centralized
logging capability for our OpenShift cluster and for external
environment logs.


registry.access.redhat.com/openshift3/logging-fluentd:v3.9


I am trying to add additional functionality on top of the above image as
per a further requirement.

As per that requirement, I have created the configuration files below to
collect node security logs and ingest them into Elasticsearch via mux.

Below is the source file, input-pre-secure.conf:


<source>
  @type tail
  @label @INGRESS
  @id secure-input
  path /var/log/secure*
  read_from_head true
  pos_file /var/log/secure.log.pos
  tag audit.log
  format none
</source>


and filter-pre-secure.conf


<filter audit.log>
  @type parser
  key_name message
  format grok
  <grok>
    pattern (?<timestamp>%{WORD} %{DATA} %{TIME}) %{HOSTNAME:host_target} sshd\[%{BASE10NUM}\]: (?<EventType>%{WORD} %{WORD}) (?<username>%{WORD}) from %{IP:src_ip} port %{BASE10NUM:port}
  </grok>
  <grok>
    pattern (?<timestamp>%{WORD} %{DATA} %{TIME}) %{HOSTNAME:host_target} sshd\[%{BASE10NUM}\]: %{DATA:EventType} for %{USERNAME:username} from %{IP:src_ip} port %{BASE10NUM:port} ssh2
  </grok>
</filter>


Modified Dockerfile:

FROM registry.access.redhat.com/openshift3/logging-fluentd:v3.9

COPY fluent-plugin-grok-parser-1.0.1.gem .
RUN gem install fluent-plugin-grok-parser-1.0.1.gem
COPY input-pre-secure.conf /etc/fluent/configs.d/openshift/
COPY filter-pre-secure.conf /etc/fluent/configs.d/openshift/


I have deployed the updated logging image to the mux and fluentd
daemonsets. After making these configuration changes I am not able
to get any logs into Elasticsearch.


I want all the security logs from /var/log/secure to be filtered
according to our specific requirement and written to the .operations
index. What configuration do I need so that these logs are written
to the operations logs?



Please help me with a solution, any suggestions, or the correct
configuration files.


So in another issue I commented about this: 
https://github.com/openshift/origin-aggregated-logging/issues/1141#issuecomment-389301880


You are going to want to create your own index name and bypass all other 
processing.
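
As a hedged illustration of that suggestion (not the exact configuration from
the linked issue; the index name and connection details are assumptions), a
dedicated output for the audit.log tag could look like:

# illustrative only: send the secure-log tag straight to its own index,
# bypassing the rest of the OpenShift pipeline
<match audit.log>
  @type elasticsearch
  host logging-es
  port 9200
  scheme https
  index_name node-secure    # assumed custom index name
  # plus the same TLS client_cert/client_key/ca_file settings that the
  # image's other Elasticsearch outputs use
</match>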




Thanks and Regards,
Himmat Singh.







Re: RPMs for 3.9 on Centos

2018-05-21 Thread Tim Dudgeon
OK, so doing this on the nodes before running the Ansible installer seems
to do the trick:


yum -y install centos-release-openshift-origin
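
(A hedged follow-up, assuming the release package lays down the CentOS PaaS
SIG repo as expected; worth verifying before rerunning the installer:)

yum repolist enabled | grep -i openshift   # the PaaS SIG repo should now be listed
yum info origin                            # should resolve the 3.9 RPMs from that repo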


On 21/05/18 11:46, Joel Pearson wrote:
You shouldn’t need testing. It looks like they’ve been in the repo for 
about a month.


Not sure about the Ansible side; I haven’t actually tried to install
3.9 yet. And when I do, I plan on using system containers.


But you could grep through the Ansible scripts looking for what
installs the repo, so you can figure out why it isn’t using it.
On Mon, 21 May 2018 at 8:38 pm, Tim Dudgeon wrote:


Seems like Ansible isn't doing so for me.
Are there any special params needed for this?

I did try setting these two, but to no effect:

openshift_enable_origin_repo=true
openshift_repos_enable_testing=true


On 21/05/18 11:32, Joel Pearson wrote:

They’re in the paas repo. You don’t have that repo installed for
some reason.

Ansible is supposed to lay that down

http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/

Why don’t you use the system container version instead? Or do you
prefer RPMs?
On Mon, 21 May 2018 at 8:30 pm, Tim Dudgeon wrote:

It looks like RPMs for Origin 3.9 are still not available from the CentOS
repos:

> $ yum search origin
> Loaded plugins: fastestmirror
> Loading mirror speeds from cached hostfile
>  * base: ftp.lysator.liu.se
>  * extras: ftp.lysator.liu.se
>  * updates: ftp.lysator.liu.se
>
> ========================= N/S matched: origin ==========================
> centos-release-openshift-origin13.noarch : Yum configuration for
> OpenShift Origin 1.3 packages
> centos-release-openshift-origin14.noarch : Yum configuration for
> OpenShift Origin 1.4 packages
> centos-release-openshift-origin15.noarch : Yum configuration for
> OpenShift Origin 1.5 packages
> centos-release-openshift-origin36.noarch : Yum configuration for
> OpenShift Origin 3.6 packages
> centos-release-openshift-origin37.noarch : Yum configuration for
> OpenShift Origin 3.7 packages
> google-noto-sans-canadian-aboriginal-fonts.noarch : Sans Canadian
> Aboriginal font
> centos-release-openshift-origin.noarch : Common release file to
> establish shared metadata for CentOS PaaS SIG
> ksh.x86_64 : The Original ATT Korn Shell
> texlive-tetex.noarch : scripts and files originally written for or
> included in teTeX
>
>   Name and summary matches only, use "search all" for everything.
> Any idea when these will be available, or instructions for finding them
> somewhere else?












Re: Logging fails when using cinder volume for elasticsearch

2018-05-21 Thread Tim Dudgeon

On 21/05/18 13:30, Jeff Cantrill wrote:
Consider logging an issue so that it is properly addressed by the
development team.



https://github.com/openshift/openshift-ansible/issues/8456


Re: Logging fails when using cinder volume for elasticsearch

2018-05-21 Thread Jeff Cantrill
Consider logging an issue so that it is properly addressed by the
development team.

On Mon, May 21, 2018 at 7:05 AM, Tim Dudgeon  wrote:

> I'm seeing a strange problem when trying to use a Cinder volume for the
> elasticsearch PVC when installing logging with Origin 3.7. If I use NFS or
> GlusterFS volumes it all works fine. If I try a Cinder volume, Elasticsearch
> fails to start because of permission problems:
>
>
> [2018-05-21 11:03:48,483][INFO ][container.run] Begin
> Elasticsearch startup script
> [2018-05-21 11:03:48,500][INFO ][container.run] Comparing the
> specified RAM to the maximum recommended for Elasticsearch...
> [2018-05-21 11:03:48,503][INFO ][container.run] Inspecting the
> maximum RAM available...
> [2018-05-21 11:03:48,513][INFO ][container.run] ES_HEAP_SIZE:
> '4096m'
> [2018-05-21 11:03:48,527][INFO ][container.run] Setting heap
> dump location /elasticsearch/persistent/heapdump.hprof
> [2018-05-21 11:03:48,531][INFO ][container.run] Checking if
> Elasticsearch is ready on https://localhost:9200
> Exception in thread "main" java.lang.IllegalStateException: Failed to
> created node environment
> Likely root cause: java.nio.file.AccessDeniedException:
> /elasticsearch/persistent/logging-es
> at sun.nio.fs.UnixException.translateToIOException(UnixExceptio
> n.java:84)
> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.
> java:102)
> at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.
> java:107)
> at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSy
> stemProvider.java:384)
> at java.nio.file.Files.createDirectory(Files.java:674)
> at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)
> at java.nio.file.Files.createDirectories(Files.java:767)
> at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment
> .java:169)
> at org.elasticsearch.node.Node.<init>(Node.java:165)
> at org.elasticsearch.node.Node.<init>(Node.java:140)
> at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:143)
> at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:194)
> at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:286)
> at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch
> .java:45)
> Refer to the log for complete error details.
>
> The directory ownerships do look very strange. Using Gluster (where it
> works) you see this (/elasticsearch/persistent is where the volume is
> mounted):
>
> sh-4.2$ cd /elasticsearch/persistent
> sh-4.2$ ls -al
> total 8
> drwxrwsr-x. 4 root 2009 4096 May 21 07:17 .
> drwxrwxrwx. 4 root root   42 May 21 07:17 ..
> drwxr-sr-x. 3 1000 2009 4096 May 21 07:17 logging-es
>
> User 1000 and group 2009 do not exist in /etc/passwd or /etc/groups
>
>
>



--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
*Office*: 703-748-4420 | 866-546-8970 ext. 8162420
jcant...@redhat.com
http://www.redhat.com


Logging fails when using cinder volume for elasticsearch

2018-05-21 Thread Tim Dudgeon
I'm seeing a strange problem when trying to use a Cinder volume for the
elasticsearch PVC when installing logging with Origin 3.7. If I use NFS
or GlusterFS volumes it all works fine. If I try a Cinder volume,
Elasticsearch fails to start because of permission problems:



[2018-05-21 11:03:48,483][INFO ][container.run    ] Begin 
Elasticsearch startup script
[2018-05-21 11:03:48,500][INFO ][container.run    ] Comparing 
the specified RAM to the maximum recommended for Elasticsearch...
[2018-05-21 11:03:48,503][INFO ][container.run    ] Inspecting 
the maximum RAM available...
[2018-05-21 11:03:48,513][INFO ][container.run    ] 
ES_HEAP_SIZE: '4096m'
[2018-05-21 11:03:48,527][INFO ][container.run    ] Setting heap 
dump location /elasticsearch/persistent/heapdump.hprof
[2018-05-21 11:03:48,531][INFO ][container.run    ] Checking if 
Elasticsearch is ready on https://localhost:9200
Exception in thread "main" java.lang.IllegalStateException: Failed to 
created node environment
Likely root cause: java.nio.file.AccessDeniedException: 
/elasticsearch/persistent/logging-es
    at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
    at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
    at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
    at 
sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)

    at java.nio.file.Files.createDirectory(Files.java:674)
    at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)
    at java.nio.file.Files.createDirectories(Files.java:767)
    at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:169)
    at org.elasticsearch.node.Node.<init>(Node.java:165)
    at org.elasticsearch.node.Node.<init>(Node.java:140)
    at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:143)
    at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:194)
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:286)
    at 
org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:45)

Refer to the log for complete error details.

The directory ownerships do look very strange. Using Gluster (where it 
works) you see this (/elasticsearch/persistent is where the volume is 
mounted):


sh-4.2$ cd /elasticsearch/persistent
sh-4.2$ ls -al
total 8
drwxrwsr-x. 4 root 2009 4096 May 21 07:17 .
drwxrwxrwx. 4 root root   42 May 21 07:17 ..
drwxr-sr-x. 3 1000 2009 4096 May 21 07:17 logging-es

User 1000 and group 2009 do not exist in /etc/passwd or /etc/groups
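
(An illustrative aside, not a fix confirmed in this thread: permission errors
like this are often addressed by granting the pod the volume's group through
its securityContext. The group id below is taken from the Gluster listing
above; all names are placeholders.)

apiVersion: v1
kind: Pod
metadata:
  name: logging-es-example           # placeholder name
spec:
  securityContext:
    fsGroup: 2009                    # block volumes such as Cinder are chown'd to fsGroup
    supplementalGroups: [2009]       # shared volumes (NFS/Gluster) rely on supplemental groups
  containers:
  - name: elasticsearch
    image: openshift/origin-logging-elasticsearch:v3.7   # assumed image tag
    volumeMounts:
    - name: es-storage
      mountPath: /elasticsearch/persistent
  volumes:
  - name: es-storage
    persistentVolumeClaim:
      claimName: logging-es-0        # placeholder claim name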





Re: Provisioning persistence for metrics with GlusterFS

2018-05-21 Thread Dan Pungă

Hello Rodrigo, I appreciate your answer!

In the meantime I had reached out to the heketi-cli support channel (chat)
and got the same reference. There's a config map, generated by the
installer for the heketi-registry pod, that has the default
block-hosting volume size set at 100GB.
What I thought was that the "block hosting volume" would be the
equivalent of a logical volume, and that it (heketi-cli) tries to create
an LV of size 100GB inside the already created
vg_bd61a1e6f317bb9decade964449c12e8 (which has 26GB).


I've actually modified the encrypted JSON config and tried to restart
the heketi-registry pod, which failed. So I ended up with some unmanaged
GlusterFS storage, but since I'm on a test environment, it's fine.
Otherwise, good to know for the future.


Now what I also don't understand is how the initial volume group for
the registry got created with just 26GB of storage if the default is
100GB. Is there a rule such as: "create a block-hosting volume of the
default size=100GB, or the max available"?
The integrated registry's persistence is set to 5GB. This is, I believe,
a default value, as I haven't set anything related to it in my inventory
file when installing OpenShift Origin. How can I use the remaining
storage in my VG with GlusterFS and OpenShift?


Thank you!

On 19.05.2018 02:43, Rodrigo Bersa wrote:

Hi Dan,

The Gluster block volumes work with the concept of a block-hosting
volume, and these are created with 100GB by default.

To clarify, the block volumes are provisioned on top of the
block-hosting volumes.

Let's say you need a 10GB block volume: it will create a block-hosting
volume of 100GB and then the 10GB block volume on it, as with the next
block volumes requested, until it reaches the 100GB. After that a new
block-hosting volume will be created, and so on.

So, if you have just 26GB available on each server, it's not enough to
create the block-hosting volume. You may need to add more devices to
your CNS cluster to grow your free space.



Kind regards,


Rodrigo Bersa

Cloud Consultant, RHCVA, RHCE

Red Hat Brasil

rbe...@redhat.com   M: +55-11-99557-5841

TRIED. TESTED. TRUSTED.

Red Hat is recognized among the best companies to work for in Brazil by
*Great Place to Work*.


On Wed, May 16, 2018 at 10:35 PM, Dan Pungă wrote:


Hello all!

I have set up a cluster with 3 GlusterFS nodes for disk persistence
just as specified in the docs. I have configured the inventory
file to install the containerized version to be used by
OpenShift's integrated registry. This works fine.

Now I wanted to install the metrics component and I followed the
procedure described here:

https://docs.openshift.org/latest/install_config/persistent_storage/persistent_storage_glusterfs.html#install-example-infra



I end up with the openshift-infra project set up, but with 3 pods
failing to start, and I think this has to do with the cassandra PVC
that fails to create.

oc get pvc metrics-cassandra-1 -o yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"8ef584d1-5923-11e8-8730-0a580a830040","leaseDurationSeconds":15,"acquireTime":"2018-05-17T00:38:34Z","renewTime":"2018-05-17T00:55:33Z","leaderTransitions":0}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-provisioner":"gluster.org/glusterblock"},"labels":{"metrics-infra":"hawkular-cassandra"},"name":"metrics-cassandra-1","namespace":"openshift-infra"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"6Gi"}},"storageClassName":"glusterfs-registry-block"}}
    volume.beta.kubernetes.io/storage-provisioner: gluster.org/glusterblock
  creationTimestamp: 2018-05-17T00:38:34Z
  labels:
    metrics-infra: hawkular-cassandra
  name: metrics-cassandra-1
  namespace: openshift-infra
  resourceVersion: "1204482"
  selfLink: /api/v1/namespaces/openshift-infra/persistentvolumeclaims/metrics-cassandra-1
  uid: a18b8c20-596a-11e8-8a63-fa163ed601cb
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 6Gi
  storageClassName: glusterfs-registry-block
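
(For context, a hedged sketch of the inventory variables that drive this
claim; the variable names are assumed from the openshift-ansible metrics role
and the linked docs, and the values mirror the PVC above:)

[OSEv3:vars]
openshift_metrics_install_metrics=true
openshift_metrics_cassandra_storage_type=dynamic
openshift_metrics_cassandra_pvc_size=6Gi
openshift_metrics_cassandra_pvc_storage_class_name=glusterfs-registry-block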

Re: RPMs for 3.9 on Centos

2018-05-21 Thread Joel Pearson
You shouldn’t need testing. It looks like they’ve been in the repo for
about a month.

Not sure about the Ansible side; I haven’t actually tried to install 3.9
yet. And when I do, I plan on using system containers.

But you could grep through the Ansible scripts looking for what installs
the repo, so you can figure out why it isn’t using it.
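
(Illustrative, assuming a local checkout of openshift-ansible; the role name
is an assumption:)

# find where the installer lays down the CentOS origin repo definition
grep -rn "openshift-origin" openshift-ansible/roles/openshift_repos/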
On Mon, 21 May 2018 at 8:38 pm, Tim Dudgeon  wrote:

> Seems like Ansible isn't doing so for me.
> Are there any special params needed for this?
>
> I did try setting these two, but to no effect:
>
> openshift_enable_origin_repo=true
> openshift_repos_enable_testing=true
>
> On 21/05/18 11:32, Joel Pearson wrote:
>
> They’re in the paas repo. You don’t have that repo installed for some
> reason.
>
> Ansible is supposed to lay that down
>
> http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/
>
> Why don’t you use the system container version instead? Or do you prefer RPMs?
> On Mon, 21 May 2018 at 8:30 pm, Tim Dudgeon  wrote:
>
>> It looks like RPMs for Origin 3.9 are still not available from the CentOS
>> repos:
>>
>> > $ yum search origin
>> > Loaded plugins: fastestmirror
>> > Loading mirror speeds from cached hostfile
>> >  * base: ftp.lysator.liu.se
>> >  * extras: ftp.lysator.liu.se
>> >  * updates: ftp.lysator.liu.se
>> >
>> > ======================== N/S matched: origin =========================
>> > centos-release-openshift-origin13.noarch : Yum configuration for
>> > OpenShift Origin 1.3 packages
>> > centos-release-openshift-origin14.noarch : Yum configuration for
>> > OpenShift Origin 1.4 packages
>> > centos-release-openshift-origin15.noarch : Yum configuration for
>> > OpenShift Origin 1.5 packages
>> > centos-release-openshift-origin36.noarch : Yum configuration for
>> > OpenShift Origin 3.6 packages
>> > centos-release-openshift-origin37.noarch : Yum configuration for
>> > OpenShift Origin 3.7 packages
>> > google-noto-sans-canadian-aboriginal-fonts.noarch : Sans Canadian
>> > Aboriginal font
>> > centos-release-openshift-origin.noarch : Common release file to
>> > establish shared metadata for CentOS PaaS SIG
>> > ksh.x86_64 : The Original ATT Korn Shell
>> > texlive-tetex.noarch : scripts and files originally written for or
>> > included in teTeX
>> >
>> >   Name and summary matches only, use "search all" for everything.
>> Any idea when these will be available, or instructions for finding them
>> somewhere else?
>>
>>
>>
>>
>>


Re: RPMs for 3.9 on Centos

2018-05-21 Thread Tim Dudgeon

Seems like Ansible isn't doing so for me.
Are there any special params needed for this?

I did try setting these two, but to no effect:

openshift_enable_origin_repo=true
openshift_repos_enable_testing=true


On 21/05/18 11:32, Joel Pearson wrote:
They’re in the paas repo. You don’t have that repo installed for some 
reason.


Ansible is supposed to lay that down

http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/

Why don’t you use the system container version instead? Or do you prefer
RPMs?
On Mon, 21 May 2018 at 8:30 pm, Tim Dudgeon wrote:


It looks like RPMs for Origin 3.9 are still not available from the CentOS
repos:

> $ yum search origin
> Loaded plugins: fastestmirror
> Loading mirror speeds from cached hostfile
>  * base: ftp.lysator.liu.se 
>  * extras: ftp.lysator.liu.se 
>  * updates: ftp.lysator.liu.se 
>
> ======================== N/S matched: origin =========================
> centos-release-openshift-origin13.noarch : Yum configuration for
> OpenShift Origin 1.3 packages
> centos-release-openshift-origin14.noarch : Yum configuration for
> OpenShift Origin 1.4 packages
> centos-release-openshift-origin15.noarch : Yum configuration for
> OpenShift Origin 1.5 packages
> centos-release-openshift-origin36.noarch : Yum configuration for
> OpenShift Origin 3.6 packages
> centos-release-openshift-origin37.noarch : Yum configuration for
> OpenShift Origin 3.7 packages
> google-noto-sans-canadian-aboriginal-fonts.noarch : Sans Canadian
> Aboriginal font
> centos-release-openshift-origin.noarch : Common release file to
> establish shared metadata for CentOS PaaS SIG
> ksh.x86_64 : The Original ATT Korn Shell
> texlive-tetex.noarch : scripts and files originally written for or
> included in teTeX
>
>   Name and summary matches only, use "search all" for everything.
Any idea when these will be available, or instructions for finding them
somewhere else?










Re: RPMs for 3.9 on Centos

2018-05-21 Thread Joel Pearson
They’re in the paas repo. You don’t have that repo installed for some
reason.

Ansible is supposed to lay that down

http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/

Why don’t you use the system container version instead? Or do you prefer RPMs?
On Mon, 21 May 2018 at 8:30 pm, Tim Dudgeon  wrote:

> It looks like RPMs for Origin 3.9 are still not available from the CentOS
> repos:
>
> > $ yum search origin
> > Loaded plugins: fastestmirror
> > Loading mirror speeds from cached hostfile
> >  * base: ftp.lysator.liu.se
> >  * extras: ftp.lysator.liu.se
> >  * updates: ftp.lysator.liu.se
> >
> > ======================== N/S matched: origin =========================
> > centos-release-openshift-origin13.noarch : Yum configuration for
> > OpenShift Origin 1.3 packages
> > centos-release-openshift-origin14.noarch : Yum configuration for
> > OpenShift Origin 1.4 packages
> > centos-release-openshift-origin15.noarch : Yum configuration for
> > OpenShift Origin 1.5 packages
> > centos-release-openshift-origin36.noarch : Yum configuration for
> > OpenShift Origin 3.6 packages
> > centos-release-openshift-origin37.noarch : Yum configuration for
> > OpenShift Origin 3.7 packages
> > google-noto-sans-canadian-aboriginal-fonts.noarch : Sans Canadian
> > Aboriginal font
> > centos-release-openshift-origin.noarch : Common release file to
> > establish shared metadata for CentOS PaaS SIG
> > ksh.x86_64 : The Original ATT Korn Shell
> > texlive-tetex.noarch : scripts and files originally written for or
> > included in teTeX
> >
> >   Name and summary matches only, use "search all" for everything.
> Any idea when these will be available, or instructions for finding them
> somewhere else?
>
>
>
>
>


RPMs for 3.9 on Centos

2018-05-21 Thread Tim Dudgeon
It looks like RPMs for Origin 3.9 are still not available from the CentOS
repos:



$ yum search origin
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: ftp.lysator.liu.se
 * extras: ftp.lysator.liu.se
 * updates: ftp.lysator.liu.se
 
========================= N/S matched: origin ==========================
centos-release-openshift-origin13.noarch : Yum configuration for 
OpenShift Origin 1.3 packages
centos-release-openshift-origin14.noarch : Yum configuration for 
OpenShift Origin 1.4 packages
centos-release-openshift-origin15.noarch : Yum configuration for 
OpenShift Origin 1.5 packages
centos-release-openshift-origin36.noarch : Yum configuration for 
OpenShift Origin 3.6 packages
centos-release-openshift-origin37.noarch : Yum configuration for 
OpenShift Origin 3.7 packages
google-noto-sans-canadian-aboriginal-fonts.noarch : Sans Canadian 
Aboriginal font
centos-release-openshift-origin.noarch : Common release file to 
establish shared metadata for CentOS PaaS SIG

ksh.x86_64 : The Original ATT Korn Shell
texlive-tetex.noarch : scripts and files originally written for or 
included in teTeX


  Name and summary matches only, use "search all" for everything.
Any idea when these will be available, or instructions for finding them 
somewhere else?





