Re: v3.11 - One or more checks failed

2018-10-12 Thread Leo David
Hi,
Same problem here; it seems that the 3.11 RPMs are not available in the
repo configured by the prerequisites playbook.
Any thoughts?
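
For what it's worth, a quick way to check whether the configured repo
actually serves the 3.11 packages (a rough sketch assuming the CentOS PaaS
SIG packaging; repo and package names may differ on your hosts):

  # list the origin-related repos that are enabled
  yum repolist enabled | grep -i origin
  # show which versions of the origin packages those repos actually provide
  yum list available origin origin-node origin-master --showduplicates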

Thanks,

Leo

On Sat, Oct 13, 2018, 00:29 Anton Hughes  wrote:

> I suspect this is OKD on CentOS?
>>
>
> Yes, correct.
>
> On Sat, 13 Oct 2018 at 10:15, Daniel Comnea  wrote:
>
>> I suspect this is OKD on CentOS?
>>
>> On Fri, Oct 12, 2018 at 9:50 PM Anton Hughes 
>> wrote:
>>
>>> Hello
>>>
>>> I'm trying to install 3.11, but am getting the error below.
>>>
>>> I'm using
>>> https://github.com/openshift/openshift-ansible/releases/tag/v3.11.0
>>>
>>> Failure summary:
>>>
>>>
>>>   1. Hosts:xxx.xxx.xxx.xxx
>>>  Play: OpenShift Health Checks
>>>  Task: Run health checks (install) - EL
>>>  Message:  One or more checks failed
>>>  Details:  check "package_version":
>>>Not all of the required packages are available at their
>>> requested version
>>>origin:3.11
>>>origin-node:3.11
>>>origin-master:3.11
>>>Please check your subscriptions and enabled repositories.
>>>
>>>
>>> The relevant section of my inventory file is:
>>>
>>> [OSEv3:vars]
>>> ansible_ssh_user=root
>>> enable_excluders=False
>>> enable_docker_excluder=False
>>> ansible_service_broker_install=False
>>>
>>> containerized=True
>>> os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
>>>
>>> openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability
>>>
>>> #openshift_node_kubelet_args={'pods-per-core': ['10']}
>>>
>>> deployment_type=origin
>>> openshift_deployment_type=origin
>>>
>>> openshift_release=v3.11
>>> openshift_pkg_version=-3.11.0
>>> openshift_image_tag=v3.11
>>> openshift_disable_check=package_version
>>> openshift_disable_check=docker_storage
>>>
>>>
>>> template_service_broker_selector={"region":"infra"}
>>> openshift_metrics_image_version="v3.11"
>>> openshift_logging_image_version="v3.11"
>>> openshift_logging_elasticsearch_proxy_image_version="v1.0.0"
>>> logging_elasticsearch_rollout_override=false
>>> osm_use_cockpit=true
>>>
>>>
>>>
>>> Any help is appreciated.
>>>
>>>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: v3.11 - One or more checks failed

2018-10-12 Thread Anton Hughes
>
> I suspect this is OKD on CentOS?
>

Yes, correct.

On Sat, 13 Oct 2018 at 10:15, Daniel Comnea  wrote:

> I suspect this is OKD on CentOS?
>
> On Fri, Oct 12, 2018 at 9:50 PM Anton Hughes 
> wrote:
>
>> Hello
>>
>> I'm trying to install 3.11, but am getting the error below.
>>
>> I'm using
>> https://github.com/openshift/openshift-ansible/releases/tag/v3.11.0
>>
>> Failure summary:
>>
>>
>>   1. Hosts:xxx.xxx.xxx.xxx
>>  Play: OpenShift Health Checks
>>  Task: Run health checks (install) - EL
>>  Message:  One or more checks failed
>>  Details:  check "package_version":
>>Not all of the required packages are available at their
>> requested version
>>origin:3.11
>>origin-node:3.11
>>origin-master:3.11
>>Please check your subscriptions and enabled repositories.
>>
>>
>> The relevant section of my inventory file is:
>>
>> [OSEv3:vars]
>> ansible_ssh_user=root
>> enable_excluders=False
>> enable_docker_excluder=False
>> ansible_service_broker_install=False
>>
>> containerized=True
>> os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
>>
>> openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability
>>
>> #openshift_node_kubelet_args={'pods-per-core': ['10']}
>>
>> deployment_type=origin
>> openshift_deployment_type=origin
>>
>> openshift_release=v3.11
>> openshift_pkg_version=-3.11.0
>> openshift_image_tag=v3.11
>> openshift_disable_check=package_version
>> openshift_disable_check=docker_storage
>>
>>
>> template_service_broker_selector={"region":"infra"}
>> openshift_metrics_image_version="v3.11"
>> openshift_logging_image_version="v3.11"
>> openshift_logging_elasticsearch_proxy_image_version="v1.0.0"
>> logging_elasticsearch_rollout_override=false
>> osm_use_cockpit=true
>>
>>
>>
>> Any help is appreciated.
>>
>>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: v3.11 - One or more checks failed

2018-10-12 Thread Daniel Comnea
I suspect this is OKD on CentOS?

On Fri, Oct 12, 2018 at 9:50 PM Anton Hughes 
wrote:

> Hello
>
> I'm trying to install 3.11, but am getting the error below.
>
> I'm using
> https://github.com/openshift/openshift-ansible/releases/tag/v3.11.0
>
> Failure summary:
>
>
>   1. Hosts:xxx.xxx.xxx.xxx
>  Play: OpenShift Health Checks
>  Task: Run health checks (install) - EL
>  Message:  One or more checks failed
>  Details:  check "package_version":
>Not all of the required packages are available at their
> requested version
>origin:3.11
>origin-node:3.11
>origin-master:3.11
>Please check your subscriptions and enabled repositories.
>
>
> The relevant section of my inventory file is:
>
> [OSEv3:vars]
> ansible_ssh_user=root
> enable_excluders=False
> enable_docker_excluder=False
> ansible_service_broker_install=False
>
> containerized=True
> os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
>
> openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability
>
> #openshift_node_kubelet_args={'pods-per-core': ['10']}
>
> deployment_type=origin
> openshift_deployment_type=origin
>
> openshift_release=v3.11
> openshift_pkg_version=-3.11.0
> openshift_image_tag=v3.11
> openshift_disable_check=package_version
> openshift_disable_check=docker_storage
>
>
> template_service_broker_selector={"region":"infra"}
> openshift_metrics_image_version="v3.11"
> openshift_logging_image_version="v3.11"
> openshift_logging_elasticsearch_proxy_image_version="v1.0.0"
> logging_elasticsearch_rollout_override=false
> osm_use_cockpit=true
>
>
>
> Any help is appreciated.
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


v3.11 - One or more checks failed

2018-10-12 Thread Anton Hughes
Hello

I'm trying to install 3.11, but am getting the error below.

I'm using https://github.com/openshift/openshift-ansible/releases/tag/v3.11.0

Failure summary:


  1. Hosts:xxx.xxx.xxx.xxx
 Play: OpenShift Health Checks
 Task: Run health checks (install) - EL
 Message:  One or more checks failed
 Details:  check "package_version":
   Not all of the required packages are available at their
requested version
   origin:3.11
   origin-node:3.11
   origin-master:3.11
   Please check your subscriptions and enabled repositories.


The relevant section of my inventory file is:

[OSEv3:vars]
ansible_ssh_user=root
enable_excluders=False
enable_docker_excluder=False
ansible_service_broker_install=False

containerized=True
os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability

#openshift_node_kubelet_args={'pods-per-core': ['10']}

deployment_type=origin
openshift_deployment_type=origin

openshift_release=v3.11
openshift_pkg_version=-3.11.0
openshift_image_tag=v3.11
openshift_disable_check=package_version
openshift_disable_check=docker_storage


template_service_broker_selector={"region":"infra"}
openshift_metrics_image_version="v3.11"
openshift_logging_image_version="v3.11"
openshift_logging_elasticsearch_proxy_image_version="v1.0.0"
logging_elasticsearch_rollout_override=false
osm_use_cockpit=true
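
A note on the inventory above: openshift_disable_check is assigned three
times, and in an INI-style inventory only the last assignment should take
effect, so only docker_storage ends up disabled and the package_version
check still runs. If the intent is to skip all of those checks, a single
merged line along these lines should work (a sketch; keep or drop entries
as needed):

openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability,package_version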



Any help is appreciated.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OKD v3.11.0 has been tagged and pushed to GitHub

2018-10-12 Thread Clayton Coleman
Right now, a fresh install is required.  The master and installer teams are
sorting through what an upgrade would look like.  I'm sure there will be an
upgrade at some point, but it might not be ready when the 4.0 bits are
available.  Stay tuned.

On Fri, Oct 12, 2018 at 4:37 AM David Conde  wrote:

> On the 4.0 changes, is the plan to provide the ability to upgrade from
> 3.11 to 4.0 or would a totally fresh install be required?
>
> On Thu, Oct 11, 2018 at 4:55 PM Clayton Coleman 
> wrote:
>
>> https://github.com/openshift/origin/releases/tag/v3.11.0 contains the
>> release notes and latest binaries.
>>
>> The v3.11.0 tag on docker.io is up to date and will be a rolling tag
>> (new fixes will be delivered there).
>>
>> Thanks to everyone for their hard work!
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Unable to manually reclaim an existing pv

2018-10-12 Thread Louis Santillan
Carlos,

To "clean up" the PV, you need to remove the "instance data" associated
with the binding to the previous PVC.  There are a handful of lines that
need to be deleted if you run `oc edit pv/pv-x` (and then save the
object).  Using the following PV as an example, delete the `claimRef` and
`status` sections of the yaml document, then save & quit.  Run `oc get pv`
again and it should show up as Available.

```

# oc get pv pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
EXPORT_block: "\nEXPORT\n{\n\tExport_Id = 5;\n\tPath =
/export/pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9;\n\tPseudo
  = /export/pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9;\n\tAccess_Type
= RW;\n\tSquash
  = no_root_squash;\n\tSecType = sys;\n\tFilesystem_id =
5.5;\n\tFSAL {\n\t\tName
  = VFS;\n\t}\n}\n"
Export_Id: "5"
Project_Id: "0"
Project_block: ""
Provisioner_Id: d5abc261-5fb7-11e7-8769-0a580a800010
kubernetes.io/createdby: nfs-dynamic-provisioner
pv.kubernetes.io/provisioned-by: example.com/nfs
  creationTimestamp: 2017-07-05T07:30:36Z
  name: pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9
  resourceVersion: "60641"
  selfLink: /api/v1/persistentvolumes/pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9
  uid: d6521c6e-6153-11e7-b249-000d3a1a72a9
spec:
  accessModes:
  - ReadWriteMany
  capacity:
storage: 1Gi
  claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: nfsdynpvc
namespace: 3z64o
resourceVersion: "60470"
uid: d63a35a5-6153-11e7-b249-000d3a1a72a9
  nfs:
path: /export/pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9
server: 172.30.206.205
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-provisioner-3z64o
status:
  phase: Released

```
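
If you prefer to script the same cleanup rather than edit by hand, `oc
patch` can remove the stale binding (a sketch, using the example PV name
above; removing `claimRef` is usually enough, and the controller should
then move the PV back to Available):

```
oc patch pv pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9 --type=json \
  -p '[{"op":"remove","path":"/spec/claimRef"}]'
```

Once the PV shows as Available, a new PVC can also be bound to it
explicitly by setting `spec.volumeName` to the PV's name (with matching
size and access modes), which keeps the dynamic provisioner from creating
a fresh volume.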




___

LOUIS P. SANTILLAN

Architect, OPENSHIFT & DEVOPS

Red Hat Consulting,  Container and PaaS Practice

lsant...@redhat.com   M: 3236334854

TRIED. TESTED. TRUSTED. 




On Mon, Oct 8, 2018 at 3:04 PM Carlos María Cornejo Crespo <
carlos.cornejo.cre...@gmail.com> wrote:

> Hi folks,
>
> I'm not able to manually reclaim a PV and would like to know what I'm
> doing wrong.
> My setup is OpenShift 3.9 with GlusterFS installed as part of the
> OpenShift installation.
>
> The inventory setup creates a storage class for gluster and also makes it
> the default one.
>
> As the default setup uses a reclaim policy of Delete, and I want to keep
> the PV when I delete the PVC, I created a new storage class as follows:
>
> # storage class
> apiVersion: storage.k8s.io/v1
> kind: StorageClass
> metadata:
>   annotations:
> storageclass.kubernetes.io/is-default-class: "false"
>   name: glusterfs-retain
> parameters:
>   resturl: http://myheketi-storage-glusterfs.domainblah.com
>   restuser: admin
>   secretName: heketi-storage-admin-secret
>   secretNamespace: glusterfs
> provisioner: kubernetes.io/glusterfs
> reclaimPolicy: Retain
>
> And if I make a deployment requesting a volume via a PVC, it works well
> and the PV gets bound as expected:
>
> # deployment
> - kind: DeploymentConfig
>   apiVersion: v1
>   ..
> spec:
>   spec:
>   volumeMounts:
>   - name: "jenkins-data"
> mountPath: "/var/lib/jenkins"
> volumes:
> - name: "jenkins-data"
>   persistentVolumeClaim:
> claimName: "jenkins-data"
>
> #pvc
> - kind: PersistentVolumeClaim
>   apiVersion: v1
>   metadata:
> name: "jenkins-data"
>   spec:
> accessModes:
> - ReadWriteOnce
> resources:
>   requests:
> storage: 30Gi
> storageClassName: glusterfs-retain
>
> Now, if I delete the PVC and try to reclaim that PV by creating a new
> deployment that refers to it, I get unexpected behaviour: a new PVC is
> created, but that generates a new PV with the same name, and the original
> PV stays Released and never becomes Available.
>
> How do I manually make it available? According to this, I need to
> manually clean up the data on the associated storage asset??? How am I
> supposed to do that if the volume has been dynamically provisioned by
> GlusterFS?? I'm pretty sure it must be much simpler than that.
>
> Any advice?
>
> Kind regards,
> Carlos M.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OKD v3.11.0 has been tagged and pushed to GitHub

2018-10-12 Thread David Conde
On the 4.0 changes, is the plan to provide the ability to upgrade from 3.11
to 4.0 or would a totally fresh install be required?

On Thu, Oct 11, 2018 at 4:55 PM Clayton Coleman  wrote:

> https://github.com/openshift/origin/releases/tag/v3.11.0 contains the
> release notes and latest binaries.
>
> The v3.11.0 tag on docker.io is up to date and will be a rolling tag (new
> fixes will be delivered there).
>
> Thanks to everyone for their hard work!
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Command to start/top a cluster gracefully

2018-10-12 Thread Marc Ledent

Thanks, Nick, for this!


That's what I suspected. I'll run some tests here to see whether simply
shutting all the hosts down breaks anything. ;)



Kind regards,

Marc


On 12/10/18 00:18, Nick Pilch wrote:


My team has found that, after we configure our nodes with ansible,
things connect correctly by themselves. So to shut down a cluster, we
just stop the nodes, and to start a cluster, we just start the nodes.
Occasionally we have to kick some services because they don't start in
the right order. This is with 1.3. We recently made the big upgrade to
3.9, but have not rolled that out in all our environments yet, so we
don't have much experience with it.



Now the app is a different story. Our app pods unfortunately have some
startup order dependencies, so we use a script with oc commands to start
up our app pods in the correct order. Ideally, your app pods could just
start up in any order and be able to handle it.
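
A minimal sketch of that kind of ordered startup script (the namespace and
dc names are hypothetical, not our actual ones, and it assumes a
DeploymentConfig status that reports readyReplicas):

#!/bin/bash
# Bring app components up in dependency order, waiting for each to be ready.
set -e
NS=myapp                                 # hypothetical namespace
for dc in database backend frontend; do  # hypothetical dc names, in start order
  oc scale dc/"$dc" --replicas=1 -n "$NS"
  # wait until the scaled pod reports ready before starting the next tier
  until [ "$(oc get dc/"$dc" -n "$NS" -o jsonpath='{.status.readyReplicas}')" = "1" ]; do
    sleep 5
  done
done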



If your app needs to do certain things when you shut it down, then you'll
need some custom automation for that, I would imagine.





Nick Pilch
Cloud Operations
O: 650.567.4560
M: 510.381.6777
E: nick.pi...@bluescape.com 
999 Skyway Rd, Suite 145, San Carlos, CA 94070

Join Bluescape Community 


Notice of Confidentiality: This message and any attachments are 
confidential. If you are not the intended recipient, please do not 
read or distribute. Alert the sender by reply email and delete this 
message immediately.



*From:* users-boun...@lists.openshift.redhat.com 
 on behalf of Aleksandar 
Lazic 

*Sent:* Wednesday, October 10, 2018 1:36:16 PM
*To:* Marc Ledent; users@lists.openshift.redhat.com
*Subject:* Re: Command to start/top a cluster gracefully
On 10.10.2018 at 11:22, Marc Ledent wrote:
> Hi all,
>
> Is there a command to stop/start an OpenShift cluster gracefully? The
> "oc cluster" commands only act on a local all-in-one cluster...

Do you mean something like this?

* Scale all dc/rc/ds to 0

* stop all node processes

* stop all master process

* stop all etc processes

* stop all docker processes

* shutdown all machines

I don't know an easier way; maybe there is a playbook in the ansible repo.
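
A rough shell sketch of that sequence, assuming an RPM-based 3.x install
with origin-* systemd units (unit names differ for containerized or
atomic-openshift installs, and daemonsets have no replica count, so they
would need their nodes drained or the ds deleted instead):

# on a master: scale workloads down in every project
for ns in $(oc get projects -o name | cut -d/ -f2); do
  oc scale dc --all --replicas=0 -n "$ns"
  oc scale rc --all --replicas=0 -n "$ns"
done
# on each node
systemctl stop origin-node docker
# on each master (or just origin-master on a single-master install)
systemctl stop origin-master-api origin-master-controllers etcd
# then power the machines off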

Regards

Aleks

> Thanks in advance,
> Marc
>
>
>






___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users