Re: Pod persistence without replication controller

2018-01-09 Thread Joel Pearson
You could use a StatefulSet if you want a consistent hostname; it would
also ensure that there is always one running.
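A minimal sketch of what that could look like (all names, the image, and the storage size are hypothetical placeholders, not from this thread):

```yaml
# Hypothetical single-replica StatefulSet for a database pod.
# apiVersion is apps/v1beta1 for the OpenShift 3.x era; use apps/v1 on newer clusters.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db      # headless Service providing the stable hostname (db-0)
  replicas: 1          # exactly one pod; the controller recreates it after a node reboot
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: example/db:latest   # placeholder image
        volumeMounts:
        - name: data
          mountPath: /var/lib/db
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
```

Unlike a standalone pod, the pod here is recreated by the StatefulSet controller after a node reboot, rather than depending on the kubelet restarting it.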
On Wed, 10 Jan 2018 at 3:49 am, Feld, Michael (IMS) 
wrote:

> Does anyone know why a standalone pod (without a replication controller)
> sometimes persists through a host/node reboot, but not always (without
> evacuating first)? We have a database pod that we cannot risk scaling, and
> want to ensure that it's always running.
>
> --
>
> Information in this e-mail may be confidential. It is intended only for
> the addressee(s) identified above. If you are not the addressee(s), or an
> employee or agent of the addressee(s), please note that any dissemination,
> distribution, or copying of this communication is strictly prohibited. If
> you have received this e-mail in error, please notify the sender of the
> error.
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>


Re: Issues with logging and metrics on Origin 3.7

2018-01-09 Thread Eric Wolinetz
On Mon, Jan 8, 2018 at 12:04 PM, Tim Dudgeon  wrote:

> Ah, so that makes more sense.
>
> So can I define the persistence properties (e.g. using NFS) in the
> inventory file but specify 'openshift_metrics_install_metrics=false', then
> run the byo/config.yml playbook so that it creates the PVs but does not
> deploy metrics? Then I can later run
> byo/openshift-cluster/openshift-metrics.yml to actually deploy metrics.
>

Correct!
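For reference, the confirmed two-stage flow can be sketched as follows (the `<inventory>` path is a placeholder; the playbook paths are the ones quoted above):

```ini
# Stage 1: the cluster install creates the PVs but skips deploying metrics.
# Run: ansible-playbook -i <inventory> openshift-ansible/playbooks/byo/config.yml
openshift_metrics_install_metrics=false
openshift_metrics_storage_kind=nfs

# Stage 2: flip the flag and run only the metrics playbook.
# Run: ansible-playbook -i <inventory> \
#        openshift-ansible/playbooks/byo/openshift-cluster/openshift-metrics.yml
openshift_metrics_install_metrics=true
```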


> The reason I'm doing this in 2 stages is that I sometimes hit 'Unable to
> allocate memory' problems when trying to deploy everything with
> byo/config.yml (possibly due to the 'forks' setting in ansible.cfg).
>
>
>
> On 08/01/18 17:49, Eric Wolinetz wrote:
>
> I think the issue you're seeing stems from the fact that the logging and
> metrics playbooks do not create their own PVs. That is handled by the
> cluster install playbook.
> The logging and metrics playbooks only create the PVCs that their objects
> may require (unless ephemeral storage is configured).
>
> I admit the naming of the variables makes that confusing; however, it is
> described in our docs, umbrella'd under the advanced install section,
> which uses the cluster playbook:
> https://docs.openshift.com/container-platform/3.7/install_config/install/advanced_install.html#advanced-install-cluster-metrics
>
> On Mon, Jan 8, 2018 at 11:22 AM, Tim Dudgeon 
> wrote:
>
>> On 08/01/18 16:51, Luke Meyer wrote:
>>
>>
>>
>> On Thu, Jan 4, 2018 at 10:39 AM, Tim Dudgeon 
>> wrote:
>>
>>> I'm hitting a number of issues with installing logging and metrics on
>>> Origin 3.7.
>>> This is using Centos7 hosts, the release-3.7 branch of openshift-ansible
>>> and NFS for persistent storage.
>>>
>>> I first do a minimal deploy with logging and metrics turned off.
>>> This goes fine. On the NFS server I see various volumes exported under
>>> /exports for logging, metrics, and prometheus, even though these are not
>>> deployed, but that's fine; they are there if they become needed.
>>> As expected, there are no PVs related to metrics and logging.
>>>
>>> So I try to install metrics. I add this to the inventory file:
>>>
>>> openshift_metrics_install_metrics=true
>>> openshift_metrics_storage_kind=nfs
>>> openshift_metrics_storage_access_modes=['ReadWriteOnce']
>>> openshift_metrics_storage_nfs_directory=/exports
>>> openshift_metrics_storage_nfs_options='*(rw,root_squash)'
>>> openshift_metrics_storage_volume_name=metrics
>>> openshift_metrics_storage_volume_size=10Gi
>>> openshift_metrics_storage_labels={'storage': 'metrics'}
>>>
>>> and run:
>>>
>>> ansible-playbook openshift-ansible/playbooks/byo/openshift-cluster/openshift-metrics.yml
>>>
>>> All seems to install OK, but metrics can't start, and it turns out that
>>> no PV is created, so the PVC needed by Cassandra can't be satisfied.
>>> So I manually create the PV using this definition:
>>>
>>> apiVersion: v1
>>> kind: PersistentVolume
>>> metadata:
>>>   name: metrics-pv
>>>   labels:
>>> storage: metrics
>>> spec:
>>>   capacity:
>>> storage: 10Gi
>>>   accessModes:
>>> - ReadWriteOnce
>>>   persistentVolumeReclaimPolicy: Recycle
>>>   nfs:
>>> path: /exports/metrics
>>> server: nfsserver
>>>
>>> Now the PVC is satisfied and metrics can be started (though pods may
>>> need to be bounced because they have timed out).
>>>
>>> ISSUE 1: why does the metrics PV not get created?
>>>
>>>
>>> So now on to trying to install logging. The approach is similar. Add
>>> this to the inventory file:
>>>
>>> openshift_logging_install_logging=true
>>> openshift_logging_storage_kind=nfs
>>> openshift_logging_storage_access_modes=['ReadWriteOnce']
>>> openshift_logging_storage_nfs_directory=/exports
>>> openshift_logging_storage_nfs_options='*(rw,root_squash)'
>>> openshift_logging_storage_volume_name=logging
>>> openshift_logging_storage_volume_size=10Gi
>>> openshift_logging_storage_labels={'storage': 'logging'}
>>>
>>> and run:
>>> ansible-playbook openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml
>>>
>>> Logging installs fine, and is running fine. Kibana shows logs.
>>> But looking at what has been installed, there are no PVs or PVCs for
>>> logging. It seems it has ignored the instructions to use NFS and has
>>> deployed using ephemeral storage.
>>>
>>> ISSUE 2: why do the persistence definitions get ignored?
>>>
>>
>> I'm not entirely sure that under kind=nfs it's *supposed* to create a
>> PVC. Might just directly mount the volume.
>>
>> One thing to check: did you set up a host in the [nfs] group in your
>> inventory?
>>
>> Yes, there is a nfs server, and its working fine (e.g. for the docker
>> registry)
>>
>>
>>
>>>
>>> And finally, looking at the metrics and logging images on Docker Hub
>>> there are none with
>>> v3.7.0 or v3.7 tags. The only tag related to 3.7 is v3.7.0-rc.0. For
>>> example look here:
>>> 
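On ISSUE 2 above, one possible explanation (an assumption, not confirmed in this thread): the 3.7 logging role sizes the Elasticsearch PVC from `openshift_logging_es_pvc_size` rather than from the `openshift_logging_storage_*` variables, and falls back to ephemeral storage when that is unset. A hedged inventory sketch:

```ini
# Hypothetical fix: explicitly request an Elasticsearch PVC so logging does
# not fall back to ephemeral storage (variable names as in openshift-ansible 3.7).
openshift_logging_es_pvc_size=10Gi
# Keep dynamic provisioning off so the claim can bind to a pre-created NFS PV:
openshift_logging_es_pvc_dynamic=false
```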

cloud provider problems

2018-01-09 Thread Tim Dudgeon
I'm having problems setting up OpenStack as a cloud provider. In the
Ansible inventory file I have this, along with other parameters defining
the cloud provider:


openshift_cloudprovider_kind=openstack

When this is present OpenShift fails to deploy, and I get this error on
the nodes, as reported by "journalctl -xe":


kubelet_node_status.go:106] Unable to register node "orndev-master-000" 
with API server: nodes "orndev-master-000" is forbidden: node 10.0.0.14 
cannot modify node orndev-master-000


"orndev-master-000" is the resolvable hostname of the node and 10.0.0.14 
is its IP address.


Any suggestions as to what the "Unable to register node" error is about?
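One hedged guess (an assumption based on how Kubernetes cloud provider integrations name nodes, not something confirmed in this thread): with a cloud provider enabled, the kubelet may identify itself under the name the cloud reports for the instance (apparently the IP, 10.0.0.14) while trying to register the configured hostname, so the API server's node authorizer rejects the update as a cross-node modification. Pinning the node name in the inventory may help:

```ini
# Hypothetical inventory entry: force the node name OpenShift uses to match
# the resolvable hostname, e.g. on the host's line in the [nodes] group:
orndev-master-000 openshift_hostname=orndev-master-000 openshift_public_hostname=orndev-master-000
```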



Pod persistence without replication controller

2018-01-09 Thread Feld, Michael (IMS)
Does anyone know why a standalone pod (without a replication controller)
sometimes persists through a host/node reboot, but not always (without
evacuating first)? We have a database pod that we cannot risk scaling, and want
to ensure that it's always running.


