Re: Low Disk Watermark

2016-08-31 Thread Luke Meyer
Looks like you're using your root partition for Docker volume storage (and
thus Elasticsearch storage). That is the default configuration, but not a
recommended one; we recommend dedicating storage specifically to Docker:
https://docs.openshift.org/latest/install_config/install/prerequisites.html#configuring-docker-storage
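On CentOS/RHEL that doc boils down to pointing docker-storage-setup at a spare block device. A rough sketch (the device name /dev/vdb is a placeholder for whatever disk your nodes actually have):

```shell
# /etc/sysconfig/docker-storage-setup
# Use a dedicated block device for container storage so it stops
# competing with / for space. /dev/vdb is a placeholder -- substitute
# your node's actual spare disk.
DEVS=/dev/vdb
VG=docker-vg
```

After writing that file, run docker-storage-setup and restart docker on the node.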

Also, ES data will keep getting blown away if you don't give it a persistent
volume, but hopefully that was already evident to you.
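Once you have a PVC, attaching it is something along these lines; the dc and claim names below are placeholders for whatever your logging deployment actually created (check with oc get dc,pvc -n logging, and confirm the volume name with oc set volume dc/<name>):

```shell
# Placeholder names: substitute your actual Elasticsearch dc and PVC.
# --overwrite replaces the existing (ephemeral) volume of the same name
# with the persistent claim.
oc set volume dc/logging-es-example --add --overwrite \
  --name=elasticsearch-storage \
  --type=persistentVolumeClaim \
  --claim-name=logging-es-pvc
```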

On Mon, Aug 29, 2016 at 9:55 PM, Frank Liauw <fr...@vsee.com> wrote:

> Hi All,
>
> My Origin cluster is pretty new, and I happened to spot the following log
> entry from Elasticsearch in Kibana (I'm using OpenShift's logging stack):
>
> [2016-08-30 01:44:25,997][INFO ][cluster.routing.allocation.decider]
> [Quicksilver] low disk watermark [15%] exceeded on 
> [t2l6Oz8uT-WS8Fa7S7jzfQ][Quicksilver]
> free: 1.5gb[11.4%], replicas will not be assigned to this node
>
> df on the node shows the following:
>
> /dev/mapper/centos_node3-root   14G   13G  1.6G  89% /
> ..
> tmpfs  7.8G  4.0K  7.8G   1%  /var/lib/origin/openshift.local.volumes/pods/8a2a40e3-5f83-11e6-8b2f-0231a929d7bf/volumes/kubernetes.io~secret/builder-dockercfg-3z4qk-push
> tmpfs  7.8G  4.0K  7.8G   1%  /var/lib/origin/openshift.local.volumes/pods/8a2a40e3-5f83-11e6-8b2f-0231a929d7bf/volumes/kubernetes.io~secret/sshsecret-source
> tmpfs  7.8G   12K  7.8G   1%  /var/lib/origin/openshift.local.volumes/pods/8a2a40e3-5f83-11e6-8b2f-0231a929d7bf/volumes/kubernetes.io~secret/builder-token-znk7k
> tmpfs  7.8G  4.0K  7.8G   1%
> ..
>
> This appears to be the case on one of my other nodes as well (with a
> slightly different tmpfs size of 5.8G).
>
> Is this normal?
>
> Frank
> Systems Engineer
>
> VSee: fr...@vsee.com <http://vsee.com/u/tmd4RB> | Cell: +65 9338 0035
>
> Join me on VSee for Free <http://vsee.com/u/tmd4RB>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Low Disk Watermark

2016-08-29 Thread Frank Liauw
Hi All,

My Origin cluster is pretty new, and I happened to spot the following log
entry from Elasticsearch in Kibana (I'm using OpenShift's logging stack):

[2016-08-30 01:44:25,997][INFO ][cluster.routing.allocation.decider]
[Quicksilver] low disk watermark [15%] exceeded on
[t2l6Oz8uT-WS8Fa7S7jzfQ][Quicksilver] free: 1.5gb[11.4%], replicas will not
be assigned to this node
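(For anyone else hitting this: the arithmetic behind that message is simple. Elasticsearch compares the node's free-space percentage against the low watermark, 15% by default; 1.5 GB free on a 14 GB root filesystem is about 11% free, so replica shards are withheld from the node. A rough sketch of the check, not Elasticsearch's actual code:)

```python
def below_low_watermark(free_bytes, total_bytes, low_watermark_pct=15.0):
    """Return True if free space has dropped below the low watermark,
    i.e. Elasticsearch would stop assigning replica shards to the node."""
    free_pct = 100.0 * free_bytes / total_bytes
    return free_pct < low_watermark_pct

# The node above: roughly 1.5 GB free on a 14 GB root partition.
gb = 1024 ** 3
print(below_low_watermark(1.5 * gb, 14 * gb))  # True: replicas withheld
```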

df on the node shows the following:

/dev/mapper/centos_node3-root   14G   13G  1.6G  89% /
..
tmpfs  7.8G  4.0K  7.8G   1%  /var/lib/origin/openshift.local.volumes/pods/8a2a40e3-5f83-11e6-8b2f-0231a929d7bf/volumes/kubernetes.io~secret/builder-dockercfg-3z4qk-push
tmpfs  7.8G  4.0K  7.8G   1%  /var/lib/origin/openshift.local.volumes/pods/8a2a40e3-5f83-11e6-8b2f-0231a929d7bf/volumes/kubernetes.io~secret/sshsecret-source
tmpfs  7.8G   12K  7.8G   1%  /var/lib/origin/openshift.local.volumes/pods/8a2a40e3-5f83-11e6-8b2f-0231a929d7bf/volumes/kubernetes.io~secret/builder-token-znk7k
tmpfs  7.8G  4.0K  7.8G   1%
..

This appears to be the case on one of my other nodes as well (with a
slightly different tmpfs size of 5.8G).

Is this normal?

Frank
Systems Engineer

VSee: fr...@vsee.com <http://vsee.com/u/tmd4RB> | Cell: +65 9338 0035

Join me on VSee for Free <http://vsee.com/u/tmd4RB>