Re: Resource limits - an "initial delay" or "max duration" would be really handy

2018-05-05 Thread Clayton Coleman
Resource limits are fixed because we need to make a good scheduling
decision for the initial burst you’re describing (the extremely high cpu at
the beginning).  Some applications might also need similar cpu on restart.
Your workload needs to “burst”, so setting your cpu limit to your startup
peak and your cpu request to a reasonable percentage of the long term use
is the best way to ensure the scheduler can put you on a node that can
accommodate you.  No matter what, if you want the cpu at the peak we have
to schedule you somewhere you can get the peak cpu.
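
Concretely, that advice maps onto a resources stanza along these lines (the names and numbers here are illustrative, not from the thread):

```yaml
# Sketch only: request roughly the long-term CPU use,
# set the limit at the start-up peak.
apiVersion: v1
kind: Pod
metadata:
  name: bursty-app            # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:latest # placeholder image
    resources:
      requests:
        cpu: 100m             # ~steady-state use; what the scheduler reserves
        memory: 256Mi
      limits:
        cpu: "1"              # start-up peak; throttling kicks in above this
        memory: 512Mi
```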

The longer term approach that makes this less annoying is the feedback loop
between actual used resources on a node for running workloads and requested
workloads, and the eviction and descheduling agents (which attempt to
rebalance nodes by shuffling workloads around).

For the specific case of an app where you know for sure the processes will
use a fraction of the initial limit, you can always voluntarily limit your
own cpu at some time after startup.  That could be a side agent that puts a
more restrictive cgroup limit in place on the container after it has been
up a few minutes.
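
A minimal sketch of such a side agent, assuming the cgroup v1 “cpu” controller (as on OpenShift 3.x nodes); the cgroup path and numbers are hypothetical, not taken from the thread:

```shell
#!/bin/sh
# Sketch of a "side agent" that tightens a container's CPU quota after
# start-up.  CGROUP_DIR is a hypothetical path; adjust for the real node.

STARTUP_GRACE="${STARTUP_GRACE:-120}"         # seconds to keep the generous limit
TARGET_MILLICORES="${TARGET_MILLICORES:-100}" # steady-state cap to apply afterwards
CGROUP_DIR="${CGROUP_DIR:-/sys/fs/cgroup/cpu/mycontainer}"

PERIOD=100000                                 # default cpu.cfs_period_us (100 ms)
QUOTA=$((TARGET_MILLICORES * PERIOD / 1000))  # e.g. 100m -> 10000 us per period

echo "after ${STARTUP_GRACE}s: write ${QUOTA} to ${CGROUP_DIR}/cpu.cfs_quota_us"
# Uncomment to actually apply (requires root on the node):
# sleep "$STARTUP_GRACE" && echo "$QUOTA" > "$CGROUP_DIR/cpu.cfs_quota_us"
```

The quota is simply millicores scaled to the CFS period, so 100m becomes 10000 microseconds of CPU time per 100 ms period.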

On May 5, 2018, at 9:57 AM, Alan Christie wrote:

I like the idea of placing resource limits on applications running in the
cluster but I wonder if there’s any advice for defining CPU “limits” that
are more tolerant of application start-up behaviour? Something like the
initial delay on a readiness or liveness probe, for example? It seems like
a rather obvious property of any limit; the ones available are just too
“hard”.
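
For reference, the probe-style “initial delay” being referred to looks like this (path, port, and timings are hypothetical):

```yaml
readinessProbe:
  httpGet:
    path: /healthz          # hypothetical endpoint
    port: 8080
  initialDelaySeconds: 60   # give initialisation a grace period
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 120  # don't kill the pod while it is still starting
```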

One example, and I’m sure this must be common to many applications, is an
application that consumes a significant proportion of the CPU during
initialisation but then, in its steady-state, falls back to a much lower
and non-bursty behaviour. I’ve attached a screenshot of one such
application. You can see that for a very short period of time, exclusive to
initialisation, it consumes many more cycles than the post-initialisation
stage.


[Attached screenshot: during initialisation CPU demand tends to fall and memory use tends to rise.]

I suspect that what I’m seeing during this time is OpenShift “throttling”
my app (understandable, given the parameters available); it then fails to
get through initialisation fast enough to satisfy the readiness/liveness
probe and gets restarted. Again and again.

I cannot use any sensible steady-state limit (i.e. one set just above the
normal steady-state behaviour) without the application constantly being
throttled, and potentially restarted, during initialisation.

In this example I’d like to set a perfectly reasonable CPU limit of
something like 5m (because, after the first minute of execution, it should
never deviate from the steady-state level). Sadly I cannot set such a low
level because OpenShift will not let the application start (for reasons
already explained), as its initial but very brief CPU load exceeds any
“reasonable” level I set.

I can get around this by defining an abnormally large CPU limit but, to me,
using an “abnormal” level sort of defeats the purpose of a limit.

*Aren’t resource limits missing one vital parameter: a “duration” or
“initial delay”?*

Maybe this is beyond the resources feature and has to be deferred to
something like Prometheus? But can Prometheus take actions rather than just
monitor and alert? And even if it could, employing Prometheus may seem to
some like “using a sledgehammer to crack a nut”.

Any advice on permitting bursty applications within the cluster but also
using limits would be most appreciated.

Alan Christie

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



Re: Version settings for installing 3.9

2018-05-05 Thread Tim Dudgeon

Larry,

that's one of the combinations I tried.
It seems that the problem is that the repos don't contain v3.9?
This is what I see on a node that I'm trying to install to:


$ yum search origin
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: centos.mirror.far.fi
 * extras: centos.mirror.far.fi
 * updates: centos.mirror.far.fi
=== N/S matched: origin ===
centos-release-openshift-origin13.noarch : Yum configuration for OpenShift Origin 1.3 packages
centos-release-openshift-origin14.noarch : Yum configuration for OpenShift Origin 1.4 packages
centos-release-openshift-origin15.noarch : Yum configuration for OpenShift Origin 1.5 packages
centos-release-openshift-origin36.noarch : Yum configuration for OpenShift Origin 3.6 packages
centos-release-openshift-origin37.noarch : Yum configuration for OpenShift Origin 3.7 packages
google-noto-sans-canadian-aboriginal-fonts.noarch : Sans Canadian Aboriginal font
centos-release-openshift-origin.noarch : Common release file to establish shared metadata for CentOS PaaS SIG

ksh.x86_64 : The Original ATT Korn Shell
texlive-tetex.noarch : scripts and files originally written for or included in teTeX

  Name and summary matches only, use "search all" for everything.

3.9 is not there.
I thought 3.9 was released several weeks ago?

I even tried adding these to the inventory file, but to no effect:

openshift_enable_origin_repo=true
openshift_repos_enable_testing=true

Tim


On 04/05/18 15:43, Brigman, Larry wrote:

That top variable (openshift_release=v3.9) should be enough if you have the
repos enabled. The others aren't required and can cause the installer to
fail to find things.
If you are running the openshift-ansible installer make sure you are on branch 
'release-3.9'
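
Put another way, a minimal inventory fragment following that advice might carry only (group name and values illustrative; everything else left to the release-3.9 branch defaults):

```ini
# openshift-ansible inventory fragment (illustrative)
[OSEv3:vars]
openshift_deployment_type=origin
openshift_release=v3.9
# omit openshift_image_tag / openshift_pkg_version and let the
# release-3.9 branch of openshift-ansible pick matching defaults
```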



From: users-boun...@lists.openshift.redhat.com 
[users-boun...@lists.openshift.redhat.com] on behalf of Tim Dudgeon 
[tdudgeon...@gmail.com]
Sent: Friday, May 04, 2018 3:26 AM
To: users@lists.openshift.redhat.com
Subject: Version settings for installing 3.9

What are the magical set of properties needed to run an ansible install
of Origin 3.9 on centos nodes?

I've tried various combinations around these but can't get anything to work:

openshift_deployment_type=origin
openshift_release=v3.9
openshift_image_tag=v3.9.0
openshift_pkg_version=-3.9.0

I'm continually getting:

1. Hosts:test39-master.openstacklocal
   Play: Determine openshift_version to configure on first master
   Task: openshift_version : fail
   Message:  Package 'origin-3.9*' not found

Surely if you are working from the release-3.9 branch of
openshift-ansible then you should not need to set any of these versions;
you should get the latest 3.9 images and RPMs?

Tim

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users

