Sorry, the commands are as follows (I missed the --scopes on the second one):

$ kubectl create quota best-effort-not-terminating --hard=pods=5
--scopes=NotTerminating,BestEffort
$ kubectl create quota not-best-effort-not-terminating
--hard=requests.cpu=5,requests.memory=10Gi,limits.cpu=10,limits.memory=20Gi
--scopes=NotTerminating,NotBestEffort
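
For reference, the two commands above should produce ResourceQuota objects
roughly like the following (a minimal sketch of the equivalent YAML):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: best-effort-not-terminating
spec:
  hard:
    pods: "5"
  scopes:
  - NotTerminating
  - BestEffort
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: not-best-effort-not-terminating
spec:
  hard:
    requests.cpu: "5"
    requests.memory: 10Gi
    limits.cpu: "10"
    limits.memory: 20Gi
  scopes:
  - NotTerminating
  - NotBestEffort
```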

On Tue, Oct 25, 2016 at 5:25 PM, Derek Carr <[email protected]> wrote:

> If you only want to quota pods that have a more permanent footprint on the
> node, then create a quota that only matches on the NotTerminating scope.
>
> If you want to allow usage of slack resources (i.e. run BestEffort pods)
> while still controlling everything else with quota, create 2 quotas.
>
> $ kubectl create quota best-effort-not-terminating --hard=pods=5
> --scopes=NotTerminating,BestEffort
> $ kubectl create quota not-best-effort-not-terminating
> --hard=requests.cpu=5,requests.memory=10Gi,limits.cpu=10,limits.memory=20Gi
>
> So in this example:
>
> 1. the user is able to create 5 long-running pods that make no resource
> request (i.e. no cpu or memory specified)
> 2. the user is able to request up to 5 cpu cores and 10Gi memory for
> scheduling purposes, and the node will work to ensure that amount is
> available
> 3. the user is able to burst up to 10 cpu cores and 20Gi memory based on
> node-local conditions
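>
> A long-running pod that counts against the second quota might look like
> this (a minimal sketch; the pod name, image, and amounts are illustrative):
>
> ```yaml
> apiVersion: v1
> kind: Pod
> metadata:
>   name: example-burstable
> spec:
>   # restartPolicy defaults to Always, so this pod is NotTerminating;
>   # it sets requests, so it is also NotBestEffort
>   containers:
>   - name: app
>     image: nginx
>     resources:
>       requests:
>         cpu: "1"
>         memory: 2Gi
>       limits:
>         cpu: "2"
>         memory: 4Gi
> ```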
>
> Thanks,
> Derek
>
> On Tue, Oct 25, 2016 at 5:14 PM, Srinivas Naga Kotaru (skotaru) <
> [email protected]> wrote:
>
>> Derek/Clayton
>>
>>
>>
>> I saw this link yesterday. It was really good and helpful, but I didn’t
>> understand the last advanced section. Let me spend some time on it again.
>>
>>
>>
>> @Clayton: Do we need to create separate quota policies for both
>> terminating and non-terminating pods, or would a single policy for
>> non-terminating pods be enough? I want to keep it simple, but at the same
>> time I don’t want short-lived non-terminating pods to create any issues
>> for regular working pods.
>>
>>
>>
>> --
>>
>> *Srinivas Kotaru*
>>
>>
>>
>> *From: *Derek Carr <[email protected]>
>> *Date: *Tuesday, October 25, 2016 at 1:09 PM
>> *To: *"[email protected]" <[email protected]>
>> *Cc: *Srinivas Naga Kotaru <[email protected]>, dev <
>> [email protected]>
>> *Subject: *Re: Quota Policies
>>
>>
>>
>> You may find this document useful:
>> http://kubernetes.io/docs/admin/resourcequota/walkthrough/
>>
>>
>> >BestEffort or NotBestEffort are used to explain the concept, or can a
>> >Pod definition have these words?
>>
>> This refers to the quality of service for a pod.  If no container in a pod
>> makes a request/limit for compute resources, the pod is BestEffort.  If any
>> container makes a request for any resource, it's NotBestEffort.
>>
>> You can apply a quota to control the number of BestEffort pods you can
>> create separately from the number of NotBestEffort pods.
>>
>> See step 5 in the above linked example for a walkthrough.
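>>
>> For example, the difference is just whether any container in the pod sets
>> resources (a minimal sketch; pod names and images are illustrative):
>>
>> ```yaml
>> # BestEffort: no requests or limits anywhere in the pod
>> apiVersion: v1
>> kind: Pod
>> metadata:
>>   name: best-effort-pod
>> spec:
>>   containers:
>>   - name: app
>>     image: nginx
>> ---
>> # NotBestEffort: at least one request or limit is set
>> apiVersion: v1
>> kind: Pod
>> metadata:
>>   name: not-best-effort-pod
>> spec:
>>   containers:
>>   - name: app
>>     image: nginx
>>     resources:
>>       requests:
>>         cpu: 100m
>> ```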
>>
>> Thanks,
>>
>> Derek
>>
>>
>> On Tue, Oct 25, 2016 at 4:02 PM, Clayton Coleman <[email protected]>
>> wrote:
>>
>> On Tue, Oct 25, 2016 at 3:55 PM, Srinivas Naga Kotaru (skotaru) <
>> [email protected]> wrote:
>>
>> Hi
>>
>>
>>
>> I’m trying to frame a policy for best usage of compute resources in our
>> environment. I started reading the documentation on this topic. Although
>> the documentation is pretty limited on working examples, I now have a
>> better understanding of quota and LimitRange objects.
>>
>>
>>
>> We are planning to enforce quota and LimitRange on every project as part
>> of project provisioning. Clients can increase these limits from the modify
>> screen in our system and pay accordingly. The goal is highly efficient
>> cluster resource usage with minimal client disturbance.
>>
>>
>>
>> I have a few questions around implementation:
>>
>>
>>
>> Can we exclude short-lived pods, like build and deploy pods, from quota
>> restrictions?
>>
>>
>>
>> There are two quotas - one for terminating pods (pods that are guaranteed
>> to finish in a certain time period) and one for non-terminating pods.
>>
>>
>>
>> Are quotas enforced only on running pods, or also on dead, pending, or
>> succeeded pods?
>>
>>
>>
>> Once a pod terminates (failed, succeeded) it is not counted for quota.
>> Pods that are pending deletion are still counted for quota.
>>
>>
>>
>> What is the meaning of scopes: Terminating or scopes: NotTerminating in a
>> quota definition? It is a bit confusing to understand.
>>
>>
>>
>> Terminating means "will finish in bounded time", i.e. the pod does not have
>> restartPolicy: Always and also has activeDeadlineSeconds set.
>> NotTerminating is everything else.
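>>
>> Concretely, a pod that matches the Terminating scope might look like this
>> (a minimal sketch; the pod name, image, and command are illustrative):
>>
>> ```yaml
>> apiVersion: v1
>> kind: Pod
>> metadata:
>>   name: short-lived-job
>> spec:
>>   activeDeadlineSeconds: 600   # bounded lifetime -> Terminating scope
>>   restartPolicy: Never
>>   containers:
>>   - name: task
>>     image: busybox
>>     command: ["sh", "-c", "echo done"]
>> ```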
>>
>>
>>
>> BestEffort or NotBestEffort are used to explain the concept, or can a Pod
>> definition have these words?
>>
>>
>>
>> We don't have quota per QoS class yet today, but it would be useful.
>>
>>
>>
>>
>>
>> Any good documentation with working examples would help.
>>
>>
>>
>> I thought Derek had some good write-ups on this.
>>
>>
>>
>>
>>
>> Srinivas Kotaru
>>
>>
>>
>>
>>
>> --
>>
>> *Srinivas Kotaru*
>>
>>
>> _______________________________________________
>> dev mailing list
>> [email protected]
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>
>