Thanks for the replies, all. This was helpful information.

We currently have a k8s cluster set up to run our elastic agents, and we 
have a handful of static EC2 instances from which we are migrating things 
off and over to elastic agents. I hadn't considered running a "static" 
agent as a pod on the cluster as well.

The other possibility, from a response above, would be to have a second 
cluster profile, as opposed to trying to do this with an elastic profile 
in the same cluster profile.

We'll try one of these out.

Cheers

Doug

On Tuesday, September 3, 2019 at 3:13:48 PM UTC-4, Sheroy Marker wrote:
>
> Hi Doug, 
>
> What you've described fits the static agent pattern better. We could try 
> to coerce that sort of behavior into an elastic agent, but it doesn't 
> seem natural for an elastic agent to work that way. 
>
> I'd also like to point you to the "agent.replicaCount" setting in 
> values.yaml in the GoCD Helm chart, which lets you set the number of 
> static agents to provision during helm install or upgrade. This setting, 
> along with the other agent settings, is explained here - 
> https://github.com/helm/charts/tree/master/stable/gocd. 
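>
> For illustration, a minimal values.yaml override for that setting might 
> look like this (the key name comes from the stable/gocd chart; the count 
> of 2 and the release name are just placeholders):
>
>     # values.yaml -- run two static GoCD agents as pods in the cluster
>     agent:
>       replicaCount: 2
>
> applied with something like "helm upgrade --install gocd stable/gocd 
> -f values.yaml".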
>
> The static agents on K8s are still pods, provisioned during helm install. 
> You can then use resources to assign jobs to static agents, as described 
> here - 
> https://docs.gocd.org/current/configuration/managing_a_build_cloud.html. 
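>
> As a sketch, if you manage pipelines with the YAML config plugin, pinning 
> a job to agents that carry a given resource tag might look roughly like 
> this (all names here are hypothetical):
>
>     # gocd.yaml -- the job runs only on agents tagged "exclusive-db"
>     pipelines:
>       deploy:
>         group: examples
>         materials:
>           repo:
>             git: https://example.com/repo.git
>         stages:
>           - run:
>               jobs:
>                 migrate:
>                   resources:
>                     - exclusive-db
>                   tasks:
>                     - exec:
>                         command: ./migrate.sh
>
> You'd then tag the one static agent with the same "exclusive-db" resource 
> on the Agents page, so only it picks up that job.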
>
> The only reason I can think of for going the other way, queuing up 
> elastic agent requests while one is underway, is to reclaim agent pod 
> resources when idle. 
>
> Regards, 
> Sheroy
>
> On Tue, Sep 3, 2019 at 10:05 AM Aravind SV <[email protected]> wrote:
>
>> Hello Doug,
>>
>> Yes, as Jason said, a static agent might work. However, if it doesn't, 
>> and given that your question was about elastic agents, the answer would 
>> be "it depends on the elastic agent plugin". The plugin interface 
>> definitely has enough power to do this, since it controls almost all 
>> aspects of job assignment to agents.
>>
>> For the Kubernetes elastic agent plugin, there is a property called 
>> "Maximum pending pods" which seems like it will do what you want. I 
>> think you should try setting that to 1 to see if it does. What might be 
>> confusing is that this is at the cluster profile level, so it affects 
>> the whole cluster, not a single elastic agent profile. In the worst 
>> case, you might have to duplicate the cluster profile itself and 
>> associate only that job with an elastic profile in *that cluster*.
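>>
>> As a sketch of that worst case, if you define pipelines with the YAML 
>> config plugin: the duplicated cluster profile gets "Maximum pending 
>> pods" set to 1, and the one job references an elastic profile defined 
>> under it (the profile id below is hypothetical):
>>
>>     # Pipeline fragment -- the job runs via an elastic profile that
>>     # lives in the duplicated cluster profile
>>     jobs:
>>       exclusive-job:
>>         elastic_profile_id: single-pod-profile
>>         tasks:
>>           - exec:
>>               command: ./run-exclusive-task.sh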
>>
>> There's a similar property called "Maximum docker containers to run at 
>> any given point in time" in the docker elastic agents' configuration 
>> <https://github.com/gocd-contrib/docker-elastic-agents-plugin/blob/master/INSTALL.md#configuration>. 
>> It's also at the cluster level.
>>
>> If that property doesn't work for you, then you might need to look deeper 
>> into the Kubernetes properties you mentioned.
>>
>> Hope that helps,
>> Aravind
>>
>> PS: If the terms cluster profile, elastic profile, etc. are confusing, 
>> you're not the only one! There is some work being done to improve the 
>> experience around this. For instance: 
>> https://github.com/gocd/gocd/issues/6731. If you (or anyone reading) are 
>> interested in helping with opinions and want to be guinea pigs for some 
>> quick research, I'd *love* to hear from you.
>>
>>
>> On Tue, Sep 3, 2019 at 11:21 AM Jason Smyth <[email protected]> wrote:
>>
>>> We haven't started leveraging elastic agents yet, so I can't comment on 
>>> whether what you are looking to do is possible, but if the restriction 
>>> truly is to a single instance, why not use a single static agent 
>>> instead of an elastic agent profile?
>>>
>>> On Tuesday, 3 September 2019 09:43:55 UTC-4, Doug Lethin wrote:
>>>>
>>>> I have a use case where I would like to limit the number of parallel 
>>>> jobs running on a particular elastic agent profile in our Kubernetes 
>>>> cluster to just one at a time. If GoCD is currently executing a job 
>>>> with this elastic agent profile, I would want all pending jobs 
>>>> associated with the elastic agent to wait until the currently running 
>>>> agent finishes. Is this currently possible? If so, it's not 
>>>> immediately obvious how to do it.
>>>>  
>>>> I thought maybe using Kubernetes ResourceQuotas might solve the 
>>>> problem. I haven't tried that yet, but my guess is it wouldn't work, 
>>>> causing the request to spin up the next agent to fail rather than 
>>>> wait.
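>>>>
>>>> For reference, the kind of quota I was imagining (the namespace and 
>>>> names are illustrative):
>>>>
>>>>     # ResourceQuota capping the agent namespace at one pod; my worry
>>>>     # is the next agent pod gets rejected outright instead of queued
>>>>     apiVersion: v1
>>>>     kind: ResourceQuota
>>>>     metadata:
>>>>       name: one-agent-at-a-time
>>>>       namespace: gocd-agents
>>>>     spec:
>>>>       hard:
>>>>         pods: "1"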
>>>>
>>>> The use case I have is that the agent has a dependency on a fixed 
>>>> resource that can't be shared.
>>>>
>>>> Thanks.
>>>>
