I noticed something and wanted to know whether it's a coincidence or a feature.
I have a stage in a pipeline with two jobs. The first job is configured to run on a locally running agent with a resource called "coordinator". The second job is configured to run on all agents with a resource called "worker". I have about 10 remote agents running on AWS instances that auto-register as workers when they spin up (they have the "worker" resource).

Once, when running the pipeline, the local agent was busy with another stage in another pipeline. The stage details screen showed all the jobs on the remote agents as *scheduled* and the local job as *pending*. Once the local agent was free and grabbed the local job, all the remote jobs switched to *building* as well.

So is it by design that no jobs in a stage start executing until all the jobs in the stage have been assigned an agent? In my case this worked out great, since otherwise the "workers" would probably have quit waiting for the coordinator if they had started right away.

thanks.
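For reference, my stage config looks roughly like this (pipeline, job, and script names here are illustrative placeholders, not my real ones; materials and other sections omitted):

```xml
<pipeline name="example-pipeline">
  <stage name="distributed-work">
    <jobs>
      <!-- first job: pinned to the single local agent via the "coordinator" resource -->
      <job name="coordinate">
        <resources>
          <resource>coordinator</resource>
        </resources>
        <tasks>
          <exec command="./start-coordinator.sh" />
        </tasks>
      </job>
      <!-- second job: runOnAllAgents fans it out to every agent with the "worker" resource -->
      <job name="work" runOnAllAgents="true">
        <resources>
          <resource>worker</resource>
        </resources>
        <tasks>
          <exec command="./run-worker.sh" />
        </tasks>
      </job>
    </jobs>
  </stage>
</pipeline>
```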
