Using Aravind's method, I've managed to improve the build pipeline:
# Our job invokes a wrapper batch file that splits the lists of inputs and 
extracts the parameters for its run index (e.g. PLATFORM_LIST="x64 x64 ARM 
ARM", CONFIGURATION_LIST="Debug Release Debug Release", run_instances=4). It 
then invokes the actual build script
# Upload any build artifacts from all run instances to the runInstance-1 job 
using the REST API. This means that the output artifacts we expect from each 
stage are defined within the code we're building - this is probably a 
positive change
# Downstream jobs can fetch artifacts from the runInstance-1 job using 
whatever filter is convenient

* Expected artifacts are checked into our repository as part of the build 
script. Even if this changes over time, we can still build earlier versions
* Duplication of configuration and build logic is minimized
* Adding or removing configurations is straightforward and only requires 
changes to the single build job
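As an illustration of the first point — the expected-artifact list living in the repository next to the build script — it could be as simple as a checked-in table like this (the paths and structure here are hypothetical, not our actual layout):

```ruby
# Hypothetical sketch: expected artifacts declared alongside the build
# script, so each revision of the repository describes its own outputs.
EXPECTED_ARTIFACTS = {
  'Debug/x64'   => ['bin/Debug/x64/app.exe', 'bin/Debug/x64/app.pdb'],
  'Release/x64' => ['bin/Release/x64/app.exe'],
}.freeze

# Files the given configuration/platform pair is expected to upload.
def artifacts_for(configuration, platform)
  EXPECTED_ARTIFACTS.fetch("#{configuration}/#{platform}", [])
end
```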

* Artifacts must be uploaded as individual files - there doesn't seem to be 
a way to upload an entire folder at once
* Changes to the configuration matrix break the ability to build older code 
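Because of the per-file limitation, the upload step ends up as one request per artifact. A sketch of how the target URLs get built — the `/go/files/...` shape follows GoCD's artifact endpoint convention, but verify it against your server's API docs, and the `build-runInstance-1` job name is an assumption:

```ruby
# GoCD's artifact endpoint takes one file per request, so each artifact
# gets its own upload URL (no folder upload). Verify the /go/files/...
# shape against your GoCD server's API documentation.
def artifact_upload_url(server, pipeline, pipeline_counter, stage, stage_counter, job, path)
  "#{server}/go/files/#{pipeline}/#{pipeline_counter}/#{stage}/#{stage_counter}/#{job}/#{path}"
end

# One request per artifact file, all targeting the runInstance-1 job
# (job name is assumed to follow GoCD's "-runInstance-N" suffix scheme).
def upload_urls(server, pipeline, counter, stage, stage_counter, paths)
  paths.map do |path|
    artifact_upload_url(server, pipeline, counter, stage, stage_counter, 'build-runInstance-1', path)
  end
end
```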

While I was eventually able to get the repo-config working, I found the 
process of making changes too error-prone to be worthwhile. There's no 
mechanism to validate the configuration files or preview what jobs will be 
created for a given input, and no visual editor. It's unfortunate, because 
being able to check in the job configuration itself would take care of the 
second point, which I expect will be the biggest source of problems down the 
road.
Thanks again for the suggestion!

On Wednesday, February 21, 2018 at 12:46:37 PM UTC-6, Andrew Jones wrote:
> This seems like an elegant way to handle it. It does create more of a 
> dependency between the submitted scripts and configuration than I'd like - 
> if I add or remove a configuration I need to increase/decrease the number 
> of agents. Attempting to rebuild older code would either fail or miss some 
> needed configurations. We work around this by allocating more agents than 
> we think we'll ever need and handling out of bounds JOB_RUN_INDEX values in 
> the script.
> Definitely a good approach to consider - thanks for the suggestion.
> On Wednesday, February 21, 2018 at 12:39:23 PM UTC-6, Aravind SV wrote:
>> I would keep it simple and do a run X instances 
>> and use the GO_JOB_RUN_INDEX and GO_JOB_RUN_COUNT to decide what to do. 
>> Even something as simple as:
>> $ cat matrix.rb
>> #!/usr/bin/env ruby
>> matrix = {
>>   "1" => "Release/x64",
>>   "2" => "Release/RISC",
>>   "3" => "Debug/x64",
>>   "4" => "Debug/RISC",
>> }
>> puts "Execute: make #{matrix[ENV['GO_JOB_RUN_INDEX']]}"
>> $ GO_JOB_RUN_INDEX=1 ruby matrix.rb
>> Execute: make Release/x64
>> $ GO_JOB_RUN_INDEX=3 ruby matrix.rb
>> Execute: make Debug/x64
>> If you now just set up a job with "Run 4 instances" and have it call: ruby 
>> matrix.rb, you'll get 4 GoCD jobs running in parallel, depending on agents 
>> available, since GoCD sets the correct GO_JOB_RUN_INDEX. No dependency on 
>> plugins. You can even run this locally.
>> On Wed, Feb 21, 2018 at 12:33 PM, Ketan Padegaonkar <
>>> wrote:
>>> I've been working on a Groovy-based DSL to define your pipeline. Since 
>>> it's all code, you can create the usual programming constructs to generate 
>>> a matrix.
>>> The code and some binaries are available, please read this note 
>>> before you use it.

You received this message because you are subscribed to the Google Groups 
"go-cd" group.