Thanks again for the feedback David.

On Friday, 9 February 2018 11:56:56 UTC+10, David Rice wrote:
>
> Hi! Thanks so much for the extra information. These lists tend to be more
> helpful when we know what problem you are trying to solve. And you do have
> an interesting problem!
>
> What I think I heard is this:
> - You might get changes for 7 maps
> - You want to rebuild & test those 7 maps
> - For each map that passes, deploy it; otherwise route it to manual QA
>
> I don't have an exact solution for this problem. Some folks on this list
> might have some good ideas. I will share some thoughts:
>
> - As stated earlier, GoCD's native workflow components aren't going to
> provide you much help.
> - Although a pipeline is triggered by an atomic changeset, that won't
> help you easily deploy just the files in that changeset. The changeset is
> used to define the state of all the files in the build. It's not something
> easily available to your build.
> - You can execute git log in any of the jobs, as the repository is cloned
> on the agent, and parse the results to determine which files have changed.
> - You might be best off writing scripts (bash, ruby, whatever) to manage the
> "test > deploy or QA" part of your workflow. This is probably OK, as you
> then won't be tightly coupled to your CI or CD tool.
So from my understanding, we can use the GO_REVISION env var to get the
commit ID (SHA), which our pipeline tasks can then use to get a list of
changed files. E.g.

    git diff --name-status f31c~ f31c
    M       map1
    A       map2

and then process only these changed files. Ok, I think I'm following you.

We are thinking of breaking our pipeline into two. First, a QA pipeline
that processes any changed maps and then stages them in a "ready for
release" branch. Then a separate pipeline that takes maps committed to
the "ready for release" branch and publishes them to production (which
includes publishing the map and then committing the changed maps into
the master branch).

> Some more pathological suggestions from my colleagues (we don't
> necessarily recommend these. They are thought experiments. You'd want
> to play with them):
>
> - Utilize the "run job across X agents" feature. You could run a single job
> across 200 agents. Each one would be passed an index which you could
> utilize as a map ID. But this doesn't really support adding new maps unless
> you were running with extra agents and handled the "map doesn't exist yet"
> scenario. And everything would happen in jobs in a single stage. Not much
> of a pipeline.
> - You could actually configure 200 pipelines, 1 for each map, and use the
> whitelist feature to only trigger for a single map. This would be super
> painful to do by hand. Perhaps Gomatic
> <https://github.com/gocd-contrib/gomatic> could help you here. You could
> write a Python script to define the pipelines in a few lines of code.
>
> I'd suggest you try the 2 tools in parallel and see if either feels like a
> good fit. You will definitely need to write some scripts.

Ok, that's worth considering. Thanks.

> On Thu, Feb 8, 2018 at 3:15 PM, danielle.90 <[email protected]> wrote:
>
>> Thank you David for the helpful advice.
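The GO_REVISION idea above can be sketched as a small agent-side script. This is only an illustration: GO_REVISION is the variable mentioned in the thread, but the "publish"/"retire" actions here are placeholder echoes standing in for the real ArcGIS publishing steps.

```shell
#!/bin/sh
# Sketch: list the files changed by the commit that triggered this
# pipeline. GoCD exports the material revision as GO_REVISION, and the
# repository is already cloned into the agent's working directory.

list_changed_files() {
    rev="$1"
    # --name-status prints one "STATUS<TAB>path" line per file, e.g.
    #   M    map1
    #   A    map2
    git diff --name-status "${rev}~" "$rev"
}

# Act only on the maps in this changeset. The echoes are stand-ins for
# your real publish/retire tooling.
process_changes() {
    list_changed_files "$1" | while read -r status path; do
        case "$status" in
            A|M) echo "publish: $path" ;;   # new or modified map
            D)   echo "retire: $path" ;;    # deleted map
        esac
    done
}

# Under GoCD this would run with GO_REVISION set by the server.
if [ -n "${GO_REVISION:-}" ]; then
    process_changes "$GO_REVISION"
fi
```

Because the changeset is atomic, this processes every map touched by the commit together, which matches the "patch only what changed" goal without splitting the commit apart.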
>>
>> On Thursday, 8 February 2018 13:20:47 UTC+10, David Rice wrote:
>>>
>>> On Wed, Feb 7, 2018 at 6:53 PM danielle.90 <[email protected]> wrote:
>>>
>>>> Is this design possible with GoCD?
>>>>
>>>> Our first challenge:
>>>>
>>>> 1. Splitting the build pipeline up by files within the git commit.
>>>> For example, for every file in the git commit, we want to start a
>>>> new, separate pipeline instance to process each file individually.
>>>
>>> No. GoCD respects the atomicity of each material, as we see it as
>>> critical to good pipeline design. The atomic boundary of a changeset
>>> sets an expectation for what should reasonably work. I don't see how
>>> pulling a single file out of a changeset could result in something
>>> that is intended or actually works, so I think I must be
>>> misunderstanding what your team is trying to do.
>>
>> Ok, I think we can live with that. To clarify what we are doing, we are
>> building an ArcGIS Server map publishing pipeline. We only want to
>> 'patch' new or changed maps, instead of re-publishing the entire
>> catalogue of maps (~200 of them). So, patching all changed maps in a
>> commit atomically will work for us.
>>
>> The reason we would prefer to split the work into sub-pipelines,
>> however, is that each published map requires an isolated QA workflow.
>> We want maps that pass validation to go to production, and maps that
>> fail to go into QA. We don't want the failures to be blocking.
>>
>>>> And:
>>>>
>>>> 2. Can we use if/else logic in our pipeline? We need logic in our
>>>> pipeline that will run different stages based on a condition, i.e.
>>>> if the build stage fails, go to the QA/QC stage; if the build
>>>> passes, go to the deploy stage.
>>>
>>> A GoCD job can execute an optional set of tasks on failure. This is
>>> typically for cleanup. But there is no support for conditional stage
>>> execution based upon failure.
>>> GoCD supports the notion that a broken
>>> pipeline should stop the production line. And, of course, GoCD can
>>> notify your team when a pipeline fails.
>>
>> Ok, that helps. I was thinking about a failed pipeline as something
>> that could be recovered and resumed by a manual QA process. Now I see
>> that a failed pipeline should remain failed, and that the QA step
>> should result in a new pipeline instance being run.
>>
>> --
>> You received this message because you are subscribed to the Google
>> Groups "go-cd" group.
>> To unsubscribe from this group and stop receiving emails from it, send
>> an email to [email protected].
>> For more options, visit https://groups.google.com/d/optout.
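The workflow the thread converges on (validate each changed map, stage passes for release, hand failures to manual QA without breaking the pipeline) can be sketched as a small routing script. Everything here is illustrative: `validate_map` is a placeholder for the real ArcGIS validation step, and the "release:"/"qa:" outputs stand in for committing to the "ready for release" branch and notifying QA.

```shell
#!/bin/sh
# Sketch of the "test > deploy or QA" glue script David suggests.
# Nothing here is GoCD-specific; it runs as an ordinary task on the agent.

# Placeholder validation hook: replace with your real ArcGIS check.
# Here files ending in .ok "pass" just so the sketch is runnable.
validate_map() {
    case "$1" in
        *.ok) return 0 ;;
        *)    return 1 ;;
    esac
}

route_map() {
    map="$1"
    if validate_map "$map"; then
        # Passed: stage it on the "ready for release" branch, which the
        # second (publish) pipeline would watch as its material.
        echo "release:$map"
    else
        # Failed: don't block the other maps; hand this one to manual QA,
        # after which a new pipeline instance is triggered for it.
        echo "qa:$map"
    fi
}
```

Because each map is routed independently, one failing map no longer fails the whole run; the pipeline itself only goes red if the script hits an unexpected error.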
