Thanks a lot for this information. We are actually using a very similar design on our end for dependencies between pipelines. The issue with this specific one I asked about is that it is not a dependency.
We have a set of pipelines that deploy several services to the different environments. Each service has an approval stage that must be manually triggered before the service is passed on to the next environment, e.g. test to UAT. This suits us perfectly. However, there are situations in which we want to manually trigger that approval stage for 20+ services. What I was trying to do is create a pipeline that, when triggered, does exactly that. So this pipeline is neither a parent nor does it play any role at all in the dependencies.

I do have it working now, using the API with the access token added as a secret. I also found that I can use the service URL generated by Docker to call the API, so the calls go internally on the server.

The solution I used might work well for your needs. I simply created a Git repo with a bunch of scripts used to trigger the pipelines I need and used that as a material in a pipeline. This pipeline can have the GoCD API URL as an environment variable and the access token as a secret. These scripts can then be used to trigger pipelines on the same or a different GoCD server, and even pass artifact information if needed.

On Saturday, 27 February 2021 at 05:01:36 UTC+13 [email protected] wrote:
> We do something similar.
> The reason we call an API instead of daisy-chaining the pipelines
> (with a pipeline material) is to allow more flexible branching logic.
> I sometimes question the validity of this decision, as it makes the
> logic flow harder to see - it is not shown in the VSM.
>
> Pipelines can pass information to downstream pipelines using
> artifacts. By downstream pipeline I mean a pipeline that is a
> "child" of another pipeline by virtue of having that other pipeline
> as a material. Pipelines can also pass information as artifacts to
> grandchild pipelines, great-grandchildren, etc.
> These downstream pipelines appear in the VSM to the right of their
> upstream pipelines, with an arrow showing the direction of data flow.
> A typical artifact might be a jar (the result of a compile task) or a
> log file. However, you can pass other kinds of artifacts too - you
> can create a property file (or XML or YAML or CSV, etc.) and pass
> that. You can also place a file in something like Git or Nexus and
> just put the URL in a text file and pass that as an artifact. The
> exact mechanism is that the artifact file is copied from the agent to
> the server, then from the server to the downstream agent / task later.
>
> With this in mind you might be able to avoid using the API by placing
> the target pipeline downstream and then passing information using
> artifacts. Using an exec task to run bash or Python or Groovy (etc.),
> you can also add logic to a task to make it a "do nothing" task
> depending on information passed in the artifact. Using these two
> features it should be possible to avoid using the API to connect
> tasks / pipelines, and to avoid having hidden dependencies between
> pipelines that do not appear in the VSM.
> You would have to think hard about your exact use case (and/or share
> more details here on the list and ask for more help) to do this.
>
> Also: pipelines can have encrypted environment variables, which can
> be used for storing passwords for automated authentication.
>
> One problem I have not worked out yet is this: having a pipeline on
> one GoCD server (via an agent) call a pipeline on another GoCD server
> (via the API). The point of this would be to relieve the
> server single point of failure and the server bottleneck we are
> seeing at our site. I see that the server is not horizontally
> scalable, and I think this is a big problem with the design.
> Some things (such as polling Git) can only happen on the server, and
> if the server locks up (ours does, frequently) everything stops.
> Horizontally scaling the server might help.
> Calling one server via the API from another might "fake it".
>
> Another option is to use ssh instead of the GoCD API as a networking
> mechanism. PGP keys might be another tool to consider (perhaps used
> WITH artifacts or WITH the API). Python has several PGP libraries, as
> does Groovy. In my experience ssh keys are easier to use than
> automated PGP (or GnuPG).

--
You received this message because you are subscribed to the Google Groups "go-cd" group. To unsubscribe from this group and stop receiving emails from it, send an email to [email protected]. To view this discussion on the web visit https://groups.google.com/d/msgid/go-cd/521d20aa-cdcd-4cc8-87e8-08a8f83197f2n%40googlegroups.com.
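[Editor's note] The trigger scripts described in the reply above are not shown in the thread, but a minimal sketch in Python might look like this. It assumes GoCD's v1 pipeline-schedule endpoint (`POST /go/api/pipelines/:name/schedule` with a bearer access token); the `GOCD_URL` / `GOCD_TOKEN` environment variable names and the pipeline names are hypothetical placeholders, not anything from the thread:

```python
import json
import os
import urllib.request


def build_trigger_request(base_url: str, pipeline: str, token: str) -> urllib.request.Request:
    """Build (but do not send) a POST to GoCD's pipeline schedule API."""
    return urllib.request.Request(
        url=f"{base_url}/go/api/pipelines/{pipeline}/schedule",
        data=json.dumps({}).encode(),  # empty body: schedule with latest material revisions
        headers={
            "Accept": "application/vnd.go.cd.v1+json",
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # personal access token stored as a secret
        },
        method="POST",
    )


def trigger(base_url: str, pipeline: str, token: str) -> int:
    """Send the request; GoCD answers 202 Accepted on success."""
    with urllib.request.urlopen(build_trigger_request(base_url, pipeline, token)) as resp:
        return resp.status


if __name__ == "__main__":
    # In a GoCD job these would come from the pipeline's environment
    # variables / secrets, as described in the reply above.
    base_url = os.environ.get("GOCD_URL")
    token = os.environ.get("GOCD_TOKEN")
    if base_url and token:
        for name in ("service-a-approve", "service-b-approve"):  # hypothetical names
            print(name, trigger(base_url, name, token))
```

A loop like the one at the bottom is how one pipeline could kick the approval stage of 20+ services in a single run; checking out the script repo as a material keeps the scripts versioned alongside the pipelines they drive.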
