We do something similar.
The reason we call an API instead of daisy-chaining the pipelines
(with a pipeline material) is to allow more flexible branching logic.
I sometimes question the validity of this decision, as it makes the
logic flow harder to see: the API call is not shown in the VSM (Value
Stream Map).

Pipelines can pass information to downstream pipelines using
artifacts.  By downstream pipeline I mean a pipeline that declares
another pipeline as a pipeline material, so that it runs after (and
because of) that upstream pipeline.  Pipelines can also pass
information as artifacts to grandchildren pipelines,
great-grandchildren, and so on.  These downstream pipelines appear in
the VSM to the right of the upstream pipeline, with an arrow showing
the direction of data flow.  A typical artifact might be a jar (the
result of a compile task) or a log file.  However, you can pass other
kinds of artifacts too - you can create a property file (or XML or
YAML or CSV ... etc.) and pass that.  You can also place a file in
something like Git or Nexus and just put the URL in a text file and
pass that as an artifact.  The exact mechanism is that the artifact
file is copied from the agent to the server, then from the server to
the downstream agent / task later.
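As a rough sketch of the property-file idea (file name and keys here
are made up, and the actual publish / fetch is declared in the
pipeline config as a build artifact on the upstream job plus a
fetch-artifact task on the downstream job), the upstream task might
write something like:

    # upstream task: write a small properties file; the job's
    # artifact configuration publishes it to the GoCD server
    with open("handoff.properties", "w") as f:
        f.write("build_number=123\n")
        f.write("deploy=yes\n")

and the downstream task, after fetching the artifact, reads it back:

    # downstream task: parse the fetched artifact into a dict
    props = {}
    with open("handoff.properties") as f:
        for line in f:
            key, _, value = line.strip().partition("=")
            if key:
                props[key] = value
    print(props.get("deploy"))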
With this in mind, you might be able to avoid using the API by
placing the target pipeline downstream and then passing information
using artifacts.  Using an exec task to run bash or Python or Groovy
(etc.) you can also add logic to a task to make it a "do nothing"
task depending on information passed in the artifact.  Using these
two features it should be possible to avoid using the API to connect
tasks / pipelines, and to avoid having hidden dependencies between
pipelines that do not appear in the VSM.
You would have to think hard about your exact use case (and / or
share more details here on the list and ask for more help) to do
this.
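A minimal sketch of such a "do nothing" guard, assuming the upstream
pipeline passed a handoff.properties artifact like the one above (the
flag name is made up):

    import sys

    # read the flag written by the upstream pipeline; if it says
    # not to deploy, exit successfully without doing any real work
    flag = "no"
    with open("handoff.properties") as f:
        for line in f:
            key, _, value = line.strip().partition("=")
            if key == "deploy":
                flag = value

    if flag != "yes":
        print("deploy flag not set - doing nothing")
        sys.exit(0)

    print("deploy flag set - running the real work")
    # ... real deployment steps would go here ...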

Also: pipelines can have encrypted (secure) environment variables,
which can be used for storing passwords and tokens for automated
authentication.
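For instance, a task script could pick up such a variable at run time
(the variable name here is made up; the value would be set as a
secure variable in the pipeline config):

    import os

    # GoCD decrypts secure environment variables and exposes them
    # to the task's process environment like any other variable
    token = os.environ["DEPLOY_API_TOKEN"]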

One problem I have not worked out yet is this: having a pipeline on
one GoCD server (via an agent) call a pipeline on another GoCD server
(via the API).  The point of this would be to relieve the
single-point-of-failure and server bottleneck we are seeing at our
site.  I see that the server is not horizontally scalable and think
this is a big problem with the design.
Some things (such as polling Git) can only happen on the server, and
if the server locks up (ours does frequently) everything stops.
Horizontally scaling the server might help.  Calling one server via
the API from another might "fake it".
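A sketch of that cross-server call, assuming a recent GoCD version
where the pipeline schedule endpoint looks like this (the server URL,
pipeline name, and token variable are placeholders; the token would
come from a secure environment variable as described above):

    import os
    import requests

    # trigger a pipeline on a second GoCD server via its API
    resp = requests.post(
        "https://other-gocd.example.com/go/api/pipelines/my-pipeline/schedule",
        headers={
            "Accept": "application/vnd.go.cd.v1+json",
            "Content-Type": "application/json",
            "X-GoCD-Confirm": "true",
            "Authorization": "Bearer " + os.environ["GOCD_API_TOKEN"],
        },
        json={"update_materials_before_scheduling": True},
    )
    resp.raise_for_status()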

Another option is to use ssh instead of the GoCD API as a networking
mechanism.  PGP keys might be another tool to consider (perhaps used
WITH artifacts or WITH the API).  Python has several PGP libraries,
as does Groovy.  In my experience ssh keys are easier to use than
automated PGP (or GnuPG).
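As a sketch of the ssh route (host, user, and remote command are
placeholders; this assumes the agent's user already has a key
authorized on the remote host):

    import subprocess

    # run a remote command over ssh using key-based auth; BatchMode
    # makes ssh fail instead of prompting for a password
    subprocess.run(
        ["ssh", "-o", "BatchMode=yes",
         "deploy@other-host.example.com",
         "/opt/scripts/kick-off-build.sh"],
        check=True,
    )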
