Restricted nodes may provide enough security for some use cases, but in my
opinion they don't provide enough for artifact publishing. An example would
be if there were an exploit available that worked against a Jenkins master;
in that case I think an attacker could still pivot to a secure node (correct
me if I'm wrong).

To your second point, it shouldn't be too hard for us to maintain all the
deps for our packages in Dockerfiles which are checked into source and
built on a regular basis.  To publish these artifacts I'd recommend doing
this from a separate, secure environment.  The flow I'd recommend would be
something like: (1) developers commit PRs, and the CI continually verifies
that the artifacts build properly; (2) in a separate, secure environment we
run the same artifact build again, but this time we publish to the various
repos as a convenience to our MXNet users.  A rough sketch of what this
could look like is below.
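
As a minimal sketch (the image name, package list, and build commands here
are hypothetical, just to illustrate the shape of the idea), the checked-in
Dockerfile would pin the build dependencies and produce the artifacts
end-to-end; CI only verifies that this build still succeeds, while the
separate secure environment rebuilds from the same file and then publishes:

    # Dockerfile.publish (hypothetical name) -- pins build deps so the CI
    # verification build and the secure publishing build are identical.
    FROM ubuntu:16.04

    # Build-time dependencies; exact packages and versions are illustrative.
    RUN apt-get update && apt-get install -y --no-install-recommends \
            build-essential git ca-certificates python3 python3-pip \
            openjdk-8-jdk maven && \
        rm -rf /var/lib/apt/lists/*

    # Fetch the source at a known revision and build the artifacts.
    RUN git clone --recursive https://github.com/apache/incubator-mxnet /mxnet
    WORKDIR /mxnet
    RUN make -j"$(nproc)" && \
        cd python && python3 setup.py bdist_wheel

CI would just run "docker build" on this file to confirm the artifacts
still build; the actual publish step (e.g. twine upload / mvn deploy) would
run only from the separate secure account, using the image it built itself.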

On Mon, Dec 17, 2018 at 2:34 PM Qing Lan <[email protected]> wrote:

> Hi Kellen,
>
> Firstly, the restricted node is completely isolated (physically) from the
> PR-checking CI system, which is explained here:
> https://cwiki.apache.org/confluence/display/MXNET/Restricted+jobs+and+nodes
> .
> What you are mentioning is that the public CIs all run into trouble if
> they are publicly accessible. I am not sure how secure the restricted node
> is. However, the only way I can think of from your end is to download all
> deps onto a single machine and run everything there (disconnected from the
> internet). That would give us the best security we can have.
>
> Thanks,
> Qing
>
> On 12/17/18, 2:06 PM, "kellen sunderland" <[email protected]>
> wrote:
>
>     I'm not in favour of publishing artifacts from any Jenkins-based
>     systems.  There are many ways to bundle artifacts and publish them
>     from an automated system, so why would we use a CI system like
>     Jenkins for this task?  Jenkins frequently has security
>     vulnerabilities and is designed to run arbitrary code from the
>     internet.  It is a real possibility that an attacker could pivot
>     from any Jenkins-based CI system to infect artifacts which would
>     then potentially be pushed to repositories our users consume.  I
>     would consider any system using Jenkins insecure-by-design, and
>     would encourage us to air-gap any artifact generation (websites,
>     jars, PyPI packages) completely from a system like that.
>
>     An alternative I could see is a simple Dockerfile (no Jenkins) that
>     builds all artifacts end-to-end and can be run in an automated
>     account well outside our CI account.
>
>     On Mon, Dec 17, 2018 at 1:53 PM Qing Lan <[email protected]> wrote:
>
>     > Dear community,
>     >
>     > Currently Zach and I are working on the automated publish pipeline
>     > on Jenkins, which is used to publish nightly builds of the Maven
>     > and pip packages. We are trying to use the NVIDIA deb packages,
>     > which could help us build against different CUDA/cuDNN versions in
>     > the publish system. Sheng has provided a script here:
>     > https://github.com/apache/incubator-mxnet/pull/13646. It provides
>     > a very concrete and automatic solution, from downloading to
>     > installing on the system. The only concern we are facing is that
>     > NVIDIA seems to have restrictions on distributing CUDA, and we are
>     > not sure if it is legally safe for us to use this in public.
>     >
>     > We would be grateful if somebody with better context on this could
>     > help us out!
>     >
>     > Thanks,
>     > Qing
>     >
>
>
>
