Hi all, a few minor updates:

I've simplified the bootstrapping a bit so that dependencies are logically
grouped by process instead of rolling up to the entire job. Instead of
adding a single "init_nix" process to the task, each process has a
corresponding "init_" process that ensures its dependencies have been
initialized. This cuts down on global task initialization time and
bootstraps the minimal set of dependencies for each process.
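For anyone who hasn't looked at the repo yet, the shape of the change can be sketched in plain Python. This is purely illustrative — the names and data shapes below are mine, not the actual nix-aurora API:

```python
# Illustrative sketch (not the real nix-aurora code): given a mapping of
# process name -> spec, emit one "init_<name>" process per real process plus
# (init, process) ordering constraints, instead of a single task-wide
# "init_nix" process.
def add_init_processes(processes):
    """processes: dict of name -> {"cmdline": str, "deps": [store paths]}"""
    result = {}
    order = []
    for name, proc in processes.items():
        init_name = "init_" + name
        # The init process realizes only this process's own dependencies.
        result[init_name] = {
            "cmdline": "nix-store --realise " + " ".join(proc["deps"]),
            "deps": [],
        }
        result[name] = proc
        # Each process waits on its own init, not on a global one.
        order.append((init_name, name))
    return result, order
```

A process with a small dependency closure can start as soon as its own init finishes, rather than waiting for the union of everything the task needs.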

Also, to progress past "hello world" examples, I've added a Prometheus
example job which demonstrates the utility of using packages defined in
nixpkgs. I plan to add a few more examples which manually build Nix
dependencies to show that workflow as well. If you have any example jobs
you'd like to see, please let me know!

One thing I'd like to draw attention to is a new Nix utility function I've
added called `copiedExpandedFile`. This arose out of a conversation in IRC
last Friday about the best way to initialize configuration files which
depend on the values of Thermos bindings. I assume there are as many
home-rolled solutions to this as there are readers of this mailing list. As
I alluded to in the original emails, a prime benefit of specifying all
dependencies using Nix is that configuration files become just another
dependency and also reside in the Nix store instead of being copied into
the task sandbox. This works sufficiently well when the configuration file
does not depend on the task instance context, i.e. Thermos bindings. For
tasks that do, you can use `copiedExpandedFile`. This function makes a
local file available to the task with all Thermos bindings expanded. It
does this by copying the unexpanded file from the Nix store into the
sandbox, expanding the bindings using an inline Python script, and returning
the path to the file inside the sandbox. You can see an example usage here:
https://github.com/rafikk/nix-aurora/blob/6578c62aef9396cfd23a3023def0f5b133594ee5/examples/prometheus/default.nix#L13
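To make the mechanism concrete, here's a rough Python sketch of the copy-and-expand step. This is not the actual inline script — the function and parameter names are mine, and the simple `{{name}}` placeholder syntax here just stands in for real Thermos binding expansion:

```python
import os
import re
import shutil

def copy_and_expand(store_path, sandbox_dir, bindings):
    """Copy a file from the (read-only) Nix store into the sandbox,
    substitute {{name}} placeholders with their bound values, and
    return the path of the expanded file inside the sandbox."""
    dest = os.path.join(sandbox_dir, os.path.basename(store_path))
    shutil.copy(store_path, dest)
    with open(dest) as f:
        text = f.read()
    # Replace each {{key}} with its bound value; unknown keys are left as-is.
    expanded = re.sub(
        r"\{\{([^}]+)\}\}",
        lambda m: str(bindings.get(m.group(1).strip(), m.group(0))),
        text,
    )
    with open(dest, "w") as f:
        f.write(expanded)
    return dest
```

The point is that the unexpanded template stays an ordinary Nix store path like any other dependency; only the expanded copy is task-instance-specific and lives in the sandbox.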

Also, I'll be around at MesosCon this week if anyone has feedback/questions
or is interested in meeting up.

Cheers,
Rafik

On Thu, Aug 13, 2015 at 4:22 PM [email protected] <[email protected]> wrote:

> I've also had tremendous success with using Nix and Aurora together, but
> haven't had such nice integration of the job definitions with Nix itself -
> thank you for sharing this!
>
> In case anyone else takes interest in this stuff: I took a different
> approach, writing the job descriptions with Pystachio, while still using
> Nix to package and distribute all dependencies.  A binding helper
> translates references to packages (i.e. Nix attributes) from their name to
> the store path where they can be accessed, and Nix takes care of retrieving
> anything not already cached from a networked binary cache.  On the client
> side, a simple script bundles the source derivation and/or the binary
> outputs and publishes them to that network location prior to submitting a
> job to the Aurora scheduler.
>
> One nice thing about my method is that it doesn't use a Nix profile for
> each task, so there is also no need for post-task cleanup other than the
> sandbox garbage collection that Aurora and Mesos already handle.
>
> I like Rafik's implementation a lot, and will likely be using it at some
> point, once some of these concepts are bridged together.  I'll share my
> hacky version for reference or inspiration:
>
> Binding helper and example jobs:
> https://gist.github.com/benley/17f94a19c9b57b464a06
>
> Hacky build/publish/launch script:
> https://gist.github.com/benley/f1ce058a2f913674c408
>
>
> On Thu, Aug 13, 2015 at 12:48 PM Rafik Salama <[email protected]>
> wrote:
>
> > I don't know how many people here would be interested in this, but I'm
> > hoping to draw some interest from the list (besides benley who inspired
> > the work).
> >
> > We've been using Nix <http://nixos.org/nix/> to build and distribute the
> > programs we run on Aurora for a few months. It's been a tremendous
> > success.
> > We first came across Nix when looking for an alternative to Docker for
> > isolating package dependencies and avoiding running jobs as root. Mesos
> > gives us the isolation we need, so Docker seemed an unjustifiable
> > overhead simply to isolate dependencies.
> >
> > In our current setup, we create a Nix profile for each task, install the
> > dependencies, run the processes, then clean up the Nix profile. All the
> > jobs are configured using the standard Pystachio configuration and we've
> > written some Python helpers to wrap the tasks with the proper Nix init and
> > cleanup processes. Basically our jobs consist of two files: one which
> > defines the dependencies using Nix expressions, and another which specifies
> > the Aurora processes and tasks. The Aurora processes depend on the
> > dependencies defined in Nix, but the link is implicit.
> >
> > You can see where I'm going with this.
> >
> > So, to form a more perfect union, I've been experimenting with replacing
> > Pystachio configurations with Nix expressions. I have a proof of concept of
> > the work here: https://github.com/rafikk/nix-aurora
> >
> > The repo contains the hello_world.py example from the Aurora tutorial. The
> > main thing to note is what happens to the fetch_package process. Since Nix
> > produces a cryptographic hash of its inputs, we can avoid the checksum
> > "trick". And since the package is really a dependency of the "hello_world"
> > process, we can model this explicitly by specifying the package as a build
> > input to the process and remove the "fetch_package" process altogether. Nix
> > takes care of building all the package dependencies and provides the store
> > paths. (This means we can even avoid copying the code into the sandbox in
> > the first place; just reference it from /nix/store.)
> >
> > As it's written, the example works with the Vagrant VM. A bit more work is
> > required to distribute the sandbox and dependencies to the executors.
> >
> > I hope the benefits are apparent. Please let me know if you're interested
> > in getting this to production-grade or have any feedback!
> >
> > Thanks,
> > Rafik
> >
>
