As part of building a tool to make it really easy to run Spark on Aurora
(the goal being to run 'spark run <name> <jar>', which launches two Aurora
jobs) I am creating the ExecutorData JSON in Go and using the Thrift
interfaces to call createJob. It works, but it is very verbose. I think it
will work well as long as you don't want to do too much configuration. To
make sure I had the right fields, I wrote an Aurora config and then looked
at the generated JobConfiguration.
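
For illustration, here is a rough sketch of what that looks like on the Go
side. The struct fields and sample values below are only assumptions pulled
from inspecting one generated JobConfiguration, not a complete definition of
ExecutorData, so treat them as a starting point rather than the real schema:

// Minimal sketch: build a subset of the ExecutorData JSON in Go.
// Field names (role, environment, name, task, processes, resources)
// are assumptions and should be checked against a JobConfiguration
// generated from a real .aurora file.
package main

import (
	"encoding/json"
	"fmt"
)

type Process struct {
	Name    string `json:"name"`
	Cmdline string `json:"cmdline"`
}

type Resources struct {
	CPU  float64 `json:"cpu"`
	RAM  int64   `json:"ram"`
	Disk int64   `json:"disk"`
}

type Task struct {
	Name      string    `json:"name"`
	Processes []Process `json:"processes"`
	Resources Resources `json:"resources"`
}

type ExecutorData struct {
	Role        string `json:"role"`
	Environment string `json:"environment"`
	Name        string `json:"name"`
	Task        Task   `json:"task"`
}

func main() {
	data := ExecutorData{
		Role:        "spark",
		Environment: "prod",
		Name:        "spark-driver",
		Task: Task{
			Name: "spark-driver",
			Processes: []Process{
				// Hypothetical command for the Spark driver process.
				{Name: "run", Cmdline: "java -jar my-spark-job.jar"},
			},
			Resources: Resources{CPU: 1.0, RAM: 1 << 30, Disk: 1 << 30},
		},
	}

	// The marshalled JSON ends up as the data field of the ExecutorConfig
	// inside the Thrift JobConfiguration that is passed to createJob.
	blob, err := json.Marshal(data)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(blob))
}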

It would be nice if the ExecutorData were defined in Thrift or something
similar, but I can see why it is not.

On Mon, 11 Jan 2016 at 13:34 Erb, Stephan <[email protected]>
wrote:

> A couple of days ago there was a rather brief discussion in IRC regarding
> the generation of Aurora configuration files:
>
> > 00:16 <benley> random poll: anyone know who is using something other than
> the normal Python DSL for building Aurora jobs?
> > 00:17 <benley> I know some people are using flabbergast, and rafik was
> at least experimenting with Nix to generate them
> > 00:17 <benley> and I've been tinkering with using Jsonnet to generate
> them, which ends up being quite similar to Flabbergast
> > 00:19 <wfarner> benley: i don't know of any other DSLs, but i have heard
> of a few distillations into essentially properties files
> > 00:21 <benley> that makes sense - heavily templating a job definition so
> it's generally applicable
>
> I find this topic quite interesting. Does anyone have some experience to
> share with the mailing list? What was your main motivation for using an
> additional configuration or templating mechanism? Did it pay off?
>
> Thanks and Best Regards,
> Stephan
