I build an uber jar with sbt. I used
https://github.com/ianoc/ExampleSparkProject as a starting point. It
generates an uber jar with sbt-assembly, and I just need to ship that
one file to the master; Spark then ships it to the executors.
I use the EC2 standalone scripts. Once I've got the jar on the server,
it's just a matter of getting the right settings -- SPARK_HOME, the
classpath for the Scala classes that are already included, etc.
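For reference, a minimal sketch of what such a build definition can look
like (the project name and version numbers below are illustrative
placeholders, not taken from that repo):

```scala
// build.sbt -- a minimal sketch of an sbt-assembly setup; names and
// versions are placeholders, adjust to match your cluster.
name := "example-spark-app"

scalaVersion := "2.10.3"

// Mark Spark as "provided" so it is compiled against but kept out of
// the uber jar; the cluster's own Spark jars are used at runtime.
libraryDependencies +=
  "org.apache.spark" %% "spark-core" % "0.9.1" % "provided"
```

With the sbt-assembly plugin enabled in project/plugins.sbt, running
`sbt assembly` then produces the single jar to copy to the master.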

I chose this route after a somewhat painful experience transitioning
from a hybrid EMR deployment (I eventually moved all my code over to
Spark) built with buildr, which was introduced before there was any
Spark or even Scala code in the project and which has no uber jar
support; I had a very slow, manual uber jar generation process. As long
as you have something that builds the uber jar, the deployment itself
is actually very simple.

-Ewen


On Fri, Nov 22, 2013 at 6:43 PM, Philip Ogren <[email protected]> wrote:
> My last post (see 'akka config settings') makes me wonder how people are
> actually deploying Spark applications.  It seems unlikely that many people
> are building an uber jar file like I did and deploying in standalone mode
> directly with the java command (given the difficulties in actually getting
> this set up correctly).  In fact, some of the various pages I stumbled on
> suggest that the default expectation is that one would run a Spark
> application from maven (or perhaps sbt?).  I'd be curious to know how
> people are actually deploying Spark applications in practice.
>
> Thanks,
> Philip
>
>