Yes, that is correct. You can use this boilerplate to avoid spark-submit:

    import org.apache.spark.{SparkConf, SparkContext}

    // The configuration: point the driver at the cluster's master URL
    val sconf = new SparkConf()
      .setMaster("spark://spark-ak-master:7077")
      .setAppName("SigmoidApp")
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .set("spark.cores.max", "12")
      .set("spark.executor.memory", "36g")

    // The context
    val sc = new SparkContext(sconf)

    // The jar dependencies: ship your application jar to the executors
    sc.addJar("target/scala-2.10/sigmoidapp_10-1.0.jar")


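As a side note, the jar can equivalently be supplied up front through the conf (or the constructor's jars parameter you refer to below) instead of sc.addJar. A minimal sketch, assuming the same master URL and jar path as above:

    import org.apache.spark.{SparkConf, SparkContext}

    object SigmoidApp {
      def main(args: Array[String]): Unit = {
        // Same settings as above; the application jar is listed via setJars,
        // and Spark ships it to the executors when the context starts.
        val conf = new SparkConf()
          .setMaster("spark://spark-ak-master:7077")
          .setAppName("SigmoidApp")
          .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
          .setJars(Seq("target/scala-2.10/sigmoidapp_10-1.0.jar"))

        val sc = new SparkContext(conf)

        // ... your job here ...

        sc.stop()
      }
    }

With the master and jars set in code like this, you can launch the driver as a plain JVM process (e.g. sbt run), and it will register with the cluster much like a spark-submit'ed application would.
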
Thanks
Best Regards

On Fri, Jul 10, 2015 at 4:07 PM, algermissen1971 <algermissen1...@icloud.com> wrote:

> Hi,
>
> I am a bit confused about the steps I need to take to start a Spark
> application on a cluster.
>
> So far I had this impression from the documentation that I need to
> explicitly submit the application using for example spark-submit.
>
> However, from the SparkContext constructor signature I get the impression
> that maybe I do not have to do that after all:
>
> In
> http://spark.apache.org/docs/latest/api/scala/#org.apache.spark.SparkContext
> the first constructor has (among other things) a parameter 'jars' which
> indicates the "Collection of JARs to send to the cluster".
>
> To me this suggests that I can simply start the application anywhere and
> that it will deploy itself to the cluster in the same way a call to
> spark-submit would.
>
> Is that correct?
>
> If not, can someone explain why I can / need to provide the master and jars
> etc. in the call to SparkContext, since they essentially only duplicate
> what I would specify in the call to spark-submit?
>
> Jan
>
