I have not played around with spark-shell much (especially for Spark
Streaming), but was just suggesting that the spark-submit logs could
possibly tell you what's going on, and yes, for that you would need to
build a jar.
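
As a rough sketch (the class and jar names here are just placeholders for
whatever you build; adjust --master to your setup), the submit step would
look something like:

  # TryApp and try1.jar are made-up names; substitute your own
  ./bin/spark-submit --class TryApp --master local[2] try1.jar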

I am not even sure that you can give a .scala file to spark-shell; its
usage is:

Usage: ./bin/spark-shell [options]
Options:
  --master MASTER_URL         spark://host:port, mesos://host:port, yarn, or local.
  --deploy-mode DEPLOY_MODE   Where to run the driver program: either "client" to run
                              on the local machine, or "cluster" to run inside cluster.
  --class CLASS_NAME          Your application's main class (for Java / Scala apps).
  --name NAME                 A name of your application.
  --jars JARS                 Comma-separated list of local jars to include on the driver
                              and executor classpaths.
  --py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place
                              on the PYTHONPATH for Python apps.
  --files FILES               Comma-separated list of files to be placed in the working
                              directory of each executor.
  --properties-file FILE      Path to a file from which to load extra properties. If not
                              specified, this will look for conf/spark-defaults.conf.

  --driver-memory MEM         Memory for driver (e.g. 1000M, 2G) (Default: 512M).
  --driver-java-options       Extra Java options to pass to the driver.
  --driver-library-path       Extra library path entries to pass to the driver.
  --driver-class-path         Extra class path entries to pass to the driver. Note that
                              jars added with --jars are automatically included in the
                              classpath.

  --executor-memory MEM       Memory per executor (e.g. 1000M, 2G) (Default: 1G).

 Spark standalone with cluster deploy mode only:
  --driver-cores NUM          Cores for driver (Default: 1).
  --supervise                 If given, restarts the driver on failure.

 Spark standalone and Mesos only:
  --total-executor-cores NUM  Total cores for all executors.

 YARN-only:
  --executor-cores NUM        Number of cores per executor (Default: 1).
  --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
  --num-executors NUM         Number of executors to launch (Default: 2).
  --archives ARCHIVES         Comma separated list of archives to be extracted into the
                              working directory of each executor.

For example, running

spark-shell foo.scala

gives me the same scala shell prompt you saw.
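
(If you just want to run the statements in the file from the shell, I
believe the standard Scala REPL command works once you are at the prompt,
e.g.

  scala> :load /path/to/foo.scala

where the path is whatever your file actually is; I have not tried this
with a streaming program.)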




Here is some of the documentation on submitting applications that I found
useful -

http://spark.apache.org/docs/latest/streaming-programming-guide.html#deploying-applications

http://spark.apache.org/docs/latest/cluster-overview.html

http://spark.apache.org/docs/latest/submitting-applications.html
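
On the question below about needing a main class: for spark-submit, yes,
the program needs an entry point. A minimal sketch of what that usually
looks like (the object name, source, and batch interval here are made up;
adapt to your own code):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// "Try1" is a placeholder name; this is the class you would pass to --class
object Try1 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Try1")
    // 1-second batches, just as an example
    val ssc = new StreamingContext(conf, Seconds(1))

    // your DStream setup goes here; a socket source is only an example
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.print()

    ssc.start()
    ssc.awaitTermination()
  }
}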



On Tue, Oct 7, 2014 at 4:09 PM, spr <s...@yarcdata.com> wrote:

> || Try using spark-submit instead of spark-shell
>
> Two questions:
> - What does spark-submit do differently from spark-shell that makes you
> think that may be the cause of my difficulty?
>
> - When I try spark-submit it complains about "Error: Cannot load main class
> from JAR: file:/Users/spr/.../try1.scala".  My program is not structured as
> a main class.  Does it have to be to run with Spark Streaming?  Or with
> spark-submit?
>
> Thanks much.
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/SparkStreaming-program-does-not-start-tp15876p15881.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>


-- 
~
