[ 
https://issues.apache.org/jira/browse/SPARK-905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Owen resolved SPARK-905.
-----------------------------
    Resolution: Cannot Reproduce

This looks like something that was either long since fixed, or simply a matter of 
the Spark installation not being set up on each machine. The 
compute-classpath.sh script does exist in bin/ in both the source tree and the distribution.
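For anyone hitting the same error: the first thing to check is that the path passed as sparkHome to SparkContext actually contains bin/compute-classpath.sh on every machine, master and workers alike. A minimal plain-Scala sketch of that check (the path below is the reporter's; substitute your own install location):

```scala
import java.nio.file.{Files, Paths}

object CheckSparkHome {
  // True when <home>/bin/compute-classpath.sh exists and is executable.
  // The executor launch fails with "Cannot run program ... error=2,
  // No such file or directory" on any worker where this is false.
  def scriptPresent(home: String): Boolean =
    Files.isExecutable(Paths.get(home, "bin", "compute-classpath.sh"))

  def main(args: Array[String]): Unit = {
    val home = "/home/abc/spark-scala-2.10" // the reporter's path; adjust per node
    val status = if (scriptPresent(home)) "ok" else "missing compute-classpath.sh"
    println(s"$home: $status")
  }
}
```

Run the check on each node (e.g. over ssh), not only on the machine that submits the job.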

> Not able to run Job on remote machine
> -------------------------------------
>
>                 Key: SPARK-905
>                 URL: https://issues.apache.org/jira/browse/SPARK-905
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 0.7.3
>            Reporter: Ayush
>
> I have two machines, A and B. On machine A, I run
>  ./run spark.deploy.master.Master 
> to start the master.
> The master URL is spark://abc-vostro.local:7077. 
> Now on machine B, I run 
> ./run spark.deploy.worker.Worker spark://abc-vostro.local:7077
> and the worker registers with the master. 
> Now I want to run a simple job on the cluster. 
> Here is SimpleJob.scala:
> package spark.examples
> 
> import spark.SparkContext
> import SparkContext._
> 
> object SimpleJob {
>   def main(args: Array[String]) {
>     val logFile = "s3n://<AWS_ACCESS_KEY_ID>:<AWS_SECRET_ACCESS_KEY>/<File Name>"
>     val sc = new SparkContext("spark://abc-vostro.local:7077", "Simple Job",
>       System.getenv("SPARK_HOME"),
>       Seq("/home/abc/spark-scala-2.10/examples/target/scala-2.10/spark-examples_2.10-0.8.0-SNAPSHOT.jar"))
>     val logData = sc.textFile(logFile)
>     val numsa = logData.filter(line => line.contains("a")).count
>     val numsb = logData.filter(line => line.contains("b")).count
>     println("total a : %s, total b : %s".format(numsa, numsb))
>   }
> }
> This file is located at 
> "/home/abc/spark-scala-2.10/examples/src/main/scala/spark/examples" on 
> machine A.
> Now on machine A, I run sbt/sbt package.
> When I run
>  MASTER=spark://abc-vostro.local:7077 ./run spark.examples.SimpleJob
> to run my job, I get the exception below on both machines A and B:
> (class java.io.IOException: Cannot run program 
> "/home/abc/spark-scala-2.10/bin/compute-classpath.sh" (in directory "."): 
> error=2, No such file or directory)
> Could you please help me resolve this? I am probably missing some 
> configuration on my end. 
> Thanks in advance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
