[ https://issues.apache.org/jira/browse/SPARK-980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-980.
-----------------------------
    Resolution: Fixed

> NullPointerException for single-host setup with S3 URLs
> -------------------------------------------------------
>
>                 Key: SPARK-980
>                 URL: https://issues.apache.org/jira/browse/SPARK-980
>             Project: Spark
>          Issue Type: Bug
>          Components: Input/Output
>    Affects Versions: 0.8.0
>            Reporter: Paul R. Brown
>
> Short version:
> * The use of {{execSparkHome_}} in [Worker.scala|https://github.com/apache/incubator-spark/blob/v0.8.0-incubating/core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala#L135] should be checked for {{null}}, or that value should be defaulted or plumbed through.
> * If the {{sparkHome}} argument to {{new SparkContext(...)}} is non-optional, 
> then it should not be marked as optional.
>
> Long version:
> Starting up with {{bin/start-all.sh}}, then connecting from a Scala program and attempting to read two S3 URLs, results in the following trace in the worker log:
> {code}
> 13/12/03 21:50:23 ERROR worker.Worker:
> java.lang.NullPointerException
>       at java.io.File.<init>(File.java:277)
>       at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.apply(Worker.scala:135)
>       at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.apply(Worker.scala:120)
>       at akka.actor.Actor$class.apply(Actor.scala:318)
>       at org.apache.spark.deploy.worker.Worker.apply(Worker.scala:39)
>       at akka.actor.ActorCell.invoke(ActorCell.scala:626)
>       at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:197)
>       at akka.dispatch.Mailbox.run(Mailbox.scala:179)
>       at akka.dispatch.ForkJoinExecutorConfigurator$MailboxExecutionTask.exec(AbstractDispatcher.scala:516)
>       at akka.jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:259)
>       at akka.jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:975)
>       at akka.jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1479)
>       at akka.jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
> {code}
> This is on Mac OS X 10.9, Oracle Java 7u45, using the Hadoop 1 download from the incubator.
> Reading the code, this occurs because {{execSparkHome_}} is {{null}} (see [Worker.scala#L135|https://github.com/apache/incubator-spark/blob/v0.8.0-incubating/core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala#L135]); setting a value explicitly in the Scala driver allows the computation to complete.
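The defaulting suggested in the short version could be sketched as follows. This is a minimal illustration, not the actual Worker.scala code; `effectiveSparkHome` and its parameters are hypothetical names:

```scala
// Hypothetical helper illustrating the defaulting suggested in the report:
// fall back to the worker's own spark home when the application did not
// supply one, so `new java.io.File(...)` never receives null.
def effectiveSparkHome(execSparkHome: String, workerSparkHome: String): String =
  Option(execSparkHome).getOrElse(workerSparkHome)
```

Wrapping the possibly-null value in `Option(...)` converts a Java-style `null` into `None`, and `getOrElse` supplies the worker-level default.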



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
