[ https://issues.apache.org/jira/browse/SPARK-2356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15055633#comment-15055633 ]

Michael Han edited comment on SPARK-2356 at 12/14/15 9:05 AM:
--------------------------------------------------------------

Hello Everyone,

I encountered this issue again today when I tried to create a cluster using two Windows 7 (64-bit) desktops.
The error happens when I register the second worker with the master using the following command:
spark-class org.apache.spark.deploy.worker.Worker spark://masternode:7077

Strangely, it works fine when I register the first worker with the master.
Does anyone know a workaround for this issue?
The above workaround works fine when I use local mode.
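
For reference, the IOException at the top of the log below is Hadoop's Shell class failing to build a path to winutils.exe: it reads the hadoop.home.dir system property (falling back to the HADOOP_HOME environment variable) and appends \bin\winutils.exe, which is why the message shows "null\bin\winutils.exe" when neither is set. The workaround that usually helps (not an official fix) is to download winutils.exe into, say, C:\hadoop\bin and set HADOOP_HOME=C:\hadoop in the environment before running spark-class. A minimal diagnostic sketch, e.g. pasted into spark-shell on the worker machine, where C:\hadoop is only an example location:

{code}
// Hedged diagnostic, not part of Spark: this is roughly the path Hadoop's
// Shell class probes on Windows. If it prints "null\bin\winutils.exe",
// neither hadoop.home.dir nor HADOOP_HOME is visible to this JVM, which is
// exactly the IOException shown in the worker log.
val hadoopHome = sys.props.get("hadoop.home.dir").orElse(sys.env.get("HADOOP_HOME")).orNull
println(s"Hadoop will look for: $hadoopHome\\bin\\winutils.exe")
{code}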

The error is:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/12/14 16:49:22 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/12/14 16:49:22 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
        at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:355)
        at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:370)
        at org.apache.hadoop.util.Shell.<clinit>(Shell.java:363)
        at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79)
        at org.apache.hadoop.security.Groups.parseStaticMapping(Groups.java:104)
        at org.apache.hadoop.security.Groups.<init>(Groups.java:86)
        at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
        at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
        at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:271)
        at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:248)
        at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:763)
        at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:748)
        at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:621)
        at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2091)
        at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2091)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2091)
        at org.apache.spark.SecurityManager.<init>(SecurityManager.scala:212)
        at org.apache.spark.deploy.worker.Worker$.startRpcEnvAndEndpoint(Worker.scala:692)
        at org.apache.spark.deploy.worker.Worker$.main(Worker.scala:674)
        at org.apache.spark.deploy.worker.Worker.main(Worker.scala)
15/12/14 16:49:22 INFO SecurityManager: Changing view acls to: mh6
15/12/14 16:49:22 INFO SecurityManager: Changing modify acls to: mh6
15/12/14 16:49:22 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(mh6); users with modify permissions: Set(mh6)
15/12/14 16:49:23 INFO Slf4jLogger: Slf4jLogger started
15/12/14 16:49:23 INFO Remoting: Starting remoting
15/12/14 16:49:24 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkWorker@167.3.129.160:46862]
15/12/14 16:49:24 INFO Utils: Successfully started service 'sparkWorker' on port 46862.
15/12/14 16:49:24 INFO Worker: Starting Spark worker 167.3.129.160:46862 with 4 cores, 2.9 GB RAM
15/12/14 16:49:24 INFO Worker: Running Spark version 1.5.2
15/12/14 16:49:24 INFO Worker: Spark home: C:\spark-1.5.2-bin-hadoop2.6\bin\..
15/12/14 16:49:24 INFO Utils: Successfully started service 'WorkerUI' on port 8081.
15/12/14 16:49:24 INFO WorkerWebUI: Started WorkerWebUI at http://167.3.129.160:8081
15/12/14 16:49:24 INFO Worker: Connecting to master 192.168.79.1:7077...
15/12/14 16:49:39 INFO Worker: Retrying connection to master (attempt # 1)
15/12/14 16:49:39 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[sparkWorker-akka.actor.default-dispatcher-2,5,main]
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@3ef5e68c rejected from java.util.concurrent.ThreadPoolExecutor@741cb720[Running, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
        at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
        at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
        at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
        at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
        at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters$1.apply(Worker.scala:211)
        at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters$1.apply(Worker.scala:210)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
        at org.apache.spark.deploy.worker.Worker.org$apache$spark$deploy$worker$Worker$$tryRegisterAllMasters(Worker.scala:210)
        at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$reregisterWithMaster$1.apply$mcV$sp(Worker.scala:288)
        at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1119)
        at org.apache.spark.deploy.worker.Worker.org$apache$spark$deploy$worker$Worker$$reregisterWithMaster(Worker.scala:234)
        at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:521)
        at org.apache.spark.rpc.akka.AkkaRpcEnv.org$apache$spark$rpc$akka$AkkaRpcEnv$$processMessage(AkkaRpcEnv.scala:177)
        at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1$$anonfun$receiveWithLogging$1$$anonfun$applyOrElse$4.apply$mcV$sp(AkkaRpcEnv.scala:126)
        at org.apache.spark.rpc.akka.AkkaRpcEnv.org$apache$spark$rpc$akka$AkkaRpcEnv$$safelyCall(AkkaRpcEnv.scala:197)
        at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1$$anonfun$receiveWithLogging$1.applyOrElse(AkkaRpcEnv.scala:125)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
        at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:59)
        at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
        at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
        at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
        at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
        at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1.aroundReceive(AkkaRpcEnv.scala:92)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
        at akka.actor.ActorCell.invoke(ActorCell.scala:487)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
        at akka.dispatch.Mailbox.run(Mailbox.scala:220)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
15/12/14 16:49:39 INFO ShutdownHookManager: Shutdown hook called



> Exception: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
> -------------------------------------------------------------------------------------
>
>                 Key: SPARK-2356
>                 URL: https://issues.apache.org/jira/browse/SPARK-2356
>             Project: Spark
>          Issue Type: Bug
>          Components: Windows
>    Affects Versions: 1.0.0
>            Reporter: Kostiantyn Kudriavtsev
>            Priority: Critical
>
> I'm trying to run some transformations on Spark. It works fine on a cluster 
> (YARN, Linux machines). However, when I try to run it on a local machine 
> (Windows 7) under a unit test, I get errors (I don't use Hadoop; I read files 
> from the local filesystem):
> {code}
> 14/07/02 19:59:31 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 14/07/02 19:59:31 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
> java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
>       at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:318)
>       at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:333)
>       at org.apache.hadoop.util.Shell.<clinit>(Shell.java:326)
>       at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
>       at org.apache.hadoop.security.Groups.parseStaticMapping(Groups.java:93)
>       at org.apache.hadoop.security.Groups.<init>(Groups.java:77)
>       at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:240)
>       at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:255)
>       at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:283)
>       at org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:36)
>       at org.apache.spark.deploy.SparkHadoopUtil$.<init>(SparkHadoopUtil.scala:109)
>       at org.apache.spark.deploy.SparkHadoopUtil$.<clinit>(SparkHadoopUtil.scala)
>       at org.apache.spark.SparkContext.<init>(SparkContext.scala:228)
>       at org.apache.spark.SparkContext.<init>(SparkContext.scala:97)
> {code}
> This happens because the Hadoop config is initialized every time a Spark 
> context is created, regardless of whether Hadoop is required or not.
> I propose adding a special flag to indicate whether the Hadoop config is 
> required (or starting this configuration manually).
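
A minimal sketch of the workaround commonly used in the meantime, assuming winutils.exe has been placed at C:\hadoop\bin\winutils.exe (the path, object name, and toy job are illustrative only, not something Spark mandates): set hadoop.home.dir before the SparkContext, and with it the Hadoop configuration, gets initialized.

{code}
import org.apache.spark.{SparkConf, SparkContext}

// Hedged sketch of the unit-test workaround, not the flag proposed above.
object WinutilsLocalTest {
  def main(args: Array[String]): Unit = {
    // Assumption: winutils.exe lives at C:\hadoop\bin\winutils.exe.
    // This must run before the first org.apache.hadoop class is loaded,
    // i.e. before the SparkContext is created.
    System.setProperty("hadoop.home.dir", "C:\\hadoop")

    val conf = new SparkConf().setMaster("local[*]").setAppName("winutils-workaround")
    val sc = new SparkContext(conf)
    try {
      // A trivial job that never touches HDFS, matching the local-filesystem use case.
      println(sc.parallelize(1 to 10).count())
    } finally {
      sc.stop()
    }
  }
}
{code}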



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
