Hi,

I am trying to run a Spark Streaming application in yarn-cluster mode. However, 
the job fails at startup because it cannot find a __hadoop_conf__*.zip file at 
its HDFS staging location.
Can anyone guide me on this?
The application works fine in local mode; the only problem there is that it 
stops abruptly when it runs out of memory.
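
For reference, my submit command looks roughly like the following (the class 
name, jar, and conf path here are placeholders, not my exact values):

    # placeholder path; points at the client-side Hadoop configuration that
    # Spark zips up and ships to the .sparkStaging directory on HDFS
    export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop

    spark-submit \
      --class com.example.MyStreamingApp \
      --master yarn-cluster \
      my-streaming-app.jar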

Below is the error stack trace:

diagnostics: Application application_1452763526769_0011 failed 2 times due to 
AM Container for appattempt_1452763526769_0011_000002 exited with exitCode: -1000
For more detailed output, check application tracking page: 
http://slave1:8088/proxy/application_1452763526769_0011/ Then, click on links 
to logs of each attempt.
Diagnostics: File does not exist: 
hdfs://slave1:9000/user/hduser/.sparkStaging/application_1452763526769_0011/__hadoop_conf__1057113228186399290.zip
java.io.FileNotFoundException: File does not exist: 
hdfs://slave1:9000/user/hduser/.sparkStaging/application_1452763526769_0011/__hadoop_conf__1057113228186399290.zip
        at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1122)
        at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
        at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:251)
        at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
        at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
        at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:357)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
        at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Failing this attempt. Failing the application.
                ApplicationMaster host: N/A
                ApplicationMaster RPC port: -1
                queue: default
                start time: 1452866026622
                final status: FAILED
                tracking URL: 
http://slave1:8088/cluster/app/application_1452763526769_0011
                user: hduser
Exception in thread "main" org.apache.spark.SparkException: Application 
application_1452763526769_0011 finished with failed status
        at org.apache.spark.deploy.yarn.Client.run(Client.scala:841)
        at org.apache.spark.deploy.yarn.Client$.main(Client.scala:867)
        at org.apache.spark.deploy.yarn.Client.main(Client.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/01/15 19:23:53 INFO Utils: Shutdown hook called
16/01/15 19:23:53 INFO Utils: Deleting directory /tmp/spark-b6ebcb83-efff-432a-9a7a-b4764f482d81
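
In case it helps with diagnosis, the staging directory the trace points at can 
be listed directly (host, port, user, and application id taken from the trace 
above):

    hdfs dfs -ls hdfs://slave1:9000/user/hduser/.sparkStaging/
    hdfs dfs -ls hdfs://slave1:9000/user/hduser/.sparkStaging/application_1452763526769_0011/

Note that Spark deletes the .sparkStaging directory for an application once it 
finishes, so the listing has to be taken while the job is starting up (or, if 
I understand the docs correctly, with spark.yarn.preserve.staging.files=true 
set to keep the files around afterwards).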



Siddharth Ubale,
Synchronized Communications
#43, Velankani Tech Park, Block No. II,
3rd Floor, Electronic City Phase I,
Bangalore – 560 100
Tel : +91 80 3202 4060
Web: www.syncoms.com
London|Bangalore|Orlando

we innovate, plan, execute, and transform the business
