RE: sparkR 1.5.1 batch yarn-client mode failing on daemon.R not found
Tom,

Have you set the "MASTER" env variable on your machine? What is its value, if set?

From: Tom Stewart [mailto:stewartthom...@yahoo.com.INVALID]
Sent: Friday, October 30, 2015 10:11 PM
To: user@spark.apache.org
Subject: sparkR 1.5.1 batch yarn-client mode failing on daemon.R not found

I have the following script in a file named test.R:

library(SparkR)
sc <- sparkR.init(master="yarn-client")
sqlContext <- sparkRSQL.init(sc)
df <- createDataFrame(sqlContext, faithful)
showDF(df)
sparkR.stop()
q(save="no")

If I submit this with "sparkR test.R", "R CMD BATCH test.R", or "Rscript test.R", it fails with this error:

15/10/29 08:08:49 INFO r.BufferedStreamThread: Fatal error: cannot open file '/mnt/hdfs9/yarn/nm-local-dir/usercache/hadoop/appcache/application_1446058618330_0171/container_e805_1446058618330_0171_01_05/sparkr/SparkR/worker/daemon.R': No such file or directory
15/10/29 08:08:59 ERROR executor.Executor: Exception in task 0.0 in stage 1.0 (TID 1)
java.net.SocketTimeoutException: Accept timed out

However, if I launch an interactive sparkR shell and paste those same commands, it runs fine. It also runs fine on the same Hadoop cluster with Spark 1.4.1. And it runs fine in batch mode if I use sparkR.init() rather than sparkR.init(master="yarn-client").
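For anyone following along: a quick way to compare what "MASTER" is set to in the batch environment versus the interactive shell is sketched below. This is just a minimal diagnostic (plain shell and base R, nothing Spark-specific); that the two environments differ is the hypothesis being tested here, not a confirmed cause.

# From the shell that launches the batch job:
echo "MASTER=$MASTER"

# Or from inside test.R, before sparkR.init() is called:
cat("MASTER =", Sys.getenv("MASTER", unset="<not set>"), "\n")

If the interactive sparkR shell exports MASTER (or other SPARK_* variables) that Rscript / R CMD BATCH do not inherit, that could explain why the same commands behave differently in the two modes.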
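A possible workaround, consistent with the observation that plain sparkR.init() works: pass the master on the command line instead of hard-coding it in the script, and let the launcher handle shipping the SparkR package to the YARN containers. spark-submit has accepted R scripts since Spark 1.4. A sketch, assuming test.R is changed to call sparkR.init() with no master argument:

# Submit the script through spark-submit, setting the master at launch time
$SPARK_HOME/bin/spark-submit --master yarn-client test.R

I haven't verified this against this exact failure, but it keeps the master selection in the submission layer, where the 1.5.1 YARN code path appears to expect it.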