[ https://issues.apache.org/jira/browse/SPARK-17054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15422423#comment-15422423 ]
Sun Rui commented on SPARK-17054:
---------------------------------

{code}
# do not download if it is run in the sparkR shell
if (!nzchar(master) || is_master_local(master)) {
{code}

The master parameter for sparkR.session() is not necessarily the actual master, which may be specified via spark-submit instead. install.spark only makes sense in a normal R session (one not started by spark-submit). Maybe the following condition can be used to determine when install.spark should be called: neither the "SPARKR_SUBMIT_ARGS" nor the "EXISTING_SPARKR_BACKEND_PORT" environment variable is set.

> SparkR can not run in yarn-cluster mode on mac os
> -------------------------------------------------
>
>                 Key: SPARK-17054
>                 URL: https://issues.apache.org/jira/browse/SPARK-17054
>             Project: Spark
>          Issue Type: Bug
>          Components: SparkR
>    Affects Versions: 2.0.0
>            Reporter: Jeff Zhang
>
> This is because it downloads SparkR to the wrong place.
> {noformat}
> Warning message:
> 'sparkR.init' is deprecated.
> Use 'sparkR.session' instead.
> See help("Deprecated")
> Spark not found in SPARK_HOME: .
> To search in the cache directory. Installation will start if not found.
> Mirror site not provided.
> Looking for site suggested from apache website...
> Preferred mirror site found: http://apache.mirror.cdnetworks.com/spark
> Downloading Spark spark-2.0.0 for Hadoop 2.7 from:
> - http://apache.mirror.cdnetworks.com/spark/spark-2.0.0/spark-2.0.0-bin-hadoop2.7.tgz
> Fetch failed from http://apache.mirror.cdnetworks.com/spark
> <simpleError in download.file(packageRemotePath, packageLocalPath): cannot open destfile '/home//Library/Caches/spark/spark-2.0.0-bin-hadoop2.7.tgz', reason 'No such file or directory'>
> To use backup site...
> Downloading Spark spark-2.0.0 for Hadoop 2.7 from:
> - http://www-us.apache.org/dist/spark/spark-2.0.0/spark-2.0.0-bin-hadoop2.7.tgz
> Fetch failed from http://www-us.apache.org/dist/spark
> <simpleError in download.file(packageRemotePath, packageLocalPath): cannot open destfile '/home//Library/Caches/spark/spark-2.0.0-bin-hadoop2.7.tgz', reason 'No such file or directory'>
> Error in robust_download_tar(mirrorUrl, version, hadoopVersion, packageName, :
>   Unable to download Spark spark-2.0.0 for Hadoop 2.7. Please check network connection, Hadoop version, or provide other mirror sites.
> Calls: sparkRSQL.init ... sparkR.session -> install.spark -> robust_download_tar
> In addition: Warning messages:
> 1: 'sparkRSQL.init' is deprecated.
> Use 'sparkR.session' instead.
> See help("Deprecated")
> 2: In dir.create(localDir, recursive = TRUE) :
>   cannot create dir '/home//Library', reason 'Operation not supported'
> Execution halted
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
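A minimal sketch of the check Sun Rui suggests, assuming the environment variable names above (the helper name should_install_spark is hypothetical, not the actual SparkR patch):

```r
# Sketch only: decide whether install.spark should run.
# In a plain R session (not launched by spark-submit and with no
# existing backend), neither variable is set, so Sys.getenv()
# returns "" and nzchar() returns FALSE.
should_install_spark <- function() {
  !nzchar(Sys.getenv("SPARKR_SUBMIT_ARGS")) &&
    !nzchar(Sys.getenv("EXISTING_SPARKR_BACKEND_PORT"))
}
```

This sidesteps the master-parameter problem entirely: the decision is based on how the R process was launched, not on a value that spark-submit may override.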