Github user shivaram commented on the issue:

    https://github.com/apache/spark/pull/14639
  
    @zjffdu Thanks for clarifying -- I now remember that in YARN cluster mode there is no `SPARK_HOME` set. However, in that case the JVM comes up first and the R process then connects to it, so we should never need to download Spark, since Spark is already running.
    
    Thus I think @zjffdu's change of not calling `install.spark` in cluster mode is the right fix.
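    
    To make the intent concrete, something like the following is what I have in mind -- just a sketch with a hypothetical helper and `deployMode` argument, not the exact change in this PR:
    
    ```r
    # Hypothetical sketch: only download Spark when the R process itself has to
    # launch the JVM. In YARN cluster mode the JVM is already running and the R
    # process just connects to it, so install.spark() is skipped.
    maybeInstallSpark <- function(deployMode) {
      sparkHome <- Sys.getenv("SPARK_HOME")
      if (nzchar(sparkHome)) {
        return(sparkHome)                # Spark is already available locally
      }
      if (identical(deployMode, "cluster")) {
        return(invisible(NULL))          # JVM started by YARN; nothing to download
      }
      install.spark()                    # client-side session without SPARK_HOME
    }
    ```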
    