[ https://issues.apache.org/jira/browse/SPARK-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15088322#comment-15088322 ]

Apache Spark commented on SPARK-12699:
--------------------------------------

User 'felixcheung' has created a pull request for this issue:
https://github.com/apache/spark/pull/10652

> R driver process should start in a clean state
> ----------------------------------------------
>
>                 Key: SPARK-12699
>                 URL: https://issues.apache.org/jira/browse/SPARK-12699
>             Project: Spark
>          Issue Type: Bug
>          Components: SparkR
>            Reporter: Felix Cheung
>            Priority: Minor
>
> Currently the R worker process is launched with the --vanilla option, which 
> brings it up in a clean state (no init profile or workspace data; see 
> https://stat.ethz.ch/R-manual/R-devel/library/base/html/Startup.html). 
> However, the R process for the Spark driver is not.
> We should do the same for the driver because:
> 1. It would make the driver consistent with the R worker process - for 
> instance, a library could not end up loaded in the driver but not in the 
> worker.
> 2. SparkR depends on .libPaths() and .First(), so it could be broken by 
> something in the user's workspace or profile.
> Here are the proposed changes (see the sketch after this list):
> 1. When starting the `sparkR` shell (except: still allow saving/restoring 
> the workspace, since the driver/shell is local)
> 2. When launching the R driver in cluster mode
> 3. In cluster mode, when calling R to install the shipped R package
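> A minimal, illustrative sketch of what this could look like (the exact flag 
> split and the script names below are assumptions for illustration, not the 
> final patch): per the R Startup documentation linked above, --vanilla 
> combines --no-save, --no-restore, --no-site-file, --no-init-file and 
> --no-environ, so the shell would keep only the save/restore part while the 
> other two cases use the full clean set.
>
>     # Worker (current behaviour): fully clean R session
>     Rscript --vanilla worker_script.R      # worker_script.R is a placeholder name
>
>     # sparkR shell (proposed): skip site/user profiles and Renviron,
>     # but still allow saving/restoring the local workspace
>     R --no-site-file --no-init-file --no-environ
>
>     # Cluster-mode driver and R package install (proposed): fully clean
>     Rscript --vanilla driver_script.R      # driver_script.R is a placeholder name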



