[ https://issues.apache.org/jira/browse/SPARK-15799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15736616#comment-15736616 ]

Shivaram Venkataraman commented on SPARK-15799:
-----------------------------------------------

I see - it looks like it's controlled by the `spark.sql.warehouse.dir` flag [1]. 
One change we could make is to check whether the user has supplied a value for 
this config flag in sparkR.session() [2], and if not, set it to tempdir(). 
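A minimal sketch of that check, in R (the helper name and the config-list shape are hypothetical, not the actual sparkR.session() internals):

```r
# Hypothetical helper (not the real SparkR implementation): default
# spark.sql.warehouse.dir to a per-session temp directory when the user
# has not supplied one, leaving any user-provided value untouched.
ensureWarehouseDir <- function(sparkConfigMap) {
  if (is.null(sparkConfigMap[["spark.sql.warehouse.dir"]])) {
    # tempdir() is specific to this R session, so the warehouse (and any
    # managed tables in it) is cleaned up when the session ends
    sparkConfigMap[["spark.sql.warehouse.dir"]] <-
      file.path(tempdir(), "spark-warehouse")
  }
  sparkConfigMap
}
```

The helper could be called on the config list before it is passed down to the JVM when the session is created.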

The one question this raises: if the user wants to access any of these tables 
after their session ends, that won't be possible, since the temp directory is 
cleaned up with the session. 


[1] 
https://github.com/apache/spark/blob/d60ab5fd9b6af9aa5080a2d13b3589d8b79c5c5c/sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala#L968
[2] 
https://github.com/apache/spark/blob/d60ab5fd9b6af9aa5080a2d13b3589d8b79c5c5c/R/pkg/R/sparkR.R#L365

> Release SparkR on CRAN
> ----------------------
>
>                 Key: SPARK-15799
>                 URL: https://issues.apache.org/jira/browse/SPARK-15799
>             Project: Spark
>          Issue Type: New Feature
>          Components: SparkR
>            Reporter: Xiangrui Meng
>
> Story: "As an R user, I would like to see SparkR released on CRAN, so I can 
> use SparkR easily in an existing R environment and have other packages built 
> on top of SparkR."
> I made this JIRA with the following questions in mind:
> * Are there known issues that prevent us releasing SparkR on CRAN?
> * Do we want to package Spark jars in the SparkR release?
> * Are there license issues?
> * How does it fit into Spark's release process?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
