[ https://issues.apache.org/jira/browse/SPARK-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734216#comment-16734216 ]

Hyukjin Kwon commented on SPARK-8602:
-------------------------------------

Spark now supports multiple SparkSessions over a single SparkContext, and with 
multiple sessions this is possible. Let me leave this resolved. Please reopen 
it if I am mistaken.
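A minimal sketch of the pattern the comment describes, assuming Spark 2.1+ (which added global temp views) running in local mode; the view name `shared_ids` and the app name are illustrative, not part of the issue:

```scala
import org.apache.spark.sql.SparkSession

object SharedCacheSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("shared-cache-sketch")
      .getOrCreate()

    // Build a DataFrame in the first session and expose it through a
    // global temp view, which is visible to every session on this context.
    val df = spark.range(0, 1000).toDF("id")
    df.createGlobalTempView("shared_ids")

    // Cache it; the cache manager lives in the shared state of the
    // SparkContext, so sibling sessions reuse the cached data.
    spark.table("global_temp.shared_ids").cache()

    // A second session over the same SparkContext can query the view
    // (global temp views resolve under the global_temp database).
    val other = spark.newSession()
    val count = other
      .sql("SELECT COUNT(*) FROM global_temp.shared_ids")
      .collect()(0)
      .getLong(0)
    println(s"rows visible from second session: $count")

    spark.stop()
  }
}
```

For plain SQL clients, Spark SQL's existing `CACHE TABLE t AS SELECT ...` statement is close to the syntax the reporter sketches below, though the resulting temp view is session-scoped unless it is exposed as a global temp view as above.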

> Shared cached DataFrames
> ------------------------
>
>                 Key: SPARK-8602
>                 URL: https://issues.apache.org/jira/browse/SPARK-8602
>             Project: Spark
>          Issue Type: New Feature
>          Components: SQL
>    Affects Versions: 1.4.0
>            Reporter: John Muller
>            Priority: Major
>
> Currently, the only way I can think of to share HiveContexts, SparkContexts, 
> or cached DataFrames is to use spark-jobserver and spark-jobserver-extras:
> https://gist.github.com/anonymous/578385766261d6fa7196#file-exampleshareddf-scala
> But HiveServer2 users over plain JDBC cannot access the shared DataFrame. 
> The request is to add this directly to Spark SQL and treat it like a shared 
> temp table, e.g.:
> SELECT a, b, c
> FROM TableA
> CACHE DATAFRAME
> This would be very useful for rollups and cubes, though I'm not sure what 
> this would mean for the Hive metastore. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
