Github user sun-rui commented on the pull request:

    https://github.com/apache/spark/pull/9185#issuecomment-154333467
  
    From the user's point of view, multiple concurrent R sessions are expected in order to allow parallel analysis when SparkR is running as a service in the cloud.
    
    However, R is single-threaded at its core and does not support the concept of a session. So there exist intermediate layers that enable multiple R sessions, for example [RStudio Server Pro](https://support.rstudio.com/hc/en-us/articles/211789298-Multiple-R-Sessions-in-RStudio-Server-Pro) and [Rserve](https://cran.r-project.org/web/packages/Rserve/index.html). Both of them enable multiple R sessions by spawning multiple R processes.
    
    So my point is that within SparkR we don't need to support SQL sessions; a single SQLContext is enough. A rough sketch of that usage pattern follows.
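    
    For illustration only, here is a minimal sketch of what single-context usage could look like with the SparkR 1.x-style API (the object names and settings below are just placeholders, not part of this PR). Each R process, e.g. one per RStudio Server Pro or Rserve session, would hold its own SparkR backend, so one SQLContext per backend suffices:
    
    ```r
    library(SparkR)
    
    # One SparkContext and one SQLContext for this R process.
    sc <- sparkR.init(master = "local[2]", appName = "single-context-demo")
    sqlContext <- sparkRSQL.init(sc)
    
    # All SQL work in this session goes through the same SQLContext.
    df <- createDataFrame(sqlContext, faithful)
    registerTempTable(df, "faithful")
    head(sql(sqlContext, "SELECT waiting FROM faithful WHERE eruptions > 4"))
    
    sparkR.stop()
    ```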

