Github user sun-rui commented on a diff in the pull request:
https://github.com/apache/spark/pull/9192#discussion_r42726488
--- Diff: R/pkg/R/SQLContext.R ---
@@ -17,6 +17,34 @@
# SQLContext.R: SQLContext-driven functions
+#' Temporary function to reroute old S3 Method call to new
+#' We need to check the class of x to ensure it is SQLContext before dispatching
+dispatchFunc <- function(newFuncSig, x, ...) {
+  funcName <- as.character(sys.call(sys.parent())[[1]])
+  f <- get0(paste0(funcName, ".default"))
+  # Strip sqlContext from list of parameters and then pass the rest along.
+  # In the following, if '&' is used instead of '&&', it warns about
+  # "the condition has length > 1 and only the first element will be used"
+  if (class(x) == "jobj" &&
+      grepl("org.apache.spark.sql.SQLContext", capture.output(show(x)))) {
+    .Deprecated(newFuncSig, old = paste0(funcName, "(sqlContext...)"))
+    f(...)
+  } else {
+    f(x, ...)
+  }
+}
--- End diff --
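For readers following along, here is a minimal sketch of how such a wrapper is meant to be used (the createDataFrame names below are hypothetical placeholders, not necessarily what this PR uses): the exported function forwards everything to a ".default" implementation, and dispatchFunc drops a leading sqlContext argument when a caller still uses the old signature.

    # Hypothetical sketch: both the old createDataFrame(sqlContext, data) and
    # the new createDataFrame(data) call styles reach the same implementation.
    createDataFrame.default <- function(data, schema = NULL, ...) {
      # ... the actual implementation, with no explicit sqlContext argument ...
    }

    createDataFrame <- function(x, ...) {
      dispatchFunc("createDataFrame(data, schema = NULL)", x, ...)
    }

    # createDataFrame(sqlContext, localDF)  # old style: deprecation warning, then works
    # createDataFrame(localDF)              # new style: dispatched directly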
I took a rough look at https://github.com/apache/spark/pull/8909, and it seems
possible to have multiple root SQLContexts when
"spark.sql.allowMultipleContexts" is true. Even if there is only one root
SQLContext (when "spark.sql.allowMultipleContexts" is false), there can be
multiple session SQLContexts, created by calling rootSQLContext.newSession() or
rootHiveContext.newSession().
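To make the concern concrete, a sketch of how a second session context could
surface on the R side (sparkR.init/sparkRSQL.init are the public entry points;
callJMethod is an internal, unexported helper, and newSession() is the API
added in that PR, so treat this as illustrative only):

    sc <- sparkR.init()
    rootSqlContext <- sparkRSQL.init(sc)
    # Assuming the newSession() method from PR 8909, this returns a second,
    # distinct SQLContext jobj. The grepl() check in dispatchFunc above would
    # accept either one, since both print as
    # "Java ref type org.apache.spark.sql.SQLContext ...".
    sessionCtx <- SparkR:::callJMethod(rootSqlContext, "newSession")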
I am not very clear about the session management of SQLContext. @davies, could
you share your thoughts here? I am wondering whether we need to expose session
support in SparkR.