Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23252#discussion_r240022921
--- Diff: core/src/test/scala/org/apache/spark/SecurityManagerSuite.scala
---
@@ -440,12 +473,27 @@ class SecurityManagerSuite extends SparkFunSuite
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/23224
LGTM
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/23256
ideally, but really not for this PR
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23256#discussion_r239997109
--- Diff: R/pkg/tests/fulltests/test_mllib_fpm.R ---
@@ -84,19 +84,20 @@ test_that("spark.fpGrowth", {
})
test_that("s
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/23218
do we need to relnote jvm compatibility?
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23252#discussion_r239705869
--- Diff: core/src/test/scala/org/apache/spark/SecurityManagerSuite.scala
---
@@ -440,12 +473,27 @@ class SecurityManagerSuite extends SparkFunSuite
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/22305
I can help if this looks good to @ueshin
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23184#discussion_r238120855
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/api/r/SQLUtils.scala
---
@@ -225,4 +225,10 @@ private[sql] object SQLUtils extends Logging
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23184#discussion_r238120812
--- Diff: R/pkg/R/functions.R ---
@@ -2254,40 +2255,48 @@ setMethod("date_format", signature(y = "Column", x
= "character"
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23072#discussion_r238087240
--- Diff:
examples/src/main/scala/org/apache/spark/examples/ml/FPGrowthExample.scala ---
@@ -64,4 +64,3 @@ object FPGrowthExample {
spark.stop
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/23161
merged to master
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/23161
LGTM
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23184#discussion_r238055143
--- Diff: R/pkg/R/functions.R ---
@@ -2254,40 +2255,48 @@ setMethod("date_format", signature(y = "Column", x
= "character"
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23184#discussion_r238055087
--- Diff: R/pkg/R/functions.R ---
@@ -2254,40 +2255,48 @@ setMethod("date_format", signature(y = "Column", x
= "character"
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23184#discussion_r238055126
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/api/r/SQLUtils.scala
---
@@ -225,4 +225,10 @@ private[sql] object SQLUtils extends Logging
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23184#discussion_r238055173
--- Diff: R/pkg/R/functions.R ---
@@ -202,8 +202,9 @@ NULL
#' \itemize{
#' \item \code{from_json}: a structType
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/22939
Error looks reasonable...
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23161#discussion_r237383462
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2732,13 +2732,24 @@ setMethod("union",
dataFrame(unioned)
})
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23161#discussion_r236972877
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2732,14 +2732,24 @@ setMethod("union",
dataFrame(unioned)
})
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23161#discussion_r236973169
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2732,14 +2732,24 @@ setMethod("union",
dataFrame(unioned)
})
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23025#discussion_r236970732
--- Diff: R/pkg/R/DataFrame.R ---
@@ -767,6 +767,14 @@ setMethod("repartition",
#' using \code{spark.sql.shu
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23072#discussion_r236771417
--- Diff: docs/ml-clustering.md ---
@@ -265,3 +265,44 @@ Refer to the [R API
docs](api/R/spark.gaussianMixture.html) for more details
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22939#discussion_r236770223
--- Diff: R/pkg/R/functions.R ---
@@ -2230,6 +2237,32 @@ setMethod("from_json", signature(x = "Column",
schema = &q
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23145#discussion_r236765511
--- Diff: docs/index.md ---
@@ -67,7 +67,7 @@ Example applications are also provided in Python. For
example,
./bin/spark-submit examples/src
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23098#discussion_r236764795
--- Diff: R/pkg/R/sparkR.R ---
@@ -269,7 +269,7 @@ sparkR.sparkContext <- function(
#' sparkR.session("yarn-client", &quo
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23131#discussion_r236763355
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2732,6 +2732,20 @@ setMethod("union",
dataFrame(unioned)
})
+#
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23025#discussion_r236762465
--- Diff: R/pkg/R/DataFrame.R ---
@@ -767,6 +767,14 @@ setMethod("repartition",
#' using \code{spark.sql.shu
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23131#discussion_r236760822
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2732,6 +2732,20 @@ setMethod("union",
dataFrame(unioned)
})
+#
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23145#discussion_r236546043
--- Diff: docs/index.md ---
@@ -67,7 +67,7 @@ Example applications are also provided in Python. For
example,
./bin/spark-submit examples/src
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/23089
Thanks!
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/22939
Sorry for the delay, will do another pass in 1 or 2 days
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23072#discussion_r234432181
--- Diff: R/pkg/R/mllib_clustering.R ---
@@ -610,3 +616,57 @@ setMethod("write.ml", signature(object = "LDAModel"
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23072#discussion_r234432019
--- Diff: R/pkg/R/mllib_clustering.R ---
@@ -610,3 +616,57 @@ setMethod("write.ml", signature(object = "LDAModel"
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23072#discussion_r234432049
--- Diff: R/pkg/vignettes/sparkr-vignettes.Rmd ---
@@ -968,6 +970,17 @@ predicted <- predict(model, df)
head(predic
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23073#discussion_r234431864
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/ExecutorData.scala ---
@@ -27,12 +27,14 @@ import org.apache.spark.rpc.{RpcAddress
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/23073
please put ^ comment into PR description (because comment is not included
in commit message once the PR is merged)
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/23012
Yea, there are some problems with some packages we depend on that are not installable from CRAN (e.g. too old), so it will be hard to move to a new version of R and a new installation.
So to clarify
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/23012
Hey shane I don't think we are saying to test multiple R versions at all.
In fact quite the opposite, just the new(er) version at some point in the
future.
(We don't have a better
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/23012
I think it's easier to say unsupported if we are not testing it in jenkins
or appveyor. I don't know if we have any coverage at release for older R versions
anyway, so it's better to u
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/23017
noted test issue. let's kick off test though
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/23017
ok to test
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/23007
merged to master/2.4
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/22866
thx, but DO NOT MERGE - there's some nasty bug I'm still investigating..
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23012#discussion_r232881732
--- Diff: R/pkg/R/sparkR.R ---
@@ -283,6 +283,10 @@ sparkR.session <- function(
enableHiveSupport = TRUE,
...) {
+ if (ut
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23012#discussion_r232882419
--- Diff: docs/index.md ---
@@ -31,7 +31,8 @@ Spark runs on both Windows and UNIX-like systems (e.g.
Linux, Mac OS). It's easy
locally o
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23012#discussion_r232882178
--- Diff: docs/index.md ---
@@ -31,7 +31,8 @@ Spark runs on both Windows and UNIX-like systems (e.g.
Linux, Mac OS). It's easy
locally o
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/23012#discussion_r232881594
--- Diff: R/WINDOWS.md ---
@@ -3,7 +3,7 @@
To build SparkR on Windows, the following steps are required
1. Install R (>= 3.1)
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/23012
FYI
This is unused code, I'm going to remove it
https://github.com/apache/spark/blob/master/R/pkg/src-native/string_hash_code.c
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/23012
Also I think the warning should be in .First in general.R
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/23012
I think this should say unsupported (ie could still work) instead of
deprecated
Also the compareVersion should check both major and minor ie 3.4.0
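The suggestion above is that the version gate compare the full major.minor version (e.g. against 3.4.0) rather than the major number alone. A minimal Python sketch of that comparison logic (the helper name is hypothetical; SparkR's actual check uses R's `compareVersion`):

```python
def is_unsupported_r_version(version, minimum="3.4.0"):
    """Return True when `version` is older than `minimum`.

    Compares major, minor, and patch components numerically,
    so e.g. "3.10.0" correctly sorts after "3.4.0".
    """
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(version) < parse(minimum)
```

For example, `is_unsupported_r_version("3.3.2")` is True, while `"3.4.1"` and `"3.10.0"` pass the check.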
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22939#discussion_r232500194
--- Diff: R/pkg/R/functions.R ---
@@ -2230,6 +2237,32 @@ setMethod("from_json", signature(x = "Column",
schema = &q
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232500065
--- Diff: R/pkg/R/SQLContext.R ---
@@ -172,36 +257,72 @@ getDefaultSqlSource <- function() {
createDataFrame <- function(data, schema
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232499902
--- Diff: R/pkg/R/SQLContext.R ---
@@ -147,6 +147,91 @@ getDefaultSqlSource <- function() {
l[["spark.sql.sources
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232499848
--- Diff: R/pkg/tests/fulltests/test_sparkSQL.R ---
@@ -307,6 +307,64 @@ test_that("create DataFrame from RDD", {
unsetH
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232499794
--- Diff: R/pkg/tests/fulltests/test_sparkSQL.R ---
@@ -307,6 +307,64 @@ test_that("create DataFrame from RDD", {
unsetH
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22993#discussion_r232499645
--- Diff: common/unsafe/src/main/java/org/apache/spark/unsafe/Platform.java
---
@@ -67,6 +67,59 @@
unaligned = _unaligned
GitHub user felixcheung opened a pull request:
https://github.com/apache/spark/pull/23007
[SPARK-26010] fix vignette eval with Java 11
## What changes were proposed in this pull request?
changes in vignette only to disable eval
## How was this patch tested
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22993#discussion_r232477875
--- Diff: common/unsafe/src/main/java/org/apache/spark/unsafe/Platform.java
---
@@ -67,6 +67,59 @@
unaligned = _unaligned
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/22993
what settings we need to allow `illegal reflective access`
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/22989
and catching Error or Throwable..
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22989#discussion_r232477783
--- Diff: scalastyle-config.xml ---
@@ -240,6 +240,18 @@ This file is divided into 3 sections:
]]>
+
+throw
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/22977
right, I mean both this and that should be part of the process
"post-release"
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232477365
--- Diff: R/pkg/R/SQLContext.R ---
@@ -147,6 +147,55 @@ getDefaultSqlSource <- function() {
l[["spark.sql.sources
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232477325
--- Diff: R/pkg/R/SQLContext.R ---
@@ -189,19 +238,67 @@ createDataFrame <- function(data, schema = NULL,
samplingRatio = 1.0,
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232477271
--- Diff: R/pkg/R/SQLContext.R ---
@@ -189,19 +238,67 @@ createDataFrame <- function(data, schema = NULL,
samplingRatio = 1.0,
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232477257
--- Diff: R/pkg/R/SQLContext.R ---
@@ -189,19 +238,67 @@ createDataFrame <- function(data, schema = NULL,
samplingRatio = 1.0,
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232477171
--- Diff: R/pkg/R/SQLContext.R ---
@@ -172,10 +221,10 @@ getDefaultSqlSource <- function() {
createDataFrame <- function(data, schema
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232477155
--- Diff: R/pkg/R/SQLContext.R ---
@@ -147,6 +147,55 @@ getDefaultSqlSource <- function() {
l[["spark.sql.sources
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232477131
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/api/r/SQLUtils.scala
---
@@ -225,4 +226,25 @@ private[sql] object SQLUtils extends Logging
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/22997
btw, please see the page https://spark.apache.org/contributing.html and
particularly "Pull Request" on the format.
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/22997
thx, but I'm not sure about this approach. this step will now cause hadoop
jar to be packaged into the release tarball of hadoop-provided, which is
undoing the point of hadoop-pro
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/22977
I think also there is a hive metastore test that downloads spark release
jar?
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22967#discussion_r232178323
--- Diff: docs/sparkr.md ---
@@ -133,7 +133,7 @@ specifying `--packages` with `spark-submit` or `sparkR`
commands, or if initiali
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232170936
--- Diff: R/pkg/R/SQLContext.R ---
@@ -147,6 +147,55 @@ getDefaultSqlSource <- function() {
l[["spark.sql.sources
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232176721
--- Diff: R/pkg/R/SQLContext.R ---
@@ -189,19 +238,67 @@ createDataFrame <- function(data, schema = NULL,
samplingRatio = 1.0,
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232172687
--- Diff: R/pkg/R/SQLContext.R ---
@@ -147,6 +147,55 @@ getDefaultSqlSource <- function() {
l[["spark.sql.sources
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232169938
--- Diff: R/pkg/R/SQLContext.R ---
@@ -147,6 +147,55 @@ getDefaultSqlSource <- function() {
l[["spark.sql.sources
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232173367
--- Diff: R/pkg/R/SQLContext.R ---
@@ -189,19 +238,67 @@ createDataFrame <- function(data, schema = NULL,
samplingRatio = 1.0,
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232173043
--- Diff: R/pkg/R/SQLContext.R ---
@@ -172,10 +221,10 @@ getDefaultSqlSource <- function() {
createDataFrame <- function(data, schema
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232167634
--- Diff: R/pkg/R/SQLContext.R ---
@@ -147,6 +147,55 @@ getDefaultSqlSource <- function() {
l[["spark.sql.sources
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232172546
--- Diff: R/pkg/R/SQLContext.R ---
@@ -147,6 +147,55 @@ getDefaultSqlSource <- function() {
l[["spark.sql.sources
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232177132
--- Diff: R/pkg/R/SQLContext.R ---
@@ -189,19 +238,67 @@ createDataFrame <- function(data, schema = NULL,
samplingRatio = 1.0,
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232167926
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/api/r/SQLUtils.scala
---
@@ -225,4 +226,25 @@ private[sql] object SQLUtils extends Logging
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232176774
--- Diff: R/pkg/R/SQLContext.R ---
@@ -189,19 +238,67 @@ createDataFrame <- function(data, schema = NULL,
samplingRatio = 1.0,
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232171176
--- Diff: R/pkg/R/SQLContext.R ---
@@ -147,6 +147,55 @@ getDefaultSqlSource <- function() {
l[["spark.sql.sources
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232167480
--- Diff: R/pkg/R/SQLContext.R ---
@@ -147,6 +147,55 @@ getDefaultSqlSource <- function() {
l[["spark.sql.sources
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r232167110
--- Diff: R/pkg/R/SQLContext.R ---
@@ -215,14 +278,16 @@ createDataFrame <- function(data, schema = NULL,
samplingRatio = 1.0,
}
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22939#discussion_r232166370
--- Diff: R/pkg/R/functions.R ---
@@ -2230,6 +2237,32 @@ setMethod("from_json", signature(x = "Column",
schema = &q
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22967#discussion_r231819016
--- Diff: docs/sparkr.md ---
@@ -133,7 +133,7 @@ specifying `--packages` with `spark-submit` or `sparkR`
commands, or if initiali
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/22932
Does it have different values for
new native ORC writer, old Hive ORC writer
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22948#discussion_r231598045
--- Diff: dev/appveyor-install-dependencies.ps1 ---
@@ -115,7 +115,7 @@ $env:Path += ";$env:HADOOP_HOME\bin"
Po
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22921#discussion_r231596680
--- Diff: R/pkg/R/functions.R ---
@@ -1663,9 +1692,24 @@ setMethod("toDegrees",
#' @aliases toRadians toRadians,Column-metho
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22921#discussion_r231403827
--- Diff: R/pkg/R/functions.R ---
@@ -319,6 +319,27 @@ setMethod("acos",
column(jc)
})
+#
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22939#discussion_r231403096
--- Diff: R/pkg/R/functions.R ---
@@ -2230,6 +2237,32 @@ setMethod("from_json", signature(x = "Column",
schema = &q
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r231402726
--- Diff: R/pkg/R/SQLContext.R ---
@@ -147,6 +147,30 @@ getDefaultSqlSource <- function() {
l[["spark.sql.sources
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r231402235
--- Diff: R/pkg/R/SQLContext.R ---
@@ -147,6 +147,30 @@ getDefaultSqlSource <- function() {
l[["spark.sql.sources
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r231402297
--- Diff: R/pkg/R/SQLContext.R ---
@@ -172,15 +196,17 @@ getDefaultSqlSource <- function() {
createDataFrame <- function(data, schema
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r231402063
--- Diff: R/pkg/R/SQLContext.R ---
@@ -147,6 +147,30 @@ getDefaultSqlSource <- function() {
l[["spark.sql.sources
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22954#discussion_r231401994
--- Diff: R/pkg/R/SQLContext.R ---
@@ -147,6 +147,30 @@ getDefaultSqlSource <- function() {
l[["spark.sql.sources
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/22939#discussion_r231025592
--- Diff: R/pkg/R/functions.R ---
@@ -205,11 +205,18 @@ NULL
#' also supported for the schema.
#' \item \cod