Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14705
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75566267
--- Diff: R/pkg/R/generics.R ---
@@ -735,6 +752,8 @@ setGeneric("between", function(x, bounds) {
standardGeneric("between") })
setGeneric("cast",
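The generics.R hunk above shows the `between` generic that several review comments touch. As a minimal standalone sketch (not SparkR's actual Column implementation; the `@rdname` tag and the numeric method are illustrative), this is how a generic declared in `R/pkg/R/generics.R` pairs with a roxygen block so its arguments are documented once on a shared Rd page:

```r
library(methods)

#' @rdname between
setGeneric("between", function(x, bounds) { standardGeneric("between") })

# Toy method (hypothetical, for illustration only): checks whether x falls
# within the closed interval given by bounds.
setMethod("between", signature(x = "numeric"),
          function(x, bounds) {
            x >= bounds[1] & x <= bounds[2]
          })
```

With this pattern, `between(5, c(1, 10))` dispatches to the numeric method, and roxygen collects the generic and its methods onto one documentation page keyed by `@rdname`.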
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75516177
--- Diff: R/pkg/R/DataFrame.R ---
@@ -932,7 +932,7 @@ setMethod("sample_frac",
#' @param x a SparkDataFrame.
#' @family SparkDataFrame
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75515842
--- Diff: R/pkg/R/functions.R ---
@@ -2276,9 +2276,8 @@ setMethod("n_distinct", signature(x = "Column"),
countDistinct(x, ...)
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75441096
--- Diff: R/pkg/R/DataFrame.R ---
@@ -932,7 +932,7 @@ setMethod("sample_frac",
#' @param x a SparkDataFrame.
#' @family SparkDataFrame functions
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75438540
--- Diff: R/pkg/R/DataFrame.R ---
@@ -932,7 +932,7 @@ setMethod("sample_frac",
#' @param x a SparkDataFrame.
#' @family SparkDataFrame
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75437776
--- Diff: R/pkg/R/functions.R ---
@@ -2276,9 +2276,8 @@ setMethod("n_distinct", signature(x = "Column"),
countDistinct(x, ...)
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75437423
--- Diff: R/pkg/R/functions.R ---
@@ -1335,7 +1336,7 @@ setMethod("rtrim",
#' @note sd since 1.6.0
setMethod("sd",
signature(x =
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75437149
--- Diff: R/pkg/R/mllib.R ---
@@ -917,14 +922,14 @@ setMethod("spark.lda", signature(data =
"SparkDataFrame"),
# Returns a summary of the AFT survival
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75431046
--- Diff: R/pkg/R/functions.R ---
@@ -319,7 +316,7 @@ setMethod("column",
#'
#' Computes the Pearson Correlation Coefficient for two Columns.
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75430253
--- Diff: R/pkg/R/SQLContext.R ---
@@ -727,6 +730,7 @@ dropTempView <- function(viewName) {
#' @param source The name of external data source
#'
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75430152
--- Diff: R/pkg/R/functions.R ---
@@ -1848,7 +1850,7 @@ setMethod("upper",
#' @note var since 1.6.0
setMethod("var",
signature(x =
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75430048
--- Diff: R/pkg/R/functions.R ---
@@ -3115,6 +3166,11 @@ setMethod("dense_rank",
#'
#' This is equivalent to the LAG function in SQL.
#'
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75429772
--- Diff: R/pkg/R/mllib.R ---
@@ -620,11 +625,12 @@ setMethod("predict", signature(object =
"KMeansModel"),
#' predictions on new data, and
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75429664
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3187,6 +3221,7 @@ setMethod("histogram",
#' @param x A SparkDataFrame
#' @param url JDBC database url of the
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75429658
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3003,9 +3036,10 @@ setMethod("str",
#' Returns a new SparkDataFrame with columns dropped.
#' This is a no-op
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75429536
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2464,8 +2489,10 @@ setMethod("unionAll",
#' Union two or more SparkDataFrames. This is equivalent to `UNION ALL`
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75429434
--- Diff: R/pkg/R/generics.R ---
@@ -735,6 +752,8 @@ setGeneric("between", function(x, bounds) {
standardGeneric("between") })
setGeneric("cast",
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75429158
--- Diff: R/pkg/R/mllib.R ---
@@ -917,14 +922,14 @@ setMethod("spark.lda", signature(data =
"SparkDataFrame"),
# Returns a summary of the AFT
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75429017
--- Diff: R/pkg/R/mllib.R ---
@@ -504,14 +504,15 @@ setMethod("summary", signature(object =
"IsotonicRegressionModel"),
#' Users can call
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75427819
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1719,12 +1732,13 @@ setMethod("[", signature(x = "SparkDataFrame"),
#' Subset
#'
#' Return subsets
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75427717
--- Diff: R/pkg/R/mllib.R ---
@@ -917,14 +922,14 @@ setMethod("spark.lda", signature(data =
"SparkDataFrame"),
# Returns a summary of the AFT survival
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75427619
--- Diff: R/pkg/R/functions.R ---
@@ -1848,7 +1850,7 @@ setMethod("upper",
#' @note var since 1.6.0
setMethod("var",
signature(x
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75427590
--- Diff: R/pkg/R/functions.R ---
@@ -1335,7 +1336,7 @@ setMethod("rtrim",
#' @note sd since 1.6.0
setMethod("sd",
signature(x =
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75427019
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1202,6 +1215,7 @@ setMethod("toRDD",
#' Groups the SparkDataFrame using the specified columns, so we can run
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75426918
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1202,6 +1215,7 @@ setMethod("toRDD",
#' Groups the SparkDataFrame using the specified columns, so we can run
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75425834
--- Diff: R/pkg/R/functions.R ---
@@ -362,8 +357,8 @@ setMethod("cov", signature(x = "characterOrColumn"),
#' @rdname cov
#'
-#'
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75425765
--- Diff: R/pkg/R/functions.R ---
@@ -362,8 +357,8 @@ setMethod("cov", signature(x = "characterOrColumn"),
#' @rdname cov
#'
-#'
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75425753
--- Diff: R/pkg/R/functions.R ---
@@ -319,7 +316,7 @@ setMethod("column",
#'
#' Computes the Pearson Correlation Coefficient for two Columns.
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75425616
--- Diff: R/pkg/R/functions.R ---
@@ -1273,12 +1271,15 @@ setMethod("round",
#' bround
#'
#' Returns the value of the column `e` rounded
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75424988
--- Diff: R/pkg/R/functions.R ---
@@ -2276,9 +2276,8 @@ setMethod("n_distinct", signature(x = "Column"),
countDistinct(x, ...)
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75424491
--- Diff: R/pkg/R/functions.R ---
@@ -832,7 +827,10 @@ setMethod("kurtosis",
#' The function by default returns the last values it sees. It will
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75424452
--- Diff: R/pkg/R/SQLContext.R ---
@@ -727,6 +730,7 @@ dropTempView <- function(viewName) {
#' @param source The name of external data source
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75424331
--- Diff: R/pkg/R/DataFrame.R ---
@@ -514,9 +519,10 @@ setMethod("registerTempTable",
#'
#' Insert the contents of a SparkDataFrame into a
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75424346
--- Diff: R/pkg/R/DataFrame.R ---
@@ -999,9 +1008,10 @@ setMethod("dim",
#' Collects all the elements of a SparkDataFrame and coerces them
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75424338
--- Diff: R/pkg/R/DataFrame.R ---
@@ -603,8 +611,9 @@ setMethod("persist",
#' Mark this SparkDataFrame as non-persistent, and remove all blocks for
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75424376
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2759,31 +2793,27 @@ setMethod("dropna",
dataFrame(sdf)
})
+#' @param
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75424317
--- Diff: R/pkg/R/DataFrame.R ---
@@ -120,8 +120,9 @@ setMethod("schema",
#'
#' Print the logical and physical Catalyst plans to the console
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75424303
--- Diff: R/pkg/R/functions.R ---
@@ -1273,12 +1271,15 @@ setMethod("round",
#' bround
#'
#' Returns the value of the column `e` rounded
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75424295
--- Diff: R/pkg/R/functions.R ---
@@ -1335,7 +1336,7 @@ setMethod("rtrim",
#' @note sd since 1.6.0
setMethod("sd",
signature(x =
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75424253
--- Diff: R/pkg/R/functions.R ---
@@ -1848,7 +1850,7 @@ setMethod("upper",
#' @note var since 1.6.0
setMethod("var",
signature(x
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75424182
--- Diff: R/pkg/R/functions.R ---
@@ -2114,20 +2116,22 @@ setMethod("pmod", signature(y = "Column"),
#' @rdname approxCountDistinct
#' @name
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75424170
--- Diff: R/pkg/R/functions.R ---
@@ -2676,6 +2679,11 @@ setMethod("format_string", signature(format =
"character", x = "Column"),
#' representing
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75424161
--- Diff: R/pkg/R/functions.R ---
@@ -2702,19 +2710,21 @@ setMethod("from_unixtime", signature(x = "Column"),
#' [12:05,12:10) but not in
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75424152
--- Diff: R/pkg/R/functions.R ---
@@ -2766,6 +2776,10 @@ setMethod("window", signature(x = "Column"),
#' NOTE: The position is not zero based, but 1
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75424138
--- Diff: R/pkg/R/functions.R ---
@@ -3115,6 +3166,11 @@ setMethod("dense_rank",
#'
#' This is equivalent to the LAG function in SQL.
#'
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75423807
--- Diff: R/pkg/R/mllib.R ---
@@ -620,11 +625,12 @@ setMethod("predict", signature(object =
"KMeansModel"),
#' predictions on new data, and
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75423748
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3187,6 +3221,7 @@ setMethod("histogram",
#' @param x A SparkDataFrame
#' @param url JDBC database url of
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75423733
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3003,9 +3036,10 @@ setMethod("str",
#' Returns a new SparkDataFrame with columns dropped.
#' This is a
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75423717
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2464,8 +2489,10 @@ setMethod("unionAll",
#' Union two or more SparkDataFrames. This is equivalent to `UNION
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75423643
--- Diff: R/pkg/R/mllib.R ---
@@ -917,14 +922,14 @@ setMethod("spark.lda", signature(data =
"SparkDataFrame"),
# Returns a summary of the AFT
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75422922
--- Diff: R/pkg/R/generics.R ---
@@ -735,6 +752,8 @@ setGeneric("between", function(x, bounds) {
standardGeneric("between") })
setGeneric("cast",
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75422686
--- Diff: R/pkg/R/mllib.R ---
@@ -504,14 +504,15 @@ setMethod("summary", signature(object =
"IsotonicRegressionModel"),
#' Users can call
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75422499
--- Diff: R/pkg/R/mllib.R ---
@@ -917,14 +922,14 @@ setMethod("spark.lda", signature(data =
"SparkDataFrame"),
# Returns a summary of the AFT
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75421912
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1202,6 +1215,7 @@ setMethod("toRDD",
#' Groups the SparkDataFrame using the specified columns, so we can run
GitHub user junyangq opened a pull request:
https://github.com/apache/spark/pull/14705
[SPARK-16508][SparkR] Fix CRAN undocumented/duplicated arguments warnings.
## What changes were proposed in this pull request?
This PR tries to fix all the remaining
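The warnings in the PR title come from `R CMD check --as-cran`, which reports arguments that appear in a function signature but not in the Rd page (`Undocumented arguments in documentation object`) and arguments documented more than once. A hedged sketch of the roxygen pattern the review comments revolve around, using a plain `data.frame` method rather than the SparkDataFrame implementation (the wording of the `@param` tags is illustrative, not the PR's text):

```r
library(methods)

#' Sample a fraction of rows
#'
#' Each argument is documented exactly once, on the page shared via @rdname;
#' methods reuse that page instead of repeating @param tags.
#'
#' @param x the object to sample.
#' @param fraction fraction of rows to keep, between 0 and 1.
#' @rdname sample_frac
setGeneric("sample_frac", function(x, fraction) standardGeneric("sample_frac"))

#' @rdname sample_frac
#' @note sample_frac since 1.4.0
setMethod("sample_frac", signature(x = "data.frame"),
          function(x, fraction) {
            x[sample(nrow(x), size = floor(fraction * nrow(x))), , drop = FALSE]
          })
```

Because the method block carries only `@rdname` and `@note`, no argument is documented twice, which is the shape of fix the "duplicated arguments" half of the warning calls for.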