Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14702
Can you update the description to say more about what this PR includes, and
what the future TODOs are?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user angolon commented on the issue:
https://github.com/apache/spark/pull/14710
Done, sorry!
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14705
Surely - we should have said `lint-r` as the baseline. There's definitely
more we could add, though. It would be great if we had the bandwidth to write more
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/8880
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/8880
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64035/
Test PASSed.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14709
I suspect array and struct literals will fail, looking at what Literal.sql
does. That said, it's an existing problem and we can fix that later.
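The class of problem pointed out above can be sketched with a toy value-to-SQL converter in Python. This is purely illustrative and is not Spark's `Literal.sql`; it only shows why scalar literals are easy to render as SQL text while arrays and structs need dedicated handling:

```python
def to_sql_literal(value):
    """Render a Python value as SQL literal text (toy sketch, not Spark)."""
    if value is None:
        return "NULL"
    if isinstance(value, bool):  # must come before int: bool is an int subtype
        return "TRUE" if value else "FALSE"
    if isinstance(value, (int, float)):
        return str(value)
    if isinstance(value, str):
        return "'" + value.replace("'", "''") + "'"  # escape embedded quotes
    if isinstance(value, list):
        # Nested types need their own syntax; a scalar-only converter
        # would fail right here, which mirrors the suspected gap.
        return "ARRAY(" + ", ".join(to_sql_literal(v) for v in value) + ")"
    raise TypeError(f"unsupported literal type: {type(value).__name__}")
```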
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/8880
**[Test build #64035 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64035/consoleFull)**
for PR 8880 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14700
**[Test build #3226 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3226/consoleFull)**
for PR 14700 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14711
**[Test build #64044 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64044/consoleFull)**
for PR 14711 at commit
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14118
Also LGTM other than that major question.
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14699
It's hard to say. Right now it is being converted on the [JVM
side](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/api/r/SQLUtils.scala#L63)
- so it is
GitHub user jagadeesanas2 opened a pull request:
https://github.com/apache/spark/pull/14711
[SPARK-16822][DOC] Support latex in scaladoc with MathJax
## What changes were proposed in this pull request?
LaTeX is rendered as plain code in `LinearRegression.scala`.
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75430253
--- Diff: R/pkg/R/SQLContext.R ---
@@ -727,6 +730,7 @@ dropTempView <- function(viewName) {
#' @param source The name of external data source
#'
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14118
With this change, do all empty (e.g. zero-sized string) values become null
values once they are read back?
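The question above comes down to a round-trip ambiguity that can be shown with the standard `csv` module (an illustration of the general problem only, not Spark's CSV data source):

```python
import csv
import io

# An empty string and a missing/null value serialize to the same empty
# field, so a reader cannot tell them apart unless the format reserves a
# distinct marker for one of them (e.g. a nullValue/emptyValue convention).
buf = io.StringIO()
csv.writer(buf).writerow(["a", "", "c"])  # "" may mean empty string or null
buf.seek(0)
row = next(csv.reader(buf))  # the middle field comes back as "" either way
```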
---
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75430152
--- Diff: R/pkg/R/functions.R ---
@@ -1848,7 +1850,7 @@ setMethod("upper",
#' @note var since 1.6.0
setMethod("var",
signature(x =
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75430048
--- Diff: R/pkg/R/functions.R ---
@@ -3115,6 +3166,11 @@ setMethod("dense_rank",
#'
#' This is equivalent to the LAG function in SQL.
#'
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r7543
--- Diff: R/pkg/R/functions.R ---
@@ -3115,6 +3166,11 @@ setMethod("dense_rank",
#'
#' This is equivalent to the LAG function in SQL.
#'
Github user sun-rui commented on the issue:
https://github.com/apache/spark/pull/14639
If in the future SparkConf is needed, instead of passing all Spark conf to
R via env variables, we can expose an API for accessing SparkConf in the R
backend, similar to that in PySpark.
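The two designs being weighed here, shipping every configuration value through environment variables versus exposing an accessor API backed by the driver's SparkConf, can be sketched as follows. All names in this sketch (`SPARKR_CONF_*`, `Backend`, `get_conf`) are hypothetical, not real SparkR internals:

```python
import os

# Design A (current approach): each config is serialized into an
# environment variable before the child process starts. The naming
# scheme below is made up for illustration.
os.environ["SPARKR_CONF_spark.master"] = "local[2]"

# Design B (proposed): the backend owns the real SparkConf and exposes
# a lookup call, so nothing needs to be copied into the environment.
class Backend:
    """Stand-in for the JVM backend holding the driver's SparkConf."""

    def __init__(self, conf):
        self._conf = dict(conf)

    def get_conf(self, key, default=None):
        return self._conf.get(key, default)

backend = Backend({"spark.master": "local[2]",
                   "spark.submit.deployMode": "client"})
mode = backend.get_conf("spark.submit.deployMode")
```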
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14710
Can you put a more descriptive title for the change?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14708
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64034/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14708
Merged build finished. Test PASSed.
---
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/14447
@felixcheung sure, no problem
---
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75429772
--- Diff: R/pkg/R/mllib.R ---
@@ -620,11 +625,12 @@ setMethod("predict", signature(object =
"KMeansModel"),
#' predictions on new data, and
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14708
**[Test build #64034 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64034/consoleFull)**
for PR 14708 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14705
@felixcheung Thanks for the kind explanation. BTW, it'd also be great if it just
had a sentence, for example, `"For R code, Apache Spark follows lint-r"`, in the
wiki just like Python has `"For
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75429664
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3187,6 +3221,7 @@ setMethod("histogram",
#' @param x A SparkDataFrame
#' @param url JDBC database url of the
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75429658
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3003,9 +3036,10 @@ setMethod("str",
#' Returns a new SparkDataFrame with columns dropped.
#' This is a no-op
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75429536
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2464,8 +2489,10 @@ setMethod("unionAll",
#' Union two or more SparkDataFrames. This is equivalent to `UNION ALL`
Github user zjffdu commented on the issue:
https://github.com/apache/spark/pull/14639
Thanks @sun-rui, `EXISTING_SPARKR_BACKEND_PORT` does indicate cluster mode
indirectly for now. But here not only is deployMode unknown on the R side, but so
are master and other Spark configurations. For
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75429434
--- Diff: R/pkg/R/generics.R ---
@@ -735,6 +752,8 @@ setGeneric("between", function(x, bounds) {
standardGeneric("between") })
setGeneric("cast",
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14447
we are looking at establishing some guidelines in PR 14705. Let's hold on
for another day or two.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14639
**[Test build #64043 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64043/consoleFull)**
for PR 14639 at commit
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14705
Looking good - it looks like we are very close.
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75429158
--- Diff: R/pkg/R/mllib.R ---
@@ -917,14 +922,14 @@ setMethod("spark.lda", signature(data =
"SparkDataFrame"),
# Returns a summary of the AFT
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75429017
--- Diff: R/pkg/R/mllib.R ---
@@ -504,14 +504,15 @@ setMethod("summary", signature(object =
"IsotonicRegressionModel"),
#' Users can call
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14705
@inheritParams would be the way to go.
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14705
@HyukjinKwon - we don't have a coding style guide for R; we have some style
checks via lint-r.
In addition, the document style you are looking at is a bit different from
coding style - this
Github user sethah commented on the issue:
https://github.com/apache/spark/pull/13796
@dbtsai Thanks for all of your meticulous review. Very much appreciated!
Glad we can have MLOR in Spark ML now.
---
Github user junyangq commented on the issue:
https://github.com/apache/spark/pull/14705
@shivaram I found that perhaps a neat way to document R's glm, if we don't want
to remove it, is to use `@inheritParams stats::glm`. That will bring in all the
parameters from `stats::glm` not listed in
Github user sethah commented on a diff in the pull request:
https://github.com/apache/spark/pull/13796#discussion_r75428713
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/classification/MultinomialLogisticRegression.scala
---
@@ -0,0 +1,611 @@
+/*
+ * Licensed to the
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/13796
@sethah Thank you for this great weighted MLOR work in Spark 2.1. I merged
this PR into master; let's discuss/work on the follow-ups in separate JIRAs.
Thanks.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/13796
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14279
BTW I think this is pretty important for the 2.0.1 release.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14279
If we are introducing breaking changes to fix the bugs here, let's fix them
for real. (It's definitely a problem if we can't specify dateFormat and
timestampFormat separately.)
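Why a single shared option cannot serve both column types can be seen with stdlib parsing (plain `datetime.strptime`, standing in for the reader's format options; this is not Spark code):

```python
from datetime import datetime

# Dates and timestamps generally need different patterns, which is why a
# reader API should accept dateFormat and timestampFormat independently.
date_format = "%Y-%m-%d"
timestamp_format = "%Y-%m-%d %H:%M:%S"

d = datetime.strptime("2016-08-19", date_format).date()
ts = datetime.strptime("2016-08-19 12:30:00", timestamp_format)
# Parsing the bare date with timestamp_format raises ValueError, so one
# shared pattern would break one of the two column types.
```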
---
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13796#discussion_r75428163
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/classification/MultinomialLogisticRegression.scala
---
@@ -0,0 +1,611 @@
+/*
+ * Licensed to the
Github user petermaxlee commented on the issue:
https://github.com/apache/spark/pull/14710
cc @vanzin and @kayousterhout
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75427819
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1719,12 +1732,13 @@ setMethod("[", signature(x = "SparkDataFrame"),
#' Subset
#'
#' Return subsets
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/14222
Closing this now since PR #14576 is merged.
---
Github user viirya closed the pull request at:
https://github.com/apache/spark/pull/14222
---
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75427717
--- Diff: R/pkg/R/mllib.R ---
@@ -917,14 +922,14 @@ setMethod("spark.lda", signature(data =
"SparkDataFrame"),
# Returns a summary of the AFT survival
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75427619
--- Diff: R/pkg/R/functions.R ---
@@ -1848,7 +1850,7 @@ setMethod("upper",
#' @note var since 1.6.0
setMethod("var",
signature(x
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75427590
--- Diff: R/pkg/R/functions.R ---
@@ -1335,7 +1336,7 @@ setMethod("rtrim",
#' @note sd since 1.6.0
setMethod("sd",
signature(x =
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13950
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64031/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13950
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13950
**[Test build #64031 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64031/consoleFull)**
for PR 13950 at commit
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75427102
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -1088,109 +1108,86 @@ private[spark] class BlockManager(
}
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14710
Can one of the admins verify this patch?
---
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75427019
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1202,6 +1215,7 @@ setMethod("toRDD",
#' Groups the SparkDataFrame using the specified columns, so we can run
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426940
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -1088,109 +1108,86 @@ private[spark] class BlockManager(
}
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426951
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -1088,109 +1108,86 @@ private[spark] class BlockManager(
}
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75426918
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1202,6 +1215,7 @@ setMethod("toRDD",
#' Groups the SparkDataFrame using the specified columns, so we can run
GitHub user angolon opened a pull request:
https://github.com/apache/spark/pull/14710
[SPARK-16533][CORE]
## What changes were proposed in this pull request?
This pull request reverts the changes made as part of #14605, which
simply side-steps the deadlock issue. Instead, I
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426791
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -1088,109 +1108,86 @@ private[spark] class BlockManager(
}
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426781
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -1088,109 +1108,86 @@ private[spark] class BlockManager(
}
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426753
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -1088,109 +1108,86 @@ private[spark] class BlockManager(
}
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426714
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -1088,109 +1108,86 @@ private[spark] class BlockManager(
}
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14426
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14426
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64042/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14426
**[Test build #64042 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64042/consoleFull)**
for PR 14426 at commit
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426587
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockReplicationPrioritization.scala
---
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426605
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -20,6 +20,7 @@ package org.apache.spark.storage
import java.io._
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426552
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -1088,109 +1108,86 @@ private[spark] class BlockManager(
}
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426567
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -1088,109 +1108,86 @@ private[spark] class BlockManager(
}
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426531
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManagerId.scala
---
@@ -69,24 +72,37 @@ class BlockManagerId private (
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426476
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManagerId.scala
---
@@ -101,10 +117,18 @@ private[spark] object BlockManagerId {
* @param
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426446
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockManagerMasterEndpoint.scala
---
@@ -298,7 +310,17 @@ class BlockManagerMasterEndpoint(
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426435
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockManagerMasterEndpoint.scala
---
@@ -298,7 +310,17 @@ class BlockManagerMasterEndpoint(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14426
**[Test build #64042 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64042/consoleFull)**
for PR 14426 at commit
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426412
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockManagerMasterEndpoint.scala
---
@@ -298,7 +310,17 @@ class BlockManagerMasterEndpoint(
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426383
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockManagerMaster.scala ---
@@ -50,12 +50,20 @@ class BlockManagerMaster(
logInfo("Removal of
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426300
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -160,8 +163,25 @@ private[spark] class BlockManager(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14639
Merged build finished. Test FAILed.
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426231
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockReplicationPrioritization.scala
---
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14116
**[Test build #64041 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64041/consoleFull)**
for PR 14116 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14639
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64039/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14639
**[Test build #64039 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64039/consoleFull)**
for PR 14639 at commit
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426199
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockReplicationPrioritization.scala
---
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426189
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockReplicationPrioritization.scala
---
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426147
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockReplicationPrioritization.scala
---
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13152#discussion_r75426070
--- Diff: core/src/main/scala/org/apache/spark/storage/TopologyMapper.scala
---
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user lw-lin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14118#discussion_r75426062
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
@@ -370,7 +370,8 @@ class DataFrameReader private[sql](sparkSession:
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14639
**[Test build #64039 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64039/consoleFull)**
for PR 14639 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14680
@jaceklaskowski It seems the test [here]
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14118
**[Test build #64040 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64040/consoleFull)**
for PR 14118 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13152
**[Test build #3225 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3225/consoleFull)**
for PR 13152 at commit
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75425834
--- Diff: R/pkg/R/functions.R ---
@@ -362,8 +357,8 @@ setMethod("cov", signature(x = "characterOrColumn"),
#' @rdname cov
#'
-#'
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75425765
--- Diff: R/pkg/R/functions.R ---
@@ -362,8 +357,8 @@ setMethod("cov", signature(x = "characterOrColumn"),
#' @rdname cov
#'
-#'
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75425753
--- Diff: R/pkg/R/functions.R ---
@@ -319,7 +316,7 @@ setMethod("column",
#'
#' Computes the Pearson Correlation Coefficient for two Columns.
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14705#discussion_r75425616
--- Diff: R/pkg/R/functions.R ---
@@ -1273,12 +1271,15 @@ setMethod("round",
#' bround
#'
#' Returns the value of the column `e` rounded
Github user petermaxlee commented on the issue:
https://github.com/apache/spark/pull/14709
cc @cloud-fan and @hvanhovell
---