Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14776
Merged build finished. Test PASSed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14776
**[Test build #64319 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64319/consoleFull)**
for PR 14776 at commit
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/14181#discussion_r75978678
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -1534,15 +1534,15 @@ class Dataset[T] private[sql](
* Returns a new
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14777
**[Test build #64320 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64320/consoleFull)**
for PR 14777 at commit
GitHub user JoshRosen opened a pull request:
https://github.com/apache/spark/pull/14777
[SPARK-17205] Literal.sql should handle Infinity and NaN
This patch updates `Literal.sql` to properly generate SQL for `NaN` and
`Infinity` float and double literals: these special values need
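The truncated SPARK-17205 description above points at a real pitfall: bare `NaN` or `Infinity` tokens are not valid SQL literal syntax, so they cannot simply be printed as-is. A hedged sketch of one common approach (the object and method names here are invented, and the exact output format is an assumption, not the patch's actual code) is to wrap the special values in a `CAST` from a string literal:

```scala
// Hypothetical sketch: rendering special floating-point values as SQL text.
// Bare "NaN"/"Infinity" would not round-trip through a SQL parser, so the
// special cases are wrapped in a CAST from a string literal instead.
object SqlLiteralSketch {
  def doubleToSql(v: Double): String = v match {
    case Double.PositiveInfinity => "CAST('Infinity' AS DOUBLE)"
    case Double.NegativeInfinity => "CAST('-Infinity' AS DOUBLE)"
    case _ if v.isNaN            => "CAST('NaN' AS DOUBLE)" // NaN != NaN, so a guard is needed
    case _                       => v.toString + "D"        // ordinary double literal
  }
}
```

Note the guard for `NaN`: since `NaN` is not equal to itself, an equality pattern would never match it.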
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14776
**[Test build #64319 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64319/consoleFull)**
for PR 14776 at commit
GitHub user junyangq opened a pull request:
https://github.com/apache/spark/pull/14776
[SparkR][Minor] Fix doc for show method
## What changes were proposed in this pull request?
The original doc of `show` put methods for multiple classes together but
the text only talks
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14537
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14537
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64314/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14537
**[Test build #64314 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64314/consoleFull)**
for PR 14537 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14702
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14702
**[Test build #64318 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64318/consoleFull)**
for PR 14702 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14702
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64318/
Test FAILed.
---
Github user vijay1106 commented on the issue:
https://github.com/apache/spark/pull/5400
Hey does this address the issue of spark.sql.autoBroadcastJoinThreshold
cannot be more than 2GB?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14702
**[Test build #64318 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64318/consoleFull)**
for PR 14702 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14757#discussion_r75974295
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveExternalCatalogSuite.scala
---
@@ -21,26 +21,26 @@ import
Github user BryanCutler commented on the issue:
https://github.com/apache/spark/pull/13428
Thanks @JoshRosen, @brkyvz and @vanzin!
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/14617
@mallman thanks a lot for your comments, I will change the UI to split into
separate columns.
Yes, as you mentioned current executor memory usage tracked in Standalone
Master only shows
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14761
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14761
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64315/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14761
**[Test build #64315 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64315/consoleFull)**
for PR 14761 at commit
Github user Sherry302 commented on the issue:
https://github.com/apache/spark/pull/14769
Yes. Are they ok now? @rxin
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14637
**[Test build #64317 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64317/consoleFull)**
for PR 14637 at commit
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/14637
retest this please
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14769
Can you fix the title and description?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14753
**[Test build #64316 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64316/consoleFull)**
for PR 14753 at commit
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14761#discussion_r75970587
--- Diff: R/pkg/R/sparkR.R ---
@@ -550,3 +532,27 @@ processSparkPackages <- function(packages) {
}
splittedPackages
}
+
+#
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14761
**[Test build #64315 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64315/consoleFull)**
for PR 14761 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14607
Build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14607
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64310/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14607
**[Test build #64310 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64310/consoleFull)**
for PR 14607 at commit
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/14753#discussion_r75969912
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/TypedImperativeAggregateSuite.scala
---
@@ -0,0 +1,235 @@
+/*
+ * Licensed to the Apache
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14607
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14607
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64311/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14607
**[Test build #64311 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64311/consoleFull)**
for PR 14607 at commit
Github user sameeragarwal commented on the issue:
https://github.com/apache/spark/pull/14607
LGTM pending jenkins
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/14181#discussion_r75968909
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -1534,15 +1534,15 @@ class Dataset[T] private[sql](
* Returns a new
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/14753#discussion_r75968351
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/interfaces.scala
---
@@ -389,3 +389,146 @@ abstract class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14637
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64312/
Test FAILed.
---
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/14753#discussion_r75968233
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/interfaces.scala
---
@@ -389,3 +389,175 @@ abstract class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14637
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14637
**[Test build #64312 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64312/consoleFull)**
for PR 14637 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14637
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64303/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14637
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14702
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14537
**[Test build #64314 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64314/consoleFull)**
for PR 14537 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14637
**[Test build #64303 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64303/consoleFull)**
for PR 14637 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14702
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64313/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14702
**[Test build #64313 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64313/consoleFull)**
for PR 14702 at commit
Github user rajeshbalamohan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14537#discussion_r75967137
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -237,21 +237,26 @@ private[hive] class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13440
Build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13440
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64309/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13440
**[Test build #64309 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64309/consoleFull)**
for PR 13440 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14702
**[Test build #64313 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64313/consoleFull)**
for PR 14702 at commit
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/14702
@rxin : I have updated the description to include more info on changes done
and future todos
---
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/14702#discussion_r7591
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ScriptTransformationExec.scala
---
@@ -0,0 +1,312 @@
+/*
+ * Licensed to the
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14768
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64308/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14768
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14768
**[Test build #64308 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64308/consoleFull)**
for PR 14768 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14757#discussion_r75963484
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalogSuite.scala
---
@@ -40,6 +40,15 @@ abstract class
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14553#discussion_r75962153
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSourceSuite.scala
---
@@ -727,6 +732,48 @@ class FileStreamSourceSuite
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14553#discussion_r75962073
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/socket.scala
---
@@ -24,21 +24,24 @@ import java.text.SimpleDateFormat
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14553#discussion_r75961808
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -244,6 +250,21 @@ class StreamExecution(
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14553#discussion_r75961667
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/MetadataLog.scala
---
@@ -48,4 +49,13 @@ trait MetadataLog[T] {
*
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/14726#discussion_r75960177
--- Diff:
core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSorterSpillReader.java
---
@@ -22,15 +22,21 @@
import
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14775#discussion_r75956901
--- Diff: R/pkg/R/backend.R ---
@@ -37,12 +51,42 @@ callJMethod <- function(objId, methodName, ...) {
invokeJava(isStatic = FALSE, objId$id,
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14775#discussion_r75956948
--- Diff: R/pkg/R/backend.R ---
@@ -25,9 +25,23 @@ isInstanceOf <- function(jobj, className) {
callJMethod(cls, "isInstance", jobj)
}
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14775#discussion_r75956724
--- Diff: R/pkg/R/jobj.R ---
@@ -82,7 +82,20 @@ getClassName.jobj <- function(x) {
callJMethod(cls, "getName")
}
-cleanup.jobj <-
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14775#discussion_r75956533
--- Diff: R/pkg/inst/tests/testthat/test_jvm_api.R ---
@@ -0,0 +1,41 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14775#discussion_r75956166
--- Diff: R/pkg/R/backend.R ---
@@ -37,12 +51,42 @@ callJMethod <- function(objId, methodName, ...) {
invokeJava(isStatic = FALSE, objId$id,
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14775#discussion_r75956122
--- Diff: R/pkg/R/backend.R ---
@@ -37,12 +51,42 @@ callJMethod <- function(objId, methodName, ...) {
invokeJava(isStatic = FALSE, objId$id,
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14775#discussion_r75956062
--- Diff: R/pkg/R/backend.R ---
@@ -37,12 +51,42 @@ callJMethod <- function(objId, methodName, ...) {
invokeJava(isStatic = FALSE, objId$id,
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14775#discussion_r75955890
--- Diff: R/pkg/R/backend.R ---
@@ -37,12 +51,42 @@ callJMethod <- function(objId, methodName, ...) {
invokeJava(isStatic = FALSE, objId$id,
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14775#discussion_r75955803
--- Diff: R/pkg/R/backend.R ---
@@ -25,9 +25,23 @@ isInstanceOf <- function(jobj, className) {
callJMethod(cls, "isInstance", jobj)
}
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14775#discussion_r75955453
--- Diff: R/pkg/R/backend.R ---
@@ -25,9 +25,23 @@ isInstanceOf <- function(jobj, className) {
callJMethod(cls, "isInstance", jobj)
}
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14775#discussion_r75955348
--- Diff: R/pkg/R/backend.R ---
@@ -25,9 +25,23 @@ isInstanceOf <- function(jobj, className) {
callJMethod(cls, "isInstance", jobj)
}
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14775#discussion_r75955078
--- Diff: R/pkg/R/backend.R ---
@@ -25,9 +25,23 @@ isInstanceOf <- function(jobj, className) {
callJMethod(cls, "isInstance", jobj)
}
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14775
I think the downside of naming them as-is and keeping the signature (`...`
at the end) are that it would be very hard to change or add to the signature
later on (say, to add a
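The API-design concern in the comment above — a trailing `...` (varargs) parameter freezes the signature — can be illustrated with a small Scala analogue (the function names below are invented for illustration, not SparkR's actual API):

```scala
// Invented example: a varargs parameter must come last, so v1's signature
// cannot gain a new option without disturbing existing positional callers.
object ApiSketch {
  def invokeV1(method: String, args: Any*): String =
    s"$method(${args.mkString(", ")})"

  // Adding an option means inserting it before the varargs -- every existing
  // call site that passed arguments positionally now means something different.
  def invokeV2(method: String, timeoutMs: Long, args: Any*): String =
    s"$method(${args.mkString(", ")}) /* timeout=$timeoutMs ms */"
}
```

This is why freezing a varargs-terminated signature early makes later evolution (adding a timeout, a flag, etc.) a breaking change.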
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14775#discussion_r75953989
--- Diff: R/pkg/R/jobj.R ---
@@ -82,7 +82,20 @@ getClassName.jobj <- function(x) {
callJMethod(cls, "getName")
}
-cleanup.jobj <-
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14775#discussion_r75953769
--- Diff: R/pkg/R/jobj.R ---
@@ -82,7 +82,20 @@ getClassName.jobj <- function(x) {
callJMethod(cls, "getName")
}
-cleanup.jobj <-
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14637
**[Test build #64312 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64312/consoleFull)**
for PR 14637 at commit
Github user mgummelt commented on a diff in the pull request:
https://github.com/apache/spark/pull/14637#discussion_r75953420
--- Diff: mesos/pom.xml ---
@@ -0,0 +1,109 @@
+
+
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14775#discussion_r75953464
--- Diff: R/pkg/R/backend.R ---
@@ -37,12 +51,42 @@ callJMethod <- function(objId, methodName, ...) {
invokeJava(isStatic = FALSE, objId$id,
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14775#discussion_r75953330
--- Diff: R/pkg/R/backend.R ---
@@ -37,12 +51,42 @@ callJMethod <- function(objId, methodName, ...) {
invokeJava(isStatic = FALSE, objId$id,
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14607
**[Test build #64311 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64311/consoleFull)**
for PR 14607 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14607
**[Test build #64310 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64310/consoleFull)**
for PR 14607 at commit
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/14607#discussion_r75949718
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -518,21 +550,87 @@ case class AlterTableRecoverPartitionsCommand(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14774
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64300/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14774
Merged build finished. Test PASSed.
---
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/14607#discussion_r75947797
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -518,21 +550,87 @@ case class AlterTableRecoverPartitionsCommand(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14774
**[Test build #64300 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64300/consoleFull)**
for PR 14774 at commit
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/14607#discussion_r75947173
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -443,6 +446,31 @@ case class AlterTableDropPartitionCommand(
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14731#discussion_r75946809
--- Diff: docs/streaming-programming-guide.md ---
@@ -644,13 +644,39 @@ methods for creating DStreams from files as input
sources.
Github user MechCoder commented on the issue:
https://github.com/apache/spark/pull/14640
Just FYI, we plan to rename "LabelKFold" to "GroupKFold" in the next
version of sklearn as a label can mean several things. (including the target
label)
---
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/14731#discussion_r75945926
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala
---
@@ -196,29 +192,33 @@ class FileInputDStream[K, V, F
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/14731#discussion_r75945790
--- Diff: docs/streaming-programming-guide.md ---
@@ -644,13 +644,39 @@ methods for creating DStreams from files as input
sources.
Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/14731
The logic has got complex enough it merits unit tests. Pulling into
SparkHadoopUtils itself and writing some for the possible: simple, glob matches
one , glob matches 1+, glob doesn't match,
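The glob cases listed in the comment above (matches one, matches several, matches none) lend themselves to a small unit-testable helper. The sketch below uses `java.nio`'s glob matcher purely as a stand-in for the Hadoop globbing under discussion — the real code would go through `FileSystem.globStatus`, and `GlobSketch` is an invented name:

```scala
import java.nio.file.{FileSystems, Paths}

// Stand-in for Hadoop glob resolution: filter candidate path strings by a
// glob pattern. Only a sketch of the testable cases, not the Spark code.
object GlobSketch {
  def globMatches(pattern: String, candidates: Seq[String]): Seq[String] = {
    val matcher = FileSystems.getDefault.getPathMatcher(s"glob:$pattern")
    candidates.filter(p => matcher.matches(Paths.get(p)))
  }
}
```

Each of the cases named above then becomes one assertion: a pattern that hits exactly one path, one that hits several, and one that hits none.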
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14637
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14637
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64299/
Test PASSed.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14763
---