Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17956
BTW, if we want to support JSON strings like `{"a": "-Infinity"}` as
the FLOAT type, I think we should also support float data in a string.
Below is an example.
```Scala
def
```
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17973
Can one of the admins verify this patch?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17964#discussion_r116352803
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SameResultSuite.scala ---
@@ -46,4 +48,10 @@ class SameResultSuite extends QueryTest
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17964
**[Test build #76893 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76893/testReport)**
for PR 17964 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17956
**[Test build #76894 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76894/testReport)**
for PR 17956 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17973
ok to test
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17973#discussion_r116354253
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
---
@@ -622,6 +622,31 @@ class CSVSuite extends QueryTest
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17308
Merged build finished. Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17308
**[Test build #76890 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76890/testReport)**
for PR 17308 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17308
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76890/
Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17964
**[Test build #76891 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76891/testReport)**
for PR 17964 at commit
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17964
LGTM
GitHub user phatak-dev opened a pull request:
https://github.com/apache/spark/pull/17972
[SPARK-20723][ML]Add intermediate storage level to tree based classifiers
## What changes were proposed in this pull request?
Currently Random Forest implementation caches the
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17964
retest this please
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17956
We are handling the JSON data sources here. Are the following inputs widely
used?
```
{"a": "+INF"}
{"a": "INF"}
{"a": "-INF"}
{"a": "NaN"}
{"a": "+NaN"}
{"a": "-NaN"}
```
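As background (an aside, not part of the PR): the JVM's own parser already draws this line. `java.lang.Float.parseFloat` accepts an optionally signed `NaN` or `Infinity` but rejects the short `INF` spellings, so supporting the inputs above would require an explicit mapping. A minimal check, using a hypothetical helper name:

```scala
// Hypothetical helper (not from the Spark code base): report whether the
// JVM float parser accepts a given textual spelling of a special value.
object SpecialFloatSpellings {
  def jvmParses(s: String): Boolean =
    try { java.lang.Float.parseFloat(s); true }
    catch { case _: NumberFormatException => false }
}
```

With this, `jvmParses` is true for the signed `NaN`/`Infinity` spellings and false for every `INF` variant.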
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17956
Note, we do not support the following cases
```
def floatRecords: Dataset[String] =
  spark.createDataset(spark.sparkContext.parallelize(
    """{"f": "18.00"}""" ::
```
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/12646
retest this please
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/12646
**[Test build #76895 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76895/testReport)**
for PR 12646 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17956
Unfortunately, we already support
```
"NaN"
"-Infinity"
"Infinity"
```
Now, this PR primarily targets avoiding the unnecessary conversion attempt.
Github user hhbyyh commented on a diff in the pull request:
https://github.com/apache/spark/pull/17940#discussion_r116351996
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/linalg/Matrices.scala
---
@@ -992,7 +992,24 @@ object Matrices {
new DenseMatrix(dm.rows,
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17972
Can one of the admins verify this patch?
Github user phatak-dev commented on the issue:
https://github.com/apache/spark/pull/17972
The current unit test case is rudimentary. Any help improving it is
appreciated.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/12646
**[Test build #76892 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76892/testReport)**
for PR 12646 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17956
Given the existing logic below:
```
if (lowerCaseValue.equals("nan") ||
  lowerCaseValue.equals("infinity") ||
  lowerCaseValue.equals("-infinity") ||
```
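Completing that comparison style into a self-contained sketch (the object and method names here are assumptions for illustration, not the actual Spark code), the idea is to map the three recognized spellings and fall through to ordinary parsing:

```scala
import java.util.Locale

// Sketch only: handles the three special spellings the quoted logic checks
// for, then falls back to plain float parsing for everything else.
object SpecialFloatConversion {
  def toFloatWithSpecials(value: String): Float = {
    val lowerCaseValue = value.toLowerCase(Locale.ROOT)
    if (lowerCaseValue == "nan") Float.NaN
    else if (lowerCaseValue == "infinity") Float.PositiveInfinity
    else if (lowerCaseValue == "-infinity") Float.NegativeInfinity
    else value.toFloat
  }
}
```

Note that `value.toFloat` throws `NumberFormatException` for unrecognized input, which keeps malformed records visible to the caller.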
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17973
**[Test build #76896 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76896/testReport)**
for PR 17973 at commit
Github user mikkokupsu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17973#discussion_r116353919
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
---
@@ -622,6 +622,31 @@ class CSVSuite extends
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17956
@gatorsmile, @cloud-fan and @viirya, could you take another look please? I
tried to get rid of all the behaviour changes existing in both previous PRs and
only leave the change to avoid the
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17956
**[Test build #76894 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76894/testReport)**
for PR 17956 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17956
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76894/
Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17956
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17964
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76893/
Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17964
Merged build finished. Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17973
**[Test build #76896 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76896/testReport)**
for PR 17973 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17964
**[Test build #76893 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76893/testReport)**
for PR 17964 at commit
Github user mikkokupsu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17973#discussion_r116354610
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
---
@@ -622,6 +622,31 @@ class CSVSuite extends
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17973
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17973
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76896/
Test PASSed.
Github user zero323 commented on the issue:
https://github.com/apache/spark/pull/17938
@gatorsmile Huh... in that case it looks like the parser (?) needs a little
bit of work, unless of course the following are features.
- Omitting `USING` doesn't work
```sql
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/12646
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/12646
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76895/
Test PASSed.
Github user zero323 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17965#discussion_r116355836
--- Diff: R/pkg/R/generics.R ---
@@ -799,6 +799,10 @@ setGeneric("write.df", function(df, path = NULL, ...)
{ standardGeneric("write.d
#' @export
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17965
**[Test build #76898 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76898/testReport)**
for PR 17965 at commit
Github user zero323 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17965#discussion_r116355839
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3769,3 +3769,33 @@ setMethod("alias",
sdf <- callJMethod(object@sdf, "alias", data)
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17938
When you omit `USING`, it's Hive-style CREATE TABLE syntax, which is very
different from Spark's. We should encourage users to use the Spark-style CREATE
TABLE syntax and only document it (with
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17298
Merged build finished. Test FAILed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17298
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76897/
Test FAILed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17298
**[Test build #76897 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76897/testReport)**
for PR 17298 at commit
Github user sharkdtu commented on the issue:
https://github.com/apache/spark/pull/17963
cc @srowen @ajbozarth
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17298
**[Test build #76897 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76897/testReport)**
for PR 17298 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17938
**[Test build #76899 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76899/testReport)**
for PR 17938 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17938
**[Test build #76900 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76900/testReport)**
for PR 17938 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17964
**[Test build #76901 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76901/testReport)**
for PR 17964 at commit
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17938
We are going to support bucketing in Hive-style CREATE TABLE syntax soon.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17956
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16199
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17956
Thanks, merging to master/2.2!
I think this change is pretty safe; we can discuss two things later:
1. whether we want to support more special strings like `Inf`
2. whether we want to make it
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17973#discussion_r116358020
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
---
@@ -622,6 +622,31 @@ class CSVSuite extends
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17956
Thank you everybody sincerely.
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17644#discussion_r116359799
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -408,9 +425,7 @@ private[hive] class HiveClientImpl(
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17644#discussion_r116359891
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -307,6 +307,27 @@ case class InsertIntoHiveTable(
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17970#discussion_r116359887
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -111,7 +111,8 @@ abstract class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17964
**[Test build #76901 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76901/testReport)**
for PR 17964 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17970
**[Test build #76902 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76902/testReport)**
for PR 17970 at commit
Github user zero323 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17965#discussion_r116355102
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3769,3 +3769,33 @@ setMethod("alias",
sdf <- callJMethod(object@sdf, "alias", data)
Github user zero323 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17969#discussion_r116355659
--- Diff: R/pkg/R/mllib_wrapper.R ---
@@ -0,0 +1,61 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17938
**[Test build #76899 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76899/testReport)**
for PR 17938 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17938
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17938
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76900/
Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17938
**[Test build #76900 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76900/testReport)**
for PR 17938 at commit
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17970#discussion_r116360222
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -111,7 +111,8 @@ abstract class
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17941
Thank you for the comments, @falaki and @felixcheung. I added the duplicate
link to the issue, SPARK-20684, and asked @falaki to close the JIRA issue
because he is the reporter.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17251
Could this fix be part of Spark 2.2.0, @cloud-fan and @gatorsmile ?
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17973#discussion_r116359183
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
---
@@ -622,6 +622,31 @@ class CSVSuite extends
Github user kevinyu98 commented on the issue:
https://github.com/apache/spark/pull/12646
test please.
GitHub user zero323 reopened a pull request:
https://github.com/apache/spark/pull/17965
[SPARK-20726][SPARKR] wrapper for SQL broadcast
## What changes were proposed in this pull request?
- Adds R wrapper for `o.a.s.sql.functions.broadcast`.
- Renames `broadcast` to
Github user zero323 closed the pull request at:
https://github.com/apache/spark/pull/17965
Github user mikkokupsu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17973#discussion_r116359395
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
---
@@ -622,6 +622,31 @@ class CSVSuite extends
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17644#discussion_r116359692
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalog.scala
---
@@ -17,6 +17,7 @@
package
GitHub user mikkokupsu opened a pull request:
https://github.com/apache/spark/pull/17973
[SPARK-20731][SQL] Add ability to change or omit the .csv file extension in
the CSV Data Source
## What changes were proposed in this pull request?
Add new option to CSV Data Source to make
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17956
Yea, let's focus on the topic. For the cases below:
```
{"a": NaN}
{"a": Infinity}
{"a": +Infinity}
{"a": -Infinity}
```
They are related with
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/12646
Merged build finished. Test FAILed.
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17964
LGTM - merging to master/2.2/2.1
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17964
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17964
@cloud-fan can you backport this to 2.1?
Github user zero323 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17965#discussion_r116366145
--- Diff: R/pkg/R/generics.R ---
@@ -799,6 +799,10 @@ setGeneric("write.df", function(df, path = NULL, ...)
{ standardGeneric("write.d
#' @export
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/17084
@imatiach-msft Thanks for the PR. Added a couple of comments. Sorry for the
delayed review.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17970
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76902/
Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17970
Merged build finished. Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17970
**[Test build #76902 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76902/testReport)**
for PR 17970 at commit
Github user actuaryzhang commented on a diff in the pull request:
https://github.com/apache/spark/pull/17084#discussion_r116364047
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/evaluation/BinaryClassificationEvaluator.scala
---
@@ -77,12 +87,16 @@ class
Github user actuaryzhang commented on a diff in the pull request:
https://github.com/apache/spark/pull/17084#discussion_r116364179
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/binary/BinaryConfusionMatrix.scala
---
@@ -22,22 +22,22 @@ package
Github user actuaryzhang commented on a diff in the pull request:
https://github.com/apache/spark/pull/17084#discussion_r116364061
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/evaluation/BinaryClassificationEvaluator.scala
---
@@ -36,12 +36,18 @@ import
Github user actuaryzhang commented on a diff in the pull request:
https://github.com/apache/spark/pull/17084#discussion_r116364140
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/BinaryClassificationMetrics.scala
---
@@ -41,13 +41,27 @@ import
Github user actuaryzhang commented on a diff in the pull request:
https://github.com/apache/spark/pull/17084#discussion_r116364224
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/BinaryClassificationMetrics.scala
---
@@ -146,11 +160,13 @@ class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/12646
**[Test build #76895 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76895/testReport)**
for PR 12646 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17965
Merged build finished. Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17965
**[Test build #76898 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76898/testReport)**
for PR 17965 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17965
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76898/
Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17938
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76899/
Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17938
Merged build finished. Test PASSed.