Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17083
Due to the change in (https://github.com/apache/spark/pull/16625), the
issue is obsolete in master. So it affects Spark 2.1 and 2.0.
---
If your project is set up for it, you can reply to this email and
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17071
(I put a test here -
https://github.com/apache/spark/pull/17071/files#diff-7e47859dbd409cc39f2908615fbd07ffR419)
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17068#discussion_r103214603
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchema.scala
---
@@ -40,7 +41,19 @@ private[csv] object
Github user windpiger commented on a diff in the pull request:
https://github.com/apache/spark/pull/16809#discussion_r103185139
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InsertIntoHadoopFsRelationCommand.scala
---
@@ -132,6 +132,9 @@ case class
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/16990#discussion_r103185158
--- Diff:
sql/hive/src/test/resources/ql/src/test/queries/clientpositive/smb_mapjoin_25.q
---
@@ -19,7 +19,7 @@ select * from (select a.key from
Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14731#discussion_r103187577
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala
---
@@ -140,7 +137,7 @@ class FileInputDStream[K, V, F <:
Github user zhengruifeng commented on the issue:
https://github.com/apache/spark/pull/16971
ping @MLnick @gatorsmile @thunterdb
---
Github user datumbox commented on the issue:
https://github.com/apache/spark/pull/17059
@srowen: Thanks for the comments. We are getting there. :)
I will handle the Long case as you suggest.
If you think people use SQL decimal types, I can include them at the end of
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/16990#discussion_r103183030
--- Diff:
sql/hive/src/test/resources/ql/src/test/queries/clientpositive/smb_mapjoin_25.q
---
@@ -19,7 +19,7 @@ select * from (select a.key from
Github user robbinspg commented on the issue:
https://github.com/apache/spark/pull/17039
@gatorsmile I'm glad it wasn't just me that found it complex ;-)
I've modified the patch to remove an unnecessary change as that query was
not ordered and the test suite code handles
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/15505
Jenkins, retest this please.
---
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16990#discussion_r103180859
--- Diff:
sql/hive/src/test/resources/ql/src/test/queries/clientpositive/smb_mapjoin_25.q
---
@@ -19,7 +19,7 @@ select * from (select a.key from
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/17083
Was this fixed otherwise in master, or did some other change make it
obsolete? Just trying to link this to whatever reason it's only a problem in
2.1, for the record.
---
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/14731#discussion_r103183646
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala
---
@@ -140,7 +137,7 @@ class FileInputDStream[K, V, F
Github user steveloughran commented on the issue:
https://github.com/apache/spark/pull/17080
LGTM. Verified the option name in the `org.apache.hadoop.fs.s3a.Constants` file
and the env var name in `com.amazonaws.SDKGlobalConfiguration`.
---
Github user robbinspg commented on the issue:
https://github.com/apache/spark/pull/17039
Jenkins retest please
---
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/16990#discussion_r103200223
--- Diff: python/pyspark/tests.py ---
@@ -1515,12 +1515,12 @@ def test_oldhadoop(self):
conf = {
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16990#discussion_r103179898
--- Diff: python/pyspark/tests.py ---
@@ -1515,12 +1515,12 @@ def test_oldhadoop(self):
conf = {
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/14731#discussion_r103184528
--- Diff: docs/streaming-programming-guide.md ---
@@ -615,35 +615,114 @@ which creates a DStream from text
data received over a TCP socket
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16867
@squito
Thanks a lot for your comments : )
>When checking speculatable tasks in TaskSetManager, the current code scans all
task infos and sorts durations of successful tasks in O(N log N) time
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17071
Sure, that sounds better and I can't find a reason not to follow. Let me maybe
add a single small Java one somewhere, because the deprecated Java one calls the
deprecated Scala one.
---
Github user datumbox commented on the issue:
https://github.com/apache/spark/pull/17059
Ignore my comment about duplicate code. It can be written to avoid it. I
will investigate handling the SQL decimal types as you recommended and I will
update the code tonight.
---
Github user jcamachor commented on a diff in the pull request:
https://github.com/apache/spark/pull/16990#discussion_r103184073
--- Diff:
sql/hive/src/test/resources/ql/src/test/queries/clientpositive/smb_mapjoin_25.q
---
@@ -19,7 +19,7 @@ select * from (select a.key from
Github user MLnick commented on a diff in the pull request:
https://github.com/apache/spark/pull/17076#discussion_r103187723
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/classification/LinearSVC.scala ---
@@ -440,19 +440,9 @@ private class LinearSVCAggregator(
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17083
Not sure why the Jenkins test cannot be started automatically.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17090
**[Test build #73543 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73543/testReport)**
for PR 17090 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16959
**[Test build #73544 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73544/testReport)**
for PR 16959 at commit
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17052
working on unit test failure
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17012
**[Test build #73548 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73548/testReport)**
for PR 17012 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17012
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73548/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17012
Merged build finished. Test FAILed.
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17082
Thanks! LGTM. Merging to master.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16774
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16774
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73545/
Test PASSed.
---
GitHub user windpiger opened a pull request:
https://github.com/apache/spark/pull/17093
[SPARK-19761][SQL] Creating InMemoryFileIndex with an empty rootPaths fails
when PARALLEL_PARTITION_DISCOVERY_THRESHOLD is set to zero
## What changes were proposed in this pull request?
If
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16989
@squito
I've uploaded a design doc to jira, please take a look when you have time :)
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17092
Merged build finished. Test PASSed.
---
Github user markgrover commented on a diff in the pull request:
https://github.com/apache/spark/pull/17047#discussion_r103367049
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -2574,13 +2575,30 @@ private[spark] object Utils extends Logging {
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17092
**[Test build #73550 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73550/testReport)**
for PR 17092 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17092
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73550/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17079
**[Test build #73546 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73546/testReport)**
for PR 17079 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17094
**[Test build #73557 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73557/testReport)**
for PR 17094 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17052
**[Test build #73558 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73558/testReport)**
for PR 17052 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17095
**[Test build #73556 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73556/testReport)**
for PR 17095 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17079#discussion_r103373646
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileIndexSuite.scala
---
@@ -178,6 +178,33 @@ class FileIndexSuite extends
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17079
LGTM except two minor comments.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17079#discussion_r103373620
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileIndexSuite.scala
---
@@ -178,6 +178,33 @@ class FileIndexSuite extends
Github user imatiach-msft commented on the issue:
https://github.com/apache/spark/pull/17059
@datumbox I like the changes, I just had a minor concern about the code
where we call v.intValue and then compare this to v.doubleValue -- due to
precision issues, I'm not sure if this is
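The precision issue alluded to above (comparing `v.intValue` against `v.doubleValue`) can be demonstrated directly. This is an illustrative sketch, not the PR's code; Python's `float` is a 64-bit IEEE 754 double, so it behaves like a Java `double` here:

```python
# A 64-bit double has a 53-bit significand, so distinct integral values
# above 2**53 can round to the same double. Any check in the spirit of
# "v.intValue() == v.doubleValue()" can therefore give misleading answers
# for large values.
a = 2**53        # 9007199254740992, exactly representable as a double
b = 2**53 + 1    # a distinct integer ...
assert a != b
assert float(a) == float(b)   # ... that rounds to the very same double
```

This is why an equality comparison between an integral view and a double view of the same number is only trustworthy for magnitudes below 2**53.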
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17015
retest this please
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/10307
**[Test build #73567 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73567/testReport)**
for PR 10307 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17015#discussion_r103387597
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeColumnCommand.scala
---
@@ -40,38 +38,24 @@ case class
Github user windpiger commented on a diff in the pull request:
https://github.com/apache/spark/pull/17079#discussion_r103357633
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileIndexSuite.scala
---
@@ -178,6 +178,34 @@ class FileIndexSuite extends
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17062#discussion_r103357272
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/HashExpressionsSuite.scala
---
@@ -169,6 +171,96 @@ class
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17062#discussion_r103357588
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/HashExpressionsSuite.scala
---
@@ -169,6 +171,96 @@ class
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17062#discussion_r103300592
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/HashExpressionsSuite.scala
---
@@ -169,6 +171,96 @@ class
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/16715#discussion_r103357342
--- Diff: python/pyspark/ml/feature.py ---
@@ -120,6 +122,196 @@ def getThreshold(self):
return self.getOrDefault(self.threshold)
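For context, the `getOrDefault(self.threshold)` line in the diff excerpt follows pyspark's `Params` getter pattern: explicitly set values live in a param map, with a fallback to the param's default. Below is a minimal self-contained mimic of that pattern; the class and attribute names are simplified stand-ins, not pyspark's real implementation.

```python
# Minimal mimic of the pyspark Params getter/setter pattern shown in the
# diff excerpt above (a sketch, not pyspark's actual classes).
class Param:
    def __init__(self, name, default=None):
        self.name = name
        self.default = default

class HasThreshold:
    threshold = Param("threshold", default=0.5)

    def __init__(self):
        self._paramMap = {}  # holds only explicitly set values

    def setThreshold(self, value):
        self._paramMap[HasThreshold.threshold] = value
        return self  # pyspark setters return self for chaining

    def getOrDefault(self, param):
        # explicitly set value wins; otherwise fall back to the default
        return self._paramMap.get(param, param.default)

    def getThreshold(self):
        return self.getOrDefault(self.threshold)

m = HasThreshold()
assert m.getThreshold() == 0.5                   # default
assert m.setThreshold(0.8).getThreshold() == 0.8 # explicit value
```

Keeping set values separate from defaults is what lets pyspark distinguish "user set this param" from "param is at its default", which matters when params are copied between stages.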
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17062#discussion_r103281696
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/HashExpressionsSuite.scala
---
@@ -169,6 +171,96 @@ class
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17062#discussion_r103300013
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/HashExpressionsSuite.scala
---
@@ -169,6 +171,96 @@ class
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17062#discussion_r103357472
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/HashExpressionsSuite.scala
---
@@ -169,6 +171,96 @@ class
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/17062#discussion_r103300293
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/HashExpressionsSuite.scala
---
@@ -169,6 +171,96 @@ class
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/17062
@gatorsmile : can you please review this PR ?
---
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17012#discussion_r103359940
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/HDFSBackedStateStoreProvider.scala
---
@@ -274,7 +274,9 @@ private[state]
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17082
---
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/16954#discussion_r103362319
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -123,19 +123,36 @@ case class Not(child:
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16917
Let's use a meaningful title in the future :)
---
Github user windpiger commented on a diff in the pull request:
https://github.com/apache/spark/pull/17093#discussion_r103377566
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileIndexSuite.scala
---
@@ -178,6 +179,12 @@ class FileIndexSuite extends
Github user sethah closed the pull request at:
https://github.com/apache/spark/pull/13036
---
Github user sethah commented on the issue:
https://github.com/apache/spark/pull/13036
@holdenk please feel free to take this over. I can't find time to work on it.
---
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/16954#discussion_r103354299
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/subquery.scala
---
@@ -40,19 +42,179 @@ abstract class
Github user MechCoder closed the pull request at:
https://github.com/apache/spark/pull/14273
---
Github user kayousterhout commented on a diff in the pull request:
https://github.com/apache/spark/pull/16959#discussion_r103354382
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/OutputCommitCoordinator.scala ---
@@ -111,13 +115,13 @@ private[spark] class
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17012
ok to test
---
GitHub user Yunni opened a pull request:
https://github.com/apache/spark/pull/17092
[SPARK-18450][ML] Scala API Change for LSH AND-amplification
## What changes were proposed in this pull request?
Implemented a new Param numHashFunctions as the dimension of
AND-amplification
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17012#discussion_r103361529
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/state/StateStoreSuite.scala
---
@@ -682,6 +684,21 @@ private[state] object
Github user Yunni commented on a diff in the pull request:
https://github.com/apache/spark/pull/16715#discussion_r103361528
--- Diff: python/pyspark/ml/feature.py ---
@@ -120,6 +122,196 @@ def getThreshold(self):
return self.getOrDefault(self.threshold)
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17092
**[Test build #73550 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73550/testReport)**
for PR 17092 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17093
**[Test build #73552 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73552/testReport)**
for PR 17093 at commit
Github user windpiger commented on the issue:
https://github.com/apache/spark/pull/17093
cc @cloud-fan @gatorsmile
---
Github user sueann commented on a diff in the pull request:
https://github.com/apache/spark/pull/17090#discussion_r103366357
--- Diff: mllib/src/main/scala/org/apache/spark/ml/recommendation/ALS.scala
---
@@ -285,6 +285,43 @@ class ALSModel private[ml] (
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17090
**[Test build #73553 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73553/testReport)**
for PR 17090 at commit
GitHub user sethah opened a pull request:
https://github.com/apache/spark/pull/17094
[SPARK-19762][ML] Hierarchy for consolidating ML aggregator/loss code
## What changes were proposed in this pull request?
JIRA:
Github user sitalkedia commented on the issue:
https://github.com/apache/spark/pull/17088
>> This is quite drastic for a fetch failure: Spark already has mechanisms
in place to detect executor/host failure - which take care of these failure
modes.
Unfortunately, mechanisms
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16959
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73544/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16959
Merged build finished. Test PASSed.
---
Github user sethah commented on the issue:
https://github.com/apache/spark/pull/17094
Jenkins test this please.
---
GitHub user windpiger opened a pull request:
https://github.com/apache/spark/pull/17095
[SPARK-19763][SQL] Qualify the external datasource table location stored in
the catalog
## What changes were proposed in this pull request?
If we create an external datasource table with a
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17015#discussion_r103383279
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -349,36 +350,41 @@ object CatalogTypes {
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17015#discussion_r103387529
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeColumnCommand.scala
---
@@ -90,10 +74,10 @@ object AnalyzeColumnCommand
Github user windpiger commented on the issue:
https://github.com/apache/spark/pull/17079
There is no related test case for InMemoryFileIndex with a FileStatusCache.
While working on this [PR](https://github.com/apache/spark/pull/17081) and
adding a fileStatusCache in DataSource, I found this
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17012
**[Test build #73548 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73548/testReport)**
for PR 17012 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17047
**[Test build #73554 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73554/testReport)**
for PR 17047 at commit
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16819
@vanzin What do you think about the current approach? I have tested on the same
Spark hive-thriftserver; `spark.dynamicAllocation.maxExecutors` will
decrease if I kill 4 NodeManagers:
```
Github user sethah commented on the issue:
https://github.com/apache/spark/pull/17094
ping @MLnick @jkbradley
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17093
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17093
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73552/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17083
**[Test build #73551 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73551/testReport)**
for PR 17083 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17052
**[Test build #73559 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73559/testReport)**
for PR 17052 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17052
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17052
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73559/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17067
Merged build finished. Test FAILed.
---