Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/10846
Merged build finished. Test PASSed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/10846
**[Test build #63367 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63367/consoleFull)**
for PR 10846 at commit
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/14266#discussion_r73928683
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/AggregateBenchmark.scala
---
@@ -1078,6 +1078,146 @@ class AggregateBenchmark
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/13680#discussion_r73926983
--- Diff:
sql/catalyst/src/main/java/org/apache/spark/sql/catalyst/expressions/UnsafeArrayData.java
---
@@ -25,55 +25,57 @@
import
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14543
This is already in `branch-1.6`:
https://github.com/apache/spark/blob/branch-1.6/build/mvn
We can't change releases/tags of course. Yeah, this is because non-current
versions eventually get
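The behavior described here (non-current Maven releases rotating off the primary download endpoint while remaining on the Apache archive) can be sketched as a shell fallback. This is an illustrative sketch, not the actual `build/mvn` code: the variable names and the `fetch_mvn` helper are my own, and the URL layout assumes the standard Apache dist/archive convention.

```shell
#!/usr/bin/env bash
# Illustrative sketch, not the actual build/mvn logic: current releases
# are served from the mirror network, while non-current versions remain
# only on archive.apache.org after they rotate off the mirrors.
MVN_VERSION="3.3.3"
TARBALL="apache-maven-${MVN_VERSION}-bin.tar.gz"
MIRROR_URL="https://www.apache.org/dyn/closer.lua/maven/maven-3/${MVN_VERSION}/binaries/${TARBALL}"
ARCHIVE_URL="https://archive.apache.org/dist/maven/maven-3/${MVN_VERSION}/binaries/${TARBALL}"

fetch_mvn() {
  # curl -f turns an HTTP 404 into a non-zero exit status, so the
  # archive URL is tried only when the mirror no longer has the file.
  curl -fsSLo "${TARBALL}" "${MIRROR_URL}" \
    || curl -fsSLo "${TARBALL}" "${ARCHIVE_URL}"
}

echo "${ARCHIVE_URL}"
```

Pinning the download to archive.apache.org alone would also work, at the cost of slower transfers, since the archive keeps every released version.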
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/14407
Sounds like no objections on using postfix seconds operator - I'll go ahead
and switch it to that.
---
Github user rickalm commented on the issue:
https://github.com/apache/spark/pull/14543
I was considering withdrawing the PR. Here are my thoughts:
Tag v1.6.2 specifies mvn 3.3.3 (unless user intentionally overrides). Users
who previously built wanting to replicate their previous
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14542
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63370/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14542
**[Test build #63370 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63370/consoleFull)**
for PR 14542 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14542
Merged build finished. Test PASSed.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14397
Hi, @hvanhovell .
Sorry for late update. I updated the PR description and code.
Could you review this PR again?
---
Github user holdenk commented on a diff in the pull request:
https://github.com/apache/spark/pull/14543#discussion_r73922880
--- Diff: build/mvn ---
@@ -72,7 +72,7 @@ install_mvn() {
local MVN_VERSION="3.3.3"
install_app \
-
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/14519
LGTM. It will be nice to see the comparison of shuffle write sizes, and then
it will be ready to merge. Thanks.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14543
Can one of the admins verify this patch?
---
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/14543
cc @srowen for build update
---
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14519#discussion_r73922053
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/regression/AFTSurvivalRegression.scala
---
@@ -478,21 +482,23 @@ object AFTSurvivalRegressionModel
GitHub user rickalm opened a pull request:
https://github.com/apache/spark/pull/14543
Bugs/mvn3.3.3
## What changes were proposed in this pull request?
Changing the URI for Maven 3.3.3; the version is no longer available at the
previous endpoint.
## How was this patch
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14539
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14539
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63366/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14539
**[Test build #63366 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63366/consoleFull)**
for PR 14539 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14397
**[Test build #63371 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63371/consoleFull)**
for PR 14397 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14542
**[Test build #63370 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63370/consoleFull)**
for PR 14542 at commit
Github user karthikvadla16 commented on the issue:
https://github.com/apache/spark/pull/5048
Hi,
Can anyone please share an example of using MatrixUDT as a datatype column
in a Spark SQL DataFrame? I'm trying to load a bunch of images into a DataFrame.
Each image as a matrix in
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14508
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/14508
Merging to master.
---
GitHub user vanzin opened a pull request:
https://github.com/apache/spark/pull/14542
[SPARK-16930][yarn] Fix a couple of races in cluster app initialization.
There are two narrow races that could cause the ApplicationMaster to miss
when the user application instantiates the
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14541
**[Test build #63369 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63369/consoleFull)**
for PR 14541 at commit
GitHub user tdas opened a pull request:
https://github.com/apache/spark/pull/14541
Make requestTotalExecutors public Developer API to be consistent with
requestExecutors/killExecutor
## What changes were proposed in this pull request?
RequestExecutors and killExecutor are
Github user tdas commented on the issue:
https://github.com/apache/spark/pull/14541
@rxin
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14376#discussion_r73917166
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/WindowExec.scala ---
@@ -565,7 +566,7 @@ private[execution] abstract class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13886
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63365/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13886
Merged build finished. Test PASSed.
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13146#discussion_r73916554
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/SparkLauncher.java ---
@@ -64,6 +64,10 @@
/** Configuration key for the number of executor
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13886
**[Test build #63365 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63365/consoleFull)**
for PR 13886 at commit
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14518
I think it is fine that `compression` takes precedence. btw, is this flag
used by other data sources?
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14518#discussion_r73914641
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/orc/OrcOptions.scala ---
@@ -31,7 +30,8 @@ private[orc] class OrcOptions(
* Acceptable
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14518#discussion_r73914816
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/orc/OrcQuerySuite.scala ---
@@ -161,6 +161,29 @@ class OrcQuerySuite extends QueryTest with
Github user dafrista commented on the issue:
https://github.com/apache/spark/pull/14515
This triggers the else case here:
https://github.com/apache/spark/blob/master/sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala#L368.
cc: @andrewor14
Can you please
Github user skonto commented on the issue:
https://github.com/apache/spark/pull/14414
I tested it with our suite. Two comments:
a. It's not convenient to have only one option to add the history server
URI. I had to create a properties file to pass it to the dispatcher process. I
Github user markhamstra commented on a diff in the pull request:
https://github.com/apache/spark/pull/14534#discussion_r73908742
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/server/SparkSQLOperationManager.scala
---
@@ -39,8 +39,10 @@
Github user markhamstra commented on the issue:
https://github.com/apache/spark/pull/14533
PR title typo? Intended "misleading"?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14522
**[Test build #63368 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63368/consoleFull)**
for PR 14522 at commit
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14522#discussion_r73907661
--- Diff: R/pkg/R/generics.R ---
@@ -551,7 +551,7 @@ setGeneric("merge")
#' @export
setGeneric("mutate", function(.data, ...)
Github user nsyca commented on the issue:
https://github.com/apache/spark/pull/14411
@hvanhovell,
Thanks for getting the PR merged and sorry for causing a few hiccups before
I got it right. It's my first PR.
I have opened a new JIRA, SPARK-16951, to track the NOT IN
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/10846
**[Test build #63367 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63367/consoleFull)**
for PR 10846 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/10846
ok to test
---
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14522#discussion_r73904520
--- Diff: R/pkg/R/WindowSpec.R ---
@@ -82,16 +82,18 @@ setMethod("partitionBy",
}
})
-#' orderBy
+#' Ordering
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14522#discussion_r73903804
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2121,7 +2121,7 @@ setMethod("arrange",
})
#' @rdname arrange
-#' @name orderBy
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14539
**[Test build #63366 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63366/consoleFull)**
for PR 14539 at commit
Github user sethah commented on the issue:
https://github.com/apache/spark/pull/14519
@yanboliang Can you do a quick test to make sure the shuffle write size is
the expected size? For example, in logistic regression only the gradient should
be serialized which is an array of
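The size check asked for above can be estimated with simple arithmetic (my own back-of-envelope sketch, not code from the PR): a dense double-precision gradient carries 8 bytes per coefficient, so the per-task shuffle write should be on the order of 8 * numFeatures bytes.

```shell
# Back-of-envelope estimate (illustrative): a dense float64 gradient of
# NUM_FEATURES coefficients serializes to roughly 8 bytes per value.
NUM_FEATURES=1000
BYTES_PER_DOUBLE=8
GRADIENT_BYTES=$((BYTES_PER_DOUBLE * NUM_FEATURES))
echo "${GRADIENT_BYTES}"   # prints 8000, i.e. roughly 8 KB per task
```

A measured shuffle write far above this order of magnitude would suggest that more than just the gradient array is being serialized.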
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13146
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63361/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13146
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13146
**[Test build #63361 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63361/consoleFull)**
for PR 13146 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13886
**[Test build #63365 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63365/consoleFull)**
for PR 13886 at commit
Github user mallman commented on a diff in the pull request:
https://github.com/apache/spark/pull/13818#discussion_r73892580
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -298,6 +298,7 @@ case class InsertIntoHiveTable(
Github user sethah commented on the issue:
https://github.com/apache/spark/pull/14520
Well, I suppose this won't go into another PR since the other one got
merged. I think it's correct to make this match the approach taken in Linear
Regression. The current code doesn't quite match
Github user skonto commented on the issue:
https://github.com/apache/spark/pull/14414
I will add more comments; I'm trying to test it.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14540
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63363/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14540
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14540
**[Test build #63363 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63363/consoleFull)**
for PR 14540 at commit
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14523
thanks, merging to master and 2.0!
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14539
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63360/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14539
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14539
**[Test build #63360 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63360/consoleFull)**
for PR 14539 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14523
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14517
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63364/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14517
**[Test build #63364 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63364/consoleFull)**
for PR 14517 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14517
Merged build finished. Test FAILed.
---
Github user MLnick commented on the issue:
https://github.com/apache/spark/pull/13621
cc @JeremyNixon also
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14517
**[Test build #63364 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63364/consoleFull)**
for PR 14517 at commit
Github user MLnick commented on the issue:
https://github.com/apache/spark/pull/14517
ok to test
---
Github user MLnick commented on a diff in the pull request:
https://github.com/apache/spark/pull/14517#discussion_r73884828
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -500,6 +500,41 @@ def partitionBy(self, *cols):
self._jwrite =
Github user MLnick commented on a diff in the pull request:
https://github.com/apache/spark/pull/14517#discussion_r73884740
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -500,6 +500,41 @@ def partitionBy(self, *cols):
self._jwrite =
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14519
Merged build finished. Test PASSed.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14113
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14519
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63362/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14519
**[Test build #63362 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63362/consoleFull)**
for PR 14519 at commit
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14113
thanks, merging to master and 2.0!
---
Github user MLnick commented on a diff in the pull request:
https://github.com/apache/spark/pull/14517#discussion_r73884350
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -500,6 +500,41 @@ def partitionBy(self, *cols):
self._jwrite =
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14501
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14501
thanks, merging to master!
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14494#discussion_r73881870
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/typedaggregators.scala
---
@@ -27,7 +27,7 @@ import
Github user yanboliang commented on a diff in the pull request:
https://github.com/apache/spark/pull/14519#discussion_r73881801
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/regression/AFTSurvivalRegression.scala
---
@@ -478,21 +482,23 @@ object AFTSurvivalRegressionModel
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14540
**[Test build #63363 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63363/consoleFull)**
for PR 14540 at commit
Github user MLnick commented on the issue:
https://github.com/apache/spark/pull/14540
ok to test
---
Github user MLnick commented on the issue:
https://github.com/apache/spark/pull/14520
@WeichenXu123 I believe #13729 already took care of the actual
serialization issue. Out of interest have you tested this impl here for a
difference in shuffle data read/write?
However,
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14519
**[Test build #63362 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63362/consoleFull)**
for PR 14519 at commit
Github user MLnick commented on the issue:
https://github.com/apache/spark/pull/12574
@debasish83 thanks, would like to get your comments especially around
`transform` semantics. I will be doing performance testing on this and post
numbers soon.
---
Github user eyalfa commented on the issue:
https://github.com/apache/spark/pull/14539
Fair enough. I think it's worth adding a negative test just to see what
we're dealing with.
---
Github user skonto commented on a diff in the pull request:
https://github.com/apache/spark/pull/14414#discussion_r73874399
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala
---
@@ -152,8 +152,13 @@ private[spark]
Github user MLnick commented on the issue:
https://github.com/apache/spark/pull/14524
I tend to agree the default should be a new random seed, and it is more
consistent with other libs. But @jkbradley seemed to explicitly want things to
default to reproducible behavior in
GitHub user zjffdu reopened a pull request:
https://github.com/apache/spark/pull/13146
[SPARK-13081][PYSPARK][SPARK_SUBMIT]. Allow setting pythonExec of driver and
executor through conf…
## What changes were proposed in this pull request?
Before this PR, users have to export
Github user zjffdu closed the pull request at:
https://github.com/apache/spark/pull/13146
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13146
**[Test build #63361 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63361/consoleFull)**
for PR 13146 at commit
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/14539
@eyalfa I am a bit hesitant to add yet another almost pointless
`LogicalPlan` node to Catalyst, and certainly not one that is functionally
exactly the same as a `Union`. This would require us to
Github user skonto commented on a diff in the pull request:
https://github.com/apache/spark/pull/14414#discussion_r73872335
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
---
@@ -37,32 +37,36 @@ import
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14538
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63359/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14538
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14538
**[Test build #63359 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63359/consoleFull)**
for PR 14538 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14539
**[Test build #63360 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63360/consoleFull)**
for PR 14539 at commit