Github user debugger87 commented on the issue:
https://github.com/apache/spark/pull/18900
My changes are not enough to support `createTime` in CatalogTablePartition;
I will check and re-commit again.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18492#discussion_r132628016
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/SparkListener.scala ---
@@ -291,6 +297,16 @@ private[spark] trait SparkListenerInterface {
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18492#discussion_r132628227
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/SparkListener.scala ---
@@ -291,6 +297,16 @@ private[spark] trait SparkListenerInterface {
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18843
LGTM, pending jenkins
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18843
retest this please
Github user eyalfa commented on the issue:
https://github.com/apache/spark/pull/18855
@cloud-fan I think I found the sbt setting that controlled max heap size
for forked tests, I've increased it from 3g to 6g.
cc: @srowen, @vanzin and @a-roberts you guys seem to be the last ones t
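For context, the sbt setting that controls the forked test JVM heap looks roughly like this (a sketch using the standard sbt 0.13-era keys; Spark's build may apply it through its own helpers, so the exact key used in the PR is an assumption here):

```scala
// build.sbt sketch: run tests in a forked JVM and raise its
// maximum heap from the previous 3g to 6g.
fork in Test := true
javaOptions in Test += "-Xmx6g"
```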
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/18917
Add the description of 'sbin/stop-slave.sh' in spark-standalone.html.
## What changes were proposed in this pull request?
1. Add the description of 'sbin/stop-slave.sh' in spark-
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18810
shall we improve the whole stage codegen framework and avoid generating
super long functions?
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/18810
@cloud-fan This is the right direction to work in. This change can be a
base used to check whether we generate super-long functions. Once we are
confident that the whole stage codegen framework is completel
Github user debugger87 commented on the issue:
https://github.com/apache/spark/pull/18900
@cloud-fan Have a look at this PR again?
Github user mpjlu commented on the issue:
https://github.com/apache/spark/pull/18899
retest this please
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18917
@guoxiaolongzte this stuff is too minor to make JIRAs and separate PRs for.
I'd refocus yourself on bigger changes
Github user mpjlu commented on the issue:
https://github.com/apache/spark/pull/18904
A gentle ping: @sethah @jkbradley
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18899
In theory there's no new functionality here so nothing new to test, but
more tests never hurt.
This seems OK. Is there any other call site where nnz is already known?
It is a nontri
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18806
"set" is pretty synonymous with "true" for boolean properties. Its name
includes 'enabled'. I think this is too trivial.
Github user caneGuy commented on the issue:
https://github.com/apache/spark/pull/18901
@vanzin take a look at this PR? Thanks.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/18917
Do you mean only a PR? No need to submit a JIRA?
Github user mpjlu commented on the issue:
https://github.com/apache/spark/pull/18899
For PR 18904: before this change, one iteration is about 58s; after this
change, one iteration is about 40s.
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17425
@gatorsmile I'm exploring Generate-related code. I'm curious why you didn't
revert `supportCodegen` in the end?
Github user mpjlu commented on the issue:
https://github.com/apache/spark/pull/18899
Hi @srowen, how about using our first version? Though it duplicates some
code, the change is small.
Github user debugger87 commented on the issue:
https://github.com/apache/spark/pull/18900
@cloud-fan Look at this PR again? I just put `createTime` into
CatalogTablePartition.
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18899
No, duplicate code like that is bad.
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18916
I think the problem is that this duplicates a bunch of documentation. The
extended description doesn't need to be in the code or help messages.
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18917
No JIRA, and, consider whether this is worth opening a PR for.
The docs were actually correct. The start-slave and stop-slave scripts
aren't really something the end user calls.
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/18872#discussion_r132647596
--- Diff:
mllib/src/test/scala/org/apache/spark/ml/source/libsvm/LibSVMRelationSuite.scala
---
@@ -126,6 +130,29 @@ class LibSVMRelationSuite extends SparkF
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/18917
Okay.
Github user yanboliang commented on the issue:
https://github.com/apache/spark/pull/18798
@thunterdb I'm on travel these days, will do a final pass and merge it on
next Monday/Tuesday. Thanks.
GitHub user heary-cao opened a pull request:
https://github.com/apache/spark/pull/18918
[SQL] Improve a special case for non-deterministic filters in the optimizer
## What changes were proposed in this pull request?
Currently, a lot of special handling is done for non-determinist
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18893
**[Test build #3886 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3886/testReport)**
for PR 18893 at commit
[`eaf5e52`](https://github.com/apache/spark/commit/e
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18899
@mpjlu sorry which benchmark are you referring to? PR 18904 doesn't seem to
benchmark just this in isolation. I just want to be sure the gain is significant
Github user mpjlu commented on the issue:
https://github.com/apache/spark/pull/18899
I did not test only this PR in isolation; I was working on PR 18904 and
found this performance difference.
GitHub user DonnyZone opened a pull request:
https://github.com/apache/spark/pull/18919
[SPARK-19471][SQL]AggregationIterator does not initialize the generated
result projection before using it
## What changes were proposed in this pull request?
Recently, we have also encou
Github user DonnyZone closed the pull request at:
https://github.com/apache/spark/pull/18919
Github user DonnyZone commented on the issue:
https://github.com/apache/spark/pull/18919
There are some conflicts; closing it first.
Github user aokolnychyi commented on the issue:
https://github.com/apache/spark/pull/18909
@gatorsmile I took a look at both PRs.
I quickly scanned PR #14866 and did not find tests for existence joins.
Also, `SQLConf.CROSS_JOINS_ENABLED = true` is checked only for `left_oute
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18914
retest this please
Github user ProtD commented on a diff in the pull request:
https://github.com/apache/spark/pull/18872#discussion_r132671477
--- Diff:
mllib/src/test/scala/org/apache/spark/ml/source/libsvm/LibSVMRelationSuite.scala
---
@@ -126,6 +130,29 @@ class LibSVMRelationSuite extends SparkFu
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18907
retest this please
Github user LucaCanali commented on a diff in the pull request:
https://github.com/apache/spark/pull/18724#discussion_r132679699
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala
---
@@ -1007,4 +1007,23 @@ class JDBCSuite extends SparkFunSuite
a
GitHub user DonnyZone opened a pull request:
https://github.com/apache/spark/pull/18920
[SPARK-19471][SQL]AggregationIterator does not initialize the generated
result projection before using it
## What changes were proposed in this pull request?
Recently, we have also encou
Github user DonnyZone commented on the issue:
https://github.com/apache/spark/pull/18920
Jenkins, test this please
Github user DonnyZone commented on the issue:
https://github.com/apache/spark/pull/18920
@hvanhovell, @yangw1234, @gatorsmile
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18865#discussion_r132688380
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JsonFileFormat.scala
---
@@ -114,7 +114,16 @@ class JsonFileFormat extends
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18893
**[Test build #3886 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3886/testReport)**
for PR 18893 at commit
[`eaf5e52`](https://github.com/apache/spark/commit/
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18893
Merged to master
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18893
GitHub user pjfanning opened a pull request:
https://github.com/apache/spark/pull/18921
[SPARK-21709][Build] sbt 0.13.16 and some plugin updates
## What changes were proposed in this pull request?
Update sbt version to 0.13.16. I think this is a useful stepping stone to
get
GitHub user neoremind opened a pull request:
https://github.com/apache/spark/pull/18922
[SPARK-21701][CORE] Enable RPC client to use SO_RCVBUF, SO_SNDBUF and
SO_BACKLOG in SparkConf
## What changes were proposed in this pull request?
1. TCP parameters like SO_RCVBUF, SO_SND
Github user thide commented on the issue:
https://github.com/apache/spark/pull/18846
@zsxwing Thank you for the pointer. I tested manually; as far as I can tell,
Spark works as expected even if we apply this patch. I was able to confirm that
driver/executor shut down when its connec
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/18874
The minimum count is still needed; it's needed between stages when the
number of tasks goes below the minimum count. It's either going to keep the
minimum number of executors or enough executors to run
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/18843
jenkins test this please
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/18846
retest this please
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/18915
LGTM. Could you also post some screenshots to show the effectiveness of
these changes? Thanks!
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18913#discussion_r132706353
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -1792,6 +1796,9 @@ class SparkContext(config: SparkConf) extends Logging
{
Github user debugger87 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18900#discussion_r132711854
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -986,6 +986,7 @@ private[hive] object HiveClientImpl {
GitHub user maasg opened a pull request:
https://github.com/apache/spark/pull/18923
[SPARK-21710][SS] Fix OOM on ConsoleSink with large inputs
## What changes were proposed in this pull request?
Replace a full `collect` with a `take` using the expected number of
elements as
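The idea behind that fix can be illustrated with a plain-Scala stand-in for the Dataset API (the names `firstRows` and `numRows` are illustrative, not Spark's actual API; the real change operates on a streaming `Dataset`):

```scala
// Sketch: a console sink should materialize only the rows it will print,
// rather than collecting the entire batch into driver memory.
object ConsoleSinkSketch {
  // `rows` stands in for a large streaming batch; `numRows` for the
  // sink's configured row limit.
  def firstRows(rows: Iterator[String], numRows: Int): Seq[String] =
    rows.take(numRows).toSeq // bounded memory, unlike a full collect

  def main(args: Array[String]): Unit = {
    val big = Iterator.tabulate(1000000)(i => s"row-$i")
    val shown = firstRows(big, 20)
    assert(shown.length == 20)
    assert(shown.head == "row-0")
  }
}
```

Because the iterator is consumed lazily, only `numRows` elements are ever held at once, which is what avoids the OOM on large inputs.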
Github user ArtRand commented on the issue:
https://github.com/apache/spark/pull/18519
@vanzin Fixed this up. Please have a look. Thanks.
Github user tejasapatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/16985#discussion_r132715845
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/sources/BucketedReadSuite.scala ---
@@ -543,6 +551,68 @@ abstract class BucketedReadSuite extends
Github user ArtRand commented on a diff in the pull request:
https://github.com/apache/spark/pull/18910#discussion_r132715684
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/deploy/mesos/config.scala
---
@@ -70,4 +70,12 @@ package object config {
"
Github user ArtRand commented on a diff in the pull request:
https://github.com/apache/spark/pull/18910#discussion_r132716461
--- Diff:
resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackendSuite.scala
---
@@ -582,6 +583,
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/16985
jenkins test this please
Github user ArtRand commented on a diff in the pull request:
https://github.com/apache/spark/pull/18910#discussion_r132715831
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
---
@@ -21,6 +21,7 @@ import org.
Github user ArtRand commented on a diff in the pull request:
https://github.com/apache/spark/pull/18910#discussion_r132715924
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendUtil.scala
---
@@ -162,7 +163,11 @@ private[
Github user ArtRand commented on the issue:
https://github.com/apache/spark/pull/18837
Hello @srowen could you have a look at this (and green light the testing)
when you have a chance?
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18907
retest this please
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18913#discussion_r132721021
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -1792,6 +1796,9 @@ class SparkContext(config: SparkConf) extends Logging
{
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18907
retest this please
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18907
test this please
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18907
ok to test
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/18907
I think something is up with jenkins. @shaneknapp could you take a look?
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18900#discussion_r132727520
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -97,7 +97,9 @@ object CatalogStorageFormat {
cas
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/18903
ok to test
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18900
ok to test
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/18903#discussion_r132729005
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/TimeWindowSuite.scala
---
@@ -77,6 +77,19 @@ class TimeWindowSuite extends S
GitHub user akopich opened a pull request:
https://github.com/apache/spark/pull/18924
[SPARK-14371] [MLLIB] OnlineLDAOptimizer should not collect stats for each
doc in mini-batch to driver
Hi,
as proposed by Joseph K. Bradley, `gammat` is not collected to the
driver
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18907
ok to test
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18907
Sometimes jobs don't like to trigger and there's nothing in the logs as to
exactly why. Since nothing was building, I decided to kick jenkins and then
retrigger this build.
Github user shaneknapp commented on the issue:
https://github.com/apache/spark/pull/18907
test this please
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/18810
@cloud-fan I am thinking about how we can split a super long function into
multiple functions, as `CodeGenerator.splitExpressions` does.
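As a rough illustration of that splitting strategy (pure Scala, not Spark's actual `CodeGenerator.splitExpressions` implementation; the `helper_N`/`apply` names are made up):

```scala
// Sketch: group a long list of generated Java statements into helper
// methods so no single method body grows past JVM-friendly size limits,
// then emit a top-level method that calls the helpers in order.
object SplitSketch {
  def split(stmts: Seq[String], maxPerFunc: Int): String = {
    val helpers = stmts.grouped(maxPerFunc).zipWithIndex.map {
      case (body, i) =>
        s"private void helper_$i() { ${body.mkString(" ")} }"
    }.toSeq
    val calls = helpers.indices.map(i => s"helper_$i();").mkString(" ")
    (helpers :+ s"public void apply() { $calls }").mkString("\n")
  }
}
```

Splitting this way keeps each generated method small enough for the JIT to compile, at the cost of extra method-call overhead between the helpers.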
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18918
Can one of the admins verify this patch?
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18921
Can one of the admins verify this patch?
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18923
Can one of the admins verify this patch?
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18920
Can one of the admins verify this patch?
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18922
Can one of the admins verify this patch?
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18917
Can one of the admins verify this patch?
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18915
Can one of the admins verify this patch?
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18916
Can one of the admins verify this patch?
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18924
**[Test build #80524 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80524/testReport)**
for PR 18924 at commit
[`f81f1cd`](https://github.com/apache/spark/commit/f8
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18914
**[Test build #80525 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80525/testReport)**
for PR 18914 at commit
[`b537ce0`](https://github.com/apache/spark/commit/b5
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18904
**[Test build #80527 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80527/testReport)**
for PR 18904 at commit
[`b349668`](https://github.com/apache/spark/commit/b3
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18892
**[Test build #80531 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80531/testReport)**
for PR 18892 at commit
[`1ee1a76`](https://github.com/apache/spark/commit/1e
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18907
**[Test build #80526 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80526/testReport)**
for PR 18907 at commit
[`3820442`](https://github.com/apache/spark/commit/38
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18895
**[Test build #80530 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80530/testReport)**
for PR 18895 at commit
[`5d8d4b6`](https://github.com/apache/spark/commit/5d
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18855
**[Test build #80534 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80534/testReport)**
for PR 18855 at commit
[`6cbe8d0`](https://github.com/apache/spark/commit/6c
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18843
**[Test build #80536 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80536/testReport)**
for PR 18843 at commit
[`ab5cd2e`](https://github.com/apache/spark/commit/ab
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18900
**[Test build #80528 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80528/testReport)**
for PR 18900 at commit
[`c833ce7`](https://github.com/apache/spark/commit/c8
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18875
**[Test build #80532 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80532/testReport)**
for PR 18875 at commit
[`f83a110`](https://github.com/apache/spark/commit/f8
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18872
**[Test build #80533 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80533/testReport)**
for PR 18872 at commit
[`38da77d`](https://github.com/apache/spark/commit/38
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18555
**[Test build #80540 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80540/testReport)**
for PR 18555 at commit
[`3135235`](https://github.com/apache/spark/commit/31