Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/18940
nit: title should be "`[SPARK-21501] ...`".
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user nchammas commented on the issue:
https://github.com/apache/spark/pull/18926
It's cleaner but less specific. Unless we branch on whether `startPos` and
`length` are the same type, we will give the same error message for mixed types
and for unsupported types. That seems
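(The distinction being discussed can be sketched as follows; `Column` is a stand-in for `pyspark.sql.Column`, and the error messages are illustrative, not PySpark's actual wording.)

```python
class Column:
    """Stand-in for pyspark.sql.Column, for illustration only."""

def check_substr_args(startPos, length):
    # Branching on whether the two arguments share a type lets us report
    # mixed types separately from unsupported types.
    if type(startPos) != type(length):
        raise TypeError("startPos and length must be the same type")
    if not isinstance(startPos, (int, Column)):
        raise TypeError("unsupported type: %s" % type(startPos).__name__)
```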
Github user mike0sv commented on the issue:
https://github.com/apache/spark/pull/18488
@srowen @HyukjinKwon , retest this please :)
---
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/16648
kindly ping @bdrillard
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/18640#discussion_r133255040
--- Diff: sql/core/pom.xml ---
@@ -87,6 +87,16 @@
+      <groupId>org.apache.orc</groupId>
+      <artifactId>orc-core</artifactId>
+      <classifier>${orc.classifier}</classifier>
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18907
**[Test build #80682 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80682/testReport)**
for PR 18907 at commit
Github user icexelloss commented on a diff in the pull request:
https://github.com/apache/spark/pull/18933#discussion_r133229705
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -912,6 +912,14 @@ object SQLConf {
.intConf
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18926
```Python
if isinstance(startPos, int) and isinstance(length, int):
    jc = self._jc.substr(startPos, length)
elif isinstance(startPos, Column) and isinstance(length, Column):
    jc = self._jc.substr(startPos._jc, length._jc)
```
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18943#discussion_r133233495
--- Diff:
examples/src/main/scala/org/apache/spark/examples/ml/BucketedRandomProjectionLSHExample.scala
---
@@ -21,9 +21,9 @@ package
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18266
The example in the PR description looks a little bit confusing.
```Scala
val dfRead = spark.read.schema(schema).jdbc(jdbcUrl, "tableWithCustomSchema", new Properties())
```
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18266
Users should be allowed to specify the schema from the table properties by
using DDL-like strings.
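(As a rough illustration of such DDL-like schema strings: the splitting below is a toy sketch, not Spark's parser, and the column names are made up.)

```python
def parse_ddl_fields(schema):
    # Toy split of a DDL-like schema string into (name, type) pairs.
    # Spark's real parser also handles parameterized types such as
    # DECIMAL(38,0), whose embedded comma would defeat this naive split.
    return [tuple(f.strip().split(None, 1)) for f in schema.split(",")]
```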
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18918
**[Test build #80693 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80693/testReport)**
for PR 18918 at commit
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/11494
kindly ping @yzotov
---
Github user redsanket commented on the issue:
https://github.com/apache/spark/pull/18940
@dbolshak there were no unit tests for the Google cache implementation here
before. I could add a simple test to check the cache behavior if it is
necessary, but ideally a scale test is necessary to
Github user mbasmanova commented on the issue:
https://github.com/apache/spark/pull/18421
ping @gatorsmile
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18946
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18907
LGTM
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18930#discussion_r133249207
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonSuite.scala
---
@@ -2034,4 +2034,25 @@ class JsonSuite extends
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18950
**[Test build #80690 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80690/testReport)**
for PR 18950 at commit
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/18933#discussion_r133255672
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -912,6 +912,14 @@ object SQLConf {
.intConf
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/18933#discussion_r133254340
--- Diff: python/pyspark/sql/tests.py ---
@@ -2507,6 +2507,37 @@ def test_to_pandas(self):
self.assertEquals(types[2], np.bool)
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18918
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80693/
Test FAILed.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18877
---
Github user redsanket commented on a diff in the pull request:
https://github.com/apache/spark/pull/18940#discussion_r133220047
--- Diff:
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/ExternalShuffleBlockResolver.java
---
@@ -104,15 +105,22 @@ public
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/18622
@srowen any comment on this PR?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17373
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17373
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80689/
Test PASSed.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18907
---
Github user debugger87 commented on the issue:
https://github.com/apache/spark/pull/18649
@dilipbiswal
Thanks for your reply. In my eyes, there has been some mechanism or
configuration to control the number of open files generated by a SQL
operation, e.g.:
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18855#discussion_r133232857
--- Diff:
core/src/test/scala/org/apache/spark/storage/BlockManagerSuite.scala ---
@@ -1415,6 +1415,79 @@ class BlockManagerSuite extends SparkFunSuite with
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18849#discussion_r133236830
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -1175,6 +1205,27 @@ private[spark] class
Github user eyalfa commented on the issue:
https://github.com/apache/spark/pull/18855
Funny enough, that's the approach I've chosen.
On Aug 15, 2017 19:17, "Marcelo Vanzin" wrote:
> *@vanzin* commented on this pull request.
>
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18266#discussion_r133239599
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRelation.scala
---
@@ -111,7 +111,22 @@ private[sql] case class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18849
**[Test build #80694 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80694/testReport)**
for PR 18849 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18930
**[Test build #80688 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80688/testReport)**
for PR 18930 at commit
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/18664#discussion_r133261665
--- Diff: python/pyspark/sql/tests.py ---
@@ -3036,6 +3052,9 @@ def test_toPandas_arrow_toggle(self):
pdf = df.toPandas()
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18918
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18918
**[Test build #80693 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80693/testReport)**
for PR 18918 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18952
**[Test build #80695 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80695/testReport)**
for PR 18952 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/18877
Alright, merging this to master.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18940
**[Test build #80691 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80691/testReport)**
for PR 18940 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/18943
retest this please
---
Github user redsanket commented on the issue:
https://github.com/apache/spark/pull/18940
@kiszk I don't think that would be ideal; it is better to backport the
feature itself to the desired version or branch. Having two conflicting
configs for the same task is not ideal, if that is what
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18947
@viirya Could you close it? Thanks!
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18930
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80688/
Test PASSed.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18421
This is just to make it consistent with the partition spec in our current
INSERT statement. Could you justify why we need to make them inconsistent?
Thanks!
Also cc @sameeragarwal
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18930
Merged build finished. Test PASSed.
---
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/18949
LGTM
---
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/18940
@redsanket I am thinking about the case where the same configuration file,
which explicitly sets a value (e.g. 4096) for
`spark.shuffle.service.index.cache.entries`, is used in Spark 2.3.
The
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/18641
ping @cloud-fan
---
Github user thunterdb commented on the issue:
https://github.com/apache/spark/pull/18798
Thank you @yanboliang.
---
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/18940
I like this feature.
For backward compatibility, how about referring to
`spark.shuffle.service.index.cache.entries` only if it is explicitly
declared?
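(A minimal sketch of that fallback. The new key name, the per-entry byte size, and the default are all assumptions for illustration; the old key counts entries while the new one is taken as a byte size.)

```python
OLD_KEY = "spark.shuffle.service.index.cache.entries"
NEW_KEY = "spark.shuffle.service.index.cache.size"  # assumed new key name

def resolve_index_cache_size(conf, bytes_per_entry=1024,
                             default_bytes=100 * 1024 * 1024):
    # Honor the old key only when the user set it explicitly;
    # otherwise fall through to the new key (or its default).
    if OLD_KEY in conf:
        return int(conf[OLD_KEY]) * bytes_per_entry
    return int(conf.get(NEW_KEY, default_bytes))
```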
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18907
Thanks! Merging to master.
Hit conflicts when trying to merge to the previous versions.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18943
**[Test build #80692 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80692/testReport)**
for PR 18943 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18951
Can one of the admins verify this patch?
---
GitHub user jiangxb1987 opened a pull request:
https://github.com/apache/spark/pull/18952
[MINOR] Fix a typo in the method name `UserDefinedFunction.asNonNullabe`
## What changes were proposed in this pull request?
The method name `asNonNullabe` should be `asNonNullable`.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18947
LGTM
Merging to 2.1
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18907
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80682/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18907
Merged build finished. Test PASSed.
---
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/18329
@zsxwing @tdas any comment on this? Thanks.
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18849#discussion_r133236101
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -342,6 +359,12 @@ private[spark] class
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18266#discussion_r133235997
--- Diff:
external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/OracleIntegrationSuite.scala
---
@@ -268,4 +275,44 @@ class
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18266#discussion_r133239835
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRelation.scala
---
@@ -111,7 +111,22 @@ private[sql] case class
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18266#discussion_r133239924
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRelation.scala
---
@@ -111,7 +111,22 @@ private[sql] case class
Github user omalley commented on a diff in the pull request:
https://github.com/apache/spark/pull/18640#discussion_r133248648
--- Diff: sql/core/pom.xml ---
@@ -87,6 +87,16 @@
+      <groupId>org.apache.orc</groupId>
+      <artifactId>orc-core</artifactId>
+      <classifier>${orc.classifier}</classifier>
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/18940
If you're removing a public config, you should at least add it to
`SparkConf.deprecatedConfigs`. It would be nice, but not required, to have some
kind of mapping of the old value to the new (in
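(The kind of old-to-new mapping being suggested could look roughly like this. The new key name and the conversion factor are assumptions, not Spark's actual `SparkConf.deprecatedConfigs` entries.)

```python
# old key -> (new key, function translating the old value to the new unit)
DEPRECATED_CONFIGS = {
    "spark.shuffle.service.index.cache.entries": (
        "spark.shuffle.service.index.cache.size",
        lambda entries: str(int(entries) * 1024),  # assumed ~1 KiB per entry
    ),
}

def translate_config(key, value):
    # Rewrite a deprecated key/value pair to its replacement, if one exists.
    if key in DEPRECATED_CONFIGS:
        new_key, convert = DEPRECATED_CONFIGS[key]
        return new_key, convert(value)
    return key, value
```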
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18946
LGTM
Thanks! Merging to master
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17373
**[Test build #80689 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80689/testReport)**
for PR 17373 at commit
GitHub user mgaido91 opened a pull request:
https://github.com/apache/spark/pull/18951
[SPARK-21738] Thriftserver doesn't cancel jobs when session is closed
## What changes were proposed in this pull request?
When a session is closed, the Thriftserver doesn't cancel the jobs
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18266#discussion_r133235311
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
@@ -197,11 +197,13 @@ class DataFrameReader private[sql](sparkSession:
Github user redsanket commented on the issue:
https://github.com/apache/spark/pull/18940
@kiszk wouldn't the updated release notes/docs take care of that, i.e. which
configs can no longer be used and which still can? I don't mind adding a
warning message saying please use another cache.size
Github user dhruve commented on the issue:
https://github.com/apache/spark/pull/18950
@kayousterhout @squito Can you review this PR ? Thanks.
---
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/18592
gentle ping @gatorsmile
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18950
**[Test build #80690 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80690/testReport)**
for PR 18950 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18488
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80685/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18488
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18488
**[Test build #80685 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80685/testReport)**
for PR 18488 at commit
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/18942
@poplav it looks good
@gatorsmile Do you think it is ok for backport now? The previous commit
included unnecessary changes.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18950
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18950
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80690/
Test PASSed.
---
Github user BryanCutler commented on the issue:
https://github.com/apache/spark/pull/18787
Yes, I agree with changing the interfaces as you suggest, @cloud-fan. Is
there currently a JIRA open for that? I'm ok with holding off if it's planned
to be soon, but I would like to get
Github user redsanket commented on the issue:
https://github.com/apache/spark/pull/18940
Thanks @vanzin @kiszk will do, makes sense to me now
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18519#discussion_r133290641
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -95,6 +99,12 @@ class
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/18887#discussion_r133316362
--- Diff: scalastyle-config.xml ---
@@ -86,7 +86,7 @@ This file is divided into 3 sections:
-
+
--- End diff --
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18887#discussion_r133319377
--- Diff: docs/monitoring.md ---
@@ -220,6 +220,13 @@ The history server can be configured as follows:
Number of threads that will be used by
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15770#discussion_r133271527
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/clustering/PowerIterationClustering.scala
---
@@ -0,0 +1,213 @@
+/*
+ * Licensed to the
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18849
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80694/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18849
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15770
**[Test build #80698 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80698/testReport)**
for PR 15770 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15770
**[Test build #80699 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80699/testReport)**
for PR 15770 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15770
Merged build finished. Test FAILed.
---
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/18950#discussion_r133311545
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -602,6 +604,21 @@ private[spark] class ExecutorAllocationManager(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15770
**[Test build #80702 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80702/testReport)**
for PR 15770 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15770
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15770
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80702/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18940
**[Test build #80691 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80691/testReport)**
for PR 18940 at commit
Github user poplav commented on the issue:
https://github.com/apache/spark/pull/18942
Was this working in 2.0 in the first place? I want to get this into 2.1.1
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18519#discussion_r133290499
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -95,6 +99,12 @@ class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15770
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80699/
Test FAILed.
---
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/15770
Jenkins, retest please
---