Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/19048
I think I'm starting to understand what you're getting at, but I still
don't see why this has anything to do with the CGSB (CoarseGrainedSchedulerBackend).
What I understand from your comment is that the EAM may reduce its target
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/19047
test this please
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so,
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19047
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19047
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81133/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19047
**[Test build #81133 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81133/testReport)**
for PR 19047 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19055
**[Test build #81139 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81139/testReport)**
for PR 19055 at commit
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/19055
[SPARK-21839][SQL] Support SQL config for ORC compression
## What changes were proposed in this pull request?
This PR aims to support `spark.sql.orc.compression.codec` like Parquet's
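A sketch of how the proposed knob would be set, mirroring the existing Parquet option (hedged: the key and the example value are taken from the PR title and description, not from a released API):

```
# spark-defaults.conf (sketch): choose the ORC compression codec,
# analogous to the existing spark.sql.parquet.compression.codec
spark.sql.orc.compression.codec   snappy
```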
Github user sitalkedia commented on the issue:
https://github.com/apache/spark/pull/19048
That's not really true.
The EAM (ExecutorAllocationManager) uses the `requestTotalExecutors` API to set the target for the
scheduler.
- 10 executors are running, each executor can run 4 tasks at max.
- 20
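The scenario being set up above can be sketched with the ceiling-division arithmetic a dynamic allocation manager uses to derive its executor target (a simplified, self-contained illustration; the function name and signature are invented for this sketch and are not the actual ExecutorAllocationManager code):

```python
import math

def max_executors_needed(pending_tasks: int, running_tasks: int,
                         tasks_per_executor: int) -> int:
    """Ceiling-divide outstanding tasks by per-executor task slots,
    mirroring how a dynamic-allocation manager derives its target."""
    return math.ceil((pending_tasks + running_tasks) / tasks_per_executor)

# With executors that can run 4 tasks each, 20 outstanding tasks only
# need 5 executors -- so if 10 are running, the target passed to the
# scheduler via requestTotalExecutors drops below the current count.
print(max_executors_needed(pending_tasks=20, running_tasks=0,
                           tasks_per_executor=4))  # 5
```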
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18966#discussion_r135332868
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -582,6 +582,15 @@ object SQLConf {
.intConf
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19048
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19048
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81132/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19048
**[Test build #81132 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81132/testReport)**
for PR 19048 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/19048
> This is when things get out of sync because now the scheduler will set
the number of total executors needed from 4 to 1.
Have you actually observed that behavior?
The way I
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19024
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81136/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19024
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19024
**[Test build #81136 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81136/testReport)**
for PR 19024 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18193
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18193
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81134/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18193
**[Test build #81134 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81134/testReport)**
for PR 18193 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18966#discussion_r135327262
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -582,6 +582,15 @@ object SQLConf {
.intConf
Github user sitalkedia commented on the issue:
https://github.com/apache/spark/pull/19048
Looking at the scheduler and the dynamic executor allocator code, this is
my understanding; correct me if I am wrong.
Let's say the dynamic executor allocator is ramping down the
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18966#discussion_r135326229
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -582,6 +582,15 @@ object SQLConf {
.intConf
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18659
**[Test build #81138 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81138/testReport)**
for PR 18659 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18966#discussion_r135324695
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -769,16 +769,21 @@ class
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/19048
I'm not sure I understand why this is a problem. What is the undesired
behavior that happens because of this? That's not explained in either the PR
or the bug.
The way I understand the
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18837
**[Test build #3904 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3904/testReport)**
for PR 18837 at commit
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/19054
cc @hvanhovell @cloud-fan for review
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19053
**[Test build #81137 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81137/testReport)**
for PR 19053 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19054
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81131/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19054
Merged build finished. Test PASSed.
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/18581#discussion_r135316064
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/HadoopFileLinesReader.scala
---
@@ -32,7 +32,9 @@ import
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19054
**[Test build #81131 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81131/testReport)**
for PR 19054 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/18941
The PR title does not match what the PR summary says. The title is about
one change, the summary is about a different change, and the code seems to
handle both. It's all pretty confusing.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19053
**[Test build #3903 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3903/testReport)**
for PR 19053 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19024
**[Test build #81136 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81136/testReport)**
for PR 19024 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19053
LGTM except one comment.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19053#discussion_r135313526
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/UserDefinedTypeSuite.scala ---
@@ -203,12 +203,14 @@ class UserDefinedTypeSuite extends QueryTest
Github user ajbozarth commented on a diff in the pull request:
https://github.com/apache/spark/pull/19049#discussion_r135313226
--- Diff: core/src/main/resources/org/apache/spark/ui/static/historypage.js
---
@@ -136,6 +136,16 @@ $(document).ready(function() {
Github user ajbozarth commented on the issue:
https://github.com/apache/spark/pull/19049
I'll try to clarify @srowen's issue for you, @guoxiaolongzte.
For most use cases each Spark cluster has its own history server and also
uses one type of resource manager. Therefore for most
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19013
**[Test build #81135 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81135/testReport)**
for PR 19013 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/19008
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19008
Thanks! Merging to master.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19008
LGTM
---
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/19024#discussion_r135307673
--- Diff: docs/ml-features.md ---
@@ -211,6 +211,89 @@ for more details on the API.
+## FeatureHasher
+
+Feature hashing
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/19024#discussion_r135307551
--- Diff: docs/ml-features.md ---
@@ -53,9 +53,9 @@ are calculated based on the mapped indices. This approach
avoids the need to com
term-to-index
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/19012
---
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/19050#discussion_r135307335
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/subquery.scala
---
@@ -98,6 +99,11 @@ object RewritePredicateSubquery
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/19012
Merging to master.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18962
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/18962
@jerryshao there are conflicts in 2.2, will need a separate PR.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18581#discussion_r135305729
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/HadoopFileLinesReader.scala
---
@@ -32,7 +32,9 @@ import
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/18962
Merging to master, will also try 2.2.
---
Github user ericvandenbergfb commented on the issue:
https://github.com/apache/spark/pull/18791
The default is off, so people can opt in to more aggressive cleanup.
Is this okay to be merged?
---
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/18193
@cloud-fan I have addressed your comments. Thanks.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18193
**[Test build #81134 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81134/testReport)**
for PR 18193 at commit
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/19016
That's great! I will also run this by winbuilder later today.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18991
+1, I couldn't agree more.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18991
Yes. Commercial DBMS products have very good, comprehensive test
coverage. So far, that is missing in Apache Spark. Basically, we simply trust the
underlying data sources, which are
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19047
**[Test build #81133 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81133/testReport)**
for PR 19047 at commit
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18991
Wow. It's a real commercial spec. Thank you! I understand.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18991
Since you are also working on enhancing the ORC reader/writer,
we need to check all the limits (value ranges). I am not sure how well Apache
ORC/Parquet did in their test case design.
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/19047
ok to test
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/19053
Oops, I meant UDT. Just referring to the test's name in the code.
---
Github user pgandhi999 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19047#discussion_r135297693
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/AbstractCommandBuilder.java ---
@@ -136,7 +136,8 @@ void addOptionString(List cmd, String
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18991
Thank you for the comments and directions. Definitely, I'll try!
Since we depend on Apache Spark 1.4.0, I think I can add a raw-level test
case somewhere for evaluation purposes only.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18991
If ORC incorrectly filters out extra rows, we might get incorrect
results. In addition, we do not know whether the push-down yields a
performance gain. We saw the performance regression
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18991
Hi, @gatorsmile .
Could you review this ORC PPD default configuration? Our data source
code doesn't trust any data sources, including Parquet/ORC. I think ORC PPD does no
harm to Spark.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/19044
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/19044
Thank you for review and merging, @gatorsmile .
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19044
Thanks! Merging to master.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19053
UDF?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19048
**[Test build #81132 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81132/testReport)**
for PR 19048 at commit
Github user sitalkedia commented on the issue:
https://github.com/apache/spark/pull/19048
Jenkins retest this please.
---
Github user hhbyyh commented on the issue:
https://github.com/apache/spark/pull/17461
Got it. Will make a pass today.
---
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/15605
This is superseded by https://github.com/apache/spark/pull/19054. Closing.
---
Github user tejasapatil closed the pull request at:
https://github.com/apache/spark/pull/15605
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19054
**[Test build #81131 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81131/testReport)**
for PR 19054 at commit
GitHub user tejasapatil opened a pull request:
https://github.com/apache/spark/pull/19054
[SPARK-18067] Avoid shuffling child if join keys are superset of child's
partitioning keys
Jira : https://issues.apache.org/jira/browse/SPARK-18067
## What problem is being addressed
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/19050#discussion_r135283339
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/subquery.scala
---
@@ -98,6 +99,11 @@ object RewritePredicateSubquery extends
Github user srowen closed the pull request at:
https://github.com/apache/spark/pull/19051
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/19051
Merged to master
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18837
**[Test build #3904 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3904/testReport)**
for PR 18837 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19053
**[Test build #3903 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3903/testReport)**
for PR 19053 at commit
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18730
thanks, merging to master!
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18730
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/19050#discussion_r135280321
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -2502,3 +2373,140 @@ object UpdateOuterReferences
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/19050#discussion_r135271400
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/subquery.scala
---
@@ -98,6 +99,11 @@ object RewritePredicateSubquery
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/19050#discussion_r135270779
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -2502,3 +2373,140 @@ object UpdateOuterReferences
Github user ArtRand commented on the issue:
https://github.com/apache/spark/pull/18837
Hello @srowen, thanks for taking a look at this. You're correct that
this change does not require users to have a Mesos 1.3+ cluster; we do not
change or omit any required records in the proto
Github user MLnick commented on a diff in the pull request:
https://github.com/apache/spark/pull/19024#discussion_r135261324
--- Diff: docs/ml-features.md ---
@@ -211,6 +211,89 @@ for more details on the API.
+## FeatureHasher
+
+Feature hashing
Github user MLnick commented on a diff in the pull request:
https://github.com/apache/spark/pull/19024#discussion_r135261228
--- Diff: docs/ml-features.md ---
@@ -53,9 +53,9 @@ are calculated based on the mapped indices. This approach
avoids the need to com
term-to-index map,
Github user caneGuy commented on the issue:
https://github.com/apache/spark/pull/18730
@cloud-fan Jenkins done!
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19053
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19053
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81130/
Test FAILed.
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19012
LGTM, I tried it locally. Looks like the NPE in the YARN UT is gone now; thanks
for the fix.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19053
**[Test build #81130 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81130/testReport)**
for PR 19053 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19051
**[Test build #3902 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3902/testReport)**
for PR 19051 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18730
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18730
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/81128/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18730
**[Test build #81128 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/81128/testReport)**
for PR 18730 at commit