Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/16732
Jenkins, test this please
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/17669
Looks good. I back-ported this to 2.2 as well to match up with the other
logical changes.
---
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17686#discussion_r112225200
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -220,18 +220,20 @@ private[ui] class AllJobsPage(parent: JobsTab)
extends
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17685
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17685
**[Test build #75942 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75942/testReport)**
for PR 17685 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17685
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75942/
Test FAILed.
---
GitHub user n-marion opened a pull request:
https://github.com/apache/spark/pull/17686
[SPARK-20393][Web UI] Strengthen Spark to prevent XSS vulnerabilities
## What changes were proposed in this pull request?
Add stripXSS and stripXSSMap to Spark Core's UIUtils. Calling
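The description is truncated here, but the general shape of such a sanitizer is well known. A minimal Python sketch (hypothetical illustration only; the PR's actual helpers are Scala methods in `UIUtils`, and the names below merely mirror `stripXSS`/`stripXSSMap` for exposition):

```python
import html
import re

def strip_xss(value):
    """Remove common XSS vectors from a request parameter.

    Hypothetical sketch: drops anything tag-shaped outright, then
    escapes the remaining HTML metacharacters so the value can only
    render as plain text.
    """
    if value is None:
        return None
    # Remove anything that looks like an HTML tag.
    no_tags = re.sub(r"<[^>]*>", "", value)
    # Escape whatever metacharacters remain (&, <, >, quotes).
    return html.escape(no_tags, quote=True)

def strip_xss_map(params):
    """Apply strip_xss to every value in a parameter map."""
    return {k: strip_xss(v) for k, v in params.items()}
```

The real patch sanitizes HTTP request parameters before they reach the web UI; the point is that a stripped value can no longer close an attribute or open a tag in the rendered page.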
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17686#discussion_r112225087
--- Diff: core/src/main/scala/org/apache/spark/ui/UIUtils.scala ---
@@ -527,4 +528,27 @@ private[spark] object UIUtils extends Logging {
origHref
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17684
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17684
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75941/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17685
**[Test build #75943 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75943/testReport)**
for PR 17685 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17684
**[Test build #75941 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75941/testReport)**
for PR 17684 at commit
Github user erikerlandson commented on the issue:
https://github.com/apache/spark/pull/13440
@thunterdb still can't diagnose what the source of this "fails to generate
doc" error is. I don't see anything wrong with the scaladoc.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17685
**[Test build #75943 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75943/testReport)**
for PR 17685 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17685
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17685
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75943/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13440
**[Test build #75944 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75944/testReport)**
for PR 13440 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17686
Can one of the admins verify this patch?
---
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/17685
Does anybody know why [we cannot use non-deterministic expressions on grouping
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17130
right, how are we on this? let's get this ready soon and merge?
could you also add a reference to the R example which was merged yesterday.
---
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/17665
retest this please
---
Github user shubhamchopra commented on the issue:
https://github.com/apache/spark/pull/17673
The [original paper](https://arxiv.org/abs/1301.3781) proposed two model
architectures for generating word embeddings, Continuous Skip-Gram model and
continuous Bag-of-words model. Spark ML
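The comment is truncated, but the architectural difference it refers to is standard: CBOW predicts a center word from its surrounding context, while skip-gram predicts each context word from the center word. A toy sketch of how the training pairs differ (pure-Python illustration, assumptions mine; Spark MLlib's `Word2Vec` is based on the skip-gram variant):

```python
def training_pairs(tokens, window, mode):
    """Generate (input, label) training pairs for one sentence.

    mode="cbow": the context words jointly predict the center word.
    mode="skipgram": the center word predicts each context word.
    """
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        context = [tokens[j] for j in range(lo, hi) if j != i]
        if mode == "cbow":
            pairs.append((tuple(context), center))
        else:  # skip-gram
            pairs.extend((center, c) for c in context)
    return pairs
```

With `["the", "quick", "fox"]` and a window of 1, CBOW yields one pair per position (context tuple, center), while skip-gram yields one pair per context word, which is why skip-gram produces more training examples from the same corpus.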
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17087
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75948/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17087
**[Test build #75948 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75948/testReport)**
for PR 17087 at commit
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/17685
I'm looking into the failure...
---
Github user redsanket commented on a diff in the pull request:
https://github.com/apache/spark/pull/17658#discussion_r112268780
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/SparkListenerBus.scala ---
@@ -71,7 +71,6 @@ private[spark] trait SparkListenerBus
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17665
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75945/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17665
**[Test build #75945 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75945/testReport)**
for PR 17665 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17681
I am not sure whether we should support it. Could you do a search which
DBMS supports it? Thanks!
---
Github user mgummelt commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112272077
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -564,12 +566,22 @@ object SparkSubmit extends CommandLineUtils {
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17672#discussion_r112252271
--- Diff: R/pkg/R/functions.R ---
@@ -3652,3 +3652,43 @@ setMethod("posexplode",
jc <- callJStatic("org.apache.spark.sql.functions",
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17672#discussion_r112252301
--- Diff: R/pkg/R/generics.R ---
@@ -918,6 +918,14 @@ setGeneric("cbrt", function(x) {
standardGeneric("cbrt") })
#' @export
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17087
**[Test build #75948 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75948/testReport)**
for PR 17087 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17087
**[Test build #75950 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75950/testReport)**
for PR 17087 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17687
**[Test build #75949 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75949/testReport)**
for PR 17687 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17636
**[Test build #75952 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75952/testReport)**
for PR 17636 at commit
Github user ethanyxu commented on the issue:
https://github.com/apache/spark/pull/16648
I encountered this Exception when handling a data frame with 3000+ columns.
I hope this gets resolved soon.
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17674#discussion_r112250707
--- Diff: R/pkg/R/functions.R ---
@@ -3652,3 +3652,56 @@ setMethod("posexplode",
jc <- callJStatic("org.apache.spark.sql.functions",
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17674#discussion_r112250519
--- Diff: R/pkg/R/generics.R ---
@@ -942,6 +942,14 @@ setGeneric("countDistinct", function(x, ...) {
standardGeneric("countDistinct")
#' @export
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17130#discussion_r112252772
--- Diff: docs/ml-frequent-pattern-mining.md ---
@@ -0,0 +1,80 @@
+---
+layout: global
+title: Frequent Pattern Mining
+displayTitle:
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17665
**[Test build #75946 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75946/testReport)**
for PR 17665 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17665
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17665
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75946/
Test FAILed.
---
Github user redsanket commented on the issue:
https://github.com/apache/spark/pull/17658
@vanzin sure will address the concerns thanks for the review
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17681#discussion_r112272487
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -985,14 +985,14 @@ class SessionCatalog(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13440
**[Test build #75944 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75944/testReport)**
for PR 13440 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13440
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13440
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75944/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17665
**[Test build #75946 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75946/testReport)**
for PR 17665 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17685
`PullOutNondeterministic` rule should pull out the nondeterministic
expressions. Could you check why it does not work?
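For context, the intent of a rule like `PullOutNondeterministic` can be illustrated outside Spark (hypothetical Python sketch, not the Catalyst rule itself): a nondeterministic grouping expression must be evaluated exactly once per row and materialized below the aggregate, so that the grouping and the emitted key column agree on the value:

```python
import random
from collections import defaultdict

def group_by_pulled_out_key(rows, key_fn):
    """Group rows on a (possibly nondeterministic) key expression.

    Evaluating key_fn twice per row (once to group, once to emit the
    key column) could yield two different values; pulling it out into
    a projection guarantees a single evaluation per row.
    """
    projected = [(key_fn(row), row) for row in rows]  # evaluate once
    groups = defaultdict(list)
    for key, row in projected:
        groups[key].append(row)
    return dict(groups)

# Even with a random key, every input row lands in exactly one group,
# because each row's key was materialized before grouping.
grouped = group_by_pulled_out_key(list(range(10)),
                                  lambda r: random.randint(0, 2))
assert sum(len(v) for v in grouped.values()) == 10
```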
---
Github user emlyn closed the pull request at:
https://github.com/apache/spark/pull/16609
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17087
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17130
**[Test build #75947 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75947/testReport)**
for PR 17130 at commit
Github user hhbyyh commented on a diff in the pull request:
https://github.com/apache/spark/pull/17130#discussion_r112278365
--- Diff: docs/ml-frequent-pattern-mining.md ---
@@ -0,0 +1,80 @@
+---
+layout: global
+title: Frequent Pattern Mining
+displayTitle:
GitHub user zsxwing opened a pull request:
https://github.com/apache/spark/pull/17687
[SPARK-20397][SparkR][SS]Fix flaky test: test_streaming.R.Terminated by
error
## What changes were proposed in this pull request?
Checking a source parameter is asynchronous. When the
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17658#discussion_r112268220
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/SparkListenerBus.scala ---
@@ -71,7 +71,6 @@ private[spark] trait SparkListenerBus
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17684#discussion_r112282798
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala ---
@@ -135,11 +135,17 @@ final class Decimal extends Ordered[Decimal]
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17684#discussion_r112282702
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala ---
@@ -135,11 +135,17 @@ final class Decimal extends Ordered[Decimal]
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17669
Thank you @srowen.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17087
**[Test build #75951 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75951/testReport)**
for PR 17087 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17688
Can one of the admins verify this patch?
---
GitHub user ymahajan opened a pull request:
https://github.com/apache/spark/pull/17689
[SPARK-20378][CORE][SQL][STREAMING] StreamSinkProvider should provide
schema in createSink
## What changes were proposed in this pull request?
Provided schema in
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17689
Can one of the admins verify this patch?
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112298650
--- Diff:
resource-managers/yarn/src/main/resources/META-INF/services/org.apache.spark.deploy.security.ServiceCredentialProvider
---
@@ -0,0 +1,3 @@
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112282505
--- Diff:
core/src/main/scala/org/apache/spark/deploy/security/ConfigurableCredentialManager.scala
---
@@ -41,15 +41,17 @@ import
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112279632
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -30,6 +30,7 @@ import scala.util.Properties
import
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112296620
--- Diff: dev/.rat-excludes ---
@@ -102,7 +102,7 @@ spark-deps-.*
org.apache.spark.scheduler.ExternalClusterManager
.*\.sql
.Rbuildignore
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112282595
--- Diff:
core/src/main/scala/org/apache/spark/deploy/security/ConfigurableCredentialManager.scala
---
@@ -41,15 +41,17 @@ import
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112297376
--- Diff:
resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackendSuite.scala
---
@@ -617,13
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112297041
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSecurityManager.scala
---
@@ -0,0 +1,66 @@
+/*
+ *
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112296192
--- Diff:
core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala
---
@@ -174,6 +177,24 @@ private[spark] class
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112282249
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -564,12 +566,22 @@ object SparkSubmit extends CommandLineUtils {
GitHub user ymahajan opened a pull request:
https://github.com/apache/spark/pull/17690
Fixed typos in docs
## What changes were proposed in this pull request?
Typos at a couple of places in the docs.
## How was this patch tested?
build including docs
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17130
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75947/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17130
Merged build finished. Test PASSed.
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112284755
--- Diff:
core/src/main/scala/org/apache/spark/deploy/security/ServiceCredentialProvider.scala
---
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/17690
We typically ask anyone who proposes a tiny typo fix PR to please have a
look at the rest of the doc and related docs while they're at it, to try to cut
down on the number we process. The change is
Github user ptkool commented on a diff in the pull request:
https://github.com/apache/spark/pull/17650#discussion_r112309456
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/BooleanSimplificationSuite.scala
---
@@ -160,4 +166,12 @@ class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17687
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75949/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17687
**[Test build #75949 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75949/testReport)**
for PR 17687 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17687
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17130
**[Test build #75947 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75947/testReport)**
for PR 17130 at commit
GitHub user vundela opened a pull request:
https://github.com/apache/spark/pull/17688
[MINOR][DOCS] Adding missing boolean type for replacement value in fi…
…llna
## What changes were proposed in this pull request?
Currently pyspark Dataframe.fillna API
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/17665
The thing that triggered my comment is the fact that I'm pretty sure you're
breaking the YARN side with this change (loading too many credential providers,
including one that probably won't work
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17680#discussion_r112285677
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala
---
@@ -536,4 +537,43 @@ class
Github user ash211 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17680#discussion_r112285883
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala
---
@@ -536,4 +537,43 @@ class
Github user zero323 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17674#discussion_r112288821
--- Diff: R/pkg/R/functions.R ---
@@ -3652,3 +3652,56 @@ setMethod("posexplode",
jc <- callJStatic("org.apache.spark.sql.functions",
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/17416
Well it's a little different, now that I've modified the tests to cover the
classifier case. In this case, when the main artifact has a classifier, it
doesn't find the dependency that the test sets
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/17684
Always using `BigDecimal` would make the code simpler. However, it always
creates and keeps an additional object (i.e. `BigDecimal`) even if the value
fits into the long value range.
This is a design
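The comment trails off, but the trade-off it describes (a compact unscaled long for values that fit, versus a heap-allocated `BigDecimal` otherwise) can be sketched as follows (hypothetical Python analogue, with `decimal.Decimal` standing in for Java's `BigDecimal`):

```python
from decimal import Decimal

LONG_MIN, LONG_MAX = -(2 ** 63), 2 ** 63 - 1

class CompactDecimal:
    """Keep the unscaled value as a plain int on the fast path, and only
    fall back to an arbitrary-precision Decimal when it cannot fit the
    64-bit long range (sketch of the long-vs-BigDecimal design only)."""

    def __init__(self, unscaled, scale):
        self.scale = scale
        if LONG_MIN <= unscaled <= LONG_MAX:
            self.compact = unscaled       # fits: no extra heap object
            self.big = None
        else:
            self.compact = None
            self.big = Decimal(unscaled)  # overflow: allocate the object

    def to_decimal(self):
        """Materialize the full value: unscaled * 10**(-scale)."""
        unscaled = self.big if self.big is not None else Decimal(self.compact)
        return unscaled.scaleb(-self.scale)
```

The design question kiszk raises is exactly this: the dual representation avoids an allocation on the common path at the cost of branching everywhere both forms must be handled.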
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17687
---
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/17665
@vanzin How important is it to you for this to be two PRs? I already
factored out the renewal logic to simplify this PR, and I'd rather not have to
decouple again. But I can if it's a blocker.
Github user map222 commented on the issue:
https://github.com/apache/spark/pull/17469
@holdenk @srowen Could I get a Jenkins test for this?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17680#discussion_r112288596
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala
---
@@ -536,4 +537,43 @@ class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17690
Can one of the admins verify this patch?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17687
Thanks! Merging to master and 2.2.
---
Github user ericl commented on the issue:
https://github.com/apache/spark/pull/17659
Ping.
---
Github user ymahajan commented on the issue:
https://github.com/apache/spark/pull/17690
Yes, this is the consolidated list across all the docs. I don't think there
are any more typos in the .md files under the docs folder.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17650
**[Test build #75953 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75953/testReport)**
for PR 17650 at commit
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17582#discussion_r112305549
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/HistoryServer.scala ---
@@ -301,6 +301,14 @@ object HistoryServer extends Logging {
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17582#discussion_r112305835
--- Diff:
core/src/main/scala/org/apache/spark/status/api/v1/ApiRootResource.scala ---
@@ -184,14 +184,27 @@ private[v1] class ApiRootResource extends