Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22784
Test results with the existing PCA and using SVD without computing the
covariance matrix:
val data = Array(
Vectors.sparse(5, Seq((1, 1.0), (3, 7.0))),
Vectors.dense(2.0, 0.0, 3.0, 4.0
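The Scala snippet above is cut off in the archive. As a rough illustration of the underlying idea (computing principal components from the SVD of the centered data instead of eigendecomposing the d x d covariance matrix), here is a minimal Python sketch with toy data; numpy is assumed, and this is not the PR's code:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))   # toy data: 100 rows, 8 features

Xc = X - X.mean(axis=0)         # center each column

# Covariance route: builds an explicit d x d matrix. In Spark this is the
# step that caps PCA at ~65,535 columns, since the d x d Gramian must fit
# in a single array.
cov = Xc.T @ Xc / (Xc.shape[0] - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
top_cov = eigvecs[:, np.argsort(eigvals)[::-1][:3]]

# SVD route: the right singular vectors of Xc are the same principal axes,
# and no d x d matrix is ever materialized.
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
top_svd = vt[:3].T

# Both bases agree component-by-component, up to sign.
for k in range(3):
    assert abs(float(top_cov[:, k] @ top_svd[:, k])) > 1.0 - 1e-8
```

For wide matrices, only the SVD route keeps memory proportional to the data rather than to d squared.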
GitHub user shahidki31 opened a pull request:
https://github.com/apache/spark/pull/22784
[SPARK-25790][MLLIB] PCA: Support more than 65535 column matrix
## What changes were proposed in this pull request?
Spark PCA supports a matrix with at most ~65,535 columns. This is due
Github user shahidki31 closed the pull request at:
https://github.com/apache/spark/pull/22714
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22714
I am closing this PR, since there is already another PR for the webui auto
refresh. Thanks.
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22714
@gengliangwang Sorry, I didn't see that PR. Yes, that PR also adds refresh
functionality for the webui.
I have applied the patch and checked the functionality, and it seems fine.
Below
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22714
Thank you @srowen for the comment. Yes, we should not hard-code the refresh
interval; we should let the user enable the parameter. I will update the code
accordingly.
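The agreed behavior above (no hard-coded interval, refresh only when the user opts in) can be sketched roughly as follows; the function name and the HTML meta-refresh mechanism are illustrative assumptions, not Spark's actual implementation:

```python
from typing import Optional

def refresh_meta_tag(interval_seconds: Optional[int]) -> str:
    """Return an HTML <meta> refresh tag, or '' when auto-refresh is off.

    Hypothetical helper: the interval comes from user configuration, and
    the feature stays disabled unless a positive value is supplied.
    """
    if interval_seconds is None or interval_seconds <= 0:
        return ""  # disabled by default; no hard-coded interval
    return f'<meta http-equiv="refresh" content="{interval_seconds}">'

assert refresh_meta_tag(None) == ""                    # off by default
assert refresh_meta_tag(10).endswith('content="10">')  # user-enabled
```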
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22714
cc @srowen @cloud-fan . Kindly review.
GitHub user shahidki31 opened a pull request:
https://github.com/apache/spark/pull/22714
[SPARK-25720][WEBUI] Support auto refresh page for the WEBUI
## What changes were proposed in this pull request?
Currently the Spark webui doesn't have an auto-refresh option. Because
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22645
Hi @cloud-fan , since other web tabs like Jobs and Stages embed the
JavaScript code in Scala code, I followed the same approach. It would be
great if we rewrote the Spark UI with some modern
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22689
Thanks a lot @srowen
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22645
Thanks a lot @srowen
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22689
Yes. Thank you @srowen .
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22689
retest this please.
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22689
@srowen . Yes. We should read only from the finished frames of zstd. When
the listener tries to read from an unfinished frame, the zstd input reader
throws an exception (unless we set continuous
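The frame-reading behavior described above can be illustrated in Python; lzma streams stand in for zstd frames here (Python's stdlib has no zstd codec), so this is an analogy, not the PR's code:

```python
import lzma

# An ".inprogress" event log ends in an unfinished frame. Decoding the
# whole file fails, but the finished frames before it are still readable.
frame1 = lzma.compress(b"event 1\n")
frame2 = lzma.compress(b"event 2\n")
unfinished = lzma.compress(b"event 3\n")[:-5]  # simulate a partial write
log = frame1 + frame2 + unfinished

# Naive read of the whole stream raises on the unfinished trailing frame.
try:
    lzma.decompress(log)
    failed = False
except lzma.LZMAError:
    failed = True
assert failed

# Frame-by-frame read: stop at the first unfinished frame, keep the rest.
events, data = [], log
while data:
    decomp = lzma.LZMADecompressor()
    try:
        chunk = decomp.decompress(data)
    except lzma.LZMAError:
        break                # corrupt / unreadable frame
    if not decomp.eof:
        break                # frame not finished yet; skip it
    events.append(chunk)
    data = decomp.unused_data

assert b"".join(events) == b"event 1\nevent 2\n"
```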
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22689
Hi @srowen . Yes. Event logs are available for running apps, but with the
extension ".inprogress".
We can open the webui from the history server for both running and finished
ap
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22689
cc @vanzin @srowen . Kindly review.
GitHub user shahidki31 opened a pull request:
https://github.com/apache/spark/pull/22689
[SPARK-25697][CORE] When zstd compression is enabled, an in-progress
application throws an error in the history webui…
## What changes were proposed in this pull request?
When we
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22645
@felixcheung It is a random failure. Could you please re-trigger the test?
Thanks.
Please refer:
https://issues.apache.org/jira/browse/SPARK-23622
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22645
@felixcheung I built locally; the scalastyle issue is no longer happening. Kindly
re-trigger the PR builder.
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22645
Hi @felixcheung , I will update the code
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22659
Thank you @srowen for merging.
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22645
Hi @srowen , there is one behavior change this PR introduces, which is
correct: sorting of Job Ids in previous versions of Spark was not proper.
After this PR the sorting is correct.
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22659
In Jenkins CI, the testing time of LogisticRegressionSuite without the PR is 5
min 10 sec, and with the PR, 4 min 21 sec.
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22659
Before the changes:
Running time of logistic regression suite: **4min 35 sec**
After the changes:
Running time of logistic regression suite: **3min 22 sec**
cc @srowen
Github user shahidki31 closed the pull request at:
https://github.com/apache/spark/pull/22660
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22659
The test "binary logistic regression with intercept with ElasticNet
regularization" takes around 30 sec to run. But we can reduce the time to 15
sec by reducing the
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22659
The test "multinomial logistic regression with intercept with
elasticnet regularization" in the LogisticRegressionSuite takes around 1
minute to train 2 logis
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22660
Thanks for the suggestion. I will close this and amend in the other PR.
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22660
cc @srowen Kindly review.
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22645#discussion_r223200104
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala
---
@@ -121,74 +122,257 @@ private[ui] class
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22650
Thanks a lot @srowen
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22645
Thank you @srowen , I have modified the code based on your suggestions.
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22645#discussion_r223195007
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala
---
@@ -229,73 +406,88 @@ private[ui] abstract class
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22645#discussion_r223193975
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala
---
@@ -121,65 +122,242 @@ private[ui] class
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22645#discussion_r223193956
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala
---
@@ -121,65 +122,242 @@ private[ui] class
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22645#discussion_r223193970
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala
---
@@ -121,65 +122,242 @@ private[ui] class
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22645#discussion_r223193960
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala
---
@@ -121,65 +122,242 @@ private[ui] class
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22645
Thank you @srowen for the review. I have addressed the comments.
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22645#discussion_r223192419
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala
---
@@ -121,65 +122,247 @@ private[ui] class
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22645#discussion_r223192399
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala
---
@@ -121,65 +122,247 @@ private[ui] class
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22645#discussion_r223192408
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala
---
@@ -121,65 +122,247 @@ private[ui] class
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22645#discussion_r223191064
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala
---
@@ -202,100 +385,127 @@ private[ui] abstract class
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22645#discussion_r223191054
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala
---
@@ -202,100 +385,127 @@ private[ui] abstract class
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22645#discussion_r223191033
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala
---
@@ -202,100 +385,127 @@ private[ui] abstract class
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22645#discussion_r223191000
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala
---
@@ -121,65 +122,247 @@ private[ui] class
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22645#discussion_r223190984
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala
---
@@ -121,65 +122,247 @@ private[ui] class
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22645#discussion_r223190979
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala
---
@@ -17,16 +17,17 @@
package
GitHub user shahidki31 opened a pull request:
https://github.com/apache/spark/pull/22660
[SPARK-25624][TEST] Reduce test time of
LogisticRegressionSuite.multinomial logistic regression…
… with intercept with elasticnet regularization
## What changes were proposed
GitHub user shahidki31 opened a pull request:
https://github.com/apache/spark/pull/22659
[SPARK-25623][TEST] Reduce test time of LogisticRegressionSuite:
multinomial logistic regression
...with intercept with L1 regularization
## What changes were proposed
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22645#discussion_r223174634
--- Diff: core/src/main/scala/org/apache/spark/ui/PagedTable.scala ---
@@ -31,7 +31,7 @@ import org.apache.spark.util.Utils
*
* @param
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22650
Hi @srowen , Kindly review and merge.
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22650
Hi @srowen , Kindly review and merge. This PR will be dependent on the PR
https://github.com/apache/spark/pull/22645
GitHub user shahidki31 opened a pull request:
https://github.com/apache/spark/pull/22650
[SPARK-25575][FOLLOWUP] SQL tab in the Spark UI supports hiding tables
## What changes were proposed in this pull request?
After the PR, https://github.com/apache/spark/pull/22592, SQL tab
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22645
Test steps to reproduce the OOM without the PR:
1) bin/spark-shell --conf spark.sql.ui.retainedExecutions=5
for (i <- 0 until 5) {
val df = Seq(
(1
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22645
cc @vanzin @srowen @cloud-fan @dongjoon-hyun . Kindly review the PR.
GitHub user shahidki31 opened a pull request:
https://github.com/apache/spark/pull/22645
[SPARK-25566][SPARK-25567][WEBUI][SQL]Support pagination for SQL tab to
avoid OOM
## What changes were proposed in this pull request?
Currently the SQL tab in the webui doesn't have pagination
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22613
Thank you @dongjoon-hyun for merging. I will close the PR.
Github user shahidki31 closed the pull request at:
https://github.com/apache/spark/pull/22613
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22613
cc @dongjoon-hyun
GitHub user shahidki31 opened a pull request:
https://github.com/apache/spark/pull/22613
[SPARK-25583][DOC][BRANCH-2.3]Add history-server related configuration in
the documentation.
## What changes were proposed in this pull request?
This is a follow up PR for the PR,
https
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22601
Thanks a lot @dongjoon-hyun
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22609
Hi @mridulm , can't we limit the task information by setting
'spark.ui.retainedTasks' lower, to avoid the OOM? Correct me if I am wrong.
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22601
Hi @dongjoon-hyun , I have addressed the comments. Thank you.
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22601#discussion_r221847126
--- Diff: docs/configuration.md ---
@@ -807,6 +814,14 @@ Apart from these, the following properties are also
available, and may be useful
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22592
Thank you @srowen for merging.
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22601
@dongjoon-hyun . There is one more, "ASYNC_TRACKING_ENABLED", but it is not
configurable for the history server. For the live UI, it is configurable. I am
not sure whether this n
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22601
Hi @dongjoon-hyun. Thanks for the review.
These configurations were recently added to the history server (2.3+) and
are not in the documentation, but they need to be
add
GitHub user shahidki31 opened a pull request:
https://github.com/apache/spark/pull/22601
[SPARK-25583][DOCS]Add history-server related configuration in the
documentation.
## What changes were proposed in this pull request?
Add history-server related configuration
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22592
Thank you @srowen .
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22592
Thank you for the review @ajbozarth .
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22592#discussion_r221443587
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala
---
@@ -206,11 +238,8 @@ private[ui] abstract class
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22592
Thanks @srowen for reviewing.
> I think it's OK. Do you need to collapse this one table though? It's the
only thing on the page.
There are 'Running', 'Completed' and 'Fai
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22592
cc @srowen @dongjoon-hyun
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22592
The Jobs and Stages pages support hiding tables. To be consistent, the SQL
tab should behave the same.
![screenshot from 2018-09-30
00-15-08](https://user-images.githubusercontent.com
GitHub user shahidki31 opened a pull request:
https://github.com/apache/spark/pull/22592
[SPARK-25575][WEBUI][SQL] SQL tab in the Spark UI supports hiding tables, to
make it consistent with other tabs.
## What changes were proposed in this pull request?
Currently, SQL tab
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22555
Thank you @dongjoon-hyun for merging
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22549
Thank you @vanzin for merging.
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22555#discussion_r220743860
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -464,7 +464,7 @@ private[spark] class Executor
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22555
It is a random failure; it passed locally. Retest this please.
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22555
cc @cloud-fan
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22502
Hi @cloud-fan , could you please review the code.
GitHub user shahidki31 opened a pull request:
https://github.com/apache/spark/pull/22555
[SPARK-25536][CORE][Minor]metric value for METRIC_OUTPUT_RECORDS_WRITTEN is
incorrect
## What changes were proposed in this pull request?
changed metric value
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22549
Hi @vanzin , The build has passed.
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22549#discussion_r220385769
--- Diff:
core/src/main/scala/org/apache/spark/status/AppStatusListener.scala ---
@@ -388,10 +388,11 @@ private[spark] class AppStatusListener
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22549
cc @vanzin @srowen
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22549
Spark 2.2.1 output:
![screenshot from 2018-09-26
03-34-56](https://user-images.githubusercontent.com/23054875/46046043-3ddf2480-c13d-11e8-9d30-5eda75288a87.png)
![screenshot from 2018-09
GitHub user shahidki31 opened a pull request:
https://github.com/apache/spark/pull/22549
[SPARK-25533][CORE][WEBUI]AppSummary should hold the information about
succeeded Jobs and completed stages only
## What changes were proposed in this pull request?
Currently, In the spark
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22502
@dongjoon-hyun . Thanks for the comment. I have modified the title.
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22525
seems Jenkins is down again. @SparkQA Test this please.
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22526#discussion_r219897175
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala ---
@@ -685,7 +685,7 @@ private[ui] class TaskDataSource(
private var
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22526#discussion_r219876198
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala ---
@@ -685,7 +685,10 @@ private[ui] class TaskDataSource(
private
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22526#discussion_r219844417
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala ---
@@ -685,7 +685,15 @@ private[ui] class TaskDataSource(
private
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22525
Hi @dongjoon-hyun , it seems Jenkins is down. Could you please trigger the
test again?
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22526#discussion_r219732766
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala ---
@@ -685,7 +685,15 @@ private[ui] class TaskDataSource(
private
Github user shahidki31 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22525#discussion_r219726139
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala ---
@@ -132,7 +132,7 @@ private[ui] class StagePage(parent: StagesTab, store
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22525
cc @vanzin
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22526
cc @vanzin
GitHub user shahidki31 opened a pull request:
https://github.com/apache/spark/pull/22526
[SPARK-25502] Empty page when the page number exceeds the retainedTask size.
## What changes were proposed in this pull request?
Test steps :
1) bin/spark-shell --conf
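The symptom reported above (an empty page when the requested page number exceeds the retained data) suggests clamping the page index to the valid range. A minimal, hypothetical Python sketch of that idea, not Spark's actual PagedTable code:

```python
def page(items, page_num, page_size):
    """Return one page (1-indexed) of items, clamping page_num so a
    too-large request returns the last page instead of an empty one."""
    total_pages = max(1, -(-len(items) // page_size))  # ceiling division
    page_num = min(max(page_num, 1), total_pages)      # clamp to range
    start = (page_num - 1) * page_size
    return items[start:start + page_size]

assert page(list(range(10)), 1, 4) == [0, 1, 2, 3]
assert page(list(range(10)), 3, 4) == [8, 9]
assert page(list(range(10)), 99, 4) == [8, 9]  # clamped, never empty
```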
GitHub user shahidki31 opened a pull request:
https://github.com/apache/spark/pull/22525
[SPARK-25503] Total task message in stage page is ambiguous
## What changes were proposed in this pull request?
Test steps :
1) bin/spark-shell --conf spark.ui.retainedTasks=10
Github user shahidki31 commented on the issue:
https://github.com/apache/spark/pull/22502
cc @cloud-fan