Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16503
@zsxwing @kayousterhout @andrewor14 Could you please help take a look at this?
---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well.
GitHub user jinxing64 opened a pull request:
https://github.com/apache/spark/pull/16503
[SPARK-18113] Method canCommit should return the same value when called by the same attempt multiple times.
## What changes were proposed in this pull request?
Method
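The behavior the title describes — repeated `canCommit` calls from the same attempt must agree — can be sketched as follows (a minimal illustration with hypothetical names, not Spark's actual `OutputCommitCoordinator`):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: remember which attempt was first authorized to commit
// each partition, so repeated canCommit calls from the same attempt always
// return the same answer, and other attempts are consistently refused.
class CommitCoordinatorSketch {
    // partition -> attempt number that has been authorized to commit
    private final Map<Integer, Integer> authorized = new HashMap<>();

    synchronized boolean canCommit(int partition, int attempt) {
        Integer winner = authorized.get(partition);
        if (winner == null) {
            authorized.put(partition, attempt);  // first asker wins
            return true;
        }
        // Idempotent: the winning attempt keeps getting true, others false.
        return winner == attempt;
    }
}
```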
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16503
@mccheah @JoshRosen @ash211 Could you please take a look at this?
---
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16503#discussion_r96127359
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/OutputCommitCoordinatorSuite.scala
---
@@ -221,6 +229,22 @@ private case class
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16503
@vanzin @ash211
Thanks a lot for your comments; I've changed accordingly. Please take
another look at this~
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16867
@kayousterhout
Thanks a lot for comments. I refined accordingly :)
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17276
@mridulm
Thanks a lot for taking the time to look into this, and thanks for the comments :)
1) I changed the size of underestimated blocks to be
`partitionLengths.filter(_ > hc.getAvgSize).
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17276#discussion_r108061417
--- Diff:
core/src/main/java/org/apache/spark/shuffle/sort/BypassMergeSortShuffleWriter.java
---
@@ -169,6 +173,36 @@ public void write(Iterator
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17276
@squito Oh, I'm sorry if this is disturbing. I will mark it as WIP.
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17312
@rxin because I killed executor1 and it is not active during this stage.
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17312
@rxin
Yes, I'm quite confused by the second screenshot I posted.
The only reason I can find is that the `stageData` in `ExecutorTable` is
not thread safe. Size (2 executors) returned; maybe
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17276
You are such a kind person.
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17276
@squito
Thanks a lot for taking the time to look into this PR.
I updated the PR. Currently it just adds two metrics: a) the total size of
underestimated blocks, b) the size of blocks shuffled
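The first metric can be sketched as follows (a hypothetical helper, assuming the per-partition block lengths and the average size reported by `HighlyCompressedMapStatus` are at hand; this is not the PR's actual code — it just mirrors the `partitionLengths.filter(_ > hc.getAvgSize)` expression mentioned earlier in the thread):

```java
import java.util.Arrays;

// Hypothetical sketch: total size of "underestimated" shuffle blocks, i.e.
// blocks whose real length exceeds the average size that the compressed map
// status reports for them.
class BlockMetricsSketch {
    static long underestimatedTotal(long[] partitionLengths, long avgSize) {
        return Arrays.stream(partitionLengths)
                .filter(len -> len > avgSize)  // keep only underestimated blocks
                .sum();
    }
}
```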
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16867
@kayousterhout @squito @mridulm
Thanks for reviewing this!
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16867
@mridulm
Thanks a lot for helping review this :) Really appreciated.
---
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16867#discussion_r106433273
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -172,7 +172,7 @@ private[spark] class TaskSchedulerImpl
private
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16867#discussion_r106453502
--- Diff:
core/src/test/scala/org/apache/spark/util/collection/MedianHeapSuite.scala ---
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16867#discussion_r106453205
--- Diff:
core/src/test/scala/org/apache/spark/util/collection/MedianHeapSuite.scala ---
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17276
@squito
Would you mind commenting on this when you have time? :)
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17312
![screenshot](https://cloud.githubusercontent.com/assets/4058918/24069386/0f556622-0be2-11e7-9f48-cc096cdd7d9b.png)
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16867
@squito
Sure. I did a test with 100k tasks. The results are as below:

| | time cost |
| -- | -- |
| insert | 135ms, 122ms, 119ms, 120ms, 163ms
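The structure being benchmarked is a classic two-heap median tracker; a minimal sketch of the idea (hypothetical class, not Spark's actual `MedianHeap` in `org.apache.spark.util.collection`):

```java
import java.util.Collections;
import java.util.PriorityQueue;

// Two-heap median tracker: a max-heap holds the smaller half and a min-heap
// holds the larger half. insert is O(log n); reading the median is O(1).
class MedianTracker {
    private final PriorityQueue<Long> smaller =
        new PriorityQueue<>(Collections.reverseOrder());  // max-heap
    private final PriorityQueue<Long> larger = new PriorityQueue<>();  // min-heap

    void insert(long x) {
        if (smaller.isEmpty() || x <= smaller.peek()) {
            smaller.offer(x);
        } else {
            larger.offer(x);
        }
        // Rebalance so the two halves differ in size by at most one.
        if (smaller.size() > larger.size() + 1) {
            larger.offer(smaller.poll());
        } else if (larger.size() > smaller.size() + 1) {
            smaller.offer(larger.poll());
        }
    }

    double median() {
        if (smaller.size() == larger.size()) {
            return (smaller.peek() + larger.peek()) / 2.0;
        }
        return smaller.size() > larger.size() ? smaller.peek() : larger.peek();
    }
}
```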
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17312
@rxin Thanks a lot. I added a number after `Aggregated Metrics by Executor`
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16867
@squito
Thanks :) already refined.
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17312
![screenshot](https://cloud.githubusercontent.com/assets/4058918/24134191/8392c5ea-0e3d-11e7-8a53-f164acf04764.png)
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17312
@rxin @jerryshao @srowen
I've refined the description and uploaded the screenshot of latest version.
Please take another look.
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17312
Sure, that would be cool :) Thanks again for helping review this.
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17312
![screenshot2](https://cloud.githubusercontent.com/assets/4058918/24127926/5e0e7294-0e13-11e7-8af0-434b05e2815a.png)
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17312
@rxin @jerryshao
I uploaded another screenshot and gave a short description there.
Now it is (2 executors supplied).
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17312
I want to show the number of executors that were active at some point during the stage.
`StageUIData` gets updated when receiving the heartbeat from an executor.
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16867
@kayousterhout more comments?
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16389
@zhaorongsheng
I think it's better to just not reset `numRunningTasks` to 0. If we get an
`ExecutorLostFailure`, the stage should not be marked as finished.
---
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16867#discussion_r106340513
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/TaskSetManagerSuite.scala ---
@@ -893,6 +893,7 @@ class TaskSetManagerSuite extends
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16867#discussion_r106340321
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -172,7 +172,7 @@ private[spark] class TaskSchedulerImpl
private
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16867
@kayousterhout
Thanks a lot for the comments :) very helpful.
I've refined, please take another look when you have time.
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17312
@jerryshao
Thanks a lot for helping review; really appreciated. I will add a
description soon.
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17276
@squito
Thanks a lot for your comments; I will think it over and do the test carefully
:)
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17312
@harounemohammedi
Thanks a lot for commenting on this. I'm hesitant to include the `total time`
in this PR.
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16867
@kayousterhout @mridulm
More comments on this ? :)
---
GitHub user jinxing64 opened a pull request:
https://github.com/apache/spark/pull/17312
Display num of executors for the stage.
## What changes were proposed in this pull request?
In `StagePage` the total number of executors is not displayed. Since
executorId may
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17312
@srowen
Thanks a lot for the quick reply. When we investigate why a stage ran
much longer today than yesterday, we want to know how many executors were
supplied. We don't want to count
Github user jinxing64 closed the pull request at:
https://github.com/apache/spark/pull/17312
---
GitHub user jinxing64 reopened a pull request:
https://github.com/apache/spark/pull/17312
[SPARK-19973] Display num of executors for the stage.
## What changes were proposed in this pull request?
In `StagePage` the total number of executors is not displayed. Since
executorId
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17312
The executor metrics are updated in `StageUIData` when an executor
heartbeat is received.
Yes, the executor's lifetime may not cover the whole stage, but it
was active at some point during the stage
Github user jinxing64 closed the pull request at:
https://github.com/apache/spark/pull/17112
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17533
Yes, I did the test in my cluster. In a highly skewed stage, the time cost can
be reduced significantly. Tasks are scheduled with locality preference. But in
the current code, the input size of tasks
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r109877754
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -138,7 +139,7 @@ private[spark] class TaskSetManager(
private
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17533
@squito
Thank you so much for taking a look at this.
> we don't want the TSM requesting info from the DAGScheduler

Sorry I missed this point in the previous change. Now I p
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17533
@kayousterhout
Thanks a lot for the comment and sorry for the late reply. I replied to your
comment on JIRA. Please take a look when you have time :)
---
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r109930532
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -512,6 +522,57 @@ private[spark] class TaskSetManager
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17603
I found this when testing https://github.com/apache/spark/pull/17533. It
failed now and then when trying to get the reduce size from `MapStatus`.
I'm not sure how to make it better:
Modify
GitHub user jinxing64 opened a pull request:
https://github.com/apache/spark/pull/17603
[SPARK-20288] Avoid generating the MapStatus by stageId in
BasicSchedulerIntegrationSuite
## What changes were proposed in this pull request?
The shuffleId is determined before the job
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16989
Jenkins, test this please
---
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16989#discussion_r111734780
--- Diff: core/src/main/scala/org/apache/spark/scheduler/MapStatus.scala ---
@@ -133,36 +135,53 @@ private[spark] class HighlyCompressedMapStatus
private
GitHub user jinxing64 opened a pull request:
https://github.com/apache/spark/pull/17634
[SPARK-20333] HashPartitioner should be compatible with the number of the child
RDD's partitions.
## What changes were proposed in this pull request?
Fix test "don't submit stage unti
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17634
I found this when doing https://github.com/apache/spark/pull/17533
---
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r111545285
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -472,6 +472,47 @@ class DAGScheduler
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r111545327
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -1080,6 +1122,25 @@ class DAGScheduler
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17533
@squito
Thank you so much for reviewing this far, and sorry for the complexity I
brought in.
I tried to simplify the code according to your comments; please take
another look when the tests
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17603
@squito
Could you help comment on this? :)
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17634
@squito @srowen
Could you help comment on this? :)
---
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r111545019
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -472,6 +472,47 @@ class DAGScheduler
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r111545406
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -1080,6 +1122,25 @@ class DAGScheduler
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r111546462
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -168,6 +169,8 @@ private[spark] class TaskSetManager
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17533
I think the failed unit test can be fixed in
https://github.com/apache/spark/pull/17634 and
https://github.com/apache/spark/pull/17603
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17133
@vanzin @srowen
I refined according to the comments, please take a look when you have time
:)
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16867
@kayousterhout @squito @mridulm
I refined according to the comments. Please take a look when you have time :)
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16867
@kayousterhout @squito
Thanks a lot for your comments, really helpful :)
I really think the median heap is a good idea. `slice` is `O(n)` and is not
the most efficient.
I'm doing
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16867
@squito
Yes, some machine learning jobs in my cluster that do a cartesian product
have more than 100k tasks in the `TaskSetManager`.
---
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17133#discussion_r104273530
--- Diff: core/src/main/scala/org/apache/spark/scheduler/TaskInfo.scala ---
@@ -75,6 +75,8 @@ class TaskInfo(
}
private[spark] def
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17208
@squito
Thanks for the notification :) this is not in my PR.
---
GitHub user jinxing64 opened a pull request:
https://github.com/apache/spark/pull/17276
[WIP][SPARK-19937] Collect metrics of block sizes when shuffle.
## What changes were proposed in this pull request?
Metrics of block sizes (during shuffle) should be collected for later
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16867
@squito Sorry, it seems like something went wrong when I merged and tried to
resolve the conflict. I squashed the commits and rebased. It seems OK now.
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16867
@squito
Thanks a lot for comments. I've refined :):)
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16867
Thanks a lot for comments. I refined accordingly : )
---
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16867#discussion_r104344529
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -754,7 +743,6 @@ private[spark] class TaskSetManager
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16867#discussion_r104344524
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -754,7 +743,6 @@ private[spark] class TaskSetManager
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16867#discussion_r104344274
--- Diff:
core/src/main/scala/org/apache/spark/util/collection/MedianHeap.scala ---
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16867#discussion_r104344072
--- Diff:
core/src/test/scala/org/apache/spark/util/collection/MedianHeapSuite.scala ---
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16867
@mridulm
Thanks a lot for your comments. I did a test with `TreeSet` previously with
100k tasks. I calculated the time spent on insertion. The results are: 372ms,
362ms, 458ms, 429ms, 363ms
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16867
@mridulm
Thanks a lot for the comments. I refined accordingly. (BTW, the time complexity
of the `rebalance` in `MedianHeap` is O(1).)
---
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17133#discussion_r104161512
--- Diff: core/src/main/scala/org/apache/spark/scheduler/TaskInfo.scala ---
@@ -75,6 +75,8 @@ class TaskInfo(
}
private[spark] def
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17111
@squito
Thanks a lot :)
---
GitHub user jinxing64 opened a pull request:
https://github.com/apache/spark/pull/17111
[SPARK-19777] Scan runningTasksSet when checking speculatable tasks in
TaskSetManager.
## What changes were proposed in this pull request?
When check speculatable tasks
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17133#discussion_r104066996
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -695,7 +695,8 @@ private[spark] class TaskSetManager(
def
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/17276
@mridulm
Sorry for the late reply. I opened the PR for
SPARK-19659 (https://github.com/apache/spark/pull/16989) and made these two PRs
independent. Basically this PR is to evaluate
GitHub user jinxing64 opened a pull request:
https://github.com/apache/spark/pull/17533
[SPARK-20219] Schedule tasks based on size of input from ScheduledRDD
## What changes were proposed in this pull request?
When data is highly skewed on `ShuffledRDD`, it makes sense
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r109896244
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -512,6 +522,57 @@ private[spark] class TaskSetManager
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r109900096
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -438,6 +443,11 @@ private[spark] class TaskSetManager
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r109900019
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -512,6 +522,57 @@ private[spark] class TaskSetManager
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r109901087
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -512,6 +522,57 @@ private[spark] class TaskSetManager
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r109893630
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -168,6 +169,10 @@ private[spark] class TaskSetManager
Github user jinxing64 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18565#discussion_r126178968
--- Diff:
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/OneForOneBlockFetcher.java
---
@@ -151,15 +152,27 @@ private void
GitHub user jinxing64 opened a pull request:
https://github.com/apache/spark/pull/18566
Refine the document for spark.reducer.maxReqSizeShuffleToMem.
## What changes were proposed in this pull request?
In the current code, the reducer can break the old shuffle service when
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/18388
@tgravescs
Thanks a lot for reviewing this PR so thoroughly. I think I'm making a stupid
mistake. Can I ask a question: how is the number of connections decided? I'm
just counting
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/18565
cc @zsxwing @cloud-fan @jiangxb1987
---
GitHub user jinxing64 opened a pull request:
https://github.com/apache/spark/pull/18565
[SPARK-21342] Fix DownloadCallback to work well with RetryingBlockFetcher.
## What changes were proposed in this pull request?
When `RetryingBlockFetcher` retries fetching blocks
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/18566
I didn't include this config in configuration.md. Do I need to?
cc @zsxwing @cloud-fan
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/18388
I think it could be more efficient to do the control on the shuffle service
side.
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/18388
Previously I was saying that I have 200k+ connections to one shuffle
service. I'm sorry; that information was wrong. It turns out that each of our
`NodeManager`s has two auxiliary shuffle
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/18388
@tgravescs
Thanks a lot for the advice.
> the flow control part should allow everyone to start fetching without
rejecting a bunch, especially if the network can't push it out that fast any