Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/339#issuecomment-56505176
I will rebase the PR...I think I was waiting for the review?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/2490#discussion_r17949828
--- Diff: docs/programming-guide.md ---
@@ -1183,6 +1188,10 @@ running on the cluster can then add to it using the
`add` method or the `+=` ope
GitHub user CodingCat opened a pull request:
https://github.com/apache/spark/pull/2524
[SPARK-732][RESUBMIT] make allowing duplicate updates an option of the
accumulator
In the current implementation, the accumulator is updated for every
successfully finished task, even the task
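The failure mode under discussion, a re-run of an already-counted task updating the accumulator again, can be illustrated with a toy Python simulation; the class names and task ids below are invented for illustration and are not Spark code:

```python
# Toy model (NOT Spark code): an accumulator that blindly adds every
# successful task's contribution double-counts when a task is re-run
# (retry or speculative execution).
class NaiveAccumulator:
    def __init__(self):
        self.value = 0

    def add(self, v):
        self.value += v

class DedupAccumulator:
    """Hypothetical de-duplicating variant: one update per task id."""
    def __init__(self):
        self.value = 0
        self._seen = set()

    def add(self, task_id, v):
        if task_id not in self._seen:   # drop duplicate updates
            self._seen.add(task_id)
            self.value += v

runs = [(0, 10), (1, 10), (1, 10)]      # task 1 finishes twice
naive, dedup = NaiveAccumulator(), DedupAccumulator()
for task_id, v in runs:
    naive.add(v)
    dedup.add(task_id, v)

print(naive.value)   # 30: task 1 counted twice
print(dedup.value)   # 20: duplicate update dropped
```

This is the trade-off the PR debates: de-duplication gives deterministic accumulator values, at the cost of tracking per-task state.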
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2524#issuecomment-56719035
@mateiz @mridulm @kayousterhout @markhamstra @pwendell I proposed this as
a resubmission of https://github.com/apache/spark/pull/228
Expecting your review
Github user CodingCat closed the pull request at:
https://github.com/apache/spark/pull/228
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2524#issuecomment-56748597
OK...I will make MIMA happy.
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2524#issuecomment-56959242
BTW, if we don't want to de-duplicate in shuffle stages, we can just move
the necessary part to TaskSetManager
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2524#issuecomment-57074388
the drawback of not de-duplicating in shuffle stages is that it makes
accumulator usage very tricky...
it sounds like you are not encouraged to use
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2524#issuecomment-57074398
I can simply monitor the accumulator updates in TaskSetManager, just not
sure whether that fully resolves the problem.
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2524#issuecomment-57248974
I think it should work...I'm trying this
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2524#issuecomment-57299654
OK, Jenkins said OK.
Finished the modifications:
1. Removed the option for the user to choose whether the accumulator
accepts duplication (this may
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2524#issuecomment-57323970
added a test case for result stage
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2524#issuecomment-57806876
ping
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2097#issuecomment-58261731
sure
---
Github user CodingCat closed the pull request at:
https://github.com/apache/spark/pull/2097
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2524#issuecomment-58273902
ping
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1528#issuecomment-49744177
so it's actually another type of scheduling instead of FIFO/FAIR?
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-49744305
ping
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1528#issuecomment-49744970
also, is this preemptive or non-preemptive?
According to my understanding of the code, it's non-preemptive; then a high
priority TaskSet is easily
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/637#issuecomment-49745118
ping.
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/731#issuecomment-49745089
ping
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1331#issuecomment-49744276
ping
---
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/1528#discussion_r15246517
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -778,8 +778,10 @@ class DAGScheduler(
logInfo(Submitting
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1528#issuecomment-49862183
Em... I'm thinking whether we can achieve the same goal with the FAIR
scheduler. My own answer is yes... @markhamstra, your thoughts?
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1528#issuecomment-49863644
I mean if we just want to prioritize some jobs, why not assign them to a
pool with a higher weight?
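For reference, prioritizing jobs via pool weights is how Spark's FAIR scheduler is configured in `fairscheduler.xml`; the pool name and numbers below are illustrative:

```xml
<?xml version="1.0"?>
<allocations>
  <!-- Jobs in this pool get roughly 4x the share of a default-weight
       pool; minShare guarantees it 2 cores when the cluster is busy. -->
  <pool name="highPriority">
    <schedulingMode>FAIR</schedulingMode>
    <weight>4</weight>
    <minShare>2</minShare>
  </pool>
</allocations>
```

A job opts in (after setting `spark.scheduler.mode=FAIR`) with `sc.setLocalProperty("spark.scheduler.pool", "highPriority")` on the submitting thread.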
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/731#issuecomment-49958572
you mean start multiple executors per worker?
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/731#issuecomment-50014048
@nishkamravi2 I got your point now... yes, this patch is to enable the user
to run multiple executors within a single worker instead of running multiple
workers
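With that feature in place, the standalone master can pack several executors onto one worker when the per-executor core count divides the worker's cores; a sketch of the relevant settings (values illustrative):

```
# spark-defaults.conf (illustrative values)
spark.executor.cores    2     # each executor takes 2 cores
spark.cores.max         8     # app-wide core cap
# On an 8-core worker, the master may now launch up to 4 executors
# for this app on that single worker, instead of one big executor.
```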
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-50253029
Hi, @mateiz , thanks for the comments
If we just add a NO_PREF level, it can avoid the unnecessary waiting when
we only have no-pref tasks;
however
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/1313#discussion_r15436880
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -738,6 +771,8 @@ private[spark] class TaskSetManager
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-50260912
@mateiz for skipping the levels in computeValidLocalityLevels, that's
straightforward; however, the current TaskSetManager does not update the valid
locality after
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-50289440
@mateiz thanks, I'm working on simplifying this,
just one thing to confirm, if we call resourceOffer with NO_PREF only after
NODE_LOCAL, we will have
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-50292050
Yes, I understand that
but in that case the behaviour is exactly the same as the code in the
commit
(https://github.com/CodingCat/spark/commit
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-50299464
Ah, yes, you're right, working on this...
the only issue is that PROCESS_LOCAL is also NODE_LOCAL and myLocalities is
not updated upon the completion of each
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-50363865
Hi, @mateiz, I just updated the patch; it has passed the unit tests.
I also squashed the commits to keep only 3 versions of the patch,
---
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/1313#discussion_r15518169
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -751,20 +787,7 @@ private[spark] class TaskSetManager
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/1313#discussion_r15518594
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -246,28 +246,36 @@ private[spark] class TaskSchedulerImpl
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-50469155
Hi, @mateiz , I just updated the patch.
The reason I kept per-node tracking of NODE_LOCAL-only tasks is that we
don't want to delay the scheduling
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-50571327
@mateiz , it seems that if we don't track NODE_LOCAL-only tasks at a fine
granularity and adjust the delay for different nodes and different moments
(after all
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-50605235
all tasks are NO_PREFS or ANY
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-50605973
actually, I cannot reproduce it locally.
---
Github user CodingCat closed the pull request at:
https://github.com/apache/spark/pull/1331
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1331#issuecomment-50698688
OK, that would be a more comprehensive solution,
then I will close this one
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-50699556
@mateiz it seems that when I remove the fine granularity tracking, the test
cases failed again
https://amplab.cs.berkeley.edu/jenkins/job
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-50886564
hey @mateiz I tried different implementations locally; it seems that the
failed test cases are just caused by the unnecessary delay for nopref tasks
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-50978501
I might give more explanation of the trace printed above:
```
Set()
ANY,NODE_LOCAL
task 1, ArrayBuffer()
task 0, ArrayBuffer(TaskLocation(localhost, None
```
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-51098059
In the current version (no delay for nopref tasks at all),
yes, we check node-local tasks prior to no-pref tasks; if there are none, we
can launch no-pref tasks immediately
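The ordering described here can be sketched in Python pseudocode; the function and variable names are mine for illustration, not Spark's actual TaskSetManager API:

```python
# Sketch of the scheduling order discussed above (not Spark's actual
# TaskSetManager): prefer NODE_LOCAL tasks for the offering node, launch
# NO_PREF tasks immediately (no locality delay), and fall back to ANY
# only after the locality wait expires.
def pick_task(node, node_local, no_pref, wait_elapsed, locality_wait=3.0):
    """Return a task id to launch on `node`, or None to keep waiting."""
    if node_local.get(node):
        return node_local[node].pop()        # data-local: best choice
    if no_pref:
        return no_pref.pop()                 # no preference: never delay
    if wait_elapsed >= locality_wait:        # locality timeout: run anywhere
        for tasks in node_local.values():
            if tasks:
                return tasks.pop()
    return None                              # wait for a better offer

node_local = {"hostA": ["t0"], "hostB": []}
no_pref = ["t1"]
a = pick_task("hostB", node_local, no_pref, wait_elapsed=0.0)  # NO_PREF at once
b = pick_task("hostA", node_local, no_pref, wait_elapsed=0.0)  # data-local task
print(a, b)
```

The point of contention in the thread is exactly the `no_pref` branch: it must not inherit the delay meant for improving data locality, since no placement is better than any other for those tasks.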
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-51104685
ah, yes, I didn't do cross-job tracking here, so basically, the problem you
mentioned also exists
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-51109866
yes, if there are only node-local and no-pref tasks, node-local ones should
run right away; it's included in the current version
if you think it's OK, I can add
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-51130511
unrelated pyspark failure (crashed)
rebasing to retest it
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-51131059
@mateiz , BTW, I have met this several times: all scala/java test cases
passed, but pyspark just crashed. What's the reason for this? Maybe we need
to file a JIRA
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-51135109
@JoshRosen thanks for the explanation, that's just fine
@mateiz ...except pyspark, others seem to be fine...
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-51139533
Jenkins?
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-51146059
finally,
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-51287327
that was for fitting some early versions of the patch; sorry about that, I
just forgot to undo the changes...
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/637#issuecomment-51287642
rebased the patch... just curious, who triggered the test?
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1313#issuecomment-51288095
thanks for the patient review @mateiz @mridulm @kayousterhout @lirui-intel
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2524#issuecomment-58687609
ping
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2524#issuecomment-58754077
ping
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/637#issuecomment-58754084
ping
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/637#issuecomment-58754089
ping
---
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/2824#discussion_r18956465
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -280,12 +280,29 @@ private[spark] object Utils extends Logging {
// When
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/2824#discussion_r18957913
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -280,12 +280,29 @@ private[spark] object Utils extends Logging {
// When
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/2828#discussion_r18976036
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala
---
@@ -94,6 +96,7 @@ private[spark] class Worker(
val finishedExecutors
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/2828#discussion_r18976594
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala
---
@@ -365,6 +375,16 @@ private[spark] class Worker(
def
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/2828#discussion_r18976619
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala
---
@@ -365,6 +375,16 @@ private[spark] class Worker(
def
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/2828#discussion_r18977018
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -341,7 +341,11 @@ private[spark] class Master(
case Some
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/2828#discussion_r18978412
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala
---
@@ -362,9 +372,19 @@ private[spark] class Worker
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/2828#discussion_r18985861
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala
---
@@ -362,9 +372,19 @@ private[spark] class Worker
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/2828#discussion_r18986488
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala
---
@@ -362,9 +372,19 @@ private[spark] class Worker
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/2828#discussion_r18988031
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala
---
@@ -362,9 +372,19 @@ private[spark] class Worker
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/2828#discussion_r18988140
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala
---
@@ -362,9 +372,19 @@ private[spark] class Worker
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/2828#discussion_r19011614
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala
---
@@ -362,9 +372,19 @@ private[spark] class Worker
GitHub user CodingCat opened a pull request:
https://github.com/apache/spark/pull/2851
[WIP]SPARK-3957: show broadcast variable resource usage info in UI
WIP: finished most of the logic, need to reorganize the pages
![image](https://cloud.githubusercontent.com/assets/678008
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2828#issuecomment-59807896
@JoshRosen , this is an awesome way to test Spark integration with Docker
@mccheah , this PR LGTM now, except that we exposed too many
should-be-private members
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2828#issuecomment-59810532
@markhamstra , yeah, my concern is just this: though Worker is marked as
private[spark], is it good practice to expose every detail of the
implementation
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2828#issuecomment-59814026
sure, I created the JIRA: https://issues.apache.org/jira/browse/SPARK-4011
---
GitHub user CodingCat opened a pull request:
https://github.com/apache/spark/pull/2864
SPARK-4012: call tryOrExit instead of logUncaughtExceptions in
ContextCleaner
When running a potentially memory-intensive application locally, I received
the following exception
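The behavior being requested, escalate instead of merely logging when a background thread dies, can be mimicked in Python; `try_or_exit` and the injectable `exit_fn` are invented names for illustration, not Spark's API:

```python
# Toy analogue of the "tryOrExit" idea (not Spark code): if a cleanup
# thread's body throws, escalate to process exit instead of only logging,
# so the failure is not silently swallowed.
import sys

def try_or_exit(body, exit_fn=sys.exit):
    try:
        body()
    except Exception as e:
        print(f"uncaught exception in cleaner thread: {e!r}")
        exit_fn(1)                 # terminate instead of limping on

calls = []
def boom():
    raise RuntimeError("OOM while cleaning")

try_or_exit(boom, exit_fn=calls.append)   # inject a fake exit for the demo
print(calls)   # [1]: the handler asked for exit code 1
```

The design question in the PR is where this escalation belongs: in each call site (`tryOrExit`) or in a shared uncaught-exception handler.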
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/2851#discussion_r19185382
--- Diff: core/src/main/scala/org/apache/spark/HeartbeatReceiver.scala ---
@@ -30,7 +30,8 @@ import org.apache.spark.util.ActorLogReceive
private[spark
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-60039787
![image](https://cloud.githubusercontent.com/assets/678008/4731666/589ce496-59af-11e4-99fd-01e4b37d7fef.png)
![image](https://cloud.githubusercontent.com
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-60075293
@rxin I'm not sure; maybe we don't need that, because currently RDD
blocks are not reported over the network but by calling post(...) from the driver
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2864#issuecomment-60179940
Hi, @andrewor14, the issue here is that the JVM's default
UncaughtExceptionHandler does not seem to handle the exception correctly; as
I said in the PR description, it will request user
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2864#issuecomment-60181397
this is also very similar to https://github.com/apache/spark/pull/622/,
where the main thread cannot handle the exception thrown by Akka's
scheduler thread
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2864#issuecomment-60181601
but I don't mind moving that ExecutorUncaughtExceptionHandler somewhere
else to make it more general
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2524#issuecomment-60237457
ping
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-60308342
@andrewor14 do you want to take a look at this patch?
---
GitHub user CodingCat opened a pull request:
https://github.com/apache/spark/pull/2913
SPARK-4067: refactor ExecutorUncaughtExceptionHandler
https://issues.apache.org/jira/browse/SPARK-4067
currently, we call Utils.tryOrExit everywhere:
AppClient
Executor
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-60315631
@rxin, I see
then I will try to refactor the reporting mechanism (currently piggyback in
heartbeat) to make it more general
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2864#issuecomment-60316623
@andrewor14, (I just hit a LiveListenerBus uncaught exception this
afternoon)
personally, I feel that we should stop the driver when such things happen
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-60319128
Hi, @shivaram, do you mean we send the report with tell instead of
askDriverWithReply? Hmm...
what's the original motivation to send BlockInfo synchronously
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-60319425
I mean Akka's tell, not
```
private def tell(message: Any) {
  if (!askDriverWithReply[Boolean](message)) {
    throw new SparkException("BlockManagerMasterActor returned false, expected true.")
  }
}
```
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2524#issuecomment-60382812
Hi, @mateiz @markhamstra , do you want to take a further look?
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2913#issuecomment-60447582
@andrewor14 thanks
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-61180867
I haven't forgotten this; I will get it done tomorrow
---
GitHub user CodingCat opened a pull request:
https://github.com/apache/spark/pull/1087
SPARK-2038: rename conf parameters in the saveAsHadoop functions
to distinguish them from the SparkConf object
https://issues.apache.org/jira/browse/SPARK-2038
You can merge this pull request
GitHub user CodingCat opened a pull request:
https://github.com/apache/spark/pull/1088
SPARK-2309: apply output dir existence checking for all output formats
https://issues.apache.org/jira/browse/SPARK-2039
apply output dir existence checking for all output formats
You can
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/637#issuecomment-46104739
what does this mean?
```
[error] * method dagScheduler_=(org.apache.spark.scheduler.DAGScheduler)Unit in class org.apache.spark.SparkContext does not have
```
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/637#issuecomment-46105020
I see... thank you for the hints!
---
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/731#discussion_r13790512
--- Diff:
core/src/main/scala/org/apache/spark/deploy/ApplicationDescription.scala ---
@@ -20,7 +20,7 @@ package org.apache.spark.deploy
private[spark
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1088#issuecomment-46161534
@pwendell , ah, sorry for the mistake
thanks for fixing this
---
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/1106#issuecomment-46341281
did you see any performance impact from the current strategy? Randomization
at the start of every scheduling point is used not only in Master but also in
TaskSchedulerImpl