Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/12078#issuecomment-205684324
Can I ask a question: why can't the executor be initialized before registering with the
driver, as this PR does? Is there any hidden danger?
---
Github user XuTingjun closed the pull request at:
https://github.com/apache/spark/pull/9246
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/9731#issuecomment-162828252
ok, please fix it as soon as possible, thanks.
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/9731#issuecomment-162814698
@srowen I can't find this jar file; can you give me a download URL?
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/9731#issuecomment-162819273
@srowen I can only find the commons-collections dependency below:
```
<groupId>commons-collections</groupId>
<artifactId>commons-collections</artifactId>
<version>3.2.2</version>
```
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/9731#issuecomment-162824623
I think the groupId should be "commons-collections", not
"org.apache.commons", right?
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/10198#issuecomment-162851296
LGTM, thanks
---
GitHub user XuTingjun opened a pull request:
https://github.com/apache/spark/pull/10138
[SPARK-12142] Reply false when container allocator is not ready and reset
target
Using the Dynamic Allocation function, when a new AM is starting and the
ExecutorAllocationManager sends a RequestExecutor
GitHub user XuTingjun opened a pull request:
https://github.com/apache/spark/pull/10139
[SPARK-12143] When column type is binary, change to Array[Byte] instead of
String
In Beeline, execute the SQL below:
1. create table bb(bi binary);
2. load data inpath 'tmp/data' into table
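The issue above (SPARK-12143) is that surfacing a binary column through ```toString``` loses the data. A minimal standalone Scala sketch, not Spark's code (the ```render``` helper is hypothetical), illustrating the difference:

```scala
// Hypothetical helper, not Spark code: a JVM byte array's toString yields
// only "[B@<hash>", so a binary cell must be kept as Array[Byte] or decoded
// deliberately before display.
object BinaryColumn {
  // Decode a binary cell explicitly instead of relying on toString.
  def render(cell: Array[Byte]): String = new String(cell, "UTF-8")
}
```

For example, `"hello".getBytes("UTF-8").toString` yields something like `[B@6f2b958e`, while decoding explicitly recovers `hello`.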
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/9288#issuecomment-156078546
@andrewor14, yeah, the ```DAGScheduler``` posts events from a single thread,
but the root cause is that ```DAGScheduler``` receives the
```SparkListenerTaskEnd``` behind
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/9288#issuecomment-156078133
@jerryshao, I tried to implement your suggestion, but many unit
tests failed. I think it's too difficult for me. If you can help me, I would be
very grateful
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/9288#issuecomment-151400153
Yeah, I know the root cause is the wrong ordering of events.
The code for this event ordering is: [kill
Task](https://github.com/apache/spark/blob/master/core/src
GitHub user XuTingjun opened a pull request:
https://github.com/apache/spark/pull/9288
[SPARK-11334] numRunningTasks can't be less than 0, or it will affect
executor allocation
With the Dynamic Allocation function, when a task has failed more than
```maxFailure``` times, all the dependent jobs
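The SPARK-11334 report above says numRunningTasks must never go below zero. A minimal standalone sketch of such a guard (illustrative names, not the actual ExecutorAllocationManager fields), assuming duplicate or late task-end events are the trigger:

```scala
// Illustrative counter, not Spark's actual code: clamping at zero keeps a
// duplicate or late task-end event from driving the running count negative,
// which would skew the computed executor target.
class RunningTaskCounter {
  private var numRunningTasks = 0
  def taskStarted(): Unit = numRunningTasks += 1
  def taskEnded(): Unit = numRunningTasks = math.max(0, numRunningTasks - 1)
  def running: Int = numRunningTasks
}
```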
GitHub user XuTingjun opened a pull request:
https://github.com/apache/spark/pull/9246
[SPARK-5210] Support group event log when app is long-running
For long-running Spark applications (e.g. running for days / weeks), the
Spark event log may grow to be very large.
I think
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/7918#issuecomment-142487300
@rxin, My jira id is **meiyoula**
---
Github user XuTingjun commented on a diff in the pull request:
https://github.com/apache/spark/pull/7918#discussion_r40058843
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -182,17 +182,13 @@ class HadoopRDD[K, V](
}
protected def
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/7918#issuecomment-140619178
@JoshRosen, can you have a look at this? Thanks.
---
GitHub user XuTingjun opened a pull request:
https://github.com/apache/spark/pull/8741
[SPARK-10586] Fix bug: BlockManager can't be removed when it is
re-registered, then disassociates
Scenario: the executor has been removed, but it still exists on the
Spark web UI
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/7918#issuecomment-140005869
@sryza I don't really understand **caching the constructor**.
I find that the method ```ReflectionUtils.newInstance``` caches the
constructor.
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/8477#issuecomment-135602834
Sorry, I didn't state the problem clearly.
When an app starts with CheckPoint file using [getOrCreate
method](https://github.com/apache/spark/blob/master
GitHub user XuTingjun opened a pull request:
https://github.com/apache/spark/pull/8477
[SPARK-10311] Reload appId and attemptId when a new ApplicationMaster
registers
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/XuTingjun
GitHub user XuTingjun opened a pull request:
https://github.com/apache/spark/pull/8348
[SPARK-10147] App shouldn't show in the HistoryServer web UI when the event
file has been deleted on HDFS
Phenomenon: App still shows in the HistoryServer web UI when the event file
has been deleted on HDFS
Github user XuTingjun closed the pull request at:
https://github.com/apache/spark/pull/8348
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6817#issuecomment-129747858
@andrewor14, I have tested it in a real cluster; it's OK.
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6817#issuecomment-129336529
Maybe we can change the code below, right?
```
val numTasksScheduled = stageIdToTaskIndices(stageId).size
val numTasksTotal
```
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6817#issuecomment-129287244
@andrewor14, I understand what you mean.
What I'm considering is that, if many stages run in parallel, just deleting
L606 may not be correct.
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/7918#issuecomment-128620998
Hi all, can you have a look at this? I think it's meaningful.
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6817#issuecomment-127886475
@squito, I have updated the test, thank you very much.
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/7918#issuecomment-127892711
Thanks all, I have added the documentation.
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/7296#issuecomment-128218149
@andrewor14, now [task
pagination](https://github.com/apache/spark/pull/7399) has been implemented.
So is this patch no longer needed?
---
Github user XuTingjun commented on a diff in the pull request:
https://github.com/apache/spark/pull/6817#discussion_r36377614
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -628,6 +621,13 @@ private[spark] class ExecutorAllocationManager
GitHub user XuTingjun opened a pull request:
https://github.com/apache/spark/pull/7918
[SPARK-9585] add config to enable inputFormat cache or not
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/XuTingjun/spark cached_inputFormat
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6817#issuecomment-127448769
retest please
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6817#issuecomment-127223219
@andrewor14, I have changed the code to what you suggested, please have a
look.
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6817#issuecomment-123588243
Yeah, I got it. I think we can add the code below into the ```onTaskEnd```
method, right?
```
stageIdToTaskIndices.get(taskEnd.stageId).get.remove(taskIndex
```
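The idea in the comment above can be sketched standalone (simplified types; ```stageIdToTaskIndices``` here is a plain map, not the real ExecutorAllocationManager state): removing the task's index when a task fails lets it count as pending again.

```scala
import scala.collection.mutable

// Simplified stand-in for the listener state, not Spark's actual code.
object StageTaskIndices {
  val stageIdToTaskIndices = mutable.HashMap.empty[Int, mutable.HashSet[Int]]

  def onTaskStart(stageId: Int, taskIndex: Int): Unit =
    stageIdToTaskIndices.getOrElseUpdate(stageId, mutable.HashSet.empty) += taskIndex

  // On a failed task end, drop the index so the task is treated as pending again.
  def onTaskEnd(stageId: Int, taskIndex: Int, failed: Boolean): Unit =
    if (failed) stageIdToTaskIndices.get(stageId).foreach(_ -= taskIndex)
}
```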
Github user XuTingjun commented on a diff in the pull request:
https://github.com/apache/spark/pull/6817#discussion_r35186974
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -553,12 +562,14 @@ private[spark] class ExecutorAllocationManager
Github user XuTingjun commented on a diff in the pull request:
https://github.com/apache/spark/pull/6817#discussion_r35285511
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -553,12 +562,14 @@ private[spark] class ExecutorAllocationManager
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6817#issuecomment-121798632
@andrewor14, sorry to bother you again. I think it's really a bug; I wish
you would have a look again, thanks!
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/7399#issuecomment-121448667
Can this table sort globally by any field?
---
GitHub user XuTingjun opened a pull request:
https://github.com/apache/spark/pull/7322
[SPARK-8953] SPARK_EXECUTOR_CORES has no effect on the dynamic executor
allocation function
The configuration ```SPARK_EXECUTOR_CORES``` isn't put into
```SparkConf```, so it has no effect
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6817#issuecomment-118210073
@andrewor14, have you understood the problem?
```
def totalPendingTasks(): Int = {
  stageIdToNumTasks.map { case (stageId, numTasks
```
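The truncated ```totalPendingTasks``` snippet above computes, per stage, the tasks not yet scheduled. A hedged standalone reconstruction of that idea (simplified maps, not the actual Spark fields):

```scala
import scala.collection.mutable

// Simplified model, not Spark's actual code: pending tasks for a stage are
// the stage's total tasks minus the task indices already scheduled.
object PendingTasks {
  val stageIdToNumTasks = mutable.HashMap.empty[Int, Int]
  val stageIdToTaskIndices = mutable.HashMap.empty[Int, mutable.HashSet[Int]]

  def totalPendingTasks(): Int =
    stageIdToNumTasks.map { case (stageId, numTasks) =>
      numTasks - stageIdToTaskIndices.get(stageId).map(_.size).getOrElse(0)
    }.sum
}
```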
Github user XuTingjun closed the pull request at:
https://github.com/apache/spark/pull/7007
---
Github user XuTingjun commented on a diff in the pull request:
https://github.com/apache/spark/pull/6950#discussion_r33543095
--- Diff: core/src/main/scala/org/apache/spark/ui/exec/ExecutorsTab.scala
---
@@ -92,15 +92,18 @@ class ExecutorsListener(storageStatusListener
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/7007#issuecomment-116998896
Sorry, I think this patch is not good, so I will close it.
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6817#issuecomment-116995151
@andrewor14, sorry for my poor English. The problem is:
when an executor is lost, the running tasks on it will fail and post a
```SparkListenerTaskEnd
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6935#issuecomment-116374449
@squito, I agree with what you said. First, about my
patch [#6545](https://github.com/apache/spark/pull/6545/files): it will refresh
only on request ```http
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6935#issuecomment-116404740
Yeah, I think this patch also can't refresh. The place where the handlers
are detached is not right.
I think we need to think about when and where to detach the handlers
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6935#issuecomment-115131256
@steveloughran Have you tested it? I don't think it's OK.
Yeah, you implemented refreshing for incomplete apps. But I think the second
time /history/appid is sent
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6950#issuecomment-115178185
@tdas @pwendell Can you have a look at this patch? Thanks!
---
GitHub user XuTingjun opened a pull request:
https://github.com/apache/spark/pull/7007
[SPARK-8618] add a check for the HBase configuration
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/XuTingjun/spark hbaseToken
Alternatively
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6817#issuecomment-114735104
@andrewor14
---
Github user XuTingjun closed the pull request at:
https://github.com/apache/spark/pull/6893
---
GitHub user XuTingjun opened a pull request:
https://github.com/apache/spark/pull/6950
[SPARK-8560][UI] The Executors page will show negative values if there are
resubmitted tasks
When the ```taskEnd.reason``` is ```Resubmitted```, it shouldn't be counted
in the statistics, because this task has
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6817#issuecomment-114326614
Jenkins, retest this please.
---
Github user XuTingjun commented on a diff in the pull request:
https://github.com/apache/spark/pull/6893#discussion_r33005372
--- Diff:
core/src/main/scala/org/apache/spark/ui/scope/RDDOperationGraph.scala ---
@@ -164,12 +164,20 @@ private[ui] object RDDOperationGraph extends
Github user XuTingjun commented on a diff in the pull request:
https://github.com/apache/spark/pull/6893#discussion_r33002628
--- Diff:
core/src/main/scala/org/apache/spark/ui/scope/RDDOperationGraph.scala ---
@@ -164,12 +164,20 @@ private[ui] object RDDOperationGraph extends
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6935#issuecomment-114329092
@steveloughran, I think you put too many spaces in some places.
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6817#issuecomment-113399629
I think this patch is unrelated to the failed unit tests; please
retest.
---
Github user XuTingjun commented on a diff in the pull request:
https://github.com/apache/spark/pull/6893#discussion_r32809194
--- Diff:
core/src/main/scala/org/apache/spark/ui/scope/RDDOperationGraph.scala ---
@@ -164,12 +164,20 @@ private[ui] object RDDOperationGraph extends
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6545#issuecomment-113358887
@squito, can you pay attention to this? Thanks.
---
GitHub user XuTingjun opened a pull request:
https://github.com/apache/spark/pull/6893
[SPARK-8391] catch the Throwable and report error to DAG graph
1. Different Throwables may be thrown while making the dot file; to
prevent the whole page from dying, I think using try-catch
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6893#issuecomment-113350684
retest please
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6839#issuecomment-113015490
@andrewor14, I have updated the title and code, please have a look again,
thanks.
---
Github user XuTingjun commented on a diff in the pull request:
https://github.com/apache/spark/pull/6839#discussion_r32693244
--- Diff:
core/src/main/scala/org/apache/spark/ui/scope/RDDOperationGraph.scala ---
@@ -70,6 +70,13 @@ private[ui] class RDDOperationCluster(val id: String
Github user XuTingjun commented on a diff in the pull request:
https://github.com/apache/spark/pull/6817#discussion_r32694699
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -553,12 +562,13 @@ private[spark] class ExecutorAllocationManager
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6839#issuecomment-112407985
Yeah, I think expanding all nodes and then filtering every node is slow and
costs memory.
---
GitHub user XuTingjun opened a pull request:
https://github.com/apache/spark/pull/6839
[SPARK-8392] Improve the efficiency of
```
def getAllNodes: Seq[RDDOperationNode] = {
  _childNodes ++ _childClusters.flatMap(_.childNodes)
}
```
when _childClusters has so many nodes
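The concern in SPARK-8392 above is the cost of chaining ```++``` / ```flatMap``` when clusters hold many nodes. A standalone sketch with toy ```Node```/```Cluster``` stand-ins (not the real RDDOperationCluster) comparing the two styles:

```scala
import scala.collection.mutable.ListBuffer

// Toy stand-ins for RDDOperationNode/RDDOperationCluster, illustration only.
final case class Node(id: Int)
final class Cluster(val childNodes: Seq[Node], val childClusters: Seq[Cluster]) {
  // Original style: each level allocates an intermediate sequence via ++.
  def getAllNodes: Seq[Node] = childNodes ++ childClusters.flatMap(_.getAllNodes)

  // Buffer style: one traversal appending into a single result buffer.
  def getAllNodesFast: Seq[Node] = {
    val buf = ListBuffer.empty[Node]
    def collect(c: Cluster): Unit = {
      buf ++= c.childNodes
      c.childClusters.foreach(collect)
    }
    collect(this)
    buf.toList
  }
}
```

Both return the same nodes; the buffer version just avoids the per-level intermediate collections.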
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6817#issuecomment-111985278
@sryza @andrewor14 Can you have a look?
---
Github user XuTingjun commented on a diff in the pull request:
https://github.com/apache/spark/pull/6817#discussion_r32484601
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -537,10 +537,19 @@ private[spark] class ExecutorAllocationManager
Github user XuTingjun commented on a diff in the pull request:
https://github.com/apache/spark/pull/6817#discussion_r32399543
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -732,6 +731,8 @@ private[spark] class TaskSetManager
Github user XuTingjun commented on a diff in the pull request:
https://github.com/apache/spark/pull/6817#discussion_r32399782
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -553,12 +562,13 @@ private[spark] class ExecutorAllocationManager
Github user XuTingjun commented on a diff in the pull request:
https://github.com/apache/spark/pull/6817#discussion_r32399852
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -537,10 +537,19 @@ private[spark] class ExecutorAllocationManager
GitHub user XuTingjun opened a pull request:
https://github.com/apache/spark/pull/6817
When tasks fail and new ones are appended, post a SparkListenerTaskResubmit event
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/XuTingjun/spark
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6545#issuecomment-110192207
Hi @squito, I think I need your help; I don't clearly know how to write
this test.
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6545#issuecomment-110191346
@squito, I think you can help me; I don't clearly know how to write this
test, thanks.
---
Github user XuTingjun closed the pull request at:
https://github.com/apache/spark/pull/5550
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/5550#issuecomment-109820313
@srowen, the JIRA has been updated to resolved; I think this patch can be
merged, right?
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6643#issuecomment-109247456
please retest
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6643#issuecomment-109142206
Thanks all, I have updated the code, please have a look again.
---
GitHub user XuTingjun opened a pull request:
https://github.com/apache/spark/pull/6643
set executor cores into system
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/XuTingjun/spark SPARK-8099
Alternatively you can review
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6545#issuecomment-108708777
I think the reason is not just the appCache. After debugging the code, I
found there are two main reasons:
1. The first time the */history/appid* request is sent
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6545#issuecomment-107912763
@tsudukim Sorry, I don't agree with you;
spark.history.retainedApplications can't be 0, I think
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/6545#issuecomment-107774238
@tsudukim hey, I think this bug was introduced by #3467; can you have a look?
---
GitHub user XuTingjun opened a pull request:
https://github.com/apache/spark/pull/6545
[SPARK-7889] Make sure that clicking the App ID on the HistoryPage
refreshes the SparkUI.
The bug is: when clicking the app on the incomplete page, the task
progress is 100/2000. After the app
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/5586#issuecomment-103777236
@deanchen Can you list the HBase configs needed on the client?
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/5586#issuecomment-103314158
@deanchen, when I use this patch, HBase throws the exception below. Can you
help me?
java.io.IOException: No secret manager configured for token authentication
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/5550#issuecomment-100898085
@JoshRosen
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/5550#issuecomment-100074535
I agree with the opinion @srowen stated before. In my case, the job details
page has one completed and one skipped stage, so I think decreasing the
numerator is better.
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/5550#issuecomment-98594154
@srowen Can you deal with this patch? Thanks !
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/5586#issuecomment-97653791
These days I've been running select commands to read data in HBase with the
Beeline shell, and it always throws the exception:
java.lang.IllegalStateException: unread block data
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/5550#issuecomment-95767885
@JoshRosen Can you have a look at this? We need your opinion.
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/5550#issuecomment-95526618
But I think I have done that in `onStageSubmitted`, removing a stage ID
from the completed set the moment it is retried. Have you seen that?
---
Github user XuTingjun commented on a diff in the pull request:
https://github.com/apache/spark/pull/5550#discussion_r28953413
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/UIData.scala ---
@@ -63,7 +64,7 @@ private[jobs] object UIData {
/* Stages */
var
Github user XuTingjun commented on a diff in the pull request:
https://github.com/apache/spark/pull/5550#discussion_r28953027
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
@@ -271,7 +271,9 @@ class JobProgressListener(conf: SparkConf) extends
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/5550#issuecomment-95559213
I have tested it, and it has fixed my problem.
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/5550#issuecomment-9147
@srowen I have updated the code, please have a look.
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/5550#issuecomment-95386082
@srowen I may be missing something. Actually, your idea is that we should
count the last retry status of a stage into completed/total, right?
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/5550#issuecomment-95008837
@srowen I have updated the patch code to delete the skipped stage from
the completed set.
---
Github user XuTingjun commented on the pull request:
https://github.com/apache/spark/pull/5550#issuecomment-94364746
@srowen, do you mean we should count the first attempt of stages into
total/completed stages?
@JoshRosen, do you have any suggestions?
---
GitHub user XuTingjun reopened a pull request:
https://github.com/apache/spark/pull/5550
[SPARK-6973] Modify total stages/tasks on the allJobsPage
Though totalStages = allStages - skippedStages is understandable,
considering the problem [SPARK-6973], I think totalStages = allStages