[
https://issues.apache.org/jira/browse/TEZ-4187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17409246#comment-17409246
]
Sungwoo commented on TEZ-4187:
------------------------------
Is this related to the deadlock in ShuffleScheduler which is addressed in
TEZ-4334?
> Shuffle hangs occasionally after fetch failure with multiple input vertices
> ---------------------------------------------------------------------------
>
> Key: TEZ-4187
> URL: https://issues.apache.org/jira/browse/TEZ-4187
> Project: Apache Tez
> Issue Type: Bug
> Affects Versions: 0.9.2
> Environment: Tez 0.9.2
> Hadoop 2.9.2
> Debian 9.12
> Google Cloud Dataproc 1.3 + Preemptible workers
> Reporter: Aaron Pfeifer
> Priority: Major
> Attachments: DAG.png, task.log
>
>
> Hi folks!
> There appears to be an issue where the ShuffleSchedulerCallable is not
> properly interrupted when a fetch failure occurs in one of multiple
> input vertices. When this happens, our logs show the ShuffleScheduler (for
> the input vertex which did *not* set the throwable) running forever in this
> loop:
> https://github.com/apache/tez/blob/10cb3519bd34389210e6511a2ba291b52dcda081/tez-runtime-library/src/main/java/org/apache/tez/runtime/library/common/shuffle/orderedgrouped/ShuffleScheduler.java#L1122-L1127
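The linked loop is, in outline, a monitor wait; the sketch below (all names hypothetical, not Tez's actual fields) reproduces the reported symptom: if the failure path on another vertex only flips a shutdown flag without a matching notifyAll() on this scheduler's monitor, the waiting thread never re-checks its condition and blocks forever.

```java
// Minimal, self-contained sketch of the hang, assuming a wait loop of
// roughly this shape. A scheduler-style thread blocks in wait() on its
// own monitor; a flag-only shutdown leaves it blocked, while a shutdown
// that notifies the same monitor wakes it so it can re-check and exit.
public class ShuffleWaitSketch {
    private final Object lock = new Object();
    private volatile boolean shutdown = false;
    private volatile int remainingMaps = 1;

    Thread startWaiter() {
        Thread t = new Thread(() -> {
            try {
                synchronized (lock) {
                    // Re-check the condition every time we wake up.
                    while (!shutdown && remainingMaps > 0) {
                        lock.wait();
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // treat as shutdown
            }
        });
        t.start();
        return t;
    }

    void brokenShutdown() {
        shutdown = true;              // flag only -- the waiter never wakes
    }

    void correctShutdown() {
        synchronized (lock) {
            shutdown = true;
            lock.notifyAll();         // wake the waiter so it re-checks
        }
    }

    public static void main(String[] args) throws Exception {
        ShuffleWaitSketch broken = new ShuffleWaitSketch();
        Thread t1 = broken.startWaiter();
        Thread.sleep(100);            // let the waiter enter wait()
        broken.brokenShutdown();
        t1.join(500);
        System.out.println("after flag-only shutdown, alive: " + t1.isAlive());

        ShuffleWaitSketch fixed = new ShuffleWaitSketch();
        Thread t2 = fixed.startWaiter();
        Thread.sleep(100);
        fixed.correctShutdown();
        t2.join(2000);
        System.out.println("after notifyAll shutdown, alive: " + t2.isAlive());

        broken.correctShutdown();     // clean up the stuck thread
        t1.join();
    }
}
```

Under this assumption, the fix is to make every failure path that sets the shutdown state also notify the scheduler's monitor, not just the path belonging to the vertex that recorded the throwable.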
> I've attached logs demonstrating the issue and what our DAG looks like (to
> clarify, the attached DAG graphical view is not from the failed task -- it
> just shows the vertex relationships). I can reproduce this almost
> every time. It always fails on a vertex with more than one input.
> Some additional notes of interest:
> * If I reduce {{shuffle.src-attempt.abort.limit}} to 1, tasks never hang;
> anything above that results in a hung shuffle scheduler.
> * I can change the Tez settings so that my shuffle is never considered
> unhealthy (i.e. increase the abort limit and turn off
> {{shuffle.failed.check.since-last.completion}}), which allows the tasks to
> continue and eventually succeed. However, this isn't ideal.
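For anyone trying to reproduce the two observations above, they correspond to settings along these lines (assuming the fully qualified `tez.runtime.*` property names from TezRuntimeConfiguration; verify them against your Tez version):

```xml
<!-- tez-site.xml fragment; names assume the tez.runtime.* keys. -->
<!-- Observation 1: with an abort limit of 1, tasks never hang. -->
<property>
  <name>tez.runtime.shuffle.src-attempt.abort.limit</name>
  <value>1</value>
</property>
<!-- Observation 2 (workaround): disable the since-last-completion
     health check so the shuffle is never marked unhealthy. -->
<property>
  <name>tez.runtime.shuffle.failed.check.since-last.completion</name>
  <value>false</value>
</property>
```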
> I suspect there is some race condition in how we shut down the scheduler
> thread, the multiple fetchers, and the referee, but I haven't quite been
> able to put my finger on it.
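One shutdown ordering that would avoid the suspected race is sketched below (hypothetical names, not the actual Tez teardown code): a single compare-and-set makes shutdown idempotent regardless of which component (scheduler, a fetcher, or the referee) triggers it first, the scheduler's monitor is notified only after the flag is visible, and fetchers blocked in I/O are interrupted last.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical teardown sequence. compareAndSet ensures exactly one
// caller performs the shutdown work, so concurrent failure paths from
// different input vertices cannot half-complete the teardown.
public class ShutdownSketch {
    private final AtomicBoolean isShutdown = new AtomicBoolean(false);
    private final Object schedulerMonitor = new Object();

    /** Returns true only for the caller that actually performed shutdown. */
    public boolean shutdown(List<Thread> fetchers) {
        if (!isShutdown.compareAndSet(false, true)) {
            return false;             // someone else already shut us down
        }
        synchronized (schedulerMonitor) {
            schedulerMonitor.notifyAll(); // wake the scheduler's wait loop
        }
        for (Thread f : fetchers) {
            f.interrupt();            // unblock fetchers stuck in I/O
        }
        return true;
    }
}
```

This is only a sketch of the coordination pattern, not a claim about where the actual race in 0.9.2 lives.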
--
This message was sent by Atlassian Jira
(v8.3.4#803005)