[ 
https://issues.apache.org/jira/browse/HDFS-9612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-9612:
----------------------------------
    Attachment: HDFS-9612.003.patch

Rev03:
# Added another (more complex) test case; also made sure all ProducerConsumer 
tests call shutdown() to terminate threads.
# Simplified the ProducerConsumer$Worker.run() logic. In SimpleCopyListing, 
ProducerConsumer.shutdown() is called only after all work has been consumed, so 
there is no need to handle the case where a worker is interrupted in the middle 
of getting, putting, or processing a work item. Therefore, all idle workers are 
expected to be blocked at
{code:java}
work = inputQueue.take();
{code}
and if a worker is interrupted there, it simply returns.
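A minimal sketch of the simplified worker loop (the class shape, generic parameters, and the WorkProcessor interface below are illustrative, not the exact patch code; only inputQueue.take() and the interrupt-then-return behavior come from the patch description):

{code:java}
import java.util.concurrent.BlockingQueue;

// Illustrative worker: block on take() when idle; when shutdown()
// interrupts the thread, catch InterruptedException and return,
// letting the thread terminate cleanly.
interface WorkProcessor<T, R> {
    R process(T work);
}

class Worker<T, R> implements Runnable {
    private final BlockingQueue<T> inputQueue;
    private final BlockingQueue<R> outputQueue;
    private final WorkProcessor<T, R> processor;

    Worker(BlockingQueue<T> in, BlockingQueue<R> out, WorkProcessor<T, R> p) {
        this.inputQueue = in;
        this.outputQueue = out;
        this.processor = p;
    }

    @Override
    public void run() {
        while (true) {
            try {
                T work = inputQueue.take();  // all idle workers wait here
                outputQueue.put(processor.process(work));
            } catch (InterruptedException e) {
                return;  // interrupted by shutdown(): exit the thread
            }
        }
    }
}
{code}

Because shutdown() is only invoked after all work is consumed, the interrupt can only arrive while a worker is parked in take(), which is what makes the single catch-and-return sufficient.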

> DistCp worker threads are not terminated after jobs are done.
> -------------------------------------------------------------
>
>                 Key: HDFS-9612
>                 URL: https://issues.apache.org/jira/browse/HDFS-9612
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: distcp
>    Affects Versions: 2.8.0
>            Reporter: Wei-Chiu Chuang
>            Assignee: Wei-Chiu Chuang
>         Attachments: HDFS-9612.001.patch, HDFS-9612.002.patch, 
> HDFS-9612.003.patch
>
>
> In HADOOP-11827, a producer-consumer style thread pool was introduced to 
> parallelize the task of listing files/directories.
> We have a use case where a distcp job is run during the commit phase of an 
> MR2 job. However, it was found that distcp does not terminate its 
> ProducerConsumer thread pools properly. Because those threads are not 
> terminated, the MR2 jobs never finish.
> In the more typical use case where distcp runs as a standalone job, the 
> leaked threads are terminated forcefully when the Java process exits, so 
> they never became a problem.
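The leak mechanism can be illustrated with a hypothetical miniature pool (MiniPool and its method names are invented for illustration; they are not the actual ProducerConsumer API): non-daemon worker threads blocked on take() keep the enclosing process alive until something interrupts and joins them, which is what shutdown() must do.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical miniature of the pattern: idle workers park in take();
// without an explicit shutdown() that interrupts and joins them, these
// non-daemon threads keep the process (e.g. an MR2 commit phase) alive.
class MiniPool {
    private final List<Thread> workers = new ArrayList<>();
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    void start(int n) {
        for (int i = 0; i < n; i++) {
            Thread t = new Thread(() -> {
                try {
                    while (true) {
                        queue.take().run();  // workers park here when idle
                    }
                } catch (InterruptedException e) {
                    // interrupted by shutdown(): fall through, thread dies
                }
            });
            t.start();
            workers.add(t);
        }
    }

    void shutdown() throws InterruptedException {
        for (Thread t : workers) t.interrupt();  // wake blocked take() calls
        for (Thread t : workers) t.join();       // wait for clean exit
    }

    boolean anyAlive() {
        for (Thread t : workers) {
            if (t.isAlive()) return true;
        }
        return false;
    }
}
{code}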



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
