[ https://issues.apache.org/jira/browse/HDFS-1542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12973377#action_12973377 ]

Amit Nithian commented on HDFS-1542:
------------------------------------

Todd, thanks! I was just curious as to whether we are indeed seeing the same issue. 
As an aside, I also tried dropping the block size of another job down to 1024; it 
didn't deadlock, but it didn't make much progress either :-), hence why I was 
trying to reproduce it. I was just making sure that the deadlock I am seeing is 
caused by the reason you mentioned, because the job conf XML for the specific 
job(s) is nowhere near 64MB in size.

> Deadlock in Configuration.writeXml when serialized form is larger than one 
> DFS block
> ------------------------------------------------------------------------------------
>
>                 Key: HDFS-1542
>                 URL: https://issues.apache.org/jira/browse/HDFS-1542
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 0.20.2, 0.22.0, 0.23.0
>            Reporter: Todd Lipcon
>            Priority: Critical
>         Attachments: deadlock.txt, Test.java
>
>
> Configuration.writeXml holds a lock on itself and then writes the XML to an 
> output stream, during which DFSOutputStream will try to get a lock on 
> ackQueue/dataQueue. Meanwhile, the DataStreamer thread will call functions 
> like conf.getInt() and deadlock against the other thread, since it could be 
> the same conf object.
> This causes a deterministic deadlock whenever the serialized form is larger 
> than the block size.
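The lock-order inversion described above can be sketched in isolation. This is not Hadoop code: the lock names (confLock for the Configuration monitor, queueLock for ackQueue/dataQueue) and the demo/acquire helpers are stand-ins chosen for illustration, and tryLock with a timeout substitutes for the real hang so the sketch terminates instead of deadlocking:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockSketch {
    /** Returns true if the lock-order inversion was detected. */
    static boolean demo() throws InterruptedException {
        ReentrantLock confLock = new ReentrantLock();   // stand-in for the synchronized Configuration
        ReentrantLock queueLock = new ReentrantLock();  // stand-in for ackQueue/dataQueue
        CountDownLatch bothHeld = new CountDownLatch(2);
        AtomicBoolean stuck = new AtomicBoolean(false);

        // "writeXml" side: takes the conf lock first, then needs the
        // stream's queue lock to flush once the serialized form fills a block.
        Thread writer = new Thread(() -> acquire(confLock, queueLock, bothHeld, stuck));
        // "DataStreamer" side: holds the queue lock, then calls back into
        // the same conf object (e.g. conf.getInt()).
        Thread streamer = new Thread(() -> acquire(queueLock, confLock, bothHeld, stuck));
        writer.start();
        streamer.start();
        writer.join();
        streamer.join();
        return stuck.get();
    }

    // Hold `first`, wait until both threads hold their first lock,
    // then try `second`; the timeout stands in for the real hang.
    static void acquire(ReentrantLock first, ReentrantLock second,
                        CountDownLatch bothHeld, AtomicBoolean stuck) {
        first.lock();
        try {
            bothHeld.countDown();
            bothHeld.await();
            if (!second.tryLock(1, TimeUnit.SECONDS)) {
                stuck.set(true);  // would have deadlocked for real
            } else {
                second.unlock();
            }
        } catch (InterruptedException ignored) {
        } finally {
            first.unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("deadlock detected: " + demo());
    }
}
```

Because each thread holds its own lock while waiting on the other's, both tryLock calls time out and the inversion is reported deterministically, which matches why the real deadlock fires every time the serialized conf exceeds one block.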

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.