[
https://issues.apache.org/jira/browse/HDFS-1542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12972344#action_12972344
]
Todd Lipcon commented on HDFS-1542:
-----------------------------------
Yes, you can probably set the DFS block size to a very large value for your
job, but that has the side effect of producing very large blocks in the output
as well. The other workaround is not to store very large objects in the
Configuration at all. Instead, please consider using the DistributedCache API -
it should be more efficient anyway.
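A minimal sketch of the DistributedCache approach against the old (0.20-era)
mapred API; the file path and class name here are illustrative, not from this
issue:

  import java.io.BufferedReader;
  import java.io.FileReader;
  import java.net.URI;
  import org.apache.hadoop.filecache.DistributedCache;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.mapred.JobConf;

  public class CacheSketch {
    // Job setup: instead of conf.set("my.big.payload", hugeString), put
    // the payload in an HDFS file once and register it with the cache.
    public static void setup(JobConf job) throws Exception {
      DistributedCache.addCacheFile(new URI("/user/me/big-payload.dat"), job);
    }

    // Task side, e.g. called from Mapper.configure(JobConf): the framework
    // has already localized the file, so just read it off local disk.
    public static String readPayload(JobConf job) throws Exception {
      Path[] cached = DistributedCache.getLocalCacheFiles(job);
      BufferedReader r =
          new BufferedReader(new FileReader(cached[0].toString()));
      try {
        return r.readLine();
      } finally {
        r.close();
      }
    }
  }

This keeps the Configuration small (only the cache URI travels with the job),
and each task reads the payload from local disk rather than deserializing it
out of the job XML.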
> Deadlock in Configuration.writeXml when serialized form is larger than one
> DFS block
> ------------------------------------------------------------------------------------
>
> Key: HDFS-1542
> URL: https://issues.apache.org/jira/browse/HDFS-1542
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs client
> Affects Versions: 0.20.2, 0.22.0, 0.23.0
> Reporter: Todd Lipcon
> Priority: Critical
> Attachments: Test.java
>
>
> Configuration.writeXml holds a lock on itself and then writes the XML to an
> output stream, during which DFSOutputStream will try to get a lock on
> ackQueue/dataQueue. Meanwhile the DataStreamer thread will call functions
> like conf.getInt() and deadlock against the other thread, since it could be
> the same conf object.
> This causes a deterministic deadlock whenever the serialized form is larger
> than block size.
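For reference, a sketch of the shape of a reproduction (this is not the
attached Test.java; the path, sizes, and property value are illustrative):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class WriteXmlDeadlock {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // Inflate the serialized form well past one block.
      StringBuilder big = new StringBuilder();
      for (int i = 0; i < 4 * 1024 * 1024; i++) {
        big.append('x');
      }
      conf.set("big.value", big.toString());

      FileSystem fs = FileSystem.get(conf);
      // Create the file with a 1 MB block size so the 4 MB XML form is
      // guaranteed to cross a block boundary mid-write.
      FSDataOutputStream out = fs.create(
          new Path("/tmp/conf.xml"), true, 4096, (short) 1, 1024 * 1024);
      // writeXml synchronizes on conf for the whole write; when the stream
      // crosses the block boundary, the DataStreamer thread reads the same
      // conf and the two threads can deadlock as described above.
      conf.writeXml(out);
      out.close();
    }
  }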