[ https://issues.apache.org/jira/browse/HADOOP-5933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12714116#action_12714116 ]

Raghu Angadi commented on HADOOP-5933:
--------------------------------------

> I am not sure if it would be a good idea to alter code execution paths based 
> on logging levels

+1. If this feature is committed, the behavior should be the same with or 
without debug enabled. As a practical matter, it is pretty hard to ask users 
to enable debug, since that prints boatloads of other stuff.

+1 for the feature. Looking at how hard it is for users to debug such 
problems, this seems like a useful feature. Users still need to add code to 
call getCause(); that is OK.
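A minimal, JDK-only sketch of the getCause() idea discussed above (this is not the actual DFSClient code or the HADOOP-5933 patch; the SharedClient and checkOpen names are illustrative): if close() records a stack trace at the call site, the later "Filesystem closed" error can carry it as its cause, so the victim thread can see who closed the shared client.

```java
import java.io.IOException;

// Illustrative sketch: a client that remembers where it was closed, so the
// "already closed" error can expose the offending close() call via getCause().
class SharedClient {
    private volatile Exception closedAt; // records the close() call site, if any

    public void close() {
        if (closedAt == null) {
            // Capture the stack trace of whoever closed the shared client.
            closedAt = new Exception("client closed here");
        }
    }

    public void checkOpen() throws IOException {
        if (closedAt != null) {
            // Attach the close-site trace as the cause.
            throw new IOException("Filesystem closed", closedAt);
        }
    }
}

public class GetCauseDemo {
    public static void main(String[] args) {
        SharedClient client = new SharedClient();
        client.close(); // thread A closes the shared client
        try {
            client.checkOpen(); // thread B trips over it later
        } catch (IOException e) {
            // The cause pinpoints where close() was called.
            System.out.println("cause: " + e.getCause().getMessage());
            // prints "cause: client closed here"
        }
    }
}
```

The point of the design is that the diagnostic cost is paid once at close() time, unconditionally, so behavior does not depend on the logging level.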


> Make it harder to accidentally close a shared DFSClient
> -------------------------------------------------------
>
>                 Key: HADOOP-5933
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5933
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: fs
>    Affects Versions: 0.21.0
>            Reporter: Steve Loughran
>            Priority: Minor
>         Attachments: HADOOP-5933.patch
>
>
> Every so often I get stack traces telling me that DFSClient is closed, 
> usually in {{org.apache.hadoop.hdfs.DFSClient.checkOpen()}}. The root cause 
> of this is usually that one thread has closed a shared fsclient while another 
> thread still has a reference to it. If the other thread then asks for a new 
> client, it will get one (and the cache repopulated), but if it has one 
> already, then I get to see a stack trace. 
> It's effectively a race condition between clients in different threads. 
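One common way to make this race harder to hit is reference counting: close() only really closes the underlying client when the last holder releases it. The sketch below is a hypothetical JDK-only illustration of that idea, not the attached patch; RefCountedClient and its methods are made-up names.

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: reference-count a shared client so one thread's
// close() cannot pull it out from under another thread that still holds it.
class RefCountedClient {
    private final AtomicInteger refs = new AtomicInteger(0);
    private volatile boolean open = true;

    // Each thread that wants to share the client takes a reference.
    public RefCountedClient acquire() {
        refs.incrementAndGet();
        return this;
    }

    // close() only really closes once the last holder has released it.
    public void close() {
        if (refs.decrementAndGet() <= 0) {
            open = false;
        }
    }

    public void checkOpen() throws IOException {
        if (!open) {
            throw new IOException("Filesystem closed");
        }
    }
}
```

With this scheme, thread A calling close() while thread B still holds a reference just drops the count from 2 to 1; thread B's operations keep passing checkOpen() until B also closes.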

