[
https://issues.apache.org/jira/browse/HADOOP-5933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Steve Loughran updated HADOOP-5933:
-----------------------------------
Attachment: HADOOP-5933.patch
This is a first cut at a solution: some extra diagnostics. Its main cost when
the log is not set to debug is one extra reference.
I don't really like log settings changing program behaviour, so I'm not sure
anyone will want to check this patch in; it's just what I put together to
track down my problem. The real problem is that the caching system isn't
compatible with users of DFSClient calling close() on their clients.
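For the record, here is a minimal sketch of the kind of debug-guarded
diagnostic described above; the class and field names are illustrative, not
the attached patch itself. close() captures a stack trace only when debug
logging is enabled, so a later checkOpen() failure can report which thread
closed the client; with debug off, the only overhead is the one unused
reference.
{code:java}
import java.io.IOException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Illustrative sketch, not the attached patch: record who closed the client,
// but only when debug logging is on. When it is off, the only cost is the
// one extra (null) reference.
public class ClosableClient {
  private static final Log LOG = LogFactory.getLog(ClosableClient.class);

  private volatile boolean clientRunning = true;
  private volatile Exception closedAt;          // the one extra reference

  public void close() {
    clientRunning = false;
    if (LOG.isDebugEnabled()) {
      // capture the closing thread's stack for later diagnostics
      closedAt = new Exception("client closed here");
    }
  }

  void checkOpen() throws IOException {
    if (!clientRunning) {
      if (closedAt != null) {
        LOG.debug("The client was closed at:", closedAt);
      }
      throw new IOException("Filesystem closed");
    }
  }
}
{code}
This is also where the behavioural coupling to the log level shows up: with
debug on, the closing thread's stack appears alongside the "Filesystem
closed" exception; with debug off, you only see the exception.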
> Make it harder to accidentally close a shared DFSClient
> -------------------------------------------------------
>
> Key: HADOOP-5933
> URL: https://issues.apache.org/jira/browse/HADOOP-5933
> Project: Hadoop Core
> Issue Type: Improvement
> Components: fs
> Affects Versions: 0.21.0
> Reporter: Steve Loughran
> Priority: Minor
> Attachments: HADOOP-5933.patch
>
>
> Every so often I get stack traces telling me that the DFSClient is closed,
> usually in {{org.apache.hadoop.hdfs.DFSClient.checkOpen()}}. The root cause
> is usually that one thread has closed a shared filesystem client while
> another thread still holds a reference to it. If the other thread then asks
> for a new client it will get one (and the cache is repopulated), but if it
> already has one, I get to see a stack trace.
> It's effectively a race condition between clients in different threads.
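> A hypothetical reproduction of the race (the scenario is made up; the calls
> are the public FileSystem API): both threads see the same cached instance
> returned by {{FileSystem.get()}}, so one thread's close() invalidates the
> client out from under the other.
> {code:java}
> import java.io.IOException;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> // Hypothetical reproduction, not from a real cluster: FileSystem.get()
> // returns a cached, shared instance, so the closer thread invalidates the
> // DFSClient out from under the reader thread.
> public class SharedClientRace {
>   public static void main(String[] args) throws Exception {
>     final FileSystem shared = FileSystem.get(new Configuration());
>
>     Thread closer = new Thread(new Runnable() {
>       public void run() {
>         try {
>           shared.close();            // closes the shared client
>         } catch (IOException e) {
>           e.printStackTrace();
>         }
>       }
>     });
>
>     Thread reader = new Thread(new Runnable() {
>       public void run() {
>         try {
>           // if closer ran first, this fails in DFSClient.checkOpen()
>           shared.exists(new Path("/"));
>         } catch (IOException e) {
>           e.printStackTrace();       // "Filesystem closed"
>         }
>       }
>     });
>
>     closer.start();
>     reader.start();
>     closer.join();
>     reader.join();
>   }
> }
> {code}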