[
https://issues.apache.org/jira/browse/HDFS-9699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15172540#comment-15172540
]
James Clampffer commented on HDFS-9699:
---------------------------------------
Realized I didn't look at the change in filesystem.cc (I was trying a new diff
tool that didn't show that file in its own tab).
The change there looks pretty good too. I think it would be a good idea to log
the remaining number of worker threads if an exception forces one thread to
bail out. Since the default is 1 worker, it might be worth adding a very clear
"No worker threads left! Libhdfs++ needs at least 1 worker to perform any IO"
message so that, assuming logging is enabled, people don't have to look too
hard when things grind to a halt.
FileSystem could take a handler to push internal errors like this directly to
client code when it hits something that can't be directly tied to a file
operation; that should be its own jira if it happens. Any thoughts there?
> libhdfs++: Add appropriate catch blocks for ASIO operations that throw
> ----------------------------------------------------------------------
>
> Key: HDFS-9699
> URL: https://issues.apache.org/jira/browse/HDFS-9699
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: hdfs-client
> Reporter: James Clampffer
> Assignee: James Clampffer
> Attachments: HDFS-6966.HDFS-8707.000.patch,
> HDFS-9699.HDFS-8707.001.patch, cancel_backtrace.txt
>
>
> libhdfs++ doesn't create exceptions of its own but it should be able to
> gracefully handle exceptions thrown by libraries it uses, particularly asio.
> libhdfs++ should be able to catch most exceptions within reason either at the
> call site or in the code that spins up asio worker threads. Certain system
> exceptions like std::bad_alloc don't need to be caught because by that point
> the process is likely in an unrecoverable state.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)