[ https://issues.apache.org/jira/browse/HDFS-2452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13128577#comment-13128577 ]
Uma Maheswara Rao G commented on HDFS-2452:
-------------------------------------------
Yes, currently we are handling only SocketTimeoutException and IOException:
{code}
    } catch (SocketTimeoutException ignored) {
      // wake up to see if should continue to run
    } catch (IOException ie) {
      LOG.warn(datanode.dnRegistration + ":DataXceiveServer: "
          + StringUtils.stringifyException(ie));
    } catch (Throwable te) {
      // any other Throwable (e.g. OutOfMemoryError) stops the whole DataNode
      LOG.error(datanode.dnRegistration + ":DataXceiveServer: Exiting due to:"
          + StringUtils.stringifyException(te));
      datanode.shouldRun = false;
    }
{code}
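For illustration only (this is not the patch for this issue): below is a minimal, self-contained sketch of an accept loop that catches OutOfMemoryError separately from other Throwables, so a failed thread spawn backs off instead of ending the whole server. The class and field names are hypothetical stand-ins for DataXceiverServer and datanode.shouldRun, and the back-off sleep is just an assumption about how recovery might look.
{code}
// Illustrative sketch only, not the HDFS-2452 patch.
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class XceiverLoopSketch implements Runnable {
  private final ServerSocket ss;        // assumed to have an SO_TIMEOUT set
  volatile boolean shouldRun = true;    // stand-in for datanode.shouldRun

  public XceiverLoopSketch(ServerSocket ss) {
    this.ss = ss;
  }

  @Override
  public void run() {
    while (shouldRun) {
      try {
        Socket s = ss.accept();
        // Spawning the transfer thread is where OutOfMemoryError can surface
        // (e.g. "unable to create new native thread").
        new Thread(() -> handle(s)).start();
      } catch (SocketTimeoutException ignored) {
        // wake up to see if we should continue to run
      } catch (OutOfMemoryError oom) {
        // Back off instead of stopping the server: the new thread could not be
        // spawned, but transfers already in progress can keep running.
        System.err.println("XceiverLoopSketch: OutOfMemoryError, backing off: " + oom);
        try {
          Thread.sleep(30_000L);
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
        }
      } catch (IOException ie) {
        System.err.println("XceiverLoopSketch: " + ie);
      } catch (Throwable te) {
        System.err.println("XceiverLoopSketch: exiting due to: " + te);
        shouldRun = false;
      }
    }
  }

  private void handle(Socket s) {
    // placeholder for the per-connection data transfer work
    try {
      s.close();
    } catch (IOException ignored) {
    }
  }
}
{code}
The point of the sketch is only that the catch (Throwable) branch quoted above is the one that sets shouldRun to false and so takes the DataNode down when thread creation fails.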
thanks
Uma
> OutOfMemoryError in DataXceiverServer takes down the DataNode
> -------------------------------------------------------------
>
> Key: HDFS-2452
> URL: https://issues.apache.org/jira/browse/HDFS-2452
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: data-node
> Affects Versions: 0.22.0
> Reporter: Konstantin Shvachko
> Fix For: 0.22.0
>
>
> OutOfMemoryError brings down the DataNode when DataXceiverServer tries to spawn a new data transfer thread.