[
https://issues.apache.org/jira/browse/HDFS-10311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
James Clampffer updated HDFS-10311:
-----------------------------------
Attachment: HDFS-10311.HDFS-8707.002.patch
New patch addressing [~bobhansen]'s comments
-got rid of extra is_open
-return e.what()
-don't hold lock before event hooks
> libhdfs++: DatanodeConnection::Cancel should not delete the underlying socket
> -----------------------------------------------------------------------------
>
> Key: HDFS-10311
> URL: https://issues.apache.org/jira/browse/HDFS-10311
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: hdfs-client
> Reporter: James Clampffer
> Assignee: James Clampffer
> Attachments: HDFS-10311.HDFS-8707.000.patch,
> HDFS-10311.HDFS-8707.001.patch, HDFS-10311.HDFS-8707.002.patch
>
>
> DataNodeConnectionImpl calls reset on the unique_ptr that references the
> underlying asio::tcp::socket. If this happens after the continuation
> pipeline checks the cancel state but before asio uses the socket, it will
> segfault because unique_ptr::reset destroys the socket and explicitly sets
> the pointer to nullptr.
> Cancel should only call shutdown() and close() on the socket but keep the
> socket instance alive. The socket can probably also be turned into a direct
> member of DataNodeConnectionImpl to get rid of the unique_ptr and simplify
> things a bit.
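> A minimal sketch of the proposed fix, using a hypothetical MockSocket in
> place of asio::ip::tcp::socket (the class and method names here are
> illustrative, not the actual libhdfs++ API): the socket is a plain member
> rather than a unique_ptr, so Cancel can shut it down without ever leaving a
> dangling pointer for an in-flight continuation to dereference.
> {code}
> #include <cassert>
> #include <iostream>
> #include <mutex>
>
> // Hypothetical stand-in for asio::ip::tcp::socket: shutdown()/close()
> // mark the socket unusable, but the object itself stays alive.
> class MockSocket {
> public:
>   void shutdown() { open_ = false; }
>   void close()    { open_ = false; }
>   // Returns false (an error) instead of crashing when the socket is closed.
>   bool async_read() { return open_; }
> private:
>   bool open_ = true;
> };
>
> // Socket is a direct member, not a unique_ptr, so there is no pointer
> // for Cancel to null out while another thread is about to use it.
> class DataNodeConnectionImpl {
> public:
>   void Cancel() {
>     std::lock_guard<std::mutex> lock(mtx_);
>     socket_.shutdown();  // pending operations fail with an error...
>     socket_.close();     // ...but the socket object remains valid
>   }
>   bool DoRead() {
>     std::lock_guard<std::mutex> lock(mtx_);
>     return socket_.async_read();  // safe even after Cancel: returns an error
>   }
> private:
>   std::mutex mtx_;
>   MockSocket socket_;
> };
>
> int main() {
>   DataNodeConnectionImpl conn;
>   assert(conn.DoRead());   // before Cancel: read succeeds
>   conn.Cancel();
>   assert(!conn.DoRead());  // after Cancel: read fails cleanly, no segfault
>   std::cout << "ok" << std::endl;
>   return 0;
> }
> {code}
> With the unique_ptr version, the same sequence (check cancel state, Cancel
> runs, then use the socket) dereferences a null pointer; here it simply
> surfaces an error to the continuation.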
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)