[ https://issues.apache.org/jira/browse/HADOOP-4386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12639127#action_12639127 ]
Raghu Angadi commented on HADOOP-4386:
--------------------------------------

> [...] but in most other cases I think we do care about the response.

I think so. Currently, even while DFSClient is writing data, the pipeline needs to know whether there was an error while sending data or receiving acks, so that it can break the pipeline appropriately. Once we have an API for async RPC calls, it is not much more work to return a 'Future' that can be queried for success or failure (a rough sketch of such an API is at the end of this message).

> RPC support for large data transfers.
> -------------------------------------
>
>                 Key: HADOOP-4386
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4386
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs, ipc
>            Reporter: Raghu Angadi
>
> Currently HDFS has a socket-level protocol for serving HDFS data to clients. Clients do not use RPCs to read or write data. Fundamentally, there is no reason why this data transfer cannot use RPCs.
> This jira is a placeholder for porting Datanode data transfers to RPC. This topic has been discussed in varying detail many times, the latest being in the context of HADOOP-3856. There are quite a few issues to be resolved, both at the API level and at the implementation level.
> We should probably copy some of the comments from HADOOP-3856 to here.
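To make the idea concrete, here is a minimal, hypothetical sketch of what a Future-returning async call could look like for the client write pipeline. None of these names exist in Hadoop today; AsyncDataProtocol, writePacket(), and PacketAck are invented stand-ins, and the executor-based "RPC" is only a placeholder for a real wire call. The point is just that the writer can keep streaming and later ask the Future whether the transfer succeeded or failed.

{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class AsyncRpcSketch {

  /** Hypothetical ack returned by the datanode pipeline for one packet. */
  static class PacketAck {
    final long seqno;
    PacketAck(long seqno) { this.seqno = seqno; }
  }

  /** Stand-in for an async RPC proxy that sends one packet to the pipeline. */
  interface AsyncDataProtocol {
    Future<PacketAck> writePacket(long seqno, byte[] data);
  }

  /** Toy implementation: runs the "RPC" on a thread pool and returns a Future. */
  static class DummyDataProtocol implements AsyncDataProtocol {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public Future<PacketAck> writePacket(final long seqno, final byte[] data) {
      return executor.submit(new Callable<PacketAck>() {
        public PacketAck call() throws Exception {
          // Pretend to transfer the packet; a real call would go over the wire.
          return new PacketAck(seqno);
        }
      });
    }

    void shutdown() { executor.shutdown(); }
  }

  public static void main(String[] args) throws InterruptedException {
    DummyDataProtocol datanode = new DummyDataProtocol();

    // The writer sends the packet and keeps the Future; it can continue
    // streaming and only later ask whether this transfer succeeded.
    Future<PacketAck> pending = datanode.writePacket(1L, new byte[64 * 1024]);

    try {
      PacketAck ack = pending.get(10, TimeUnit.SECONDS);
      System.out.println("packet " + ack.seqno + " acked");
    } catch (ExecutionException e) {
      // The remote call failed: break the pipeline and recover, much as the
      // DFSClient does today when it sees an error on the ack stream.
      System.err.println("pipeline error: " + e.getCause());
    } catch (TimeoutException e) {
      pending.cancel(true);
      System.err.println("ack timed out; breaking the pipeline");
    } finally {
      datanode.shutdown();
    }
  }
}
{code}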