[ https://issues.apache.org/jira/browse/HDFS-8015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14513671#comment-14513671 ]

Li Bo commented on HDFS-8015:
-----------------------------

Hi Yi,
I think there are several ways to handle writing a decoded block to a remote or 
local target. My idea is to first get a domain socket on which 
{{DataNode#localDataXceiverServer}} is listening, and then write the data via the 
output stream of that socket. The advantage is that we don't need to handle the 
details of block writing ourselves. The next step is to extend {{BlockReceiver}}. 
Currently it writes to the local disk and may also forward to a remote node. We 
can add a switch to control its write direction, i.e., local only, remote only, 
or local + remote. We can discuss this further after your first patch is ready.
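To make the switch idea concrete, here is a rough sketch. The names {{WriteDirection}} and {{ECBlockWriter}} are only placeholders for illustration, not existing HDFS classes; the real change would go into {{BlockReceiver}}:

{code:java}
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch of a block writer with a configurable write direction.
public class ECBlockWriter {

  /** Controls where a decoded/reconstructed block is written. */
  public enum WriteDirection {
    LOCAL_ONLY,       // write the decoded block to the local disk only
    REMOTE_ONLY,      // forward the decoded block to a remote DataNode only
    LOCAL_AND_REMOTE  // write locally and also forward to a remote DataNode
  }

  private final WriteDirection direction;
  private final OutputStream localOut;   // e.g. stream to a local replica file
  private final OutputStream remoteOut;  // e.g. stream from a socket to a DataNode

  public ECBlockWriter(WriteDirection direction,
                       OutputStream localOut,
                       OutputStream remoteOut) {
    this.direction = direction;
    this.localOut = localOut;
    this.remoteOut = remoteOut;
  }

  /** Write one decoded packet according to the configured direction. */
  public void writePacket(byte[] buf, int off, int len) throws IOException {
    if (direction == WriteDirection.LOCAL_ONLY
        || direction == WriteDirection.LOCAL_AND_REMOTE) {
      localOut.write(buf, off, len);
    }
    if (direction == WriteDirection.REMOTE_ONLY
        || direction == WriteDirection.LOCAL_AND_REMOTE) {
      remoteOut.write(buf, off, len);
    }
  }
}
{code}

With such a switch, the coding work could reconstruct a block once and then direct the output to the local disk, to a remote DataNode, or to both, without duplicating the packet-writing logic.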


> Erasure Coding: local and remote block writer for coding work in DataNode
> -------------------------------------------------------------------------
>
>                 Key: HDFS-8015
>                 URL: https://issues.apache.org/jira/browse/HDFS-8015
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Kai Zheng
>            Assignee: Li Bo
>         Attachments: HDFS-8015-000.patch
>
>
> As a task of HDFS-7344 ECWorker, in either striped or non-striped erasure 
> coding, to perform encoding or decoding we need to be able to write data 
> blocks locally or remotely. This is to come up with a block writer facility 
> on the DataNode side. It is better to consider the similar work done on the 
> client side, so that in the future it is possible to unify both.



