[ https://issues.apache.org/jira/browse/HDFS-7348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Zhe Zhang updated HDFS-7348:
----------------------------
    Attachment: ECWorker.java

This is a very rough prototype I wrote a *long time ago*. On the {{DataNode}}, upon receiving a codec command, it starts the {{ECWorker}} like this:

{code}
private void encodeBlock(ExtendedBlock block, BlockGroup bg) throws IOException {
  LinkedList<ExtendedBlock> blocks = new LinkedList<ExtendedBlock>();
  LinkedList<DatanodeInfo> sources = new LinkedList<DatanodeInfo>();
  for (LocatedBlock lb : bg.getLocatedBlocks()) {
    blocks.add(new ExtendedBlock(block.getBlockPoolId(),
        lb.getBlock().getLocalBlock()));
    sources.add(lb.getLocations()[0]);
  }
  LOG.debug("encodeBlock " + block);
  ECWorker ecWorker = new ECWorker(block.getLocalBlock(),
      blocks.toArray(new ExtendedBlock[blocks.size()]),
      sources.toArray(new DatanodeInfo[sources.size()]), this);
  ecWorkerMap.put(block.getLocalBlock(), ecWorker);
  new Daemon(ecWorker).start();
}
{code}

Just wanted to share it here in case it's of any help. Basically, it mimics the existing {{DataTransfer}} in {{DataNode}}, but pulls data instead of pushing it. I'm not sure whether we should follow this direction or create a dedicated block reader / writer for this purpose. Let's brainstorm here first.

> Erasure Coding: perform striped erasure decoding/recovery work given block
> reader and writer
> ----------------------------------------------------------------------------------------------
>
>                 Key: HDFS-7348
>                 URL: https://issues.apache.org/jira/browse/HDFS-7348
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>            Reporter: Kai Zheng
>            Assignee: Yi Liu
>         Attachments: ECWorker.java
>
>
> This assumes the facilities like block reader and writer are ready, and implements and performs erasure decoding/recovery work in the *striping* case, utilizing the erasure codec and coder provided by the codec framework.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
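For discussion, the pull-then-decode loop such a worker's {{run()}} might perform could be sketched as below. This is only a minimal, self-contained sketch: {{StripedRecoveryTask}} and its members are illustrative names, not HDFS classes; an in-memory map stands in for remote block readers, and XOR parity (which recovers a single failure) stands in for the RS coder the codec framework would provide.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a pull-based recovery worker. All names here are
// illustrative assumptions, not HDFS APIs.
public class StripedRecoveryTask implements Runnable {
    private final Map<String, byte[]> survivors; // surviving blocks by id
    private final int blockSize;
    private volatile byte[] recovered;

    public StripedRecoveryTask(Map<String, byte[]> survivors, int blockSize) {
        this.survivors = survivors;
        this.blockSize = blockSize;
    }

    @Override
    public void run() {
        // 1. Pull phase: a real worker would open a block reader to each
        //    source DataNode (DataTransfer in reverse); here we just read
        //    the in-memory stand-ins.
        byte[] out = new byte[blockSize];
        for (byte[] block : survivors.values()) {
            // 2. Decode phase: XOR-accumulate each surviving block; under
            //    XOR parity the accumulated result is the missing block.
            for (int i = 0; i < blockSize; i++) {
                out[i] ^= block[i];
            }
        }
        // 3. Write phase: a real worker would hand this buffer to a block
        //    writer targeting the chosen recovery DataNode.
        recovered = out;
    }

    public byte[] getRecovered() {
        return recovered;
    }

    public static void main(String[] args) throws Exception {
        byte[] d0 = {1, 2, 3}, d1 = {4, 5, 6};
        byte[] parity = new byte[3];
        for (int i = 0; i < 3; i++) {
            parity[i] = (byte) (d0[i] ^ d1[i]);
        }
        // Suppose d1 is lost: pull d0 and parity, XOR them to rebuild d1.
        Map<String, byte[]> survivors = new HashMap<>();
        survivors.put("d0", d0);
        survivors.put("parity", parity);
        StripedRecoveryTask task = new StripedRecoveryTask(survivors, 3);
        Thread daemon = new Thread(task); // stand-in for new Daemon(ecWorker)
        daemon.start();
        daemon.join();
        System.out.println(Arrays.toString(task.getRecovered())); // [4, 5, 6]
    }
}
```

The point of the sketch is only the control flow (pull from sources, decode locally, then write out), which is the same shape regardless of whether the reads come from a dedicated block reader or from logic embedded in the worker itself.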