[
https://issues.apache.org/jira/browse/HDFS-13486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16536728#comment-16536728
]
KarlManong edited comment on HDFS-13486 at 7/9/18 9:38 AM:
-----------------------------------------------------------
This will cause [HDFS-7524|https://issues.apache.org/jira/browse/HDFS-7524].
The key point is changing the storages.
{code:java}
// Update the pipeline to the first two datanodes' storages, then abort the stream.
void invoke() throws Exception {
  DatanodeInfo[] newNodes = new DatanodeInfo[2];
  newNodes[0] = nodes[0];
  newNodes[1] = nodes[1];
  final DatanodeManager dm = cluster.getNamesystem(0).getBlockManager()
      .getDatanodeManager();
  final String storageID1 = dm.getDatanode(newNodes[0]).getStorageInfos()[0]
      .getStorageID();
  final String storageID2 = dm.getDatanode(newNodes[1]).getStorageInfos()[0]
      .getStorageID();
  String[] storageIDs = {storageID1, storageID2};
  client.getNamenode().updatePipeline(client.getClientName(), oldBlock,
      newBlock, newNodes, storageIDs);
  // close() can fail if out.close() commits the block after the block-received
  // notifications from the datanodes arrive.
  // Since the datanodes and the output stream still have the old genstamps,
  // these blocks will be marked as corrupt after HDFS-5723 if the RECEIVED
  // notifications reach the namenode first, and close() will fail.
  DFSTestUtil.abortStream((DFSOutputStream) out.getWrappedStream());
}
{code}
Please have a look.
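To isolate the storage-changing step, here is a minimal sketch (reusing the test fields {{dm}}, {{newNodes}}, {{client}}, {{oldBlock}} and {{newBlock}} from the snippet above, not a separate implementation):
{code:java}
// Sketch only: gather one storage ID per replacement node and hand them to the
// namenode via updatePipeline(), which is the "changing the storages" step.
String[] storageIDs = new String[newNodes.length];
for (int i = 0; i < newNodes.length; i++) {
  storageIDs[i] = dm.getDatanode(newNodes[i]).getStorageInfos()[0].getStorageID();
}
client.getNamenode().updatePipeline(client.getClientName(), oldBlock,
    newBlock, newNodes, storageIDs);
{code}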
> Backport HDFS-11817 (A faulty node can cause a lease leak and NPE on
> accessing data) to branch-2.7
> --------------------------------------------------------------------------------------------------
>
> Key: HDFS-13486
> URL: https://issues.apache.org/jira/browse/HDFS-13486
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Wei-Chiu Chuang
> Assignee: Wei-Chiu Chuang
> Priority: Major
> Fix For: 2.7.7
>
> Attachments: HDFS-11817.branch-2.7.001.patch,
> HDFS-11817.branch-2.7.002.patch
>
>
> HDFS-11817 is a good fix to have in branch-2.7.
> I'm taking a stab at it now.