[ https://issues.apache.org/jira/browse/HDFS-12832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16261880#comment-16261880 ]

Konstantin Shvachko commented on HDFS-12832:
--------------------------------------------

The funny thing is that in {{BlockPlacementPolicyDefault.chooseTarget(String 
srcPath, ...)}} the {{srcPath}} parameter is completely redundant: it is not 
used at all.
I wonder whether any placement policies outside the default one actually take 
the file name into account when choosing a target for a block.
So a dirty fix would be to pass {{src = null}} and avoid calling 
{{bc.getName()}} entirely:
{code:java}
    private void chooseTargets(BlockPlacementPolicy blockplacement,
        BlockStoragePolicySuite storagePolicySuite,
        Set<Node> excludedNodes) {
      try {
-       targets = blockplacement.chooseTarget(bc.getName(),
+       targets = blockplacement.chooseTarget(null,
            additionalReplRequired, srcNode, liveReplicaStorages, false,
            excludedNodes, block.getNumBytes(),
            storagePolicySuite.getPolicy(bc.getStoragePolicyID()));
{code}
A more accurate approach is to remove {{ReplicationWork.bc}} and replace it 
with two fields, {{srcPath}} and {{storagePolicyID}}, which are the only things 
needed from {{bc}} and which can be computed inside the {{ReplicationWork()}} 
constructor from {{bc}}. The latter is safe since the constructor runs under 
the lock. That way we also avoid breaking the *private* interface 
{{BlockPlacementPolicy}}, if that matters.
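To illustrate the idea, here is a minimal sketch of the snapshot pattern (class and field names are illustrative only, not the actual HDFS code): the values needed from {{bc}} are copied once at construction time, while the namesystem lock is held, so no later call on a possibly-concurrently-modified {{BlockCollection}} is needed:
{code:java}
// Hypothetical simplified sketch, not the real ReplicationWork class:
// snapshot srcPath and storagePolicyID under the lock in the constructor,
// instead of keeping a live reference to bc and calling bc.getName() later
// without the lock.
class ReplicationWorkSketch {
    private final String srcPath;        // computed once, under the lock
    private final byte storagePolicyID;  // likewise

    ReplicationWorkSketch(String srcPath, byte storagePolicyID) {
        this.srcPath = srcPath;
        this.storagePolicyID = storagePolicyID;
    }

    String getSrcPath() { return srcPath; }
    byte getStoragePolicyID() { return storagePolicyID; }
}
{code}
Since both fields are {{final}} and primitive/immutable, the work item can later be consumed by the replication thread without further coordination.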

> INode.getFullPathName may throw ArrayIndexOutOfBoundsException lead to 
> NameNode exit
> ------------------------------------------------------------------------------------
>
>                 Key: HDFS-12832
>                 URL: https://issues.apache.org/jira/browse/HDFS-12832
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.7.4, 3.0.0-beta1
>            Reporter: DENG FEI
>            Priority: Critical
>              Labels: release-blocker
>         Attachments: HDFS-12832-trunk-001.patch, exception.log
>
>
> {code:title=INode.java|borderStyle=solid}
> public String getFullPathName() {
>     // Get the full path name of this inode.
>     if (isRoot()) {
>       return Path.SEPARATOR;
>     }
>     // compute size of needed bytes for the path
>     int idx = 0;
>     for (INode inode = this; inode != null; inode = inode.getParent()) {
>       // add component + delimiter (if not tail component)
>       idx += inode.getLocalNameBytes().length + (inode != this ? 1 : 0);
>     }
>     byte[] path = new byte[idx];
>     for (INode inode = this; inode != null; inode = inode.getParent()) {
>       if (inode != this) {
>         path[--idx] = Path.SEPARATOR_CHAR;
>       }
>       byte[] name = inode.getLocalNameBytes();
>       idx -= name.length;
>       System.arraycopy(name, 0, path, idx, name.length);
>     }
>     return DFSUtil.bytes2String(path);
>   }
> {code}
> We found an ArrayIndexOutOfBoundsException at 
> _{color:#707070}System.arraycopy(name, 0, path, idx, name.length){color}_ 
> while the ReplicaMonitor was working, and the NameNode exited.
> It seems the two loops are not synchronized, so the path's length can change 
> between them.
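The failure mode described above can be reproduced deterministically in a simplified, single-threaded form (illustrative sketch only, not the actual {{INode}} code): size the buffer in a first pass over the path components, let a "concurrent" rename lengthen a component between the passes, then copy in a second pass and overrun the array:
{code:java}
import java.util.List;

// Illustrative reproduction of the getFullPathName race: pass 1 sizes the
// buffer, pass 2 copies; if a component name grows in between, the copy
// index goes out of bounds, as in the reported stack trace.
class FullPathNameRace {
    // chain.get(0) is the leaf; the last element is the root ("" name).
    static String buildPath(List<byte[]> chain, Runnable betweenPasses) {
        int idx = 0;
        for (int i = 0; i < chain.size(); i++) {      // pass 1: size buffer
            idx += chain.get(i).length + (i != 0 ? 1 : 0);
        }
        byte[] path = new byte[idx];
        betweenPasses.run();                          // "concurrent" rename
        for (int i = 0; i < chain.size(); i++) {      // pass 2: copy backward
            if (i != 0) {
                path[--idx] = '/';
            }
            byte[] name = chain.get(i);
            idx -= name.length;                       // may go negative now
            System.arraycopy(name, 0, path, idx, name.length);
        }
        return new String(path);
    }
}
{code}
With no mutation between the passes this builds the expected path; lengthening a component between them makes {{idx}} negative and {{System.arraycopy}} throws, which is exactly why snapshotting the path before releasing the lock (or passing {{null}}) avoids the crash.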



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
