[ https://issues.apache.org/jira/browse/HDFS-7076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16644499#comment-16644499 ]
Xiang Li commented on HDFS-7076:
--------------------------------

Hi all, if you are about to use the patch, please be aware that Mover does not work with it, but the fix is easy.

*Stack trace*
{code:java}
java.lang.ArrayIndexOutOfBoundsException: 16
	at org.apache.hadoop.hdfs.server.mover.Mover.initStoragePolicies(Mover.java:159)
	at org.apache.hadoop.hdfs.server.mover.Mover.init(Mover.java:141)
	at org.apache.hadoop.hdfs.server.mover.Mover.run(Mover.java:165)
	at org.apache.hadoop.hdfs.server.mover.Mover.run(Mover.java:568)
	at org.apache.hadoop.hdfs.server.mover.Mover$Cli.run(Mover.java:696)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.hdfs.server.mover.Mover.main(Mover.java:727)
{code}

*Reason*
The size of the BlockStoragePolicy array is hardcoded to 16, but the ID of a custom storage policy is always >= 16.

*Fix*
{code:java}
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
index 5fcd29f..d721f6b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/Mover.java
@@ -114,7 +114,7 @@ private StorageGroup getTarget(String uuid, StorageType storageType) {

   private final BlockStoragePolicy[] blockStoragePolicies;

-  Mover(NameNodeConnector nnc, Configuration conf, AtomicInteger retryCount) {
+  Mover(NameNodeConnector nnc, Configuration conf, AtomicInteger retryCount) throws IOException {
     final long movedWinWidth = conf.getLong(
         DFSConfigKeys.DFS_MOVER_MOVEDWINWIDTH_KEY,
         DFSConfigKeys.DFS_MOVER_MOVEDWINWIDTH_DEFAULT);
@@ -133,8 +133,17 @@ private StorageGroup getTarget(String uuid, StorageType storageType) {
         maxConcurrentMovesPerNode, conf);
     this.storages = new StorageMap();
     this.targetPaths = nnc.getTargetPaths();
-    this.blockStoragePolicies = new BlockStoragePolicy[1 <<
-        BlockStoragePolicySuite.ID_BIT_LENGTH];
+
+    // Size the blockStoragePolicies array according to the current HDFS setup
+    BlockStoragePolicy[] policies = dispatcher.getDistributedFileSystem().getStoragePolicies();
+    int size = BlockStoragePolicySuite.RESERVED_POLICY_NUM;
+    for (BlockStoragePolicy policy : policies) {
+      int id = policy.getId();
+      if (size < id + 1) {
+        size = id + 1; // set size to the max id + 1
+      }
+    }
+    this.blockStoragePolicies = new BlockStoragePolicy[size];
   }
{code}

Some effort is needed to fold the change above (and a unit test) into the latest patch (008).

> Allow users to define custom storage policies
> ---------------------------------------------
>
>                 Key: HDFS-7076
>                 URL: https://issues.apache.org/jira/browse/HDFS-7076
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Jing Zhao
>            Assignee: Jing Zhao
>            Priority: Major
>              Labels: BB2015-05-TBR
>         Attachments: HDFS-7076.000.patch, HDFS-7076.001.patch, HDFS-7076.002.patch, HDFS-7076.003.patch, HDFS-7076.004.patch, HDFS-7076.005.patch, HDFS-7076.005.patch, HDFS-7076.007.patch, HDFS-7076.008.patch, editsStored
>
> Currently block storage policies are hard coded. This JIRA is to persist the policies in FSImage and Edit Log in order to support adding new policies or modifying existing policies.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
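For reference, the sizing logic in the fix can be exercised in isolation. The sketch below is hypothetical (the class and method names are not part of HDFS) and assumes the reserved minimum is 16, i.e. the old hardcoded bound 1 << BlockStoragePolicySuite.ID_BIT_LENGTH:

{code:java}
// Standalone sketch of the array-sizing logic used in the fix above.
// PolicyArraySizer and computeArraySize are hypothetical names, not HDFS APIs.
public class PolicyArraySizer {

    /**
     * Returns the smallest array length that can index every policy id,
     * never less than the reserved minimum (16 in stock HDFS).
     */
    static int computeArraySize(int[] policyIds, int reservedPolicyNum) {
        int size = reservedPolicyNum;
        for (int id : policyIds) {
            if (size < id + 1) {
                size = id + 1; // grow to max id + 1
            }
        }
        return size;
    }

    public static void main(String[] args) {
        // Built-in policies only (all ids < 16): size stays at the reserved 16.
        System.out.println(computeArraySize(new int[]{2, 5, 7, 12}, 16)); // 16
        // A custom policy with id 18 forces length 19, which the hardcoded
        // 16-element array could not hold (hence the AIOOBE above).
        System.out.println(computeArraySize(new int[]{2, 5, 18}, 16));    // 19
    }
}
{code}

With only built-in policies the behavior matches the old hardcoded array, so the change is backward compatible; the array grows only when custom policies with ids >= 16 exist.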