[jira] [Commented] (HDFS-13678) StorageType is incompatible when rolling upgrade to 2.6/2.6+ versions

2022-05-24 Thread Masatake Iwasaki (Jira)


[ https://issues.apache.org/jira/browse/HDFS-13678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17541340#comment-17541340 ]

Masatake Iwasaki commented on HDFS-13678:
-----------------------------------------

Updated the target version in preparation for the 2.10.2 release.

> StorageType is incompatible when rolling upgrade to 2.6/2.6+ versions
> ----------------------------------------------------------------------
>
>                 Key: HDFS-13678
>                 URL: https://issues.apache.org/jira/browse/HDFS-13678
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: rolling upgrades
>    Affects Versions: 2.5.0
>            Reporter: Yiqun Lin
>            Priority: Major
>
> In version 2.6.0, HDFS added support for more storage types, implemented in
> HDFS-6584. But this appears to be an incompatible change: when we rolling-upgrade
> our cluster from 2.5.0 to 2.6.0, the following error is thrown.
> {noformat}
> 2018-06-14 11:43:39,246 ERROR [DataNode: [[[DISK]file:/home/vipshop/hard_disk/dfs/, [DISK]file:/data1/dfs/, [DISK]file:/data2/dfs/]] heartbeating to xx.xx.xx.xx:8022] org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in BPOfferService for Block pool BP-670256553-xx.xx.xx.xx-1528795419404 (Datanode Uuid ab150e05-fcb7-49ed-b8ba-f05c27593fee) service to xx.xx.xx.xx:8022
> java.lang.ArrayStoreException
>  at java.util.ArrayList.toArray(ArrayList.java:412)
>  at java.util.Collections$UnmodifiableCollection.toArray(Collections.java:1034)
>  at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1030)
>  at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:836)
>  at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:146)
>  at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:566)
>  at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:664)
>  at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:835)
>  at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The scenario is that an old DN fails to parse the StorageType values it receives
> from a new NN. The error occurs while sending the heartbeat to the NN, so blocks
> are never successfully reported to the NN, which leads to subsequent errors.
> Corresponding logic in 2.5.0:
> {code}
>   public static BlockCommand convert(BlockCommandProto blkCmd) {
>     ...
>     StorageType[][] targetStorageTypes = new StorageType[targetList.size()][];
>     List<StorageTypesProto> targetStorageTypesList =
>         blkCmd.getTargetStorageTypesList();
>     if (targetStorageTypesList.isEmpty()) { // missing storage types
>       for (int i = 0; i < targetStorageTypes.length; i++) {
>         targetStorageTypes[i] = new StorageType[targets[i].length];
>         Arrays.fill(targetStorageTypes[i], StorageType.DEFAULT);
>       }
>     } else {
>       for (int i = 0; i < targetStorageTypes.length; i++) {
>         List<StorageTypeProto> p =
>             targetStorageTypesList.get(i).getStorageTypesList();
>         targetStorageTypes[i] = p.toArray(new StorageType[p.size()]); // <-- error here
>       }
>     }
> {code}
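>
> To see why the marked line compiles yet only fails at runtime: in {{<T> T[] toArray(T[] a)}}, the type parameter {{T}} is independent of the list's element type, so the proto-vs-HDFS-enum mismatch surfaces only as an {{ArrayStoreException}} when an element is copied into the destination array. A minimal, self-contained sketch (the two enums below are stand-ins, not the real HDFS classes):
> {code:java}
> import java.util.ArrayList;
> import java.util.Arrays;
> import java.util.Collections;
> import java.util.List;
>
> public class ToArrayPitfall {
>   // Stand-ins for the protobuf enum and the HDFS enum; not the real classes.
>   enum StorageTypeProto { DISK, SSD }
>   enum StorageType { DISK, SSD }
>
>   public static void main(String[] args) {
>     // An unmodifiable view over an ArrayList, mirroring the protobuf-backed
>     // list in the stack trace above.
>     List<StorageTypeProto> p = Collections.unmodifiableList(
>         new ArrayList<>(Arrays.asList(StorageTypeProto.DISK)));
>     // Compiles fine: T is inferred as StorageType, unrelated to the element
>     // type. At runtime, copying a StorageTypeProto into a StorageType[]
>     // throws java.lang.ArrayStoreException.
>     StorageType[] types = p.toArray(new StorageType[p.size()]);
>   }
> }
> {code}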
> Given the current logic, it would be better to return the default type instead
> of throwing an exception in case StorageType changes in newer versions (new
> fields or new types added) during a rolling upgrade.
> {code:java}
>   public static StorageType convertStorageType(StorageTypeProto type) {
>     switch (type) {
>     case DISK:
>       return StorageType.DISK;
>     case SSD:
>       return StorageType.SSD;
>     case ARCHIVE:
>       return StorageType.ARCHIVE;
>     case RAM_DISK:
>       return StorageType.RAM_DISK;
>     case PROVIDED:
>       return StorageType.PROVIDED;
>     default:
>       throw new IllegalStateException(
>           "BUG: StorageTypeProto not found, type=" + type);
>     }
>   }
> {code}
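>
> A minimal sketch of the proposed direction (not a committed patch): make the default branch fall back to {{StorageType.DEFAULT}} with a warning, so a value added by a newer release degrades gracefully during a rolling upgrade instead of failing the heartbeat. It assumes an slf4j-style {{LOG}} exists in the enclosing class:
> {code:java}
>   public static StorageType convertStorageType(StorageTypeProto type) {
>     switch (type) {
>     case DISK:
>       return StorageType.DISK;
>     case SSD:
>       return StorageType.SSD;
>     case ARCHIVE:
>       return StorageType.ARCHIVE;
>     case RAM_DISK:
>       return StorageType.RAM_DISK;
>     case PROVIDED:
>       return StorageType.PROVIDED;
>     default:
>       // Tolerate enum values introduced by newer releases instead of throwing.
>       LOG.warn("Unknown StorageTypeProto {}; falling back to {}",
>           type, StorageType.DEFAULT);
>       return StorageType.DEFAULT;
>     }
>   }
> {code}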






[jira] [Commented] (HDFS-13678) StorageType is incompatible when rolling upgrade to 2.6/2.6+ versions

2020-09-03 Thread Masatake Iwasaki (Jira)


[ https://issues.apache.org/jira/browse/HDFS-13678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17190567#comment-17190567 ]

Masatake Iwasaki commented on HDFS-13678:
-----------------------------------------

Updated the target version in preparation for the 2.10.1 release.



