[
https://issues.apache.org/jira/browse/HDFS-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13546021#comment-13546021
]
Tsz Wo (Nicholas), SZE commented on HDFS-4363:
----------------------------------------------
- Do you also want to clean up DataTransferProtoUtil? Some minor suggestions:
{code}
@@ -41,14 +41,12 @@
public abstract class DataTransferProtoUtil {
static BlockConstructionStage fromProto(
OpWriteBlockProto.BlockConstructionStage stage) {
- return BlockConstructionStage.valueOf(BlockConstructionStage.class,
- stage.name());
+ return BlockConstructionStage.valueOf(stage.name());
}
static OpWriteBlockProto.BlockConstructionStage toProto(
BlockConstructionStage stage) {
- return OpWriteBlockProto.BlockConstructionStage.valueOf(
- stage.name());
+ return OpWriteBlockProto.BlockConstructionStage.valueOf(stage.name());
}
{code}
- This is not directly related to the patch: use valueOf(..) instead of
switch-case for converting enum types, e.g.
{code}
public static DatanodeInfoProto.AdminState convert(
final DatanodeInfo.AdminStates inAs) {
return DatanodeInfoProto.AdminState.valueOf(inAs.name());
}
{code}
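To illustrate the idea, here is a self-contained sketch with two hypothetical enums (not the actual HDFS types) whose constants share names; `valueOf(..)` does the conversion in one line instead of an N-case switch:

```java
// Illustrative sketch only: AdminStates / AdminStateProto are hypothetical
// stand-ins for a domain enum and its generated protobuf counterpart.
public class EnumConvertExample {
  enum AdminStates { NORMAL, DECOMMISSIONED }
  enum AdminStateProto { NORMAL, DECOMMISSIONED }

  // Converts by constant name. If the two enums ever diverge,
  // valueOf(..) throws IllegalArgumentException, surfacing the
  // mismatch immediately instead of silently falling through a switch.
  static AdminStateProto convert(AdminStates in) {
    return AdminStateProto.valueOf(in.name());
  }

  public static void main(String[] args) {
    System.out.println(convert(AdminStates.DECOMMISSIONED));
  }
}
```

A side benefit over switch-case: adding a new constant to both enums needs no change to the conversion method.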
- PBHelper.convert(DatanodeInfo[] dnInfos, int startIdx) and other related
methods should return List instead of ArrayList.
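For illustration, a minimal sketch of the interface-typed return (using `String[]` rather than the real `DatanodeInfo[]` signature) could look like:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only, not the actual PBHelper code: callers depend
// on the List interface, and ArrayList stays an implementation detail
// that can change without touching any call site.
public class ListReturnExample {
  static List<String> convert(String[] items, int startIdx) {
    List<String> result = new ArrayList<>(items.length - startIdx);
    for (int i = startIdx; i < items.length; i++) {
      result.add(items[i]);
    }
    return result;
  }

  public static void main(String[] args) {
    System.out.println(convert(new String[] {"a", "b", "c"}, 1));
  }
}
```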
- Understood that you probably used Eclipse for formatting. However, it
might not be better than manual formatting in some cases, e.g.
{code}
// DataEncryptionKey
public static DataEncryptionKey convert(DataEncryptionKeyProto bet) {
String encryptionAlgorithm = bet.getEncryptionAlgorithm();
- return new DataEncryptionKey(bet.getKeyId(),
- bet.getBlockPoolId(),
- bet.getNonce().toByteArray(),
- bet.getEncryptionKey().toByteArray(),
- bet.getExpiryDate(),
- encryptionAlgorithm.isEmpty() ? null : encryptionAlgorithm);
+ return new DataEncryptionKey(bet.getKeyId(), bet.getBlockPoolId(), bet
+ .getNonce().toByteArray(), bet.getEncryptionKey().toByteArray(),
+ bet.getExpiryDate(), encryptionAlgorithm.isEmpty() ? null
+ : encryptionAlgorithm);
}
{code}
{code}
@@ -1132,41 +1140,44 @@
if (fsStats.length >= ClientProtocol.GET_STATS_REMAINING_IDX + 1)
result.setRemaining(fsStats[ClientProtocol.GET_STATS_REMAINING_IDX]);
if (fsStats.length >= ClientProtocol.GET_STATS_UNDER_REPLICATED_IDX + 1)
- result.setUnderReplicated(
- fsStats[ClientProtocol.GET_STATS_UNDER_REPLICATED_IDX]);
+ result
+
.setUnderReplicated(fsStats[ClientProtocol.GET_STATS_UNDER_REPLICATED_IDX]);
if (fsStats.length >= ClientProtocol.GET_STATS_CORRUPT_BLOCKS_IDX + 1)
- result.setCorruptBlocks(
- fsStats[ClientProtocol.GET_STATS_CORRUPT_BLOCKS_IDX]);
+ result
+
.setCorruptBlocks(fsStats[ClientProtocol.GET_STATS_CORRUPT_BLOCKS_IDX]);
if (fsStats.length >= ClientProtocol.GET_STATS_MISSING_BLOCKS_IDX + 1)
- result.setMissingBlocks(
- fsStats[ClientProtocol.GET_STATS_MISSING_BLOCKS_IDX]);
+ result
+
.setMissingBlocks(fsStats[ClientProtocol.GET_STATS_MISSING_BLOCKS_IDX]);
return result.build();
}
{code}
> Combine PBHelper and HdfsProtoUtil and remove redundant methods
> ---------------------------------------------------------------
>
> Key: HDFS-4363
> URL: https://issues.apache.org/jira/browse/HDFS-4363
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 2.0.2-alpha
> Reporter: Suresh Srinivas
> Assignee: Suresh Srinivas
> Attachments: HDFS-4363.patch, HDFS-4363.patch
>
>
> There are many methods overlapping between PBHelper and HdfsProtoUtil. This
> jira combines these two helper classes.