[jira] [Updated] (HDFS-12682) ECAdmin -listPolicies will always show policy state as DISABLED

2017-10-22 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12682:
-
Status: Patch Available  (was: Open)

> ECAdmin -listPolicies will always show policy state as DISABLED
> ---
>
> Key: HDFS-12682
> URL: https://issues.apache.org/jira/browse/HDFS-12682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12682.01.patch, HDFS-12682.02.patch
>
>
> On a real cluster, {{hdfs ec -listPolicies}} will always show policy state as 
> DISABLED.
> {noformat}
> [hdfs@nightly6x-1 root]$ hdfs ec -listPolicies
> Erasure Coding Policies:
> ErasureCodingPolicy=[Name=RS-10-4-1024k, Schema=[ECSchema=[Codec=rs, 
> numDataUnits=10, numParityUnits=4]], CellSize=1048576, Id=5, State=DISABLED]
> ErasureCodingPolicy=[Name=RS-3-2-1024k, Schema=[ECSchema=[Codec=rs, 
> numDataUnits=3, numParityUnits=2]], CellSize=1048576, Id=2, State=DISABLED]
> ErasureCodingPolicy=[Name=RS-6-3-1024k, Schema=[ECSchema=[Codec=rs, 
> numDataUnits=6, numParityUnits=3]], CellSize=1048576, Id=1, State=DISABLED]
> ErasureCodingPolicy=[Name=RS-LEGACY-6-3-1024k, 
> Schema=[ECSchema=[Codec=rs-legacy, numDataUnits=6, numParityUnits=3]], 
> CellSize=1048576, Id=3, State=DISABLED]
> ErasureCodingPolicy=[Name=XOR-2-1-1024k, Schema=[ECSchema=[Codec=xor, 
> numDataUnits=2, numParityUnits=1]], CellSize=1048576, Id=4, State=DISABLED]
> [hdfs@nightly6x-1 root]$ hdfs ec -getPolicy -path /ecec
> XOR-2-1-1024k
> {noformat}
> This is because when [deserializing 
> protobuf|https://github.com/apache/hadoop/blob/branch-3.0.0-beta1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java#L2942],
>  the static instance of [SystemErasureCodingPolicies 
> class|https://github.com/apache/hadoop/blob/branch-3.0.0-beta1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SystemErasureCodingPolicies.java#L101]
>  is first checked, and always returns the cached policy objects, which are 
> created by default with state=DISABLED.
> All the existing unit tests pass because, in a unit test, the client (e.g.
> ECAdmin) and the NN run in the same JVM, so the static instance the client
> reads has already been updated by the NN. :)
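The following is a self-contained sketch of the caching behaviour described above. The class and member names are illustrative stand-ins, not the actual {{PBHelperClient}} / {{SystemErasureCodingPolicies}} code: because the deserializer hands back the statically cached policy object, the state carried in the wire message is never applied, and a client in a separate JVM always observes the construction-time default of DISABLED.
{code}
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative stand-ins only, not the real Hadoop classes: models how
 * returning the statically cached policy object drops the state carried
 * over the wire.
 */
public class CachedPolicyStateDemo {

  enum State { DISABLED, ENABLED }

  static class Policy {
    final int id;
    State state = State.DISABLED;        // default state at construction time
    Policy(int id) { this.id = id; }
  }

  /** Stand-in for the static SystemErasureCodingPolicies cache. */
  static final Map<Integer, Policy> SYSTEM_POLICIES = new HashMap<>();
  static {
    SYSTEM_POLICIES.put(1, new Policy(1));   // e.g. RS-6-3-1024k
  }

  /** Stand-in for the protobuf deserializer: the cache is consulted first. */
  static Policy fromWire(int id, State wireState) {
    Policy cached = SYSTEM_POLICIES.get(id);
    if (cached != null) {
      return cached;                     // bug: wireState is never applied here
    }
    Policy p = new Policy(id);
    p.state = wireState;
    return p;
  }

  public static void main(String[] args) {
    // The NameNode has enabled policy 1 and sends ENABLED over the wire...
    Policy seenByClient = fromWire(1, State.ENABLED);
    // ...but a client in a separate JVM still sees the cached default.
    System.out.println("State seen by client: " + seenByClient.state);  // DISABLED
  }
}
{code}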






[jira] [Updated] (HDFS-12682) ECAdmin -listPolicies will always show policy state as DISABLED

2017-10-22 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12682:
-
Attachment: HDFS-12682.02.patch

Patch 2 is ready for review: it adds a basic unit test for the new 
{{ErasureCodingPolicyInfo}} class, plus a few cosmetic updates.

> ECAdmin -listPolicies will always show policy state as DISABLED
> ---
>
> Key: HDFS-12682
> URL: https://issues.apache.org/jira/browse/HDFS-12682
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12682.01.patch, HDFS-12682.02.patch
>
>
> On a real cluster, {{hdfs ec -listPolicies}} will always show policy state as 
> DISABLED.
> {noformat}
> [hdfs@nightly6x-1 root]$ hdfs ec -listPolicies
> Erasure Coding Policies:
> ErasureCodingPolicy=[Name=RS-10-4-1024k, Schema=[ECSchema=[Codec=rs, 
> numDataUnits=10, numParityUnits=4]], CellSize=1048576, Id=5, State=DISABLED]
> ErasureCodingPolicy=[Name=RS-3-2-1024k, Schema=[ECSchema=[Codec=rs, 
> numDataUnits=3, numParityUnits=2]], CellSize=1048576, Id=2, State=DISABLED]
> ErasureCodingPolicy=[Name=RS-6-3-1024k, Schema=[ECSchema=[Codec=rs, 
> numDataUnits=6, numParityUnits=3]], CellSize=1048576, Id=1, State=DISABLED]
> ErasureCodingPolicy=[Name=RS-LEGACY-6-3-1024k, 
> Schema=[ECSchema=[Codec=rs-legacy, numDataUnits=6, numParityUnits=3]], 
> CellSize=1048576, Id=3, State=DISABLED]
> ErasureCodingPolicy=[Name=XOR-2-1-1024k, Schema=[ECSchema=[Codec=xor, 
> numDataUnits=2, numParityUnits=1]], CellSize=1048576, Id=4, State=DISABLED]
> [hdfs@nightly6x-1 root]$ hdfs ec -getPolicy -path /ecec
> XOR-2-1-1024k
> {noformat}
> This is because when [deserializing 
> protobuf|https://github.com/apache/hadoop/blob/branch-3.0.0-beta1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java#L2942],
>  the static instance of [SystemErasureCodingPolicies 
> class|https://github.com/apache/hadoop/blob/branch-3.0.0-beta1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SystemErasureCodingPolicies.java#L101]
>  is first checked, and always returns the cached policy objects, which are 
> created by default with state=DISABLED.
> All the existing unit tests pass because, in a unit test, the client (e.g.
> ECAdmin) and the NN run in the same JVM, so the static instance the client
> reads has already been updated by the NN. :)






[jira] [Updated] (HDFS-11468) Ozone: SCM: Add Node Metrics for SCM

2017-10-22 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11468:
-
Attachment: HDFS-11468-HDFS-7240.005.patch

Attaching the same patch to trigger Jenkins again.

> Ozone: SCM: Add Node Metrics for SCM
> 
>
> Key: HDFS-11468
> URL: https://issues.apache.org/jira/browse/HDFS-11468
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Xiaoyu Yao
>Assignee: Yiqun Lin
>Priority: Critical
>  Labels: OzonePostMerge
> Attachments: HDFS-11468-HDFS-7240.001.patch, 
> HDFS-11468-HDFS-7240.002.patch, HDFS-11468-HDFS-7240.003.patch, 
> HDFS-11468-HDFS-7240.004.patch, HDFS-11468-HDFS-7240.005.patch
>
>
> This ticket is opened to add node metrics to SCM based on the heartbeats,
> node reports, and container reports from datanodes.
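Purely as an illustration, a hedged sketch of how such node metrics could be exposed with the standard Hadoop Metrics2 annotations; the class name and metric names below are hypothetical and are not taken from the attached patches.
{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

// Hypothetical sketch; names are illustrative, not from the HDFS-11468 patches.
@Metrics(about = "SCM node manager metrics", context = "dfs")
public final class SCMNodeMetricsSketch {
  @Metric private MutableCounterLong numHBProcessed;
  @Metric private MutableCounterLong numNodeReportProcessed;
  @Metric private MutableCounterLong numContainerReportProcessed;

  private SCMNodeMetricsSketch() { }

  /** Register the source with the default metrics system so JMX/sinks see it. */
  public static SCMNodeMetricsSketch create() {
    return DefaultMetricsSystem.instance().register(
        "SCMNodeMetricsSketch", "Illustrative SCM node metrics",
        new SCMNodeMetricsSketch());
  }

  void incNumHBProcessed() { numHBProcessed.incr(); }
  void incNumNodeReportProcessed() { numNodeReportProcessed.incr(); }
  void incNumContainerReportProcessed() { numContainerReportProcessed.incr(); }
}
{code}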






[jira] [Updated] (HDFS-12677) Extend TestReconstructStripedFile with a random EC policy

2017-10-22 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-12677:

Status: Patch Available  (was: Open)

> Extend TestReconstructStripedFile with a random EC policy
> -
>
> Key: HDFS-12677
> URL: https://issues.apache.org/jira/browse/HDFS-12677
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
> Attachments: HDFS-12677.1.patch
>
>







[jira] [Updated] (HDFS-12677) Extend TestReconstructStripedFile with a random EC policy

2017-10-22 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-12677:

Status: Open  (was: Patch Available)

> Extend TestReconstructStripedFile with a random EC policy
> -
>
> Key: HDFS-12677
> URL: https://issues.apache.org/jira/browse/HDFS-12677
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding, test
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
> Attachments: HDFS-12677.1.patch
>
>







[jira] [Updated] (HDFS-5750) JHLogAnalyzer#parseLogFile() should close stm upon return

2017-10-22 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HDFS-5750:
-
Description: 
{{stm}} is assigned to {{in}}, but {{in}} may later point to another InputStream 
that wraps {{stm}}:

{code}
if(compressionClass != null) {
  CompressionCodec codec = (CompressionCodec)
ReflectionUtils.newInstance(compressionClass, new Configuration());
  in = codec.createInputStream(stm);
{code}
stm should be closed in the finally block.

  was:
{{stm}} is assigned to {{in}}, but {{in}} may later point to another InputStream 
that wraps {{stm}}:
{code}
if(compressionClass != null) {
  CompressionCodec codec = (CompressionCodec)
ReflectionUtils.newInstance(compressionClass, new Configuration());
  in = codec.createInputStream(stm);
{code}
stm should be closed in the finally block.
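
A minimal sketch of the fix pattern, assuming the stream is opened via {{FileSystem#open}} and closed with Hadoop's {{IOUtils.closeStream}}; this is illustrative only, not the actual JHLogAnalyzer code:
{code}
import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.util.ReflectionUtils;

public class StreamCloseSketch {
  /** Keep a handle on the raw stream and close it in finally, even when wrapped. */
  static void parse(FileSystem fs, Path logFile,
      Class<? extends CompressionCodec> compressionClass) throws IOException {
    InputStream stm = fs.open(logFile);
    InputStream in = stm;
    try {
      if (compressionClass != null) {
        CompressionCodec codec =
            ReflectionUtils.newInstance(compressionClass, new Configuration());
        in = codec.createInputStream(stm);   // 'in' now wraps 'stm'
      }
      // ... read and parse lines from 'in' ...
    } finally {
      IOUtils.closeStream(in);
      IOUtils.closeStream(stm);   // closing stm here addresses this JIRA
    }
  }
}
{code}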


> JHLogAnalyzer#parseLogFile() should close stm upon return
> -
>
> Key: HDFS-5750
> URL: https://issues.apache.org/jira/browse/HDFS-5750
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
>
> {{stm}} is assigned to {{in}}, but {{in}} may later point to another
> InputStream that wraps {{stm}}:
> {code}
> if(compressionClass != null) {
>   CompressionCodec codec = (CompressionCodec)
> ReflectionUtils.newInstance(compressionClass, new Configuration());
>   in = codec.createInputStream(stm);
> {code}
> stm should be closed in the finally block.






[jira] [Commented] (HDFS-5012) replica.getGenerationStamp() may be >= recoveryId

2017-10-22 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16214331#comment-16214331
 ] 

Ted Yu commented on HDFS-5012:
--

Planning to resolve this since there has been no repro.

> replica.getGenerationStamp() may be >= recoveryId
> -
>
> Key: HDFS-5012
> URL: https://issues.apache.org/jira/browse/HDFS-5012
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.0.5-alpha
>Reporter: Ted Yu
> Attachments: testReplicationQueueFailover.txt
>
>
> The following was first observed by [~jdcryans] in 
> TestReplicationQueueFailover running against 2.0.5-alpha:
> {code}
> 2013-07-16 17:14:33,340 ERROR [IPC Server handler 7 on 35081] 
> security.UserGroupInformation(1481): PriviledgedActionException as:ec2-user 
> (auth:SIMPLE) cause:java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: 
> replica.getGenerationStamp() >= recoveryId = 1041, 
> block=blk_4297992342878601848_1041, replica=FinalizedReplica, 
> blk_4297992342878601848_1041, FINALIZED
>   getNumBytes() = 794
>   getBytesOnDisk()  = 794
>   getVisibleLength()= 794
>   getVolume()   = 
> /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current
>   getBlockFile()= 
> /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current/BP-1477359609-10.197.55.49-1373994849464/current/finalized/blk_4297992342878601848
>   unlinked  =false
> 2013-07-16 17:14:33,341 WARN  
> [org.apache.hadoop.hdfs.server.datanode.DataNode$2@64a1fcba] 
> datanode.DataNode(1894): Failed to obtain replica info for block 
> (=BP-1477359609-10.197.55.49-1373994849464:blk_4297992342878601848_1041) from 
> datanode (=127.0.0.1:47006)
> java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: 
> replica.getGenerationStamp() >= recoveryId = 1041, 
> block=blk_4297992342878601848_1041, replica=FinalizedReplica, 
> blk_4297992342878601848_1041, FINALIZED
>   getNumBytes() = 794
>   getBytesOnDisk()  = 794
>   getVisibleLength()= 794
>   getVolume()   = 
> /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current
>   getBlockFile()= 
> /home/ec2-user/jenkins/workspace/HBase-0.95-Hadoop-2/hbase-server/target/test-data/f2763e32-fe49-4988-ac94-eeca82431821/dfscluster_643a635e-4e39-4aa5-974c-25e01db16ff7/dfs/data/data3/current/BP-1477359609-10.197.55.49-1373994849464/current/finalized/blk_4297992342878601848
>   unlinked  =false
> {code}


