[jira] [Commented] (HDFS-12230) Ozone: KSM: Add creation time field in bucket info

2017-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125269#comment-16125269
 ] 

Hadoop QA commented on HDFS-12230:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
3s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 45s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
18s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.scm.TestArchive |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeysRatis |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12230 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881679/HDFS-12230-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 62e406b03dd1 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|

[jira] [Commented] (HDFS-12054) FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to ensure Namenode is not in safemode

2017-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125223#comment-16125223
 ] 

Hadoop QA commented on HDFS-12054:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
4s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 9 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 |
|   | hadoop.hdfs.TestDFSShell |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12054 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881674/HDFS-12054.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3bfc6738cbf9 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7769e96 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20678/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20678/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20678/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20678/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to 
> ensure Namenode is not in safemode
> 

[jira] [Commented] (HDFS-12066) When Namenode is in safemode,may not allowed to remove an user's erasure coding policy

2017-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125213#comment-16125213
 ] 

Hadoop QA commented on HDFS-12066:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
40s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 9 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12066 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881675/HDFS-12066.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0bd9750a39be 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7769e96 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20677/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20677/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20677/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20677/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> When Namenode is in safemode,may not allowed to remove an user's erasure 
> coding policy
> --
>
> 

[jira] [Updated] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-13 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-12283:
--
Attachment: HDFS-12283-HDFS-7240.001.patch

Sorry, forgot the branch.

> Ozone: DeleteKey-5: Implement SCM DeletedBlockLog
> -
>
> Key: HDFS-12283
> URL: https://issues.apache.org/jira/browse/HDFS-12283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
> Attachments: HDFS-12283.001.patch, HDFS-12283-HDFS-7240.001.patch
>
>
> The DeletedBlockLog is a persisted log in SCM that keeps track of container 
> blocks which are under deletion. It maintains info about under-deletion 
> container blocks notified by KSM, and the state of how each one is processed. We 
> can use RocksDB to implement the first version of the log; the schema looks like:
> ||TxID||ContainerName||Block List||ProcessedCount||
> |0|c1|b1,b2,b3|0|
> |1|c2|b1|3|
> |2|c2|b2, b3|-1|
> Some explanations:
> # TxID is an incremental long transaction ID covering ONE container and 
> multiple blocks
> # Container name is the name of the container
> # Block list is a list of block IDs
> # ProcessedCount is the number of times SCM has sent this record to a datanode. 
> It represents the "state" of the transaction and is in the range \[-1, 5\]: -1 
> means the transaction eventually failed after some retries, and 5 is the max 
> number of retries.
> We need to define {{DeletedBlockLog}} as an interface and implement it with 
> RocksDB {{MetadataStore}} as the first version.
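
For illustration, here is a minimal Java sketch of how the log described above could be shaped as an interface in front of a RocksDB-backed store. All names and signatures below are illustrative assumptions and are not taken from the attached patch.

{code}
// Hypothetical sketch only -- names and signatures are illustrative, not the HDFS-12283 patch.
import java.io.Closeable;
import java.io.IOException;
import java.util.List;

interface DeletedBlockLog extends Closeable {

  /** One transaction: a single container, its blocks under deletion, and a retry counter. */
  class DeletedBlocksTransaction {
    long txID;              // incremental transaction id
    String containerName;   // container the blocks belong to
    List<String> blocks;    // block ids under deletion
    int processedCount;     // -1 = eventually failed, 0..5 = times sent to a datanode
  }

  /** Append a transaction for blocks of one container, as notified by KSM. */
  void addTransaction(String containerName, List<String> blocks) throws IOException;

  /** Read up to {@code count} transactions whose retry count has not been exhausted. */
  List<DeletedBlocksTransaction> getTransactions(int count) throws IOException;

  /** Increment the retry count of the given transactions; mark them -1 once retries run out. */
  void incrementCount(List<Long> txIDs) throws IOException;

  /** Remove transactions whose blocks the datanodes have confirmed as deleted. */
  void commitTransactions(List<Long> txIDs) throws IOException;
}
{code}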



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125197#comment-16125197
 ] 

Hadoop QA commented on HDFS-12283:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-12283 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12283 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881681/HDFS-12283.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20681/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: DeleteKey-5: Implement SCM DeletedBlockLog
> -
>
> Key: HDFS-12283
> URL: https://issues.apache.org/jira/browse/HDFS-12283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
> Attachments: HDFS-12283.001.patch
>
>
> The DeletedBlockLog is a persisted log in SCM that keeps track of container 
> blocks which are under deletion. It maintains info about under-deletion 
> container blocks notified by KSM, and the state of how each one is processed. We 
> can use RocksDB to implement the first version of the log; the schema looks like:
> ||TxID||ContainerName||Block List||ProcessedCount||
> |0|c1|b1,b2,b3|0|
> |1|c2|b1|3|
> |2|c2|b2, b3|-1|
> Some explanations:
> # TxID is an incremental long transaction ID covering ONE container and 
> multiple blocks
> # Container name is the name of the container
> # Block list is a list of block IDs
> # ProcessedCount is the number of times SCM has sent this record to a datanode. 
> It represents the "state" of the transaction and is in the range \[-1, 5\]: -1 
> means the transaction eventually failed after some retries, and 5 is the max 
> number of retries.
> We need to define {{DeletedBlockLog}} as an interface and implement it with 
> RocksDB {{MetadataStore}} as the first version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-13 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-12283:
--
Attachment: HDFS-12283.001.patch

Upload v1 patch.
[~cheersyang] I've renamed some methods so that the names stay compact.

> Ozone: DeleteKey-5: Implement SCM DeletedBlockLog
> -
>
> Key: HDFS-12283
> URL: https://issues.apache.org/jira/browse/HDFS-12283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
> Attachments: HDFS-12283.001.patch
>
>
> The DeletedBlockLog is a persisted log in SCM that keeps track of container 
> blocks which are under deletion. It maintains info about under-deletion 
> container blocks notified by KSM, and the state of how each one is processed. We 
> can use RocksDB to implement the first version of the log; the schema looks like:
> ||TxID||ContainerName||Block List||ProcessedCount||
> |0|c1|b1,b2,b3|0|
> |1|c2|b1|3|
> |2|c2|b2, b3|-1|
> Some explanations:
> # TxID is an incremental long transaction ID covering ONE container and 
> multiple blocks
> # Container name is the name of the container
> # Block list is a list of block IDs
> # ProcessedCount is the number of times SCM has sent this record to a datanode. 
> It represents the "state" of the transaction and is in the range \[-1, 5\]: -1 
> means the transaction eventually failed after some retries, and 5 is the max 
> number of retries.
> We need to define {{DeletedBlockLog}} as an interface and implement it with 
> RocksDB {{MetadataStore}} as the first version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12283) Ozone: DeleteKey-5: Implement SCM DeletedBlockLog

2017-08-13 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-12283:
--
Status: Patch Available  (was: Open)

> Ozone: DeleteKey-5: Implement SCM DeletedBlockLog
> -
>
> Key: HDFS-12283
> URL: https://issues.apache.org/jira/browse/HDFS-12283
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone, scm
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
> Attachments: HDFS-12283.001.patch
>
>
> The DeletedBlockLog is a persisted log in SCM that keeps track of container 
> blocks which are under deletion. It maintains info about under-deletion 
> container blocks notified by KSM, and the state of how each one is processed. We 
> can use RocksDB to implement the first version of the log; the schema looks like:
> ||TxID||ContainerName||Block List||ProcessedCount||
> |0|c1|b1,b2,b3|0|
> |1|c2|b1|3|
> |2|c2|b2, b3|-1|
> Some explanations:
> # TxID is an incremental long transaction ID covering ONE container and 
> multiple blocks
> # Container name is the name of the container
> # Block list is a list of block IDs
> # ProcessedCount is the number of times SCM has sent this record to a datanode. 
> It represents the "state" of the transaction and is in the range \[-1, 5\]: -1 
> means the transaction eventually failed after some retries, and 5 is the max 
> number of retries.
> We need to define {{DeletedBlockLog}} as an interface and implement it with 
> RocksDB {{MetadataStore}} as the first version.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12258) ec -listPolicies should list all policies in system, no matter it's enabled or disabled

2017-08-13 Thread Wei Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zhou updated HDFS-12258:

Attachment: HDFS-12258.02.patch

Updated the patch to fix the code style and related test issues reported. Thanks!

> ec -listPolicies should list all policies in system, no matter it's enabled 
> or disabled
> ---
>
> Key: HDFS-12258
> URL: https://issues.apache.org/jira/browse/HDFS-12258
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: SammiChen
>Assignee: Wei Zhou
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12258.01.patch, HDFS-12258.02.patch
>
>
> ec -listPolicies should list all policies in system, no matter it's enabled 
> or disabled



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12230) Ozone: KSM: Add creation time field in bucket info

2017-08-13 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12230:
-
Attachment: HDFS-12230-HDFS-7240.002.patch

> Ozone: KSM: Add creation time field in bucket info
> --
>
> Key: HDFS-12230
> URL: https://issues.apache.org/jira/browse/HDFS-12230
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12230-HDFS-7240.001.patch, 
> HDFS-12230-HDFS-7240.002.patch
>
>
> It would be nice to add a creation time field in bucket info, which will be 
> returned by the {{infoBucket}} and {{listBucket}} commands.
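
As a hedged illustration of the field being added, a BucketInfo-style value object carrying a creation time might look like the sketch below; the class and method names are assumptions and do not necessarily match the KSM classes in the patch.

{code}
// Illustrative sketch only -- not the actual KSM BucketInfo class from the patch.
public final class BucketInfoSketch {
  private final String volumeName;
  private final String bucketName;
  private final long creationTime;   // epoch millis recorded when the bucket is created

  public BucketInfoSketch(String volumeName, String bucketName, long creationTime) {
    this.volumeName = volumeName;
    this.bucketName = bucketName;
    this.creationTime = creationTime;
  }

  /** Surfaced in the infoBucket and listBucket responses. */
  public long getCreationTime() {
    return creationTime;
  }
}
{code}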



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12230) Ozone: KSM: Add creation time field in bucket info

2017-08-13 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125187#comment-16125187
 ] 

Yiqun Lin edited comment on HDFS-12230 at 8/14/17 3:42 AM:
---

The failed tests are related.
Attaching the new patch. I just caught one bug: we were missing an update to the 
creation time of the specified volume/bucket when doing update-property 
operations (like setOwner, setQuota). The latest patch addresses this.


was (Author: linyiqun):
The failure test are related.
Attach the new patch. Just caught one bug that we missing update the creation 
time of specified volume/bucket when doing update property operations (like 
setOwner, setQuota). The latest will addresses this.

> Ozone: KSM: Add creation time field in bucket info
> --
>
> Key: HDFS-12230
> URL: https://issues.apache.org/jira/browse/HDFS-12230
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12230-HDFS-7240.001.patch, 
> HDFS-12230-HDFS-7240.002.patch
>
>
> It would be nice to add a creation time field in bucket info, which will be 
> returned by the {{infoBucket}} and {{listBucket}} commands.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12230) Ozone: KSM: Add creation time field in bucket info

2017-08-13 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125187#comment-16125187
 ] 

Yiqun Lin commented on HDFS-12230:
--

The failed tests are related.
Attaching the new patch. I just caught one bug: we were missing an update to the 
creation time of the specified volume/bucket when doing update-property 
operations (like setOwner, setQuota). The latest patch addresses this.

> Ozone: KSM: Add creation time field in bucket info
> --
>
> Key: HDFS-12230
> URL: https://issues.apache.org/jira/browse/HDFS-12230
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12230-HDFS-7240.001.patch
>
>
> It would be nice to add a creation time field in bucket info, which will be 
> returned by the {{infoBucket}} and {{listBucket}} commands.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10631) Federation State Store ZooKeeper implementation

2017-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125174#comment-16125174
 ] 

Hadoop QA commented on HDFS-10631:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
0s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10467 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  4m 
17s{color} | {color:red} root in HDFS-10467 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m  
5s{color} | {color:red} root in HDFS-10467 failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
13s{color} | {color:green} HDFS-10467 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
43s{color} | {color:red} hadoop-hdfs in HDFS-10467 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-hdfs in HDFS-10467 failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} HDFS-10467 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 18m 28s{color} 
| {color:red} root generated 997 new + 348 unchanged - 2 fixed = 1345 total 
(was 350) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
12s{color} | {color:green} The patch generated 0 new + 0 unchanged - 3 fixed = 
0 total (was 3) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
53s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}155m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-10631 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881670/HDFS-10631-HDFS-10467-007.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  shellcheck  shelldocs  |
| uname | Linux 7637632bb3f2 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 

[jira] [Comment Edited] (HDFS-12054) FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to ensure Namenode is not in safemode

2017-08-13 Thread lufei (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125170#comment-16125170
 ] 

lufei edited comment on HDFS-12054 at 8/14/17 3:01 AM:
---

Yes, your suggestion is very nice. The addErasureCodingPolicies function does 
not need to be made public. I have modified it in HDFS-12054.004.patch and 
HDFS-12066.004.patch.
Thanks.


was (Author: figo):
Yes,your suggestion is very nice. The function of addErasureCodingPolicies is 
no need to make public.I have modified it in HDFS-12054.004.patch and 
HDFS-12066.003.patch.
Thanks.

> FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to 
> ensure Namenode is not in safemode
> ---
>
> Key: HDFS-12054
> URL: https://issues.apache.org/jira/browse/HDFS-12054
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12054.001.patch, HDFS-12054.002.patch, 
> HDFS-12054.003.patch, HDFS-12054.004.patch
>
>
> In the process of FSNamesystem#addErasureCodingPolicies, it would be better 
> to  call checkNameNodeSafeMode() to ensure NN is not in safemode.
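
For illustration, here is a self-contained sketch of the guard the issue asks for; the real change lives inside FSNamesystem#addErasureCodingPolicies, and the class, flag, and message below are simplified stand-ins rather than the actual NameNode code.

{code}
import java.io.IOException;

// Simplified stand-in for the FSNamesystem behaviour requested by HDFS-12054.
class SafeModeGuardSketch {
  private volatile boolean inSafeMode = true;   // stands in for the NameNode's safe-mode state

  private void checkNameNodeSafeMode(String errorMessage) throws IOException {
    if (inSafeMode) {
      throw new IOException(errorMessage + ". Name node is in safe mode.");
    }
  }

  void addErasureCodingPolicies(String... policies) throws IOException {
    // The point of the issue: fail fast while in safe mode, like other write operations.
    checkNameNodeSafeMode("Cannot add erasure coding policies");
    // ... register the policies with the policy manager ...
  }

  public static void main(String[] args) {
    SafeModeGuardSketch ns = new SafeModeGuardSketch();
    try {
      ns.addErasureCodingPolicies("RS-6-3-64k");
    } catch (IOException expected) {
      System.out.println("Rejected as expected: " + expected.getMessage());
    }
  }
}
{code}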



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12054) FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to ensure Namenode is not in safemode

2017-08-13 Thread lufei (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125170#comment-16125170
 ] 

lufei commented on HDFS-12054:
--

Yes, your suggestion is very nice. The addErasureCodingPolicies function does 
not need to be made public. I have modified it in HDFS-12054.004.patch and 
HDFS-12066.003.patch.
Thanks.

> FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to 
> ensure Namenode is not in safemode
> ---
>
> Key: HDFS-12054
> URL: https://issues.apache.org/jira/browse/HDFS-12054
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12054.001.patch, HDFS-12054.002.patch, 
> HDFS-12054.003.patch, HDFS-12054.004.patch
>
>
> In the process of FSNamesystem#addErasureCodingPolicies, it would be better 
> to  call checkNameNodeSafeMode() to ensure NN is not in safemode.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12066) When Namenode is in safemode,may not allowed to remove an user's erasure coding policy

2017-08-13 Thread lufei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lufei updated HDFS-12066:
-
Attachment: HDFS-12066.004.patch

> When Namenode is in safemode,may not allowed to remove an user's erasure 
> coding policy
> --
>
> Key: HDFS-12066
> URL: https://issues.apache.org/jira/browse/HDFS-12066
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12066.001.patch, HDFS-12066.002.patch, 
> HDFS-12066.003.patch, HDFS-12066.004.patch
>
>
> FSNamesystem#removeErasureCodingPolicy should call checkNameNodeSafeMode() to 
> ensure Namenode is not in safemode



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12054) FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to ensure Namenode is not in safemode

2017-08-13 Thread lufei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lufei updated HDFS-12054:
-
Attachment: HDFS-12054.004.patch

> FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to 
> ensure Namenode is not in safemode
> ---
>
> Key: HDFS-12054
> URL: https://issues.apache.org/jira/browse/HDFS-12054
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12054.001.patch, HDFS-12054.002.patch, 
> HDFS-12054.003.patch, HDFS-12054.004.patch
>
>
> In the process of FSNamesystem#addErasureCodingPolicies, it would be better 
> to  call checkNameNodeSafeMode() to ensure NN is not in safemode.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12054) FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to ensure Namenode is not in safemode

2017-08-13 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125132#comment-16125132
 ] 

Wei-Chiu Chuang commented on HDFS-12054:


Apologies for the late response.

Instead of making {{addErasureCodingPolicies}} a public method unnecessarily, I 
would suggest that the test case use 
{{DistributedFileSystem#addErasureCodingPolicies}}. There is already a dfs 
object initialized, so I think we can use it if possible.
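
A hedged sketch of that test-side approach: drive the call through the already-initialized dfs handle instead of widening FSNamesystem's visibility. The constructor and method signatures below are written from memory of the 3.0.0-era API and are assumptions that may need adjusting against the actual branch.

{code}
import java.io.IOException;

import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;
import org.apache.hadoop.io.erasurecode.ECSchema;

class AddEcPolicyInSafeModeSketch {
  // Assumes an already-initialized DistributedFileSystem, as in the existing test.
  static void verifyRejectedInSafeMode(DistributedFileSystem dfs) throws IOException {
    ErasureCodingPolicy policy =
        new ErasureCodingPolicy(new ECSchema("rs", 3, 2), 8 * 1024);
    dfs.setSafeMode(SafeModeAction.SAFEMODE_ENTER);
    try {
      // Goes through the public client API, so FSNamesystem#addErasureCodingPolicies
      // can stay non-public while the safe-mode check is still exercised.
      dfs.addErasureCodingPolicies(new ErasureCodingPolicy[] {policy});
      throw new AssertionError("addErasureCodingPolicies should fail in safe mode");
    } catch (IOException expected) {
      // Expected: the NameNode rejects write operations while in safe mode.
    } finally {
      dfs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);
    }
  }
}
{code}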

> FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to 
> ensure Namenode is not in safemode
> ---
>
> Key: HDFS-12054
> URL: https://issues.apache.org/jira/browse/HDFS-12054
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12054.001.patch, HDFS-12054.002.patch, 
> HDFS-12054.003.patch
>
>
> In the process of FSNamesystem#addErasureCodingPolicies, it would be better 
> to  call checkNameNodeSafeMode() to ensure NN is not in safemode.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12054) FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to ensure Namenode is not in safemode

2017-08-13 Thread lufei (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125121#comment-16125121
 ] 

lufei commented on HDFS-12054:
--

I linked the JIRA HDFS-12054 with HDFS-12066 (as a Child-Issue). Is that 
appropriate?
Thanks

> FSNamesystem#addErasureCodingPolicies should call checkNameNodeSafeMode() to 
> ensure Namenode is not in safemode
> ---
>
> Key: HDFS-12054
> URL: https://issues.apache.org/jira/browse/HDFS-12054
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.0-alpha3
>Reporter: lufei
>Assignee: lufei
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12054.001.patch, HDFS-12054.002.patch, 
> HDFS-12054.003.patch
>
>
> In the process of FSNamesystem#addErasureCodingPolicies, it would be better 
> to  call checkNameNodeSafeMode() to ensure NN is not in safemode.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11964) Fix TestDFSStripedInputStreamWithRandomECPolicy#testPreadWithDNFailure failure

2017-08-13 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16125111#comment-16125111
 ] 

Takanobu Asanuma commented on HDFS-11964:
-

[~jojochuang] Sure, I will. Thanks for assigning it!

> Fix TestDFSStripedInputStreamWithRandomECPolicy#testPreadWithDNFailure failure
> --
>
> Key: HDFS-11964
> URL: https://issues.apache.org/jira/browse/HDFS-11964
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Takanobu Asanuma
>
> TestDFSStripedInputStreamWithRandomECPolicy#testPreadWithDNFailure fails on 
> trunk:
> {code}
> Running org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
> Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 10.99 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
> testPreadWithDNFailure(org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy)
>   Time elapsed: 1.265 sec  <<< FAILURE!
> org.junit.internal.ArrayComparisonFailure: arrays first differed at element 
> [327680]; expected:<-36> but was:<2>
>   at 
> org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50)
>   at org.junit.Assert.internalArrayEquals(Assert.java:473)
>   at org.junit.Assert.assertArrayEquals(Assert.java:294)
>   at org.junit.Assert.assertArrayEquals(Assert.java:305)
>   at 
> org.apache.hadoop.hdfs.TestDFSStripedInputStream.testPreadWithDNFailure(TestDFSStripedInputStream.java:306)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10631) Federation State Store ZooKeeper implementation

2017-08-13 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-10631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-10631:
---
Attachment: HDFS-10631-HDFS-10467-007.patch

Fixed unit tests.

> Federation State Store ZooKeeper implementation
> ---
>
> Key: HDFS-10631
> URL: https://issues.apache.org/jira/browse/HDFS-10631
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10631-HDFS-10467-001.patch, 
> HDFS-10631-HDFS-10467-002.patch, HDFS-10631-HDFS-10467-003.patch, 
> HDFS-10631-HDFS-10467-004.patch, HDFS-10631-HDFS-10467-005.patch, 
> HDFS-10631-HDFS-10467-006.patch, HDFS-10631-HDFS-10467-007.patch
>
>
> State Store implementation using ZooKeeper.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12296) Add a field to FsServerDefaults to tell if external attribute provider is enabled

2017-08-13 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-12296:
-
Summary: Add a field to FsServerDefaults to tell if external attribute 
provider is enabled  (was: Add a field to FsServerDefaults to tell if external 
attirbute provider is enabled)

> Add a field to FsServerDefaults to tell if external attribute provider is 
> enabled
> -
>
> Key: HDFS-12296
> URL: https://issues.apache.org/jira/browse/HDFS-12296
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12296) Add a field to FsServerDefaults to tell if external attirbute provider is enabled

2017-08-13 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-12296:


 Summary: Add a field to FsServerDefaults to tell if external 
attirbute provider is enabled
 Key: HDFS-12296
 URL: https://issues.apache.org/jira/browse/HDFS-12296
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-3593) Throughput Metrics

2017-08-13 Thread Ashutosh Dubey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Dubey updated HDFS-3593:
-
Comment: was deleted

(was: Hi,
The issue still seems to be open. I am interested in taking on this task. Could 
this issue be assigned to me, so that I can get started?
Thanks)

> Throughput Metrics
> --
>
> Key: HDFS-3593
> URL: https://issues.apache.org/jira/browse/HDFS-3593
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: balancer & mover, datanode, performance, tools
>Affects Versions: trunk-win
> Environment: N/A
>Reporter: Rita M
>Priority: Minor
>  Labels: newbie
>
> It would be nice to have throughput statistics of the entire cluster such as 
> total aggregate read/write in bytes/sec. The stat should be on the namenode 
> main-page or be placed in a text file with timestamp,bytes/sec 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12117) HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST Interface

2017-08-13 Thread Wellington Chevreuil (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16124983#comment-16124983
 ] 

Wellington Chevreuil edited comment on HDFS-12117 at 8/13/17 5:32 PM:
--

HDFS-12052 seems to depend on HDFS-10860, which in turn depends on 
HADOOP-13597. None of these are in branch-2 yet.

||id||date||author||message||
|69b23632c48297ac844c58fd3e21aad10c093cc8|Mon Feb 6 13:33:32 2017|Xiao 
Chen|HDFS-10860. Switch HttpFS from Tomcat to Jetty. Contributed by John Zhuge.|
|5d182949badb2eb80393de7ba3838102d006488b|Thu Jan 5 17:21:57 2017|Xiao 
Chen|HADOOP-13597. Switch KMS from Tomcat to Jetty. Contributed by John Zhuge.|


was (Author: wchevreuil):
HDFS-12052, in turn, seems to depend on HDFS-10860, which is not in branch-2 
either.


||id||date||author||message||
|69b23632c48297ac844c58fd3e21aad10c093cc8|Mon Feb 6 13:33:32 2017|Xiao 
Chen|HDFS-10860. Switch HttpFS from Tomcat to Jetty. Contributed by John Zhuge.|


> HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST 
> Interface
> ---
>
> Key: HDFS-12117
> URL: https://issues.apache.org/jira/browse/HDFS-12117
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12117.003.patch, HDFS-12117.004.patch, 
> HDFS-12117.005.patch, HDFS-12117.006.patch, HDFS-12117.patch.01, 
> HDFS-12117.patch.02
>
>
> Currently, HttpFS is lacking an implementation of the SNAPSHOT-related methods 
> of the WebHDFS REST interface, as defined in the [WebHDFS 
> documentation|https://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Snapshot_Operations].
> I would like to work on this implementation, following the existing design 
> approach already implemented by other WebHDFS methods in the current HttpFS 
> project, so I'll be proposing an initial patch soon for review.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12295) NameNode to support file path prefix /.reserved/bypassExtAttr

2017-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16124991#comment-16124991
 ] 

Hadoop QA commented on HDFS-12295:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
24s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
40s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 9 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
12s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12295 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881657/HDFS-12295.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e75ca4a98908 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7769e96 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Commented] (HDFS-12117) HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST Interface

2017-08-13 Thread Wellington Chevreuil (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16124983#comment-16124983
 ] 

Wellington Chevreuil commented on HDFS-12117:
-

HDFS-12052, in turn, seems to depend on HDFS-10860, which is not in branch-2 
either.


||id||date||author||message||
|69b23632c48297ac844c58fd3e21aad10c093cc8|Mon Feb 6 13:33:32 2017|Xiao 
Chen|HDFS-10860. Switch HttpFS from Tomcat to Jetty. Contributed by John Zhuge.|


> HttpFS does not seem to support SNAPSHOT related methods for WebHDFS REST 
> Interface
> ---
>
> Key: HDFS-12117
> URL: https://issues.apache.org/jira/browse/HDFS-12117
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 3.0.0-alpha3
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
> Fix For: 3.0.0-beta1
>
> Attachments: HDFS-12117.003.patch, HDFS-12117.004.patch, 
> HDFS-12117.005.patch, HDFS-12117.006.patch, HDFS-12117.patch.01, 
> HDFS-12117.patch.02
>
>
> Currently, HttpFS is lacking an implementation of the SNAPSHOT-related methods 
> of the WebHDFS REST interface, as defined in the [WebHDFS 
> documentation|https://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Snapshot_Operations].
> I would like to work on this implementation, following the existing design 
> approach already implemented by other WebHDFS methods in the current HttpFS 
> project, so I'll be proposing an initial patch soon for review.
>  
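
For reference, a minimal sketch (not part of the attached patches) of the snapshot 
operations in question, driven through the Hadoop FileSystem API. The HttpFS 
endpoint URI, directory path and snapshot names below are assumptions, and the 
target directory must already have been made snapshottable (e.g. via 
{{hdfs dfsadmin -allowSnapshot}}):

{code:java}
// Hedged sketch: exercising the snapshot operations this issue asks HttpFS
// to support, via the Hadoop FileSystem API. Endpoint, paths and snapshot
// names are illustrative assumptions only.
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HttpFsSnapshotExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical HttpFS endpoint; HttpFS speaks the WebHDFS REST protocol.
    FileSystem fs = FileSystem.get(URI.create("webhdfs://httpfs-host:14000"), conf);

    // The directory must already be snapshottable (hdfs dfsadmin -allowSnapshot).
    Path dir = new Path("/user/example/snapshottable");

    // These calls correspond to the CREATESNAPSHOT, RENAMESNAPSHOT and
    // DELETESNAPSHOT REST operations that the patch implements on the HttpFS side.
    Path snapshot = fs.createSnapshot(dir, "s1");
    System.out.println("Created snapshot at " + snapshot);

    fs.renameSnapshot(dir, "s1", "s1-renamed");
    fs.deleteSnapshot(dir, "s1-renamed");
  }
}
{code}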



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-376) FileSystem.listStatus should report the current length of a file if it has been sync'd

2017-08-13 Thread leib (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16124974#comment-16124974
 ] 

leib commented on HDFS-376:
---

I don't think this adds an RPC to the NN; the extra call goes to a DN instead. You 
can see this in DFSClient.DFSInputStream#openInfo().
However, if there are many files under the directory you want to list, this 
operation will still add a certain amount of overhead.
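
To illustrate the difference being discussed, a minimal sketch, assuming an HDFS 
client and a directory containing files that are still open for write (the path is 
hypothetical): {{listStatus}} returns the length recorded by the NameNode, while 
opening the file queries the DataNode for the currently visible length.

{code:java}
// Hedged sketch: comparing the length reported by listStatus with the length
// visible to a reader for files that are still open for write.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsDataInputStream;

public class VisibleLengthCheck {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path dir = new Path("/tmp/open-files");  // hypothetical directory

    for (FileStatus st : fs.listStatus(dir)) {
      // Length as recorded by the NameNode; for a file still being written
      // this may lag behind the data that has already been synced.
      long nnLength = st.getLen();

      // Opening the file triggers the DataNode-side length lookup (openInfo),
      // so the reader sees the bytes currently visible on the last block.
      try (HdfsDataInputStream in = (HdfsDataInputStream) fs.open(st.getPath())) {
        long visibleLength = in.getVisibleLength();
        System.out.println(st.getPath() + " listStatus=" + nnLength
            + " visible=" + visibleLength);
      }
    }
  }
}
{code}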

> FileSystem.listStatus should report the current length of a file if it has 
> been sync'd
> --
>
> Key: HDFS-376
> URL: https://issues.apache.org/jira/browse/HDFS-376
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jim Kellerman
>
> If a writer calls sync(), it should be possible to get the current file size 
> via FileSystem.listStatus, rather than listStatus returning 0 for the file 
> size until the file lease times out or the file is closed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12295) NameNode to support file path prefix /.reserved/bypassExtAttr

2017-08-13 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-12295:
-
Attachment: HDFS-12295.001.patch

> NameNode to support file path prefix /.reserved/bypassExtAttr
> -
>
> Key: HDFS-12295
> URL: https://issues.apache.org/jira/browse/HDFS-12295
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs, namenode
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-12295.001.patch, HDFS-12295.001.patch
>
>
> Let the NameNode support the path prefix /.reserved/bypassExtAttr, so a client 
> can add this prefix to a path before calling getFileStatus, e.g. /a/b/c becomes 
> /.reserved/bypassExtAttr/a/b/c. The NN will parse the path at the very beginning 
> and bypass the external attribute provider if the prefix is present.
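
A minimal sketch of the client-side usage described above, assuming the proposed 
NN-side handling is in place; the path used here is illustrative only:

{code:java}
// Hedged sketch: prefix the path with /.reserved/bypassExtAttr before calling
// getFileStatus so that, per this proposal, the NameNode skips the external
// attribute provider. The NN-side handling is the subject of the patch and is
// not shown here.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BypassExtAttrExample {
  private static final String BYPASS_PREFIX = "/.reserved/bypassExtAttr";

  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    // /a/b/c becomes /.reserved/bypassExtAttr/a/b/c
    Path bypassed = new Path(BYPASS_PREFIX + "/a/b/c");

    FileStatus status = fs.getFileStatus(bypassed);
    System.out.println("Status without external attributes: " + status);
  }
}
{code}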



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12230) Ozone: KSM: Add creation time field in bucket info

2017-08-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16124902#comment-16124902
 ] 

Hadoop QA commented on HDFS-12230:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
51s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 39s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
18s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.scm.TestArchive |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.ozone.scm.TestContainerSQLCli |
|   | hadoop.ozone.ksm.TestBucketManagerImpl |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.ozone.ozShell.TestOzoneShell |
|   | hadoop.ozone.web.client.TestBucketsRatis |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
|   | hadoop.ozone.web.client.TestBuckets |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeysRatis |
|   | org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12230 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12881648/HDFS-12230-HDFS-7240.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux cfb1f014cf36 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |

[jira] [Comment Edited] (HDFS-12230) Ozone: KSM: Add creation time field in bucket info

2017-08-13 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16124868#comment-16124868
 ] 

Yiqun Lin edited comment on HDFS-12230 at 8/13/17 10:13 AM:


Attaching the initial patch. I think the bucket creation time is also expected to 
be available in the Ozone FileSystem.


was (Author: linyiqun):
Attaching the initial patch. I think the bucket creation time is expected to be 
available in the Ozone Filesystem.

> Ozone: KSM: Add creation time field in bucket info
> --
>
> Key: HDFS-12230
> URL: https://issues.apache.org/jira/browse/HDFS-12230
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12230-HDFS-7240.001.patch
>
>
> It would be nice to add a creation time field to the bucket info returned by 
> the {{infoBucket}} and {{listBucket}} commands.
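
As a rough illustration only, a bucket-info value object carrying such a 
creation-time field might look like the sketch below; the class and method names 
are hypothetical and are not the KSM classes touched by the attached patch.

{code:java}
// Illustrative only: a bucket-info value object with a creation-time field, as
// the issue proposes for infoBucket/listBucket. Names are hypothetical.
public final class BucketInfoExample {
  private final String volumeName;
  private final String bucketName;
  private final long creationTime;  // epoch millis, set when the bucket is created

  public BucketInfoExample(String volumeName, String bucketName, long creationTime) {
    this.volumeName = volumeName;
    this.bucketName = bucketName;
    this.creationTime = creationTime;
  }

  public long getCreationTime() {
    return creationTime;
  }

  public static void main(String[] args) {
    BucketInfoExample info =
        new BucketInfoExample("volume1", "bucket1", System.currentTimeMillis());
    // An infoBucket/listBucket response would include this field alongside the
    // existing bucket metadata.
    System.out.println("creationTime=" + info.getCreationTime());
  }
}
{code}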



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12230) Ozone: KSM: Add creation time field in bucket info

2017-08-13 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16124868#comment-16124868
 ] 

Yiqun Lin edited comment on HDFS-12230 at 8/13/17 10:12 AM:


Attaching the initial patch. I think the bucket creation time is expected to be 
available in the Ozone Filesystem.


was (Author: linyiqun):
Attaching the initial patch.

> Ozone: KSM: Add creation time field in bucket info
> --
>
> Key: HDFS-12230
> URL: https://issues.apache.org/jira/browse/HDFS-12230
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12230-HDFS-7240.001.patch
>
>
> It would be nice to add a creation time field to the bucket info returned by 
> the {{infoBucket}} and {{listBucket}} commands.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12230) Ozone: KSM: Add creation time field in bucket info

2017-08-13 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12230:
-
Status: Patch Available  (was: Open)

> Ozone: KSM: Add creation time field in bucket info
> --
>
> Key: HDFS-12230
> URL: https://issues.apache.org/jira/browse/HDFS-12230
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12230-HDFS-7240.001.patch
>
>
> It would be nice to add a creation time field to the bucket info returned by 
> the {{infoBucket}} and {{listBucket}} commands.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12230) Ozone: KSM: Add creation time field in bucket info

2017-08-13 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12230:
-
Attachment: HDFS-12230-HDFS-7240.001.patch

Attaching the initial patch.

> Ozone: KSM: Add creation time field in bucket info
> --
>
> Key: HDFS-12230
> URL: https://issues.apache.org/jira/browse/HDFS-12230
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12230-HDFS-7240.001.patch
>
>
> It would be nice to add a creation time field to the bucket info returned by 
> the {{infoBucket}} and {{listBucket}} commands.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org