[jira] [Commented] (HDFS-12583) Ozone: Fix swallow exceptions which makes hard to debug failures

2017-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190841#comment-16190841
 ] 

Hadoop QA commented on HDFS-12583:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
20s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12583 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890280/HDFS-12583-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 039fcab5c1f0 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / c0387ab |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21516/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21516/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Commented] (HDFS-12578) TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7

2017-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190829#comment-16190829
 ] 

Hadoop QA commented on HDFS-12578:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.7 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
58s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
14s{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_151 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 60 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}577m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_151. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  7m  
0s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}667m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_144 Failed junit tests | 
hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.web.TestWebHdfsTokens |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.server.datanode.TestBlockReplacement |
| JDK v1.7.0_151 Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.namenode.TestEditLogRace |
|   | hadoop.hdfs.TestDFSRename |
|   | hadoop.hdfs.TestDatanodeConfig |
|   | hadoop.hdfs.server.namenode.TestQuotaByStorageType |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength |
|   | hadoop.hdfs.server.namenode.TestXAttrConfigFlag |
|   | hadoop.hdfs.TestBlockReaderFactory |
|   | hadoop.hdfs.TestDFSClientRetries |

[jira] [Updated] (HDFS-12387) Ozone: Support Ratis as a first class replication mechanism

2017-10-03 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12387:

Attachment: HDFS-12387-HDFS-7240.007.patch

Rebase to support moving OzoneConfiguration to common.

> Ozone: Support Ratis as a first class replication mechanism
> ---
>
> Key: HDFS-12387
> URL: https://issues.apache.org/jira/browse/HDFS-12387
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Critical
>  Labels: ozoneMerge
> Attachments: HDFS-12387-HDFS-7240.001.patch, 
> HDFS-12387-HDFS-7240.002.patch, HDFS-12387-HDFS-7240.003.patch, 
> HDFS-12387-HDFS-7240.004.patch, HDFS-12387-HDFS-7240.005.patch, 
> HDFS-12387-HDFS-7240.006.patch, HDFS-12387-HDFS-7240.007.patch
>
>
> Ozone's container layer supports pluggable replication policies. This JIRA 
> brings Apache Ratis-based replication to Ozone. Apache Ratis is a Java 
> implementation of the Raft protocol.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags

2017-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190823#comment-16190823
 ] 

Hadoop QA commented on HDFS-12513:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 54 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
13s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
16s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
31s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  1s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
56s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
8s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 12m 13s{color} 
| {color:red} root generated 1 new + 1274 unchanged - 0 fixed = 1275 total (was 
1274) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 12s{color} | {color:orange} root: The patch generated 3 new + 323 unchanged 
- 0 fixed = 326 total (was 323) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 16s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
39s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}126m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 42s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}239m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.conf.TestCommonConfigurationFields |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.ozone.scm.TestAllocateContainer |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
|   | hadoop.fs.ozone.TestOzoneFileInterfaces |
| Timed out junit tests | 

[jira] [Commented] (HDFS-12578) TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7

2017-10-03 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190819#comment-16190819
 ] 

Xiao Chen commented on HDFS-12578:
--

Thanks [~ajayydv] for the fix. Your change makes sense to me.
One question: have you checked why this is failing only in branch-2.7? HDFS-9107 
was committed to branch-2.8+ as well, but the test there still uses an 
interval of 1.

> TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7
> 
>
> Key: HDFS-12578
> URL: https://issues.apache.org/jira/browse/HDFS-12578
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiao Chen
>Assignee: Ajay Kumar
>Priority: Blocker
> Attachments: HDFS-12578-branch-2.7.001.patch
>
>
> It appears {{TestDeadDatanode#testNonDFSUsedONDeadNodeReReg}} is consistently 
> failing in branch-2.7. We should investigate and fix it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12387) Ozone: Support Ratis as a first class replication mechanism

2017-10-03 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190816#comment-16190816
 ] 

Xiaoyu Yao commented on HDFS-12387:
---

The Test*Ratis tests need to change "import 
org.apache.hadoop.ozone.OzoneConfiguration;" to "import 
org.apache.hadoop.conf.OzoneConfiguration;" to fix the build.
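
For illustration, a minimal sketch of the suggested change in one of the 
affected tests (the class name below is hypothetical; only the import line 
reflects the actual suggestion):

{code}
// Before (no longer compiles once OzoneConfiguration moves to common):
// import org.apache.hadoop.ozone.OzoneConfiguration;

// After:
import org.apache.hadoop.conf.OzoneConfiguration;

// Hypothetical stand-in for one of the Test*Ratis classes.
public class TestRatisImportExample {
  private final OzoneConfiguration conf = new OzoneConfiguration();
}
{code}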

> Ozone: Support Ratis as a first class replication mechanism
> ---
>
> Key: HDFS-12387
> URL: https://issues.apache.org/jira/browse/HDFS-12387
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Critical
>  Labels: ozoneMerge
> Attachments: HDFS-12387-HDFS-7240.001.patch, 
> HDFS-12387-HDFS-7240.002.patch, HDFS-12387-HDFS-7240.003.patch, 
> HDFS-12387-HDFS-7240.004.patch, HDFS-12387-HDFS-7240.005.patch, 
> HDFS-12387-HDFS-7240.006.patch
>
>
> Ozone's container layer supports pluggable replication policies. This JIRA 
> brings Apache Ratis-based replication to Ozone. Apache Ratis is a Java 
> implementation of the Raft protocol.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11442) Ozone: Fix the Cluster ID generation code in SCM

2017-10-03 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-11442:

Labels:   (was: ozoneMerge)

> Ozone: Fix the Cluster ID generation code in SCM
> 
>
> Key: HDFS-11442
> URL: https://issues.apache.org/jira/browse/HDFS-11442
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Shashikant Banerjee
>Priority: Blocker
> Fix For: HDFS-7240
>
>
> The cluster ID is currently generated randomly when SCM starts, and we skip 
> verifying that the client's cluster ID matches what SCM expects. This JIRA 
> tracks the corresponding comments in the code.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12387) Ozone: Support Ratis as a first class replication mechanism

2017-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190793#comment-16190793
 ] 

Hadoop QA commented on HDFS-12387:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
48s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
16s{color} | {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  1m 16s{color} | 
{color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 16s{color} 
| {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 
3 unchanged - 1 fixed = 7 total (was 4) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
50s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
48s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
29s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 51s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12387 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890282/HDFS-12387-HDFS-7240.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  xml  |
| uname | Linux b8756fe1dc7e 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 

[jira] [Commented] (HDFS-12577) Rename Router tooling

2017-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190770#comment-16190770
 ] 

Hadoop QA commented on HDFS-12577:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
5s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-10467 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
52s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
7m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-10467 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 32s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
23s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 38s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.router.TestNamenodeHeartbeat |
|   | hadoop.hdfs.server.federation.router.TestRouterRpc |
|   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
| Timed out junit tests | 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12577 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890253/HDFS-12577-HDFS-10467-000.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  compile  
javac  javadoc  mvninstall  

[jira] [Commented] (HDFS-12387) Ozone: Support Ratis as a first class replication mechanism

2017-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190769#comment-16190769
 ] 

Hadoop QA commented on HDFS-12387:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 8s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
29s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
25s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 
3 unchanged - 1 fixed = 7 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 47s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
24s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 24s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.ozone.scm.TestAllocateContainer |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.ozone.ksm.TestMultipleContainerReadWrite |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | 

[jira] [Updated] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags

2017-10-03 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12513:

Attachment: Screen Shot 2017-10-03 at 7.21.14 PM.png

Attaching a screenshot so others can see what the finished page looks like.

> Ozone: Create UI page to show Ozone configs by tags
> ---
>
> Key: HDFS-12513
> URL: https://issues.apache.org/jira/browse/HDFS-12513
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12513-HDFS-7240.001.patch, 
> HDFS-12513-HDFS-7240.002.patch, HDFS-12513-HDFS-7240.003.patch, 
> OzoneSettings.png, Screen Shot 2017-10-03 at 7.21.14 PM.png
>
>
> Create UI page to show Ozone configs by tags



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags

2017-10-03 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190763#comment-16190763
 ] 

Ajay Kumar commented on HDFS-12513:
---

[~cheersyang], thanks for the comments. [~anu], thanks for all the discussions, 
review, and commit.

> Ozone: Create UI page to show Ozone configs by tags
> ---
>
> Key: HDFS-12513
> URL: https://issues.apache.org/jira/browse/HDFS-12513
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12513-HDFS-7240.001.patch, 
> HDFS-12513-HDFS-7240.002.patch, HDFS-12513-HDFS-7240.003.patch, 
> OzoneSettings.png
>
>
> Create UI page to show Ozone configs by tags



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12583) Ozone: Fix swallow exceptions which makes hard to debug failures

2017-10-03 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190754#comment-16190754
 ] 

Weiwei Yang commented on HDFS-12583:


Hi [~linyiqun], [~vagarychen]

Instead of logging errors only in the server log, can we make sure the entire 
exception is propagated to the client side (not just the short message)? It 
looks to me like the following code swallows the exception, e.g. in 
{{VolumeProcessTemplate#handleIOException}}:

{code}
OzoneException exp = null;
...
// this creates a new ozone exception without inheriting the stack trace
// from the IOException
exp = ErrorTable
  .newError(ErrorTable.VOLUME_ALREADY_EXISTS, reqID, volume, hostName);
...
// this only sets the message
if ((fsExp != null) && (exp != null)) {
  exp.setMessage(fsExp.getMessage());
}
...
{code}

This call converts an {{IOException}} into an {{OzoneException}} but only copies 
the error message. I am thinking that whenever we create an {{OzoneException}}, 
we should make sure we pass the original exception instance to its constructor, 
e.g.:

{code}
exp = ErrorTable
  .newError(ErrorTable.VOLUME_ALREADY_EXISTS, reqID, volume, hostName, ioe);
{code}

Would that resolve this problem (i.e. the client would get the full stack trace 
of the original exception)? Similar code exists in {{BucketProcessTemplate}} and 
{{KeyProcessTemplate}}.
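
To make the point concrete, here is a minimal, self-contained Java sketch of 
the idea (the class names below are placeholders, not the real OzoneException 
or ErrorTable API): only the constructor that accepts the original exception as 
a cause lets the caller see the full {{Caused by:}} chain.

{code}
// Hypothetical stand-in for OzoneException, used only to illustrate the
// difference between copying the message and passing the cause.
class DemoOzoneException extends Exception {
  DemoOzoneException(String message) {
    super(message);
  }

  DemoOzoneException(String message, Throwable cause) {
    super(message, cause); // keeps the original stack trace as the cause
  }
}

public class ExceptionWrappingDemo {
  public static void main(String[] args) {
    java.io.IOException ioe = new java.io.IOException("Volume already exists");

    // Message-only conversion: the IOException's stack trace is lost.
    new DemoOzoneException(ioe.getMessage()).printStackTrace();

    // Cause-carrying conversion: printStackTrace() includes a "Caused by:"
    // section with the original IOException frames.
    new DemoOzoneException("VOLUME_ALREADY_EXISTS", ioe).printStackTrace();
  }
}
{code}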

> Ozone: Fix swallow exceptions which makes hard to debug failures
> 
>
> Key: HDFS-12583
> URL: https://issues.apache.org/jira/browse/HDFS-12583
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12583-HDFS-7240.001.patch, 
> HDFS-12583-HDFS-7240.002.patch, HDFS-12583-HDFS-7240.003.patch
>
>
> There are some places that swallow exceptions, which makes it hard for the 
> client to debug the failure. For example, if getting an xceiver client from 
> the xceiver client manager fails, the client only gets error info like this:
> {noformat}
> org.apache.hadoop.ozone.web.exceptions.OzoneException: Exception getting 
> XceiverClient.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
>   at 
> com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:243)
> {noformat}
> The original exception stack trace is missing. We should print it in the error log as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12490) Ozone: OzoneClient: OzoneBucket should have information about the bucket creation time

2017-10-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190751#comment-16190751
 ] 

Anu Engineer edited comment on HDFS-12490 at 10/4/17 3:40 AM:
--

[~msingh] Can you please rebase this patch? It is not applying now, and I would 
like to commit it if possible.

There are also some warnings, such as ASF license issues, that you might want to 
fix in the next patch as well.



was (Author: anu):
[~msingh] Can you please rebase this patch? It is not applying now, and I would 
like to commit it if possible.


> Ozone: OzoneClient: OzoneBucket should have information about the bucket 
> creation time
> --
>
> Key: HDFS-12490
> URL: https://issues.apache.org/jira/browse/HDFS-12490
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12490-HDFS-7240.001.patch
>
>
> OzoneBucket should have information about the bucket creation time.
> OzoneFileSystem needs creation time to display the file status information 
> for the root of the filesystem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12490) Ozone: OzoneClient: OzoneBucket should have information about the bucket creation time

2017-10-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190751#comment-16190751
 ] 

Anu Engineer commented on HDFS-12490:
-

[~msingh] Can you please rebase this patch? It is not applying now, and I would 
like to commit it if possible.


> Ozone: OzoneClient: OzoneBucket should have information about the bucket 
> creation time
> --
>
> Key: HDFS-12490
> URL: https://issues.apache.org/jira/browse/HDFS-12490
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12490-HDFS-7240.001.patch
>
>
> OzoneBucket should have information about the bucket creation time.
> OzoneFileSystem needs creation time to display the file status information 
> for the root of the filesystem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12387) Ozone: Support Ratis as a first class replication mechanism

2017-10-03 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12387:

Attachment: HDFS-12387-HDFS-7240.006.patch

Rebasing again; v5 was failing with some conflicts.

> Ozone: Support Ratis as a first class replication mechanism
> ---
>
> Key: HDFS-12387
> URL: https://issues.apache.org/jira/browse/HDFS-12387
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Critical
>  Labels: ozoneMerge
> Attachments: HDFS-12387-HDFS-7240.001.patch, 
> HDFS-12387-HDFS-7240.002.patch, HDFS-12387-HDFS-7240.003.patch, 
> HDFS-12387-HDFS-7240.004.patch, HDFS-12387-HDFS-7240.005.patch, 
> HDFS-12387-HDFS-7240.006.patch
>
>
> Ozone's container layer supports pluggable replication policies. This JIRA 
> brings Apache Ratis-based replication to Ozone. Apache Ratis is a Java 
> implementation of the Raft protocol.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12471) Ozone: Reduce some KSM/SCM deletion log messages from INFO to DEBUG

2017-10-03 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190731#comment-16190731
 ] 

Weiwei Yang commented on HDFS-12471:


Thanks [~anu], [~xyao], [~linyiqun]!

> Ozone: Reduce some KSM/SCM deletion log messages from INFO to DEBUG
> ---
>
> Key: HDFS-12471
> URL: https://issues.apache.org/jira/browse/HDFS-12471
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Weiwei Yang
>Priority: Trivial
> Fix For: HDFS-7240
>
> Attachments: HDFS-12471-HDFS-7240.001.patch
>
>
> Looks like we are logging a few no-op messages every minute in the KSM/SCM 
> logs. Should we reduce the log level to DEBUG or TRACE? cc: [~anu], 
> [~cheersyang], [~yuanbo].
> {code}
> 2017-09-14 23:42:15,022 [SCMBlockDeletingService#0] INFO  
> (SCMBlockDeletingService.java:103) - Running 
> DeletedBlockTransactionScanner
> 2017-09-14 23:42:15,024 [SCMBlockDeletingService#0] INFO  
> (SCMBlockDeletingService.java:136) - Scanned deleted blocks log and got 0 
> delTX to process
> 2017-09-14 23:42:24,139 [KeyDeletingService#1] INFO  
> (KeyDeletingService.java:123) - No pending deletion key found in KSM
> 2017-09-14 23:43:09,377 [BlockDeletingService#2] INFO  
> (BlockDeletingService.java:109) - Plan to choose 10 containers for block 
> deletion, actually returns 0 valid containers.
> 2017-09-14 23:43:15,027 [SCMBlockDeletingService#0] INFO  
> (SCMBlockDeletingService.java:103) - Running 
> DeletedBlockTransactionScanner
> 2017-09-14 23:43:15,027 [SCMBlockDeletingService#0] INFO  
> (SCMBlockDeletingService.java:136) - Scanned deleted blocks log and got 0 
> delTX to process
> 2017-09-14 23:43:24,146 [KeyDeletingService#1] INFO  
> (KeyDeletingService.java:123) - No pending deletion key found in KSM
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12583) Ozone: Fix swallow exceptions which makes hard to debug failures

2017-10-03 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12583:
-
Attachment: HDFS-12583-HDFS-7240.003.patch

> Ozone: Fix swallow exceptions which makes hard to debug failures
> 
>
> Key: HDFS-12583
> URL: https://issues.apache.org/jira/browse/HDFS-12583
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12583-HDFS-7240.001.patch, 
> HDFS-12583-HDFS-7240.002.patch, HDFS-12583-HDFS-7240.003.patch
>
>
> There are some places that swallow exceptions, which makes it hard for the 
> client to debug the failure. For example, if getting an xceiver client from 
> the xceiver client manager fails, the client only gets error info like this:
> {noformat}
> org.apache.hadoop.ozone.web.exceptions.OzoneException: Exception getting 
> XceiverClient.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
>   at 
> com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:243)
> {noformat}
> The original exception stack trace is missing. We should print it in the error log as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12583) Ozone: Fix swallow exceptions which makes hard to debug failures

2017-10-03 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190720#comment-16190720
 ] 

Yiqun Lin edited comment on HDFS-12583 at 10/4/17 2:58 AM:
---

Thanks for the review and comment, [~vagarychen]!
bq. Could you please elaborate a bit on how you got the error in the 
description?
The error in the description was just copied from a previous test failure in 
HDFS-12307. Actually, we can reproduce this by simply throwing an IOException in 
the following way:
{code}
  private XceiverClientSpi getClient(Pipeline pipeline)
      throws IOException {
    String containerName = pipeline.getContainerName();
    try {
      return clientCache.get(containerName,
          new Callable<XceiverClientSpi>() {
            @Override
            public XceiverClientSpi call() throws Exception {
              throw new IOException(
                  "Throw exception when getting XceiverClient.");
            }
          });
    } catch (Exception e) {
      throw new IOException("Exception getting XceiverClient.", e);
    }
  }
{code}
Having tested this locally, the stack trace is printed as follows:
{noformat}
2017-10-04 10:48:54,647 [Thread-206] ERROR handlers.KeyProcessTemplate 
(KeyProcessTemplate.java:handleCall(100)) ozone  
9aaa47d4-f6be-494c-b38f-f5c000997fadvolume/894199ed-b2c6-4ec8-be99-d768dbe58bc1bucket/test-key0
 hdfs 5e5f7f56-601f-4f43-b95f-98bc140ee686 - IOException. ex : {}
java.io.IOException: Exception getting XceiverClient.
at 
org.apache.hadoop.scm.XceiverClientManager.getClient(XceiverClientManager.java:158)
at 
org.apache.hadoop.scm.XceiverClientManager.acquireClient(XceiverClientManager.java:127)
at 
org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.getFromKsmKeyInfo(ChunkGroupOutputStream.java:289)
at 
org.apache.hadoop.ozone.web.storage.DistributedStorageHandler.newKeyWriter(DistributedStorageHandler.java:397)
at 
org.apache.hadoop.ozone.web.handlers.KeyHandler$2.doProcess(KeyHandler.java:174)
at 
org.apache.hadoop.ozone.web.handlers.KeyProcessTemplate.handleCall(KeyProcessTemplate.java:91)
at 
org.apache.hadoop.ozone.web.handlers.KeyHandler.putKey(KeyHandler.java:199)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at 
com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at 
com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at 
com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
at 
com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at 
com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
at 
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
at 
org.apache.hadoop.ozone.web.netty.ObjectStoreJerseyContainer$RequestRunner.run(ObjectStoreJerseyContainer.java:232)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Throw 
exception when getting XceiverClient.
at 
com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:289)
at 
com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:276)
at 
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:111)
at 
com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:132)
at 
com.google.common.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2381)
at 
com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2351)
at 
com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
at 

[jira] [Commented] (HDFS-12583) Ozone: Fix swallow exceptions which makes hard to debug failures

2017-10-03 Thread Yiqun Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190720#comment-16190720
 ] 

Yiqun Lin commented on HDFS-12583:
--

Thanks for the review and comment, [~vagarychen]!
bq. Could you please elaborate a bit on how you got the error in the 
description?
The error in the description was just copied from a previous test failure in 
HDFS-12307. Actually, we can reproduce this by simply throwing an IOException in 
the following way:
{code}
  private XceiverClientSpi getClient(Pipeline pipeline)
      throws IOException {
    String containerName = pipeline.getContainerName();
    try {
      return clientCache.get(containerName,
          new Callable<XceiverClientSpi>() {
            @Override
            public XceiverClientSpi call() throws Exception {
              throw new IOException(
                  "Throw exception when getting XceiverClient.");
            }
          });
    } catch (Exception e) {
      throw new IOException("Exception getting XceiverClient.", e);
    }
  }
{code}
Having tested this locally, the stack trace is printed as follows:
{noformat}
2017-10-04 10:48:54,647 [Thread-206] ERROR handlers.KeyProcessTemplate (KeyProcessTemplate.java:handleCall(100)) ozone 9aaa47d4-f6be-494c-b38f-f5c000997fadvolume/894199ed-b2c6-4ec8-be99-d768dbe58bc1bucket/test-key0 hdfs 5e5f7f56-601f-4f43-b95f-98bc140ee686 - IOException. ex : {}
java.io.IOException: Exception getting XceiverClient.
at org.apache.hadoop.scm.XceiverClientManager.getClient(XceiverClientManager.java:158)
at org.apache.hadoop.scm.XceiverClientManager.acquireClient(XceiverClientManager.java:127)
at org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.getFromKsmKeyInfo(ChunkGroupOutputStream.java:289)
at org.apache.hadoop.ozone.web.storage.DistributedStorageHandler.newKeyWriter(DistributedStorageHandler.java:397)
at org.apache.hadoop.ozone.web.handlers.KeyHandler$2.doProcess(KeyHandler.java:174)
at org.apache.hadoop.ozone.web.handlers.KeyProcessTemplate.handleCall(KeyProcessTemplate.java:91)
at org.apache.hadoop.ozone.web.handlers.KeyHandler.putKey(KeyHandler.java:199)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
at org.apache.hadoop.ozone.web.netty.ObjectStoreJerseyContainer$RequestRunner.run(ObjectStoreJerseyContainer.java:232)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Throw exception when getting XceiverClient.
at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:289)
at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:276)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:111)
at com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:132)
at com.google.common.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2381)
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2351)
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
at 

[jira] [Updated] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags

2017-10-03 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12513:

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

[~cheersyang] Thank you for the comments. [~ajayydv] Thanks for the 
contribution; I have committed this to the feature branch.

> Ozone: Create UI page to show Ozone configs by tags
> ---
>
> Key: HDFS-12513
> URL: https://issues.apache.org/jira/browse/HDFS-12513
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12513-HDFS-7240.001.patch, 
> HDFS-12513-HDFS-7240.002.patch, HDFS-12513-HDFS-7240.003.patch, 
> OzoneSettings.png
>
>
> Create UI page to show Ozone configs by tags



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags

2017-10-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190681#comment-16190681
 ] 

Anu Engineer commented on HDFS-12513:
-

+1, I will commit this shortly. There is a checkstyle warning; I will fix it 
while committing. Thanks


> Ozone: Create UI page to show Ozone configs by tags
> ---
>
> Key: HDFS-12513
> URL: https://issues.apache.org/jira/browse/HDFS-12513
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12513-HDFS-7240.001.patch, 
> HDFS-12513-HDFS-7240.002.patch, HDFS-12513-HDFS-7240.003.patch, 
> OzoneSettings.png
>
>
> Create UI page to show Ozone configs by tags



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags

2017-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190679#comment-16190679
 ] 

Hadoop QA commented on HDFS-12513:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 54 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
32s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 9s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
24s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
26s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
22s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  7s{color} | {color:orange} root: The patch generated 1 new + 323 unchanged 
- 0 fixed = 324 total (was 323) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 19s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 13s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
49s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 21s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-ozone in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}216m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.conf.TestCommonConfigurationFields |
|   | hadoop.hdfs.TestDFSOutputStream |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.ozone.TestOzoneConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12513 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags

2017-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190673#comment-16190673
 ] 

Hadoop QA commented on HDFS-12513:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
40s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 54 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
32s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
2s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
14s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
34s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
58s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
15s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 17s{color} | {color:orange} root: The patch generated 1 new + 323 unchanged 
- 0 fixed = 324 total (was 323) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 50s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
0s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}105m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 32s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}221m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.conf.TestCommonConfigurationFields |
|   | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.ozone.TestOzoneConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12513 |
| JIRA Patch URL | 

[jira] [Updated] (HDFS-12577) Rename Router tooling

2017-10-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12577:
---
Status: Patch Available  (was: Open)

> Rename Router tooling
> -
>
> Key: HDFS-12577
> URL: https://issues.apache.org/jira/browse/HDFS-12577
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>  Labels: RBF
> Fix For: HDFS-10467
>
> Attachments: HDFS-12577-HDFS-10467-000.patch
>
>
> Currently the naming for Router Based Federation has a couple conflicts:
> * Both YARN and HDFS have a Router component which may cause issues for the 
> PID file and JPS.
> * The tool to manage the mount table is called using {{hdfs federation}}. 
> This may cause confusion with the regular HDFS federation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags

2017-10-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190652#comment-16190652
 ] 

Anu Engineer commented on HDFS-12513:
-

+1, pending Jenkins. I think there are some checkstyle issues.


> Ozone: Create UI page to show Ozone configs by tags
> ---
>
> Key: HDFS-12513
> URL: https://issues.apache.org/jira/browse/HDFS-12513
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12513-HDFS-7240.001.patch, 
> HDFS-12513-HDFS-7240.002.patch, HDFS-12513-HDFS-7240.003.patch, 
> OzoneSettings.png
>
>
> Create UI page to show Ozone configs by tags



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12533) NNThroughputBenchmark threads get stuck on UGI.getCurrentUser()

2017-10-03 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190626#comment-16190626
 ] 

Konstantin Shvachko commented on HDFS-12533:


We may still want to fix this for NNThroughputBenchmark, to exclude even the 
possibility of locking.

> NNThroughputBenchmark threads get stuck on UGI.getCurrentUser()
> ---
>
> Key: HDFS-12533
> URL: https://issues.apache.org/jira/browse/HDFS-12533
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Erik Krogen
>
> In {{NameNode#getRemoteUser()}}, it first attempts to fetch from the RPC user 
> (not a synchronized operation), and if there is no RPC call, it will call 
> {{UserGroupInformation#getCurrentUser()}} (which is {{synchronized}}). This 
> makes it efficient for RPC operations (the bulk) so that there is not too 
> much contention.
> In NNThroughputBenchmark, however, there is no RPC call since we bypass that 
> later, so with a high thread count many of the threads are getting stuck. At 
> one point I attached a profiler and found that quite a few threads had been 
> waiting for {{#getCurrentUser()}} for 2 minutes ( ! ). When taking this away 
> I found some improvement in the throughput numbers I was seeing. To more 
> closely emulate a real NN we should improve this issue.
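
For illustration, here is a minimal sketch of the fallback pattern the description refers to (hypothetical and simplified, not the actual NameNode source; it only assumes the {{Server.getRemoteUser()}} and {{UserGroupInformation.getCurrentUser()}} calls named above):
{code}
// Simplified sketch of the lookup order described in HDFS-12533.
private static UserGroupInformation getRemoteUser() throws IOException {
  // Fast path: the RPC caller, fetched without taking any lock.
  UserGroupInformation ugi = Server.getRemoteUser();
  if (ugi != null) {
    return ugi;
  }
  // Slow path: outside of an RPC call (e.g. in NNThroughputBenchmark) every
  // thread falls through to this synchronized call, which is where the
  // contention shows up.
  return UserGroupInformation.getCurrentUser();
}
{code}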



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12387) Ozone: Support Ratis as a first class replication mechanism

2017-10-03 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12387:

Attachment: HDFS-12387-HDFS-7240.005.patch

Rebase the patch to current code.

> Ozone: Support Ratis as a first class replication mechanism
> ---
>
> Key: HDFS-12387
> URL: https://issues.apache.org/jira/browse/HDFS-12387
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Critical
>  Labels: ozoneMerge
> Attachments: HDFS-12387-HDFS-7240.001.patch, 
> HDFS-12387-HDFS-7240.002.patch, HDFS-12387-HDFS-7240.003.patch, 
> HDFS-12387-HDFS-7240.004.patch, HDFS-12387-HDFS-7240.005.patch
>
>
> The Ozone container layer supports pluggable replication policies. This JIRA 
> brings Apache Ratis-based replication to Ozone. Apache Ratis is a Java 
> implementation of the Raft protocol.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags

2017-10-03 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12513:
--
Attachment: HDFS-12513-HDFS-7240.003.patch

[~anu], [~xyao], thanks for the offline discussion.
Summary of the discussion:
* move ozone-config.html to KSM
* display the latest value of each config
Both items are addressed in patch v3.

> Ozone: Create UI page to show Ozone configs by tags
> ---
>
> Key: HDFS-12513
> URL: https://issues.apache.org/jira/browse/HDFS-12513
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12513-HDFS-7240.001.patch, 
> HDFS-12513-HDFS-7240.002.patch, HDFS-12513-HDFS-7240.003.patch, 
> OzoneSettings.png
>
>
> Create UI page to show Ozone configs by tags



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12502) nntop should support a category based on FilesInGetListingOps

2017-10-03 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190613#comment-16190613
 ] 

Konstantin Shvachko commented on HDFS-12502:


The change looks good. A few comments:
# There are whitespace and unused-import warnings reported by Jenkins.
# You should probably JavaDoc the new metric in {{TopMetrics}}.
# There should be a constant for {{"filesInGetListing"}}, rather than a hardcoded string.
# There is a potential NPE in {{FSN.getListing()}} since {{dl}} can be null.
# Do I understand correctly that {{"filesInGetListing"}} will be recorded even 
if the audit log is disabled? That is probably not the expected behavior. (A sketch of one possible guard follows below.)
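
For illustration, a minimal sketch of the guard suggested in comments 4 and 5 (hypothetical; {{isAuditEnabled()}}, {{dl}} and {{recordFilesInGetListing()}} are placeholders, not the actual patch code):
{code}
// Only count listing sizes when the audit log is enabled, and guard against
// a null DirectoryListing to avoid the NPE mentioned above.
if (isAuditEnabled() && dl != null) {
  recordFilesInGetListing(dl.getPartialListing().length);
}
{code}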

> nntop should support a category based on FilesInGetListingOps
> -
>
> Key: HDFS-12502
> URL: https://issues.apache.org/jira/browse/HDFS-12502
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Zhe Zhang
>Assignee: Zhe Zhang
> Attachments: HDFS-12502.00.patch, HDFS-12502.01.patch
>
>
> Large listing ops can oftentimes be the main contributor to NameNode 
> slowness. The aggregate cost of listing ops is proportional to the 
> {{FilesInGetListingOps}} rather than the number of listing ops. Therefore 
> it'd be very useful for nntop to support this category.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11201) Spelling errors in the logging, help, assertions and exception messages

2017-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190570#comment-16190570
 ] 

Hadoop QA commented on HDFS-11201:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-11201 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11201 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12843481/HDFS-11201.4.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21511/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Spelling errors in the logging, help, assertions and exception messages
> ---
>
> Key: HDFS-11201
> URL: https://issues.apache.org/jira/browse/HDFS-11201
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, diskbalancer, httpfs, namenode, nfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HDFS-11201.1.patch, HDFS-11201.2.patch, 
> HDFS-11201.3.patch, HDFS-11201.4.patch
>
>
> Found a set of spelling errors in the user-facing code.
> Examples are:
> odlest -> oldest
> Illagal -> Illegal
> bounday -> boundary



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11942) make new chooseDataNode policy work in more operation like seek, fetch

2017-10-03 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11942:
---
Fix Version/s: (was: 3.0.0-beta1)
   3.0.0

> make new  chooseDataNode policy  work in more operation like seek, fetch
> 
>
> Key: HDFS-11942
> URL: https://issues.apache.org/jira/browse/HDFS-11942
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 2.6.0, 2.7.0, 3.0.0-alpha3
>Reporter: Fangyuan Deng
> Fix For: 3.0.0
>
> Attachments: HDFS-11942.0.patch, HDFS-11942.1.patch, 
> ssd-first-disable(default).png, ssd-first-enable.png
>
>
> In the default policy, if a file is ONE_SSD, the client will prefer reading the 
> local disk replica over the remote SSD replica.
> But now, PCI-e SSDs and 10G Ethernet make reading a remote SSD faster than 
> reading the local disk.
> HDFS-9666 gave us a patch, but the code is not complete and has not been 
> updated for a long time.
> This sub-task provides a complete patch, and 
> we have tested it on three machines [ 32-core CPU, 128G mem, 1000M network, 
> 1.2T HDD, 800G SSD (Intel P3600) ].
> With this feature, the throughput of an HBase table (ONE_SSD) is double that 
> of the same table without this feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11201) Spelling errors in the logging, help, assertions and exception messages

2017-10-03 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-11201:
---
Fix Version/s: (was: 3.0.0-beta1)
   3.0.0

> Spelling errors in the logging, help, assertions and exception messages
> ---
>
> Key: HDFS-11201
> URL: https://issues.apache.org/jira/browse/HDFS-11201
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, diskbalancer, httpfs, namenode, nfs
>Affects Versions: 3.0.0-alpha1
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HDFS-11201.1.patch, HDFS-11201.2.patch, 
> HDFS-11201.3.patch, HDFS-11201.4.patch
>
>
> Found a set of spelling errors in the user-facing code.
> Examples are:
> odlest -> oldest
> Illagal -> Illegal
> bounday -> boundary



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12543) Ozone : allow create key without specifying size

2017-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190522#comment-16190522
 ] 

Hadoop QA commented on HDFS-12543:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
54s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
34s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}156m 24s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
50s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}220m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.TestMiniOzoneCluster |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 |
|   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   

[jira] [Commented] (HDFS-12577) Rename Router tooling

2017-10-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190488#comment-16190488
 ] 

Íñigo Goiri commented on HDFS-12577:


000 has my proposal for the renaming.
Following the feedback from [~aw] and [~steve_l], I went with {{dfsrouter}}.
To be consistent, I renamed the management tool to {{dfsrouteradmin}}.
For {{jps}}, I created {{DFSRouter}}, which is just the old main class to start 
the {{Router}}.

Does this cover the existing concerns?

> Rename Router tooling
> -
>
> Key: HDFS-12577
> URL: https://issues.apache.org/jira/browse/HDFS-12577
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>  Labels: RBF
> Fix For: HDFS-10467
>
> Attachments: HDFS-12577-HDFS-10467-000.patch
>
>
> Currently the naming for Router Based Federation has a couple conflicts:
> * Both YARN and HDFS have a Router component which may cause issues for the 
> PID file and JPS.
> * The tool to manage the mount table is called using {{hdfs federation}}. 
> This may cause confusion with the regular HDFS federation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12467) Ozone: SCM: NodeManager should log when it comes out of chill mode

2017-10-03 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190486#comment-16190486
 ] 

Chen Liang commented on HDFS-12467:
---

Thanks [~anu] for committing the patch and thanks [~nandakumar131] for the 
contribution! We can follow up on your proposal in a separate JIRA, which I 
think can be a post-merge change.

> Ozone: SCM: NodeManager should log when it comes out of chill mode
> --
>
> Key: HDFS-12467
> URL: https://issues.apache.org/jira/browse/HDFS-12467
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-12467-HDFS-7240.000.patch, 
> HDFS-12467-HDFS-7240.001.patch, HDFS-12467-HDFS-7240.002.patch, 
> HDFS-12467-HDFS-7240.003.patch, HDFS-12467-HDFS-7240.004.patch
>
>
> {{NodeManager}} should add a log message when it comes out of chill mode.
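
For illustration, a minimal sketch of the kind of log line being asked for (hypothetical; {{isOutOfChillMode()}}, {{chillModeExitLogged}} and {{totalNodes}} are placeholder names, not the actual {{NodeManager}} fields):
{code}
// Log exactly once when the node manager leaves chill mode.
if (isOutOfChillMode() && !chillModeExitLogged) {
  LOG.info("NodeManager is out of chill mode; {} datanodes have registered.",
      totalNodes);
  chillModeExitLogged = true;
}
{code}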



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12420) Add an option to disallow 'namenode format -force'

2017-10-03 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190485#comment-16190485
 ] 

Ajay Kumar commented on HDFS-12420:
---

[~arpitagarwal], thanks for re-triggering the Jenkins build. The 4 failed test 
cases are unrelated, as they pass locally. 

> Add an option to disallow 'namenode format -force'
> --
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch, 
> HDFS-12420.03.patch, HDFS-12420.04.patch, HDFS-12420.05.patch, 
> HDFS-12420.06.patch, HDFS-12420.07.patch, HDFS-12420.08.patch, 
> HDFS-12420.09.patch, HDFS-12420.10.patch, HDFS-12420.11.patch, 
> HDFS-12420.12.patch
>
>
> Support an option for disabling NameNode format to avoid accidental formatting 
> of the NameNode in a production cluster. If someone really wants to delete the 
> complete fsImage, they can first delete the metadata dir and then run 
> {code}hdfs namenode -format{code} manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12467) Ozone: SCM: NodeManager should log when it comes out of chill mode

2017-10-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190484#comment-16190484
 ] 

Anu Engineer commented on HDFS-12467:
-

+1, on the refactoring proposal. 

> Ozone: SCM: NodeManager should log when it comes out of chill mode
> --
>
> Key: HDFS-12467
> URL: https://issues.apache.org/jira/browse/HDFS-12467
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-12467-HDFS-7240.000.patch, 
> HDFS-12467-HDFS-7240.001.patch, HDFS-12467-HDFS-7240.002.patch, 
> HDFS-12467-HDFS-7240.003.patch, HDFS-12467-HDFS-7240.004.patch
>
>
> {{NodeManager}} should add a log message when it comes out of chill mode.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12577) Rename Router tooling

2017-10-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12577:
---
Attachment: HDFS-12577-HDFS-10467-000.patch

> Rename Router tooling
> -
>
> Key: HDFS-12577
> URL: https://issues.apache.org/jira/browse/HDFS-12577
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>  Labels: RBF
> Fix For: HDFS-10467
>
> Attachments: HDFS-12577-HDFS-10467-000.patch
>
>
> Currently the naming for Router Based Federation has a couple conflicts:
> * Both YARN and HDFS have a Router component which may cause issues for the 
> PID file and JPS.
> * The tool to manage the mount table is called using {{hdfs federation}}. 
> This may cause confusion with the regular HDFS federation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12579) JournalNodeSyncer should use fromUrl field of EditLogManifestResponse to construct servlet Url

2017-10-03 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12579:
--
Attachment: HDFS-12579.001.patch

> JournalNodeSyncer should use fromUrl field of EditLogManifestResponse to 
> construct servlet Url
> --
>
> Key: HDFS-12579
> URL: https://issues.apache.org/jira/browse/HDFS-12579
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12579.001.patch
>
>
> Currently in JournalNodeSyncer, we construct the remote JN http server url 
> using the JN host address and the http port that we get from the 
> {{GetEditLogManifestResponseProto}}.
> {code}
>   if (remoteJNproxy.httpServerUrl == null) {
> remoteJNproxy.httpServerUrl = getHttpServerURI("http",
> remoteJNproxy.jnAddr.getHostName(), response.getHttpPort());
>   }
> {code}
> The correct way would be to get the http server url of the remote JN from the 
> {{fromUrl}} field of the {{GetEditLogManifestResponseProto}}. This would take 
> care of the http policy set on the remote JN as well.
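
For illustration, a minimal sketch of the suggested change (hypothetical; it assumes the generated response exposes the {{fromURL}} field via a {{getFromURL()}} accessor and reuses the {{getHttpServerURI()}} helper from the snippet above):
{code}
if (remoteJNproxy.httpServerUrl == null) {
  // Use the scheme and port advertised by the remote JN itself rather than
  // assuming "http" and the locally known port, so the remote JN's http
  // policy is honored.
  URL fromUrl = new URL(response.getFromURL());
  remoteJNproxy.httpServerUrl = getHttpServerURI(fromUrl.getProtocol(),
      remoteJNproxy.jnAddr.getHostName(), fromUrl.getPort());
}
{code}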



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags

2017-10-03 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12513:
--
Attachment: (was: HDFS-12513-HDFS-7240.002.patch)

> Ozone: Create UI page to show Ozone configs by tags
> ---
>
> Key: HDFS-12513
> URL: https://issues.apache.org/jira/browse/HDFS-12513
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12513-HDFS-7240.001.patch, 
> HDFS-12513-HDFS-7240.002.patch, OzoneSettings.png
>
>
> Create UI page to show Ozone configs by tags



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags

2017-10-03 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12513:
--
Attachment: HDFS-12513-HDFS-7240.002.patch

> Ozone: Create UI page to show Ozone configs by tags
> ---
>
> Key: HDFS-12513
> URL: https://issues.apache.org/jira/browse/HDFS-12513
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12513-HDFS-7240.001.patch, 
> HDFS-12513-HDFS-7240.002.patch, OzoneSettings.png
>
>
> Create UI page to show Ozone configs by tags



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags

2017-10-03 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190447#comment-16190447
 ] 

Ajay Kumar edited comment on HDFS-12513 at 10/3/17 10:11 PM:
-

[~anu], thanks for the review. Fixed the checkstyle issues and rebased onto the 
current branch in patch v2. I checked the path for {{OzoneConfiguration}} and it 
seems to be consistent with what we discussed (i.e. move it to 
org.apache.hadoop.conf in hadoop-common).


was (Author: ajayydv):
[~anu],thanks for review. Fixed checkstyle issues and rebased it for current 
branch. I checked path for {{OzoneConfiguration}} and it seems to be consistent 
with what we discussed. (i.e move it to hadoop-common org.apache.hadoop.conf)

> Ozone: Create UI page to show Ozone configs by tags
> ---
>
> Key: HDFS-12513
> URL: https://issues.apache.org/jira/browse/HDFS-12513
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12513-HDFS-7240.001.patch, 
> HDFS-12513-HDFS-7240.002.patch, OzoneSettings.png
>
>
> Create UI page to show Ozone configs by tags



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12539) Ozone: refactor some functions in KSMMetadataManagerImpl to be more readable and reusable

2017-10-03 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12539:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

[~cheersyang] Thanks for the comments. [~yuanbo] Thank you for the 
contribution; I have committed this to the feature branch.


> Ozone: refactor some functions in KSMMetadataManagerImpl to be more readable 
> and reusable
> -
>
> Key: HDFS-12539
> URL: https://issues.apache.org/jira/browse/HDFS-12539
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
>Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-12539-HDFS-7240.001.patch, 
> HDFS-12539-HDFS-7240.002.patch, HDFS-12539-HDFS-7240.003.patch
>
>
> This is from [~anu]'s review comment in HDFS-12506, 
> [https://issues.apache.org/jira/browse/HDFS-12506?focusedCommentId=16178356=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16178356].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags

2017-10-03 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12513:
--
Attachment: HDFS-12513-HDFS-7240.002.patch

> Ozone: Create UI page to show Ozone configs by tags
> ---
>
> Key: HDFS-12513
> URL: https://issues.apache.org/jira/browse/HDFS-12513
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12513-HDFS-7240.001.patch, 
> HDFS-12513-HDFS-7240.002.patch, OzoneSettings.png
>
>
> Create UI page to show Ozone configs by tags



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags

2017-10-03 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190447#comment-16190447
 ] 

Ajay Kumar commented on HDFS-12513:
---

[~anu], thanks for the review. Fixed the checkstyle issues and rebased onto the 
current branch. I checked the path for {{OzoneConfiguration}} and it seems to be 
consistent with what we discussed (i.e. move it to org.apache.hadoop.conf in 
hadoop-common).

> Ozone: Create UI page to show Ozone configs by tags
> ---
>
> Key: HDFS-12513
> URL: https://issues.apache.org/jira/browse/HDFS-12513
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12513-HDFS-7240.001.patch, OzoneSettings.png
>
>
> Create UI page to show Ozone configs by tags



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12539) Ozone: refactor some functions in KSMMetadataManagerImpl to be more readable and reusable

2017-10-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190440#comment-16190440
 ] 

Anu Engineer commented on HDFS-12539:
-

[~yuanbo] / [~cheersyang] The Jenkins run is good; the failures are not related 
to this patch. I will commit this shortly.


> Ozone: refactor some functions in KSMMetadataManagerImpl to be more readable 
> and reusable
> -
>
> Key: HDFS-12539
> URL: https://issues.apache.org/jira/browse/HDFS-12539
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
>Priority: Minor
> Attachments: HDFS-12539-HDFS-7240.001.patch, 
> HDFS-12539-HDFS-7240.002.patch, HDFS-12539-HDFS-7240.003.patch
>
>
> This is from [~anu]'s review comment in HDFS-12506, 
> [https://issues.apache.org/jira/browse/HDFS-12506?focusedCommentId=16178356=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16178356].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12467) Ozone: SCM: NodeManager should log when it comes out of chill mode

2017-10-03 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12467:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: HDFS-7240
Target Version/s: HDFS-7240
  Status: Resolved  (was: Patch Available)

[~vagarychen] Thanks for the review comments. [~nandakumar131] Thank you for the 
contribution. I appreciate how much simpler the code has become. I have 
committed this to the feature branch.

> Ozone: SCM: NodeManager should log when it comes out of chill mode
> --
>
> Key: HDFS-12467
> URL: https://issues.apache.org/jira/browse/HDFS-12467
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-12467-HDFS-7240.000.patch, 
> HDFS-12467-HDFS-7240.001.patch, HDFS-12467-HDFS-7240.002.patch, 
> HDFS-12467-HDFS-7240.003.patch, HDFS-12467-HDFS-7240.004.patch
>
>
> {{NodeManager}} should add a log message when it comes out of chill mode.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12584) [READ] Fix errors in image generation tool from latest rebase

2017-10-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12584:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> [READ] Fix errors in image generation tool from latest rebase
> -
>
> Key: HDFS-12584
> URL: https://issues.apache.org/jira/browse/HDFS-12584
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12584-HDFS-9806.001.patch
>
>
> Fix compile errors, from the latest rebase, in FSImage generation tool



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12584) [READ] Fix errors in image generation tool from latest rebase

2017-10-03 Thread Virajith Jalaparti (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190400#comment-16190400
 ] 

Virajith Jalaparti commented on HDFS-12584:
---

It does work locally. Committed to the branch.

> [READ] Fix errors in image generation tool from latest rebase
> -
>
> Key: HDFS-12584
> URL: https://issues.apache.org/jira/browse/HDFS-12584
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12584-HDFS-9806.001.patch
>
>
> Fix compile errors, from the latest rebase, in FSImage generation tool



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12273) Federation UI

2017-10-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190396#comment-16190396
 ] 

Íñigo Goiri commented on HDFS-12273:


The {{javac}} errors are consistent but don't seem related to the patch.
I cannot reproduce them on my local machine either.
I would go ahead with 009.

A couple of things to check:
* I added the fix for {{TestNamenodeHeartbeat}} in this patch.
* I modified the RPC part of the Router to support timeouts.

[~chris.douglas], do you mind taking a look at these two things and verifying 
that pushing this with the {{javac}} errors is OK?

> Federation UI
> -
>
> Key: HDFS-12273
> URL: https://issues.apache.org/jira/browse/HDFS-12273
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: HDFS-10467
>
> Attachments: federationUI-1.png, federationUI-2.png, 
> federationUI-3.png, HDFS-12273-HDFS-10467-000.patch, 
> HDFS-12273-HDFS-10467-001.patch, HDFS-12273-HDFS-10467-002.patch, 
> HDFS-12273-HDFS-10467-003.patch, HDFS-12273-HDFS-10467-004.patch, 
> HDFS-12273-HDFS-10467-005.patch, HDFS-12273-HDFS-10467-006.patch, 
> HDFS-12273-HDFS-10467-007.patch, HDFS-12273-HDFS-10467-008.patch, 
> HDFS-12273-HDFS-10467-009.patch
>
>
> Add the Web UI to the Router to expose the status of the federated cluster. 
> It includes the federation metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12467) Ozone: SCM: NodeManager should log when it comes out of chill mode

2017-10-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190392#comment-16190392
 ] 

Anu Engineer commented on HDFS-12467:
-

+1, I will commit this shortly. Thanks for the reviews and comments 
[~vagarychen]

> Ozone: SCM: NodeManager should log when it comes out of chill mode
> --
>
> Key: HDFS-12467
> URL: https://issues.apache.org/jira/browse/HDFS-12467
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>Priority: Minor
> Attachments: HDFS-12467-HDFS-7240.000.patch, 
> HDFS-12467-HDFS-7240.001.patch, HDFS-12467-HDFS-7240.002.patch, 
> HDFS-12467-HDFS-7240.003.patch, HDFS-12467-HDFS-7240.004.patch
>
>
> {{NodeManager}} should add a log message when it comes out of chill mode.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12584) [READ] Fix errors in image generation tool from latest rebase

2017-10-03 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190389#comment-16190389
 ] 

Chris Douglas commented on HDFS-12584:
--

If this works locally, we can push it to the branch. It looks like it's trying 
to recompile only the submodule.

> [READ] Fix errors in image generation tool from latest rebase
> -
>
> Key: HDFS-12584
> URL: https://issues.apache.org/jira/browse/HDFS-12584
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12584-HDFS-9806.001.patch
>
>
> Fix compile errors, from the latest rebase, in FSImage generation tool



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12471) Ozone: Reduce some KSM/SCM deletion log messages from INFO to DEBUG

2017-10-03 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12471:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

[~xyao] Thanks for filing the issue. [~linyiqun] Thanks for the comments. 
[~cheersyang] Thanks for the contribution. I have committed this to the feature 
branch.

> Ozone: Reduce some KSM/SCM deletion log messages from INFO to DEBUG
> ---
>
> Key: HDFS-12471
> URL: https://issues.apache.org/jira/browse/HDFS-12471
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Weiwei Yang
>Priority: Trivial
> Fix For: HDFS-7240
>
> Attachments: HDFS-12471-HDFS-7240.001.patch
>
>
> Looks like we are logging a few no-op messages every minute in KSM/SCM log. 
> Should we reduce the log level to DEBUG or TRACE? cc: [~anu],[~cheersyang], 
> [~yuanbo].
> {code}
> 2017-09-14 23:42:15,022 [SCMBlockDeletingService#0] INFO  
> (SCMBlockDeletingService.java:103) - Running 
> DeletedBlockTransactionScanner
> 2017-09-14 23:42:15,024 [SCMBlockDeletingService#0] INFO  
> (SCMBlockDeletingService.java:136) - Scanned deleted blocks log and got 0 
> delTX to process
> 2017-09-14 23:42:24,139 [KeyDeletingService#1] INFO  
> (KeyDeletingService.java:123) - No pending deletion key found in KSM
> 2017-09-14 23:43:09,377 [BlockDeletingService#2] INFO  
> (BlockDeletingService.java:109) - Plan to choose 10 containers for block 
> deletion, actually returns 0 valid containers.
> 2017-09-14 23:43:15,027 [SCMBlockDeletingService#0] INFO  
> (SCMBlockDeletingService.java:103) - Running 
> DeletedBlockTransactionScanner
> 2017-09-14 23:43:15,027 [SCMBlockDeletingService#0] INFO  
> (SCMBlockDeletingService.java:136) - Scanned deleted blocks log and got 0 
> delTX to process
> 2017-09-14 23:43:24,146 [KeyDeletingService#1] INFO  
> (KeyDeletingService.java:123) - No pending deletion key found in KSM
> {code}
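
For illustration, a minimal sketch of the kind of change discussed (hypothetical, slf4j-style logging; {{numTransactions}} is a placeholder, and the message text is taken from the log above):
{code}
// Demote the per-iteration scan message from INFO to DEBUG so an idle
// cluster does not fill the KSM/SCM logs every minute.
if (LOG.isDebugEnabled()) {
  LOG.debug("Scanned deleted blocks log and got {} delTX to process",
      numTransactions);
}
{code}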



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12471) Ozone: Reduce some KSM/SCM deletion log messages from INFO to DEBUG

2017-10-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190352#comment-16190352
 ] 

Anu Engineer commented on HDFS-12471:
-

+1, I will commit this soon. Thanks for fixing this.

> Ozone: Reduce some KSM/SCM deletion log messages from INFO to DEBUG
> ---
>
> Key: HDFS-12471
> URL: https://issues.apache.org/jira/browse/HDFS-12471
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Weiwei Yang
>Priority: Trivial
> Attachments: HDFS-12471-HDFS-7240.001.patch
>
>
> Looks like we are logging a few no-op messages every minute in KSM/SCM log. 
> Should we reduce the log level to DEBUG or TRACE? cc: [~anu],[~cheersyang], 
> [~yuanbo].
> {code}
> 2017-09-14 23:42:15,022 [SCMBlockDeletingService#0] INFO  
> (SCMBlockDeletingService.java:103) - Running 
> DeletedBlockTransactionScanner
> 2017-09-14 23:42:15,024 [SCMBlockDeletingService#0] INFO  
> (SCMBlockDeletingService.java:136) - Scanned deleted blocks log and got 0 
> delTX to process
> 2017-09-14 23:42:24,139 [KeyDeletingService#1] INFO  
> (KeyDeletingService.java:123) - No pending deletion key found in KSM
> 2017-09-14 23:43:09,377 [BlockDeletingService#2] INFO  
> (BlockDeletingService.java:109) - Plan to choose 10 containers for block 
> deletion, actually returns 0 valid containers.
> 2017-09-14 23:43:15,027 [SCMBlockDeletingService#0] INFO  
> (SCMBlockDeletingService.java:103) - Running 
> DeletedBlockTransactionScanner
> 2017-09-14 23:43:15,027 [SCMBlockDeletingService#0] INFO  
> (SCMBlockDeletingService.java:136) - Scanned deleted blocks log and got 0 
> delTX to process
> 2017-09-14 23:43:24,146 [KeyDeletingService#1] INFO  
> (KeyDeletingService.java:123) - No pending deletion key found in KSM
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12577) Rename Router tooling

2017-10-03 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-12577:
--

Assignee: Íñigo Goiri

> Rename Router tooling
> -
>
> Key: HDFS-12577
> URL: https://issues.apache.org/jira/browse/HDFS-12577
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>  Labels: RBF
> Fix For: HDFS-10467
>
>
> Currently the naming for Router Based Federation has a couple conflicts:
> * Both YARN and HDFS have a Router component which may cause issues for the 
> PID file and JPS.
> * The tool to manage the mount table is called using {{hdfs federation}}. 
> This may cause confusion with the regular HDFS federation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12537) Ozone: Reduce key creation overhead in Corona

2017-10-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190348#comment-16190348
 ] 

Anu Engineer edited comment on HDFS-12537 at 10/3/17 9:03 PM:
--

[~ljain], thank you for updating the patch. I am +1 on this change. There is a 
checkstyle warning that might require some code refactoring to fix, so I am 
going to wait until it is fixed.

./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/tools/Corona.java:485:
private CoronaJobInfo(String execTime, String 
averageVolumeCreationTime,:13: More than 7 parameters (found 10). 
\[ParameterNumber\]


was (Author: anu):
@Lokesh, thank you for updating the patch. I am +1 on this change. There is a 
checkstyle warning that might require some code refactoring to fix, so I am 
going to wait until it is fixed.

./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/tools/Corona.java:485:
private CoronaJobInfo(String execTime, String 
averageVolumeCreationTime,:13: More than 7 parameters (found 10). 
\[ParameterNumber\]

> Ozone: Reduce key creation overhead in Corona
> -
>
> Key: HDFS-12537
> URL: https://issues.apache.org/jira/browse/HDFS-12537
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
> Attachments: HDFS-12537-HDFS-7240.001.patch, 
> HDFS-12537-HDFS-7240.002.patch, HDFS-12537-HDFS-7240.003.patch
>
>
> Currently Corona creates random key values for each key. This creates a lot 
> of overhead. An option should be provided to use a single key value.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12537) Ozone: Reduce key creation overhead in Corona

2017-10-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190348#comment-16190348
 ] 

Anu Engineer commented on HDFS-12537:
-

@Lokesh, thank you for updating the patch. I am +1 on this change. There is a 
checkstyle warning that might require some code refactoring to fix, so I am 
going to wait until it is fixed.

./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/ozone/tools/Corona.java:485:
private CoronaJobInfo(String execTime, String 
averageVolumeCreationTime,:13: More than 7 parameters (found 10). 
\[ParameterNumber\]
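
For context, a common way to clear a ParameterNumber warning like this is to 
replace the long constructor with a builder. A minimal sketch, assuming only the 
two parameter names visible in the warning above (the rest is illustrative, not 
the actual Corona code):

{code:title=CoronaJobInfo builder sketch (illustrative)|borderStyle=solid}
public final class CoronaJobInfo {
  private final String execTime;
  private final String averageVolumeCreationTime;
  // ... remaining fields omitted for brevity

  private CoronaJobInfo(Builder b) {
    this.execTime = b.execTime;
    this.averageVolumeCreationTime = b.averageVolumeCreationTime;
  }

  public static final class Builder {
    private String execTime;
    private String averageVolumeCreationTime;

    public Builder setExecTime(String execTime) {
      this.execTime = execTime;
      return this;
    }

    public Builder setAverageVolumeCreationTime(String time) {
      this.averageVolumeCreationTime = time;
      return this;
    }

    public CoronaJobInfo build() {
      return new CoronaJobInfo(this);
    }
  }
}
{code}

Call sites then read {{new CoronaJobInfo.Builder().setExecTime(...).build()}}, 
which keeps each method under the 7-parameter limit.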

> Ozone: Reduce key creation overhead in Corona
> -
>
> Key: HDFS-12537
> URL: https://issues.apache.org/jira/browse/HDFS-12537
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
> Attachments: HDFS-12537-HDFS-7240.001.patch, 
> HDFS-12537-HDFS-7240.002.patch, HDFS-12537-HDFS-7240.003.patch
>
>
> Currently Corona creates random key values for each key. This creates a lot 
> of overhead. An option should be provided to use a single key value.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12577) Rename Router tooling

2017-10-03 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HDFS-12577:
-

Assignee: (was: Ajay Kumar)

> Rename Router tooling
> -
>
> Key: HDFS-12577
> URL: https://issues.apache.org/jira/browse/HDFS-12577
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>  Labels: RBF
> Fix For: HDFS-10467
>
>
> Currently the naming for Router Based Federation has a couple conflicts:
> * Both YARN and HDFS have a Router component which may cause issues for the 
> PID file and JPS.
> * The tool to manage the mount table is called using {{hdfs federation}}. 
> This may cause confusion with the regular HDFS federation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12583) Ozone: Fix swallow exceptions which makes hard to debug failures

2017-10-03 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12583:

Labels:   (was: ozoneMerge)

> Ozone: Fix swallow exceptions which makes hard to debug failures
> 
>
> Key: HDFS-12583
> URL: https://issues.apache.org/jira/browse/HDFS-12583
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12583-HDFS-7240.001.patch, 
> HDFS-12583-HDFS-7240.002.patch
>
>
> There are some places that swallow exceptions, which makes it hard for the 
> client to debug failures. For example, if getting an xceiver client from the 
> xceiver client manager fails, the client only gets error info like this:
> {noformat}
> org.apache.hadoop.ozone.web.exceptions.OzoneException: Exception getting 
> XceiverClient.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
>   at 
> com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:243)
> {noformat}
> The exception stack trace is missing. We should log it as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags

2017-10-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190336#comment-16190336
 ] 

Anu Engineer edited comment on HDFS-12513 at 10/3/17 8:54 PM:
--

[~ajayydv] Thanks for the contribution. Can you please rebase this patch? It 
fails to apply on top of the tree. While you are at it, you might want to fix 
the CheckStyle issues too.

Also, in OzoneConfiguration.java it seems the package path does not match the 
physical path.





was (Author: anu):
[~ajayydv] Thanks for the contribution. Can you please rebase this patch? It 
fails to apply on top of the tree. While you are at it, you might want to fix 
the CheckStyle issues too.



> Ozone: Create UI page to show Ozone configs by tags
> ---
>
> Key: HDFS-12513
> URL: https://issues.apache.org/jira/browse/HDFS-12513
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12513-HDFS-7240.001.patch, OzoneSettings.png
>
>
> Create UI page to show Ozone configs by tags



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12513) Ozone: Create UI page to show Ozone configs by tags

2017-10-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190336#comment-16190336
 ] 

Anu Engineer commented on HDFS-12513:
-

[~ajayydv] Thanks for the contribution. Can you please rebase this patch? It 
fails to apply on top of the tree. While you are at it, you might want to fix 
the CheckStyle issues too.



> Ozone: Create UI page to show Ozone configs by tags
> ---
>
> Key: HDFS-12513
> URL: https://issues.apache.org/jira/browse/HDFS-12513
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
> Attachments: HDFS-12513-HDFS-7240.001.patch, OzoneSettings.png
>
>
> Create UI page to show Ozone configs by tags



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12273) Federation UI

2017-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190335#comment-16190335
 ] 

Hadoop QA commented on HDFS-12273:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10467 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 3s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-10467 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 50s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 5 new + 389 unchanged - 
5 fixed = 394 total (was 394) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}113m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
|   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12273 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890197/HDFS-12273-HDFS-10467-009.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux c61f2facec0c 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / 39305af |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21505/artifact/patchprocess/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Commented] (HDFS-12455) WebHDFS - Adding "snapshot enabled" status to ListStatus query result.

2017-10-03 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190329#comment-16190329
 ] 

Ajay Kumar commented on HDFS-12455:
---

[~xyao],[~anu] thanks for review. [~xyao] for committing this.

> WebHDFS - Adding "snapshot enabled" status to ListStatus query result.
> --
>
> Key: HDFS-12455
> URL: https://issues.apache.org/jira/browse/HDFS-12455
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots, webhdfs
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: 3.1.0
>
> Attachments: HDFS-12455.01.patch, HDFS-12455.02.patch, 
> HDFS-12455.03.patch, HDFS-12455.04.patch, HDFS-12455.05.patch
>
>
> WebHDFS - ListStatus query does not provide any information about a folder's 
> "snapshot enabled" status. Since "ListStatus" lists other attributes it will 
> be good to include this attribute as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12455) WebHDFS - Adding "snapshot enabled" status to ListStatus query result.

2017-10-03 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190329#comment-16190329
 ] 

Ajay Kumar edited comment on HDFS-12455 at 10/3/17 8:49 PM:


[~xyao],[~anu] thanks for review. [~xyao], thanks for committing this.


was (Author: ajayydv):
[~xyao],[~anu] thanks for review. [~xyao] for committing this.

> WebHDFS - Adding "snapshot enabled" status to ListStatus query result.
> --
>
> Key: HDFS-12455
> URL: https://issues.apache.org/jira/browse/HDFS-12455
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots, webhdfs
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: 3.1.0
>
> Attachments: HDFS-12455.01.patch, HDFS-12455.02.patch, 
> HDFS-12455.03.patch, HDFS-12455.04.patch, HDFS-12455.05.patch
>
>
> WebHDFS - ListStatus query does not provide any information about a folder's 
> "snapshot enabled" status. Since "ListStatus" lists other attributes it will 
> be good to include this attribute as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12544) SnapshotDiff - support diff generation on any snapshot root descendant directory

2017-10-03 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-12544:
-
Description: 
{noformat}
# hdfs snapshotDiff   
{noformat}

Using snapshot diff command, we can generate a diff report between any two 
given snapshots under a snapshot root directory. The command today only accepts 
the path that is a snapshot root. There are many deployments where the snapshot 
root is configured at the higher level directory but the diff report needed is 
only for a specific directory under the snapshot root. In these cases, the diff 
report can be filtered for changes pertaining to the directory we are 
interested in. But when the snapshot root directory is very large, snapshot 
diff report generation can take minutes even if we are only interested in the 
changes in a small directory. So, it would greatly improve performance if the 
diff report calculation could be limited to only the interesting sub-directory 
of the snapshot root instead of the whole snapshot root.

  was:
{noformat}
# hdfs snapshotDiff   
{noformat}

Using snapshot diff command, we can generate a diff report between any two 
given snapshots under a snapshot root directory. The command today only accepts 
the path that is a snapshot root. There are many deployments where the snapshot 
root is configured at the higher level directory but the diff report needed is 
only for a specific directory under the snapshot root. In these cases, the diff 
report can be filtered for changes pertaining to the directory we are 
interested in. But when the snapshot root directory is very huge, the snapshot 
diff report generation can take minutes even if we are interested to know the 
changes only in a small directory. So, it would be highly performant if the 
diff report calculation can be limited to the snapshot directory only instead 
of the whole snapshot root.


> SnapshotDiff - support diff generation on any snapshot root descendant 
> directory
> 
>
> Key: HDFS-12544
> URL: https://issues.apache.org/jira/browse/HDFS-12544
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.0.0-beta1
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-12544.01.patch, HDFS-12544.02.patch
>
>
> {noformat}
> # hdfs snapshotDiff   
> 
> {noformat}
> Using snapshot diff command, we can generate a diff report between any two 
> given snapshots under a snapshot root directory. The command today only 
> accepts the path that is a snapshot root. There are many deployments where 
> the snapshot root is configured at the higher level directory but the diff 
> report needed is only for a specific directory under the snapshot root. In 
> these cases, the diff report can be filtered for changes pertaining to the 
> directory we are interested in. But when the snapshot root directory is very 
> large, snapshot diff report generation can take minutes even if we are only 
> interested in the changes in a small directory. So, it would greatly improve 
> performance if the diff report calculation could be limited to only the 
> interesting sub-directory of the snapshot root instead of the whole snapshot 
> root.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12455) WebHDFS - Adding "snapshot enabled" status to ListStatus query result.

2017-10-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190288#comment-16190288
 ] 

Hudson commented on HDFS-12455:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13015 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13015/])
HDFS-12455. WebHDFS - Adding "snapshot enabled" status to ListStatus (xyao: rev 
107c177782a24a16c66113841f2fc5144f56207b)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/protocolPB/PBHelper.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/WebHDFS.md
* (edit) hadoop-common-project/hadoop-common/src/main/proto/FSProtos.proto
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolServerSideTranslatorPB.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/hdfs.proto
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java


> WebHDFS - Adding "snapshot enabled" status to ListStatus query result.
> --
>
> Key: HDFS-12455
> URL: https://issues.apache.org/jira/browse/HDFS-12455
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots, webhdfs
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: 3.1.0
>
> Attachments: HDFS-12455.01.patch, HDFS-12455.02.patch, 
> HDFS-12455.03.patch, HDFS-12455.04.patch, HDFS-12455.05.patch
>
>
> WebHDFS - ListStatus query does not provide any information about a folder's 
> "snapshot enabled" status. Since "ListStatus" lists other attributes it will 
> be good to include this attribute as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12110) libhdfs++: Rebase 8707 branch onto an up to date version of trunk

2017-10-03 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190271#comment-16190271
 ] 

James Clampffer commented on HDFS-12110:


Ah, my bad.  Thanks for posting the minimal patch for comparison; I'll take a 
look at that.

There's a chance I broke something in my environment when updating maven and 
cmake that caused the ctest issue.

> libhdfs++: Rebase 8707 branch onto an up to date version of trunk
> -
>
> Key: HDFS-12110
> URL: https://issues.apache.org/jira/browse/HDFS-12110
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Deepak Majeti
> Attachments: HDFS-12110.diff, HDFS-12110.HDFS-8707.000.patch
>
>
> It's been way too long since this has been done and it's time to start 
> knocking down blockers for merging into trunk.  Can most likely just 
> copy/paste the libhdfs++ directory into a newer version of master.  Want to 
> track it in a jira since it's likely to cause conflicts when pulling the 
> updated branch for the first time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12455) WebHDFS - Adding "snapshot enabled" status to ListStatus query result.

2017-10-03 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12455:
--
Component/s: snapshots

> WebHDFS - Adding "snapshot enabled" status to ListStatus query result.
> --
>
> Key: HDFS-12455
> URL: https://issues.apache.org/jira/browse/HDFS-12455
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots, webhdfs
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: 3.1.0
>
> Attachments: HDFS-12455.01.patch, HDFS-12455.02.patch, 
> HDFS-12455.03.patch, HDFS-12455.04.patch, HDFS-12455.05.patch
>
>
> WebHDFS - ListStatus query does not provide any information about a folder's 
> "snapshot enabled" status. Since "ListStatus" lists other attributes it will 
> be good to include this attribute as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12455) WebHDFS - Adding "snapshot enabled" status to ListStatus query result.

2017-10-03 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12455:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

Thanks [~ajayydv] for the contribution and all for the reviews. I've committed 
the patch to trunk.

> WebHDFS - Adding "snapshot enabled" status to ListStatus query result.
> --
>
> Key: HDFS-12455
> URL: https://issues.apache.org/jira/browse/HDFS-12455
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: 3.1.0
>
> Attachments: HDFS-12455.01.patch, HDFS-12455.02.patch, 
> HDFS-12455.03.patch, HDFS-12455.04.patch, HDFS-12455.05.patch
>
>
> WebHDFS - ListStatus query does not provide any information about a folder's 
> "snapshot enabled" status. Since "ListStatus" lists other attributes it will 
> be good to include this attribute as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12455) WebHDFS - Adding "snapshot enabled" status to ListStatus query result.

2017-10-03 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12455:
--
Component/s: webhdfs

> WebHDFS - Adding "snapshot enabled" status to ListStatus query result.
> --
>
> Key: HDFS-12455
> URL: https://issues.apache.org/jira/browse/HDFS-12455
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: 3.1.0
>
> Attachments: HDFS-12455.01.patch, HDFS-12455.02.patch, 
> HDFS-12455.03.patch, HDFS-12455.04.patch, HDFS-12455.05.patch
>
>
> WebHDFS - ListStatus query does not provide any information about a folder's 
> "snapshot enabled" status. Since "ListStatus" lists other attributes it will 
> be good to include this attribute as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12586) EZ createZone returns IllegalArgumentException when using protocol in path

2017-10-03 Thread Rushabh S Shah (JIRA)
Rushabh S Shah created HDFS-12586:
-

 Summary: EZ createZone returns IllegalArgumentException when using 
protocol in path
 Key: HDFS-12586
 URL: https://issues.apache.org/jira/browse/HDFS-12586
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rushabh S Shah
Assignee: Rushabh S Shah


When trying to create an EZ and including the scheme (hdfs://) as part of the 
path, -createZone reports an IllegalArgumentException.
IllegalArgumentException: hdfs:///tmp/fooez1 is not the root 
of an encryption zone. Do you mean /tmp/fooez1?

Here's the sequence:
1. Make the directory:
bash-4.1$ hadoop fs -mkdir /tmp/fooez1
2. Try to create the EZ using the hdfs:// scheme and get the error:
hdfs crypto -createZone -keyName key1 -path 
hdfs:///tmp/fooez1/
IllegalArgumentException: hdfs:///tmp/fooez1 is not the root 
of an encryption zone. Do you mean /tmp/fooez1?

It fails while provisioning trash for the EZ root directory.
The relevant chunk of code:
{code:title=HdfsAdmin.java|borderStyle=solid}
private void provisionEZTrash(Path path) throws IOException {
   ...
   ...
String ezPath = ez.getPath();
if (!path.toString().equals(ezPath)) {
  throw new IllegalArgumentException(path + " is not the root of an " +
  "encryption zone. Do you mean " + ez.getPath() + "?");
}
{code}
It compares the {{supplied path}} with the path component of 
{{EncryptionZone#path}}, which doesn't contain the scheme and authority.
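
One possible direction (a sketch only, not necessarily what an eventual patch 
will do) is to normalize the supplied path before the comparison, so that a 
scheme/authority prefix is ignored. {{Path#getPathWithoutSchemeAndAuthority}} 
already does that normalization:

{code:title=PathComparisonSketch.java (illustrative, not the actual patch)|borderStyle=solid}
import org.apache.hadoop.fs.Path;

public class PathComparisonSketch {
  // Returns true when the supplied path names the same directory as the zone
  // path, ignoring any scheme/authority prefix such as hdfs://nn1:8020.
  static boolean isZoneRoot(Path suppliedPath, String ezPath) {
    String normalized =
        Path.getPathWithoutSchemeAndAuthority(suppliedPath).toUri().getPath();
    return normalized.equals(ezPath);
  }

  public static void main(String[] args) {
    System.out.println(isZoneRoot(new Path("hdfs:///tmp/fooez1"), "/tmp/fooez1")); // true
    System.out.println(isZoneRoot(new Path("/tmp/fooez1"), "/tmp/fooez1"));        // true
  }
}
{code}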



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12455) WebHDFS - Adding "snapshot enabled" status to ListStatus query result.

2017-10-03 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12455:
--
Issue Type: Improvement  (was: Bug)

> WebHDFS - Adding "snapshot enabled" status to ListStatus query result.
> --
>
> Key: HDFS-12455
> URL: https://issues.apache.org/jira/browse/HDFS-12455
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12455.01.patch, HDFS-12455.02.patch, 
> HDFS-12455.03.patch, HDFS-12455.04.patch, HDFS-12455.05.patch
>
>
> WebHDFS - ListStatus query does not provide any information about a folder's 
> "snapshot enabled" status. Since "ListStatus" lists other attributes it will 
> be good to include this attribute as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12455) WebHDFS - Adding "snapshot enabled" status to ListStatus query result.

2017-10-03 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12455:
--
Summary: WebHDFS - Adding "snapshot enabled" status to ListStatus query 
result.  (was: WebHDFS - ListStatus query does not provide any information 
about a folder's "snapshot enabled" status)

> WebHDFS - Adding "snapshot enabled" status to ListStatus query result.
> --
>
> Key: HDFS-12455
> URL: https://issues.apache.org/jira/browse/HDFS-12455
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12455.01.patch, HDFS-12455.02.patch, 
> HDFS-12455.03.patch, HDFS-12455.04.patch, HDFS-12455.05.patch
>
>
> WebHDFS - ListStatus query does not provide any information about a folder's 
> "snapshot enabled" status. Since "ListStatus" lists other attributes it will 
> be good to include this attribute as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12584) [READ] Fix errors in image generation tool from latest rebase

2017-10-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190190#comment-16190190
 ] 

Hadoop QA commented on HDFS-12584:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-9806 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} root in HDFS-9806 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m  
5s{color} | {color:red} hadoop-fs2img in HDFS-9806 failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 5s{color} | {color:green} HDFS-9806 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m  
5s{color} | {color:red} hadoop-fs2img in HDFS-9806 failed. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  0m 
18s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m  
5s{color} | {color:red} hadoop-fs2img in HDFS-9806 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
4s{color} | {color:red} hadoop-fs2img in HDFS-9806 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  2m 
41s{color} | {color:red} hadoop-fs2img in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
14s{color} | {color:red} hadoop-fs2img in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 14s{color} 
| {color:red} hadoop-fs2img in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-tools/hadoop-fs2img: The patch generated 
1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-fs2img in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-fs2img in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-tools_hadoop-fs2img generated 30 new + 0 
unchanged - 0 fixed = 30 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 14s{color} 
| {color:red} hadoop-fs2img in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12584 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12890206/HDFS-12584-HDFS-9806.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux ac324940f5f0 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-9806 / 0579689 |
| Default Java | 1.8.0_144 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21507/artifact/patchprocess/branch-mvninstall-root.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21507/artifact/patchprocess/branch-compile-hadoop-tools_hadoop-fs2img.txt
 |
| mvnsite | 

[jira] [Assigned] (HDFS-12519) Ozone: Add a Lease Manager to SCM

2017-10-03 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar reassigned HDFS-12519:
-

Assignee: Nandakumar  (was: Anu Engineer)

> Ozone: Add a Lease Manager to SCM
> -
>
> Key: HDFS-12519
> URL: https://issues.apache.org/jira/browse/HDFS-12519
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Anu Engineer
>Assignee: Nandakumar
>  Labels: OzonePostMerge
>
> Many objects, including containers and pipelines, can time out during the 
> creation process. We need a way to track these timeouts. This lease manager 
> allows SCM to hold a lease on these objects and helps SCM time out while 
> waiting for their creation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12543) Ozone : allow create key without specifying size

2017-10-03 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12543:
--
Attachment: HDFS-12543-HDFS-7240.009.patch

Rebased with the v009 patch.

> Ozone : allow create key without specifying size
> 
>
> Key: HDFS-12543
> URL: https://issues.apache.org/jira/browse/HDFS-12543
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>  Labels: ozoneMerge
> Attachments: HDFS-12543-HDFS-7240.001.patch, 
> HDFS-12543-HDFS-7240.002.patch, HDFS-12543-HDFS-7240.003.patch, 
> HDFS-12543-HDFS-7240.004.patch, HDFS-12543-HDFS-7240.005.patch, 
> HDFS-12543-HDFS-7240.006.patch, HDFS-12543-HDFS-7240.007.patch, 
> HDFS-12543-HDFS-7240.008.patch, HDFS-12543-HDFS-7240.009.patch
>
>
> Currently, when creating a key, it is required to specify the total size of 
> the key. This makes it inconvenient for the case where a key is created and 
> data keeps coming and being appended. This JIRA is to remove the requirement 
> of specifying the size on key creation and to allow appending to the key 
> indefinitely.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12585) Add description for each config in Ozone config UI

2017-10-03 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12585:

Issue Type: Sub-task  (was: Improvement)
Parent: HDFS-7240

> Add description for each config in Ozone config UI
> --
>
> Key: HDFS-12585
> URL: https://issues.apache.org/jira/browse/HDFS-12585
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
>
> Add description for each config in Ozone config UI



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12585) Add description for each config in Ozone config UI

2017-10-03 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12585:
--
Issue Type: Improvement  (was: Bug)

> Add description for each config in Ozone config UI
> --
>
> Key: HDFS-12585
> URL: https://issues.apache.org/jira/browse/HDFS-12585
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: HDFS-7240
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Fix For: HDFS-7240
>
>
> Add description for each config in Ozone config UI



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12585) Add description for each config in Ozone config UI

2017-10-03 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDFS-12585:
-

 Summary: Add description for each config in Ozone config UI
 Key: HDFS-12585
 URL: https://issues.apache.org/jira/browse/HDFS-12585
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: HDFS-7240
Reporter: Ajay Kumar
Assignee: Ajay Kumar
 Fix For: HDFS-7240


Add description for each config in Ozone config UI



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12584) [READ] Fix errors in image generation tool from latest rebase

2017-10-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12584:
--
Status: Patch Available  (was: Open)

> [READ] Fix errors in image generation tool from latest rebase
> -
>
> Key: HDFS-12584
> URL: https://issues.apache.org/jira/browse/HDFS-12584
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12584-HDFS-9806.001.patch
>
>
> Fix compile errors, from the latest rebase, in FSImage generation tool



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12584) [READ] Fix errors in image generation tool from latest rebase

2017-10-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-12584:
--
Attachment: HDFS-12584-HDFS-9806.001.patch

The patch fixes (1) the version of the FSImage tool and (2) an unnecessary try..catch clause.

> [READ] Fix errors in image generation tool from latest rebase
> -
>
> Key: HDFS-12584
> URL: https://issues.apache.org/jira/browse/HDFS-12584
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
> Attachments: HDFS-12584-HDFS-9806.001.patch
>
>
> Fix compile errors, from the latest rebase, in FSImage generation tool



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12584) [READ] Fix errors in image generation tool from latest rebase

2017-10-03 Thread Virajith Jalaparti (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti reassigned HDFS-12584:
-

Assignee: Virajith Jalaparti

> [READ] Fix errors in image generation tool from latest rebase
> -
>
> Key: HDFS-12584
> URL: https://issues.apache.org/jira/browse/HDFS-12584
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-12584-HDFS-9806.001.patch
>
>
> Fix compile errors, from the latest rebase, in FSImage generation tool



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12442) WebHdfsFileSystem#getFileBlockLocations will always return BlockLocation#corrupt as false

2017-10-03 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-12442.

Resolution: Invalid

> WebHdfsFileSystem#getFileBlockLocations will always return 
> BlockLocation#corrupt as false
> -
>
> Key: HDFS-12442
> URL: https://issues.apache.org/jira/browse/HDFS-12442
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Critical
> Attachments: HDFS-12442-1.patch
>
>
> Was going through {{JsonUtilClient#toBlockLocation}} code.
> Below is the relevant code snippet.
> {code:title=JsonUtilClient.java|borderStyle=solid}
>  /** Convert a Json map to BlockLocation. **/
>   static BlockLocation toBlockLocation(Map m)
>   throws IOException{
> ...
> ...  
> boolean corrupt = Boolean.
> getBoolean(m.get("corrupt").toString());
> ...
> ...
>   }
> {code}
> According to java docs for {{Boolean#getBoolean}}
> {noformat}
> Returns true if and only if the system property named by the argument exists 
> and is equal to the string "true". 
> {noformat}
> I assume, the map value for key {{corrupt}} will be populated with either 
> {{true}} or {{false}}.
> On the client side, {{Boolean#getBoolean}} will look for system property for 
> true or false.
> So it will always return false unless the system property is set for true or 
> false.
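
For reference, a minimal standalone illustration of the pitfall (plain Java, 
not project code): {{Boolean#getBoolean}} looks up a JVM system property by 
name, while {{Boolean#parseBoolean}} parses the string itself, which is what 
the JSON handling needs.

{code:title=BooleanParsingExample.java (illustrative)|borderStyle=solid}
public class BooleanParsingExample {
  public static void main(String[] args) {
    String corrupt = "true";  // value as it would arrive from the JSON map

    // Treats "true" as the *name* of a system property, so this prints false
    // unless the JVM was started with -Dtrue=true.
    System.out.println(Boolean.getBoolean(corrupt));   // false

    // Parses the string itself, which is the intended behavior here.
    System.out.println(Boolean.parseBoolean(corrupt)); // true
  }
}
{code}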



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12442) WebHdfsFileSystem#getFileBlockLocations will always return BlockLocation#corrupt as false

2017-10-03 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12442:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

I think we can resolve given that we reverted the JIRA that broke this.

> WebHdfsFileSystem#getFileBlockLocations will always return 
> BlockLocation#corrupt as false
> -
>
> Key: HDFS-12442
> URL: https://issues.apache.org/jira/browse/HDFS-12442
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Critical
> Attachments: HDFS-12442-1.patch
>
>
> Was going through {{JsonUtilClient#toBlockLocation}} code.
> Below is the relevant code snippet.
> {code:title=JsonUtilClient.java|borderStyle=solid}
>  /** Convert a Json map to BlockLocation. **/
>   static BlockLocation toBlockLocation(Map m)
>   throws IOException{
> ...
> ...  
> boolean corrupt = Boolean.
> getBoolean(m.get("corrupt").toString());
> ...
> ...
>   }
> {code}
> According to java docs for {{Boolean#getBoolean}}
> {noformat}
> Returns true if and only if the system property named by the argument exists 
> and is equal to the string "true". 
> {noformat}
> I assume, the map value for key {{corrupt}} will be populated with either 
> {{true}} or {{false}}.
> On the client side, {{Boolean#getBoolean}} will look for system property for 
> true or false.
> So it will always return false unless the system property is set for true or 
> false.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-12442) WebHdfsFileSystem#getFileBlockLocations will always return BlockLocation#corrupt as false

2017-10-03 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reopened HDFS-12442:


> WebHdfsFileSystem#getFileBlockLocations will always return 
> BlockLocation#corrupt as false
> -
>
> Key: HDFS-12442
> URL: https://issues.apache.org/jira/browse/HDFS-12442
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Critical
> Attachments: HDFS-12442-1.patch
>
>
> Was going through {{JsonUtilClient#toBlockLocation}} code.
> Below is the relevant code snippet.
> {code:title=JsonUtilClient.java|borderStyle=solid}
>  /** Convert a Json map to BlockLocation. **/
>   static BlockLocation toBlockLocation(Map m)
>   throws IOException{
> ...
> ...  
> boolean corrupt = Boolean.
> getBoolean(m.get("corrupt").toString());
> ...
> ...
>   }
> {code}
> According to java docs for {{Boolean#getBoolean}}
> {noformat}
> Returns true if and only if the system property named by the argument exists 
> and is equal to the string "true". 
> {noformat}
> I assume, the map value for key {{corrupt}} will be populated with either 
> {{true}} or {{false}}.
> On the client side, {{Boolean#getBoolean}} will look for system property for 
> true or false.
> So it will always return false unless the system property is set for true or 
> false.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12568) Ozone: Cleanup the ozone-default.xml

2017-10-03 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12568:

  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: HDFS-7240
Target Version/s: HDFS-7240
  Status: Resolved  (was: Patch Available)

[~Weiwei Yang], [~msingh], [~ajayydv], [~xyao] Thanks for the reviews. I have 
committed this to the feature branch. I fixed some checkstyle warnings while 
committing, and thanks to [~msingh] for taking care of two missing config 
entries in ozone-default.xml; he will add them via HDFS-12572.

> Ozone: Cleanup the ozone-default.xml
> 
>
> Key: HDFS-12568
> URL: https://issues.apache.org/jira/browse/HDFS-12568
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Blocker
>  Labels: ozoneMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-12568-HDFS-7240.001.patch, 
> HDFS-12568-HDFS-7240.002.patch, HDFS-12568-HDFS-7240.003.patch, 
> HDFS-12568-HDFS-7240.004.patch, HDFS-12568-HDFS-7240.005.patch
>
>
> This JIRA proposes to clean up the ozone-default.xml before the merge.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11968) ViewFS: StoragePolicies commands fail with HDFS federation

2017-10-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190135#comment-16190135
 ] 

Hudson commented on HDFS-11968:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13014 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13014/])
HDFS-11968. ViewFS: StoragePolicies commands fail with HDFS federation. (arp: 
rev b91305119b434d23b99ae7e755aea6639f48b6ab)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestStoragePolicyCommands.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/InodeTree.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/StoragePolicyAdmin.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestWebHDFSStoragePolicyCommands.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestViewFSStoragePolicyCommands.java


> ViewFS: StoragePolicies commands fail with HDFS federation
> --
>
> Key: HDFS-11968
> URL: https://issues.apache.org/jira/browse/HDFS-11968
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: 3.1.0
>
> Attachments: HDFS-11968.001.patch, HDFS-11968.002.patch, 
> HDFS-11968.003.patch, HDFS-11968.004.patch, HDFS-11968.005.patch, 
> HDFS-11968.006.patch, HDFS-11968.007.patch, HDFS-11968.008.patch, 
> HDFS-11968.009.patch, HDFS-11968.010.patch, HDFS-11968.011.patch
>
>
> hdfs storagepolicies command fails with HDFS federation.
> For storage policies commands, a given user path should be resolved to a HDFS 
> path and
> storage policy command should be applied onto the resolved HDFS path.
> {code}
>   static DistributedFileSystem getDFS(Configuration conf)
>   throws IOException {
> FileSystem fs = FileSystem.get(conf);
> if (!(fs instanceof DistributedFileSystem)) {
>   throw new IllegalArgumentException("FileSystem " + fs.getUri() +
>   " is not an HDFS file system");
> }
> return (DistributedFileSystem)fs;
>   }
> {code}
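
One illustrative sketch of the idea (assumed approach and names, not necessarily 
the committed patch): resolve the user-supplied path through the default 
FileSystem first (which follows ViewFS mount points), then require the 
FileSystem that owns the resolved path to be HDFS.

{code:title=ResolveDfsSketch.java (illustrative, not the committed patch)|borderStyle=solid}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class ResolveDfsSketch {
  static DistributedFileSystem getDFS(Configuration conf, Path userPath)
      throws IOException {
    FileSystem defaultFs = FileSystem.get(conf);
    // Resolve symlinks/mount points; for ViewFS this yields the target path
    // on the underlying cluster.
    Path resolved = defaultFs.resolvePath(userPath);
    FileSystem target = resolved.getFileSystem(conf);
    if (!(target instanceof DistributedFileSystem)) {
      throw new IllegalArgumentException("FileSystem " + target.getUri() +
          " is not an HDFS file system");
    }
    return (DistributedFileSystem) target;
  }
}
{code}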



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12583) Ozone: Fix swallow exceptions which makes hard to debug failures

2017-10-03 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190134#comment-16190134
 ] 

Chen Liang commented on HDFS-12583:
---

Thanks [~linyiqun] for reporting this! We do need to improve logging in certain 
places. Could you please elaborate a bit on how you got the error in the 
description? I agree with Weiwei that one key thing is to make sure no exception 
gets logged more than once.

Also some comments:
1. {{BucketProcessTemplate}} logs the original IOException {{fsExp}} at the 
beginning of {{handleIOException}}, but {{handleIOException}} in 
{{VolumeProcessTemplate}} logs the transformed OzoneException at the end.
2. It seems the patch only handles IOException? There are other exceptions, 
such as the IllegalArgumentException in {{VolumeProcessTemplate#handle}}, that 
are still logged at debug level.
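
To illustrate the convention I have in mind, here is a minimal sketch (the class and helper names are illustrative, not the actual Ozone handlers): log the underlying IOException exactly once, with its stack trace, where it is first caught, and let the wrapped exception propagate without being logged again.
{code}
import java.io.IOException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class ExceptionLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(ExceptionLoggingSketch.class);

  // Translate a low-level failure into the exception handed back to the caller.
  RuntimeException translate(IOException fsExp) {
    // Single log site: keeps the full stack in the server log.
    LOG.error("Storage request failed", fsExp);
    // Callers throw the wrapped exception; they should not log it again.
    return new RuntimeException("Exception while handling request", fsExp);
  }
}
{code}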

> Ozone: Fix swallow exceptions which makes hard to debug failures
> 
>
> Key: HDFS-12583
> URL: https://issues.apache.org/jira/browse/HDFS-12583
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>  Labels: ozoneMerge
> Attachments: HDFS-12583-HDFS-7240.001.patch, 
> HDFS-12583-HDFS-7240.002.patch
>
>
> There are some places that swallow exceptions, which makes it hard for the client to debug 
> the failure. For example, if getting an xceiver client from the xceiver client 
> manager fails, the client only gets error info like this:
> {noformat}
> org.apache.hadoop.ozone.web.exceptions.OzoneException: Exception getting 
> XceiverClient.
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:119)
>   at 
> com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createUsingDefault(StdValueInstantiator.java:243)
> {noformat}
> The underlying exception stack is missing. We should log the error with its stack trace as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12584) [READ] Fix errors in image generation tool from latest rebase

2017-10-03 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-12584:
-

 Summary: [READ] Fix errors in image generation tool from latest 
rebase
 Key: HDFS-12584
 URL: https://issues.apache.org/jira/browse/HDFS-12584
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Virajith Jalaparti


Fix the compile errors in the FSImage generation tool introduced by the latest rebase.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11968) ViewFS: StoragePolicies commands fail with HDFS federation

2017-10-03 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-11968:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

I've committed this. Thanks for the contribution, [~msingh], and thanks for the 
reviews and ideas, Surendra and Manoj.

> ViewFS: StoragePolicies commands fail with HDFS federation
> --
>
> Key: HDFS-11968
> URL: https://issues.apache.org/jira/browse/HDFS-11968
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: 3.1.0
>
> Attachments: HDFS-11968.001.patch, HDFS-11968.002.patch, 
> HDFS-11968.003.patch, HDFS-11968.004.patch, HDFS-11968.005.patch, 
> HDFS-11968.006.patch, HDFS-11968.007.patch, HDFS-11968.008.patch, 
> HDFS-11968.009.patch, HDFS-11968.010.patch, HDFS-11968.011.patch
>
>
> The hdfs storagepolicies command fails with HDFS federation.
> For storage policy commands, a given user path should be resolved to an HDFS path, 
> and the storage policy command should be applied to the resolved HDFS path.
> {code}
>   static DistributedFileSystem getDFS(Configuration conf)
>   throws IOException {
> FileSystem fs = FileSystem.get(conf);
> if (!(fs instanceof DistributedFileSystem)) {
>   throw new IllegalArgumentException("FileSystem " + fs.getUri() +
>   " is not an HDFS file system");
> }
> return (DistributedFileSystem)fs;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-8344) NameNode doesn't recover lease for files with missing blocks

2017-10-03 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HDFS-8344:
---
Target Version/s: 2.10.0  (was: 2.9.0)

> NameNode doesn't recover lease for files with missing blocks
> 
>
> Key: HDFS-8344
> URL: https://issues.apache.org/jira/browse/HDFS-8344
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.0
>Reporter: Ravi Prakash
>Assignee: Ravi Prakash
> Attachments: HDFS-8344.01.patch, HDFS-8344.02.patch, 
> HDFS-8344.03.patch, HDFS-8344.04.patch, HDFS-8344.05.patch, 
> HDFS-8344.06.patch, HDFS-8344.07.patch, HDFS-8344.08.patch, 
> HDFS-8344.09.patch, HDFS-8344.10.patch, TestHadoop.java
>
>
> I found another (?) instance in which the lease is not recovered. This is 
> easily reproducible on a pseudo-distributed single-node cluster.
> # Before you start, it helps if you set the following. This is not necessary, but it simply 
> reduces how long you have to wait:
> {code}
>   public static final long LEASE_SOFTLIMIT_PERIOD = 30 * 1000;
>   public static final long LEASE_HARDLIMIT_PERIOD = 2 * 
> LEASE_SOFTLIMIT_PERIOD;
> {code}
> # The client starts to write a file. (It could be less than 1 block, but it has hflushed, 
> so some of the data has landed on the datanodes.) (I'm copying the client code 
> I am using; I generate a jar and run it using $ hadoop jar TestHadoop.jar. A rough 
> sketch of such a client appears after this description.)
> # The client crashes. (I simulate this by kill -9 on the $(hadoop jar 
> TestHadoop.jar) process after it has printed "Wrote to the bufferedWriter".)
> # Shoot the datanode. (Since I ran on a pseudo-distributed cluster, there was 
> only 1.)
> I believe the lease should be recovered and the block should be marked 
> missing. However, this is not happening; the lease is never recovered.
> The effect of this bug for us was that nodes could not be decommissioned 
> cleanly. Although we knew that the client had crashed, the Namenode never 
> released the leases (even after restarting the Namenode, even months 
> afterwards). There are actually several other cases where we don't 
> consider what happens if ALL the datanodes die while the file is being 
> written, but I am going to punt on that for another time.
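
For reference, a minimal sketch of a client along the lines described in step 2 (this is an assumption about what TestHadoop.java looks like; the actual attachment may differ): it creates a file, hflushes so some data reaches the datanodes, prints the marker line, and then blocks so the process can be killed with kill -9.
{code}
import java.io.BufferedWriter;
import java.io.OutputStreamWriter;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TestHadoop {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FSDataOutputStream out = fs.create(new Path("/tmp/lease-recovery-test"));
    BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(out));
    writer.write("some data that should land on the datanodes");
    writer.flush();                 // push buffered characters into the stream
    out.hflush();                   // make the data visible on the datanode side
    System.out.println("Wrote to the bufferedWriter");
    Thread.sleep(Long.MAX_VALUE);   // park here; kill -9 this process to simulate the crash
  }
}
{code}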



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12573) Divide the total block metrics into replica and ec

2017-10-03 Thread Manoj Govindassamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190083#comment-16190083
 ] 

Manoj Govindassamy commented on HDFS-12573:
---

Thanks for the patch revision, [~tasanuma0829]. LGTM, +1.


> Divide the total block metrics into replica and ec
> --
>
> Key: HDFS-12573
> URL: https://issues.apache.org/jira/browse/HDFS-12573
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding, metrics, namenode
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
> Attachments: HDFS-12573.1.patch, HDFS-12573.2.patch, 
> HDFS-12573.3.patch
>
>
> Following HDFS-10999, let's separate the total block metrics. It would be useful 
> for administrators.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12580) Rebasing HDFS-10467 after HDFS-12447

2017-10-03 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12580:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Rebasing HDFS-10467 after HDFS-12447
> 
>
> Key: HDFS-12580
> URL: https://issues.apache.org/jira/browse/HDFS-12580
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>  Labels: RBF
> Fix For: HDFS-10467
>
> Attachments: HDFS-12580-HDFS-10467.patch
>
>
> HDFS-12447 modified {{ClientProtocol#addErasureCodingPolicies}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12580) Rebasing HDFS-10467 after HDFS-12447

2017-10-03 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12580:
---
Labels: RBF  (was: )

> Rebasing HDFS-10467 after HDFS-12447
> 
>
> Key: HDFS-12580
> URL: https://issues.apache.org/jira/browse/HDFS-12580
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>  Labels: RBF
> Fix For: HDFS-10467
>
> Attachments: HDFS-12580-HDFS-10467.patch
>
>
> HDFS-12447 modified {{ClientProtocol#addErasureCodingPolicies}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12580) Rebasing HDFS-10467 after HDFS-12447

2017-10-03 Thread Íñigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12580:
---
Status: Patch Available  (was: Open)

> Rebasing HDFS-10467 after HDFS-12447
> 
>
> Key: HDFS-12580
> URL: https://issues.apache.org/jira/browse/HDFS-12580
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: HDFS-10467
>
> Attachments: HDFS-12580-HDFS-10467.patch
>
>
> HDFS-12447 modified {{ClientProtocol#addErasureCodingPolicies}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12578) TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7

2017-10-03 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12578:
--
Status: Patch Available  (was: Open)

> TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7
> 
>
> Key: HDFS-12578
> URL: https://issues.apache.org/jira/browse/HDFS-12578
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiao Chen
>Assignee: Ajay Kumar
>Priority: Blocker
> Attachments: HDFS-12578-branch-2.7.001.patch
>
>
> It appears {{TestDeadDatanode#testNonDFSUsedONDeadNodeReReg}} is consistently 
> failing in branch-2.7. We should investigate and fix it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12578) TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7

2017-10-03 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12578:
--
Attachment: HDFS-12578-branch-2.7.001.patch

Hi [~xiaochen], [HDFS-9107] introduced a check in 
{{HeartbeatManager#heartbeatCheck}}:
{code}
// check if an excessive GC pause has occurred
if (shouldAbortHeartbeatCheck(0)) {
  return;
}
{code}
Because of this, the heartbeat check is aborted and the DataNode is not declared dead. 
Please find the attached patch; after that change, the value set for 
{{DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY}} should be higher.
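
For example, something along these lines in the test setup (the exact value is illustrative; the attached patch may use a different one):
{code}
// Sketch only: give the recheck interval enough headroom so the
// shouldAbortHeartbeatCheck() guard added by HDFS-9107 does not skip the
// heartbeat check before the DataNode can be declared dead.
Configuration conf = new HdfsConfiguration();
conf.setInt(DFSConfigKeys.DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY, 500);
{code}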

> TestDeadDatanode#testNonDFSUsedONDeadNodeReReg failing in branch-2.7
> 
>
> Key: HDFS-12578
> URL: https://issues.apache.org/jira/browse/HDFS-12578
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Xiao Chen
>Assignee: Ajay Kumar
>Priority: Blocker
> Attachments: HDFS-12578-branch-2.7.001.patch
>
>
> It appears {{TestDeadDatanode#testNonDFSUsedONDeadNodeReReg}} is consistently 
> failing in branch-2.7. We should investigate and fix it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12450) Fixing TestNamenodeHeartbeat and support non-HA

2017-10-03 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190072#comment-16190072
 ] 

Íñigo Goiri commented on HDFS-12450:


Apparently the service port address was changed again in trunk, and 
{{TestNamenodeHeartbeat}} is failing again.
For now, I'm disabling it in HDFS-12273, but we could also reopen this issue here.
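
As a rough illustration of the kind of test setup this implies (the nameservice/namenode suffixes and port are assumptions, not taken from any patch), the service RPC address can be pinned explicitly instead of relying on trunk defaults:
{code}
// Sketch only: set dfs.namenode.servicerpc-address.<ns>.<nn> explicitly in the
// test configuration so the heartbeat service resolves a stable address.
Configuration conf = new HdfsConfiguration();
conf.set(DFSUtil.addKeySuffixes(
    DFSConfigKeys.DFS_NAMENODE_SERVICE_RPC_ADDRESS_KEY, "ns0", "nn0"),
    "localhost:9001");
{code}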

> Fixing TestNamenodeHeartbeat and support non-HA
> ---
>
> Key: HDFS-12450
> URL: https://issues.apache.org/jira/browse/HDFS-12450
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: HDFS-10467
>
> Attachments: HDFS-12450-HDFS-10467.000.patch, 
> HDFS-12450-HDFS-10467.001.patch, HDFS-12450-HDFS-10467.002.patch
>
>
> The way the service RPC address is obtained changed, which exposed a problem with 
> {{TestNamenodeHeartbeat}} where the address wasn't properly set for the unit 
> tests.
> In addition, the {{NamenodeHeartbeatService}} did not provide a good 
> experience for non-HA nameservices. This also covers better logging for 
> those cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12273) Federation UI

2017-10-03 Thread Íñigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16190065#comment-16190065
 ] 

Íñigo Goiri commented on HDFS-12273:


Not sure what the javac warning is about.

I fixed the unit tests, but the main problem is that {{TestNamenodeHeartbeat}} 
fails because of the service port that was tuned in HDFS-12450.
I think the configuration for the service port has been changing in trunk 
lately.

> Federation UI
> -
>
> Key: HDFS-12273
> URL: https://issues.apache.org/jira/browse/HDFS-12273
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
> Fix For: HDFS-10467
>
> Attachments: federationUI-1.png, federationUI-2.png, 
> federationUI-3.png, HDFS-12273-HDFS-10467-000.patch, 
> HDFS-12273-HDFS-10467-001.patch, HDFS-12273-HDFS-10467-002.patch, 
> HDFS-12273-HDFS-10467-003.patch, HDFS-12273-HDFS-10467-004.patch, 
> HDFS-12273-HDFS-10467-005.patch, HDFS-12273-HDFS-10467-006.patch, 
> HDFS-12273-HDFS-10467-007.patch, HDFS-12273-HDFS-10467-008.patch, 
> HDFS-12273-HDFS-10467-009.patch
>
>
> Add the Web UI to the Router to expose the status of the federated cluster. 
> It includes the federation metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


