[jira] [Updated] (HDFS-12493) Correct javadoc for BackupNode#startActiveServices

2017-09-22 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDFS-12493:
-
Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

[~anu], you are correct. This bug was raised based on an editor warning; it does 
not show up in the javadoc warnings. Marking this as closed.
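
For reference, the editor warning can also be silenced by fully qualifying the nested classes in the {{@link}} tags, for example (illustrative only, assuming the usual package locations of these monitor classes):
{code:java}
 * {@link org.apache.hadoop.hdfs.server.namenode.LeaseManager.Monitor} protected by SafeMode.
 * {@link org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.RedundancyMonitor} protected by SafeMode.
{code}
The javadoc tool resolves the short form as long as the referenced classes are imported in BackupNode.java, which is why the javadoc build itself reports no warning.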

> Correct javadoc for BackupNode#startActiveServices
> --
>
> Key: HDFS-12493
> URL: https://issues.apache.org/jira/browse/HDFS-12493
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Trivial
> Attachments: HDFS-12493.001.patch
>
>
> The following javadoc warning needs to be fixed for 
> {{BackupNode#startActiveServices}}: the javadoc links do not resolve 
> correctly.
> {code}
> /**
>  * Start services for BackupNode.
>  * 
>  * The following services should be muted
>  * (not run or not pass any control commands to DataNodes)
>  * on BackupNode:
>  * {@link LeaseManager.Monitor} protected by SafeMode.
>  * {@link BlockManager.RedundancyMonitor} protected by SafeMode.
>  * {@link HeartbeatManager.Monitor} protected by SafeMode.
>  * {@link DatanodeAdminManager.Monitor} need to prohibit refreshNodes().
>  * {@link PendingReconstructionBlocks.PendingReconstructionMonitor}
>  * harmless, because RedundancyMonitor is muted.
>  */
> @Override
> public void startActiveServices() throws IOException {
>   try {
> namesystem.startActiveServices();
>   } catch (Throwable t) {
> doImmediateShutdown(t);
>   }
> }
> {code}






[jira] [Commented] (HDFS-12498) Journal Syncer is not started in Federated + HA cluster

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177482#comment-16177482
 ] 

Hadoop QA commented on HDFS-12498:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 85 unchanged - 0 fixed = 87 total (was 85) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12498 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888633/HDFS-12498.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0bc4c2f6c619 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cda3378 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21318/artifact/patchprocess/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21318/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21318/artifact/patchprocess/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21318/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21318/artifact/patchprocess/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21318/artifact/patchprocess/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Updated] (HDFS-12498) Journal Syncer is not started in Federated + HA cluster

2017-09-22 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12498:
--
Status: Patch Available  (was: In Progress)

> Journal Syncer is not started in Federated + HA cluster
> ---
>
> Key: HDFS-12498
> URL: https://issues.apache.org/jira/browse/HDFS-12498
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12498.01.patch, hdfs-site.xml
>
>
> Journal Syncer is not getting started in an HDFS Federated + HA cluster when 
> dfs.shared.edits.dir.<> is provided instead of 
> dfs.namenode.shared.edits.dir.
> *Log Snippet:*
> {code:java}
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Could not construct 
> Shared Edits Uri
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Other JournalNode 
> addresses not available. Journal Syncing cannot be done
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Failed to start 
> SyncJournal daemon for journal ns1
> {code}
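
For context, the two configuration styles involved look roughly like this in hdfs-site.xml (a sketch only; the nameservice id ns1 and the journal node hosts are placeholders, not taken from this cluster):
{code:xml}
<!-- Global key: the form the JournalNode syncer reads today, per this report -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://jn1:8485;jn2:8485;jn3:8485/ns1</value>
</property>

<!-- Nameservice-suffixed key, commonly used in federated + HA setups -->
<property>
  <name>dfs.namenode.shared.edits.dir.ns1</name>
  <value>qjournal://jn1:8485;jn2:8485;jn3:8485/ns1</value>
</property>
{code}
When only the suffixed form is present, the syncer apparently cannot construct the shared edits URI, which matches the warnings in the log snippet above.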






[jira] [Updated] (HDFS-12498) Journal Syncer is not started in Federated + HA cluster

2017-09-22 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12498:
--
Attachment: HDFS-12498.01.patch

> Journal Syncer is not started in Federated + HA cluster
> ---
>
> Key: HDFS-12498
> URL: https://issues.apache.org/jira/browse/HDFS-12498
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12498.01.patch, hdfs-site.xml
>
>
> Journal Syncer is not getting started in an HDFS Federated + HA cluster when 
> dfs.shared.edits.dir.<> is provided instead of 
> dfs.namenode.shared.edits.dir.
> *Log Snippet:*
> {code:java}
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Could not construct 
> Shared Edits Uri
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Other JournalNode 
> addresses not available. Journal Syncing cannot be done
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Failed to start 
> SyncJournal daemon for journal ns1
> {code}






[jira] [Commented] (HDFS-12455) WebHDFS - ListStatus query does not provide any information about a folder's "snapshot enabled" status

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177455#comment-16177455
 ] 

Hadoop QA commented on HDFS-12455:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  7s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
35s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}176m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.net.TestDNS |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12455 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888608/HDFS-12455.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 541b2f19d97d 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e1b32e0 |

[jira] [Commented] (HDFS-12064) Reuse object mapper in HDFS

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177442#comment-16177442
 ] 

Hadoop QA commented on HDFS-12064:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12064 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888613/HDFS-12064.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c43283e0ebbc 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cda3378 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21315/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21315/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21315/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Reuse object mapper in HDFS
> ---
>
> Key: HDFS-12064
> URL: https://issues.apache.org/jira/browse/HDFS-12064
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Mingliang Liu
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HDFS-12064.001.patch, HDFS-12064.002.patch
>
>
> 

[jira] [Commented] (HDFS-12535) Change the Scope of the Class DFSUtilClient to Private

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177432#comment-16177432
 ] 

Hadoop QA commented on HDFS-12535:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 2 new + 20 unchanged - 2 fixed = 22 total (was 22) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
12s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12535 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888619/HDFS-12535.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 77c83c385a42 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cda3378 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21317/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21317/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21317/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Change the Scope of the Class DFSUtilClient to Private
> --
>
> Key: HDFS-12535
> URL: https://issues.apache.org/jira/browse/HDFS-12535
> Project: Hadoop HDFS
>  Issue Type: Bug
>

[jira] [Commented] (HDFS-12534) Provide logical BlockLocations for EC files for better split calculation

2017-09-22 Thread Marcelo Vanzin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177425#comment-16177425
 ] 

Marcelo Vanzin commented on HDFS-12534:
---

bq. Are you sure we can split within a single S3 file?

Location != split. You can have x splits that all share the same location. I'm 
pretty sure reading from a single S3 file using FileInputFormat generates 
multiple tasks (one per "split"). You may want to look at how it does that; it 
might be done entirely client-side, based on some client-side configuration.
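
As a minimal illustration of that point (a simplified sketch, not the actual FileInputFormat code): splits are cut purely by offset and split size, and each split just carries the hosts of whichever block location it starts in, so several splits can share one location.
{code:java}
import java.util.ArrayList;
import java.util.List;

public class SplitSketch {
  static class Split {
    final long offset, length;
    final String[] hosts;
    Split(long offset, long length, String[] hosts) {
      this.offset = offset; this.length = length; this.hosts = hosts;
    }
  }

  /** blockOffsets/blockLengths/blockHosts describe the file's block locations. */
  static List<Split> computeSplits(long fileLength, long splitSize,
                                   long[] blockOffsets, long[] blockLengths,
                                   String[][] blockHosts) {
    List<Split> splits = new ArrayList<>();
    for (long off = 0; off < fileLength; off += splitSize) {
      long len = Math.min(splitSize, fileLength - off);
      // Find the block location that contains the start of this split.
      int idx = 0;
      for (int i = 0; i < blockOffsets.length; i++) {
        if (off >= blockOffsets[i] && off < blockOffsets[i] + blockLengths[i]) {
          idx = i;
          break;
        }
      }
      splits.add(new Split(off, len, blockHosts[idx]));
    }
    return splits;
  }
}
{code}
With a file that reports a single location (as S3A does), every split ends up carrying the same hosts, but multiple tasks are still generated.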

> Provide logical BlockLocations for EC files for better split calculation
> 
>
> Key: HDFS-12534
> URL: https://issues.apache.org/jira/browse/HDFS-12534
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-beta1
>Reporter: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
>
> I talked to [~vanzin] and [~alex.behm] some more about split calculation with 
> EC. It turns out HDFS-1 was resolved prematurely. Applications depend on 
> HDFS BlockLocation to understand where the split points are. The current 
> scheme of returning one BlockLocation per block group loses this information.
> We should change this to provide logical blocks. Divide the file length by 
> the block size and provide suitable BlockLocations to match, with virtual 
> offsets and lengths too.
> I'm not marking this as incompatible, since changing it this way would in 
> fact make it more compatible from the perspective of applications that are 
> scheduling against replicated files. Thus, it'd be good for beta1 if 
> possible, but okay for later too.






[jira] [Commented] (HDFS-12529) get source for config tags from file name

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177424#comment-16177424
 ] 

Hadoop QA commented on HDFS-12529:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 252 unchanged - 0 fixed = 253 total (was 252) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 58s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12529 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888612/HDFS-12529.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6e54ebba3829 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cda3378 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21314/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21314/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21314/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21314/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> get source for config tags from file name
> -
>
> Key: HDFS-12529
> URL: https://issues.apache.org/jira/browse/HDFS-12529
> Project: Hadoop HDFS
> 

[jira] [Commented] (HDFS-12420) Add an option to disallow 'namenode format -force'

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177421#comment-16177421
 ] 

Hadoop QA commented on HDFS-12420:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
59s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 40s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 525 unchanged - 0 fixed = 528 total (was 525) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}121m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12420 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888596/HDFS-12420.09.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 5c16dfcadc69 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e1b32e0 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21312/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21312/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21312/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21312/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   

[jira] [Updated] (HDFS-12534) Provide logical BlockLocations for EC files for better split calculation

2017-09-22 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12534:
---
Target Version/s: 3.0.0  (was: 3.0.0-beta1)

> Provide logical BlockLocations for EC files for better split calculation
> 
>
> Key: HDFS-12534
> URL: https://issues.apache.org/jira/browse/HDFS-12534
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-beta1
>Reporter: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
>
> I talked to [~vanzin] and [~alex.behm] some more about split calculation with 
> EC. It turns out HDFS-1 was resolved prematurely. Applications depend on 
> HDFS BlockLocation to understand where the split points are. The current 
> scheme of returning one BlockLocation per block group loses this information.
> We should change this to provide logical blocks. Divide the file length by 
> the block size and provide suitable BlockLocations to match, with virtual 
> offsets and lengths too.
> I'm not marking this as incompatible, since changing it this way would in 
> fact make it more compatible from the perspective of applications that are 
> scheduling against replicated files. Thus, it'd be good for beta1 if 
> possible, but okay for later too.






[jira] [Updated] (HDFS-12535) Change the Scope of the Class DFSUtilClient to Private

2017-09-22 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12535:
--
Status: Patch Available  (was: In Progress)

> Change the Scope of the Class DFSUtilClient to Private
> --
>
> Key: HDFS-12535
> URL: https://issues.apache.org/jira/browse/HDFS-12535
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12535.01.patch
>
>
> # This is being done due to the review comments from the HDFS-12486 jira, and 
> also because the scope of the method getConfValue(String defaultValue, String 
> keySuffix, Configuration conf, String... keys) has now been changed to public. 
> (As these methods are internally used by the Hadoop project, limiting its scope.)
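
For illustration, the change essentially amounts to the audience annotation on the class, roughly like the following sketch (not the actual patch):
{code:java}
import org.apache.hadoop.classification.InterfaceAudience;

@InterfaceAudience.Private
public class DFSUtilClient {
  // ... existing utility methods, including getConfValue(...)
}
{code}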






[jira] [Updated] (HDFS-12535) Change the Scope of the Class DFSUtilClient to Private

2017-09-22 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12535:
--
Attachment: HDFS-12535.01.patch

> Change the Scope of the Class DFSUtilClient to Private
> --
>
> Key: HDFS-12535
> URL: https://issues.apache.org/jira/browse/HDFS-12535
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12535.01.patch
>
>
> This is being done due to the review comments from the HDFS-12486 jira, and 
> also because the scope of the method getConfValue(String defaultValue, String 
> keySuffix, Configuration conf, String... keys) has now been changed to public. 
> (As these methods are internally used by the Hadoop project, limiting its scope.)






[jira] [Updated] (HDFS-12535) Change the Scope of the Class DFSUtilClient to Private

2017-09-22 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12535:
--
Description: 
# This is being done due to the review comments from the HDFS-12486 jira, and 
also because the scope of the method getConfValue(String defaultValue, String 
keySuffix, Configuration conf, String... keys) has now been changed to public. 
(As these methods are internally used by the Hadoop project, limiting its scope.)



  was:
This is being done due to the review comments from the HDFS-12486 jira, and also 
because the scope of the method getConfValue(String defaultValue, String keySuffix, 
Configuration conf, String... keys) has now been changed to public. (As these 
methods are internally used by the Hadoop project, limiting its scope.)




> Change the Scope of the Class DFSUtilClient to Private
> --
>
> Key: HDFS-12535
> URL: https://issues.apache.org/jira/browse/HDFS-12535
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12535.01.patch
>
>
> # This is being done due to the review comments from the HDFS-12486 jira, and 
> also because the scope of the method getConfValue(String defaultValue, String 
> keySuffix, Configuration conf, String... keys) has now been changed to public. 
> (As these methods are internally used by the Hadoop project, limiting its scope.)






[jira] [Updated] (HDFS-12535) Change the Scope of the Class DFSUtilClient to Private

2017-09-22 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12535:
--
Summary: Change the Scope of the Class DFSUtilClient to Private  (was: 
Change the Scope of the Class DFSUtilClient to LimitedPrivate)

> Change the Scope of the Class DFSUtilClient to Private
> --
>
> Key: HDFS-12535
> URL: https://issues.apache.org/jira/browse/HDFS-12535
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> This is being done due to the review comments from the HDFS-12486 jira, and 
> also because the scope of the method getConfValue(String defaultValue, String 
> keySuffix, Configuration conf, String... keys) has now been changed to public. 
> (As these methods are internally used by the Hadoop project, limiting its scope.)






[jira] [Assigned] (HDFS-12534) Provide logical BlockLocations for EC files for better split calculation

2017-09-22 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reassigned HDFS-12534:
--

Assignee: (was: Andrew Wang)

> Provide logical BlockLocations for EC files for better split calculation
> 
>
> Key: HDFS-12534
> URL: https://issues.apache.org/jira/browse/HDFS-12534
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-beta1
>Reporter: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
>
> I talked to [~vanzin] and [~alex.behm] some more about split calculation with 
> EC. It turns out HDFS-1 was resolved prematurely. Applications depend on 
> HDFS BlockLocation to understand where the split points are. The current 
> scheme of returning one BlockLocation per block group loses this information.
> We should change this to provide logical blocks. Divide the file length by 
> the block size and provide suitable BlockLocations to match, with virtual 
> offsets and lengths too.
> I'm not marking this as incompatible, since changing it this way would in 
> fact make it more compatible from the perspective of applications that are 
> scheduling against replicated files. Thus, it'd be good for beta1 if 
> possible, but okay for later too.






[jira] [Commented] (HDFS-12534) Provide logical BlockLocations for EC files for better split calculation

2017-09-22 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177409#comment-16177409
 ] 

Andrew Wang commented on HDFS-12534:


This ends up being kind of complicated, since we don't have the 
preferredBlockSize in the LocatedBlock. We do have it in the FileStatus, but 
some of the client APIs only return a BlockLocation and don't query a 
FileStatus.

The most efficient solution is to add preferredBlockSize to the LocatedBlock 
proto. We already have some EC-specific fields for the LocatedStripedBlock 
subclass. It's hard to plumb this though, since LocatedBlock is created pretty 
far down in BlockManager, and the preferredBlockSize comes from the file in 
FSNamesystem.

We could also have the client make another RPC to get the FileStatus for EC 
files. This would be for the APIs that take a path and return a BlockLocation, 
since the LocatedFileStatus APIs already have a FileStatus. This comes at a 
performance cost.

I lean toward the efficient option. I didn't have time to plumb 
preferredBlockSize into the LocatedBlock today. I'm going to unassign myself 
for now in case [~HuafengWang] or someone else would like to pick this up.

Sidenote for [~vanzin], I checked S3AFileSystem and it looks like we just 
return a single location per file (the dummy FileSystem implementation), which 
[~fabbri] confirmed. Are you sure we can split within a single S3 file?
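
To make the proposal concrete, here is a rough sketch of expanding one block group's location into logical per-block locations (illustrative only, not tied to any eventual patch; it assumes the group's offset, length, and hosts are already known along with the file's preferredBlockSize):
{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.fs.BlockLocation;

public class LogicalBlockLocationSketch {
  /** Expand one block-group location into per-block logical locations. */
  static List<BlockLocation> toLogicalLocations(BlockLocation group,
                                                long preferredBlockSize)
      throws IOException {
    List<BlockLocation> logical = new ArrayList<>();
    long groupEnd = group.getOffset() + group.getLength();
    for (long off = group.getOffset(); off < groupEnd; off += preferredBlockSize) {
      long len = Math.min(preferredBlockSize, groupEnd - off);
      // Reuse the group's hosts for every logical block; a real implementation
      // might narrow this to the datanodes actually holding the relevant cells.
      logical.add(new BlockLocation(group.getNames(), group.getHosts(), off, len));
    }
    return logical;
  }
}
{code}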

> Provide logical BlockLocations for EC files for better split calculation
> 
>
> Key: HDFS-12534
> URL: https://issues.apache.org/jira/browse/HDFS-12534
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-beta1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
>
> I talked to [~vanzin] and [~alex.behm] some more about split calculation with 
> EC. It turns out HDFS-1 was resolved prematurely. Applications depend on 
> HDFS BlockLocation to understand where the split points are. The current 
> scheme of returning one BlockLocation per block group loses this information.
> We should change this to provide logical blocks. Divide the file length by 
> the block size and provide suitable BlockLocations to match, with virtual 
> offsets and lengths too.
> I'm not marking this as incompatible, since changing it this way would in 
> fact make it more compatible from the perspective of applications that are 
> scheduling against replicated files. Thus, it'd be good for beta1 if 
> possible, but okay for later too.






[jira] [Commented] (HDFS-12536) Add documentation for getconf command with -journalnodes option

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177406#comment-16177406
 ] 

Hadoop QA commented on HDFS-12536:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12536 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888615/HDFS-12536.01.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 527faa19223f 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cda3378 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21316/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add documentation for getconf command with -journalnodes option
> ---
>
> Key: HDFS-12536
> URL: https://issues.apache.org/jira/browse/HDFS-12536
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12536.01.patch
>
>
> Add documentation for getconf command with -journalnodes option.
> This feature was added in the HDFS-12486 jira.
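
The documentation will presumably center on a usage line like the following (illustrative; the exact option spelling is whatever the HDFS-12486 patch added):
{code}
hdfs getconf -journalnodes
{code}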






[jira] [Work started] (HDFS-12535) Change the Scope of the Class DFSUtilClient to LimitedPrivate

2017-09-22 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-12535 started by Bharat Viswanadham.
-
> Change the Scope of the Class DFSUtilClient to LimitedPrivate
> -
>
> Key: HDFS-12535
> URL: https://issues.apache.org/jira/browse/HDFS-12535
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> This is being done due to the review comments from the HDFS-12486 jira, and 
> also because the scope of the method getConfValue(String defaultValue, String 
> keySuffix, Configuration conf, String... keys) has now been changed to public. 
> (As these methods are internally used by the Hadoop project, limiting its scope.)






[jira] [Updated] (HDFS-12535) Change the Scope of the Class DFSUtilClient to LimitedPrivate

2017-09-22 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12535:
--
Description: 
This is being done due to the review comments from the HDFS-12486 jira, and also 
because the scope of the method getConfValue(String defaultValue, String keySuffix, 
Configuration conf, String... keys) has now been changed to public. (As these 
methods are internally used by the Hadoop project, limiting its scope.)



  was:
This is being done due to the review comments from the HDFS-12486 jira, and also 
because the scope of the method getConfValue(String defaultValue, String keySuffix, 
Configuration conf, String... keys) has now been changed to public.




> Change the Scope of the Class DFSUtilClient to LimitedPrivate
> -
>
> Key: HDFS-12535
> URL: https://issues.apache.org/jira/browse/HDFS-12535
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> This is being done due to the review comments from the HDFS-12486 jira, and 
> also because the scope of the method getConfValue(String defaultValue, String 
> keySuffix, Configuration conf, String... keys) has now been changed to public. 
> (As these methods are internally used by the Hadoop project, limiting its scope.)






[jira] [Updated] (HDFS-12536) Add documentation for getconf command with -journalnodes option

2017-09-22 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12536:
--
Description: 
Add documentation for getconf command with -journalnodes option.
This feature was added in the HDFS-12486 jira.

  was:Add documentation for getconf command with -journalnodes option


> Add documentation for getconf command with -journalnodes option
> ---
>
> Key: HDFS-12536
> URL: https://issues.apache.org/jira/browse/HDFS-12536
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12536.01.patch
>
>
> Add documentation for getconf command with -journalnodes option.
> This feature was added in the HDFS-12486 jira.






[jira] [Updated] (HDFS-12536) Add documentation for getconf command with -journalnodes option

2017-09-22 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12536:
--
Status: Patch Available  (was: In Progress)

> Add documentation for getconf command with -journalnodes option
> ---
>
> Key: HDFS-12536
> URL: https://issues.apache.org/jira/browse/HDFS-12536
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12536.01.patch
>
>
> Add documentation for getconf command with -journalnodes option.
> This feature was added in the HDFS-12486 jira.






[jira] [Updated] (HDFS-12536) Add documentation for getconf command with -journalnodes option

2017-09-22 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12536:
--
Attachment: HDFS-12536.01.patch

> Add documentation for getconf command with -journalnodes option
> ---
>
> Key: HDFS-12536
> URL: https://issues.apache.org/jira/browse/HDFS-12536
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12536.01.patch
>
>
> Add documentation for getconf command with -journalnodes option.
> This feature was added in the HDFS-12486 jira.






[jira] [Work started] (HDFS-12536) Add documentation for getconf command with -journalnodes option

2017-09-22 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-12536 started by Bharat Viswanadham.
-
> Add documentation for getconf command with -journalnodes option
> ---
>
> Key: HDFS-12536
> URL: https://issues.apache.org/jira/browse/HDFS-12536
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> Add documentation for getconf command with -journalnodes option



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12486) GetConf to get journalnodeslist

2017-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177394#comment-16177394
 ] 

Hudson commented on HDFS-12486:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12954 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12954/])
HDFS-12486. GetConf to get journalnodeslist. Contributed by Bharat (jitendra: 
rev cda3378659772f20fd951ae342dc7d9d6db29534)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestGetConf.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/GetConf.java


> GetConf to get journalnodeslist
> ---
>
> Key: HDFS-12486
> URL: https://issues.apache.org/jira/browse/HDFS-12486
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Fix For: 3.1.0
>
> Attachments: HDFS-12486.01.patch, HDFS-12486.02.patch, 
> HDFS-12486.03.patch, HDFS-12486.04.patch, HDFS-12486.05.patch, 
> HDFS-12486.06.patch, HDFS-12486.07.patch
>
>
> GetConf command to list journal nodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12486) GetConf to get journalnodeslist

2017-09-22 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177393#comment-16177393
 ] 

Bharat Viswanadham commented on HDFS-12486:
---

Thank you [~jnp] and [~hanishakoneru] for reviewing and committing the changes.
I have also updated the jira to add the release notes.

> GetConf to get journalnodeslist
> ---
>
> Key: HDFS-12486
> URL: https://issues.apache.org/jira/browse/HDFS-12486
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Fix For: 3.1.0
>
> Attachments: HDFS-12486.01.patch, HDFS-12486.02.patch, 
> HDFS-12486.03.patch, HDFS-12486.04.patch, HDFS-12486.05.patch, 
> HDFS-12486.06.patch, HDFS-12486.07.patch
>
>
> GetConf command to list journal nodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12486) GetConf to get journalnodeslist

2017-09-22 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12486:
--
Fix Version/s: 3.1.0

> GetConf to get journalnodeslist
> ---
>
> Key: HDFS-12486
> URL: https://issues.apache.org/jira/browse/HDFS-12486
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Fix For: 3.1.0
>
> Attachments: HDFS-12486.01.patch, HDFS-12486.02.patch, 
> HDFS-12486.03.patch, HDFS-12486.04.patch, HDFS-12486.05.patch, 
> HDFS-12486.06.patch, HDFS-12486.07.patch
>
>
> GetConf command to list journal nodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12064) Reuse object mapper in HDFS

2017-09-22 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12064:
--
Attachment: HDFS-12064.002.patch

Thanks for reviewing the patch, [~anu]. 
Fixed the checkstyle warnings in patch v02.

> Reuse object mapper in HDFS
> ---
>
> Key: HDFS-12064
> URL: https://issues.apache.org/jira/browse/HDFS-12064
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Mingliang Liu
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HDFS-12064.001.patch, HDFS-12064.002.patch
>
>
> Currently there are a few places that are not following the recommended 
> pattern of using object mapper - reuse if possible. Actually we can use 
> {{ObjectReader}} or {{ObjectWriter}} to replace the object mapper in some 
> places: they are straightforward and thread safe.
> The benefit is all about performance, so in unit testing code I assume we 
> don't have to worry too much.
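
A minimal sketch of the reuse pattern described above, using the Jackson API; the 
class and field names below are illustrative only, not the actual HDFS code:

{code}
import java.io.IOException;
import java.util.Map;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectReader;
import com.fasterxml.jackson.databind.ObjectWriter;

// Hypothetical helper: build the reader/writer once and reuse them.
// ObjectReader and ObjectWriter are immutable and thread-safe, so sharing
// them avoids re-configuring an ObjectMapper on every call.
class JsonFormat {
  private static final ObjectReader MAP_READER =
      new ObjectMapper().readerFor(Map.class);
  private static final ObjectWriter WRITER = new ObjectMapper().writer();

  static Map<?, ?> parse(String json) throws IOException {
    return MAP_READER.readValue(json);
  }

  static String toJson(Object value) throws IOException {
    return WRITER.writeValueAsString(value);
  }
}
{code}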



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12536) Add documentation for getconf command with -journalnodes option

2017-09-22 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-12536:
-

 Summary: Add documentation for getconf command with -journalnodes 
option
 Key: HDFS-12536
 URL: https://issues.apache.org/jira/browse/HDFS-12536
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


Add documentation for getconf command with -journalnodes option



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12535) Change the Scope of the Class DFSUtilClient to LimitedPrivate

2017-09-22 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDFS-12535:
-

Assignee: Bharat Viswanadham

> Change the Scope of the Class DFSUtilClient to LimitedPrivate
> -
>
> Key: HDFS-12535
> URL: https://issues.apache.org/jira/browse/HDFS-12535
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>
> This is being done due to the review comments from the HDFS-12486 jira, and 
> also because the scope of the method getConfValue(String defaultValue, String 
> keySuffix, Configuration conf, String... keys) has now been changed to public.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12535) Change the Scope of the Class DFSUtilClient to LimitedPrivate

2017-09-22 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-12535:
-

 Summary: Change the Scope of the Class DFSUtilClient to 
LimitedPrivate
 Key: HDFS-12535
 URL: https://issues.apache.org/jira/browse/HDFS-12535
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Bharat Viswanadham


This is being done due to the review comments from the HDFS-12486 jira, and also 
because the scope of the method getConfValue(String defaultValue, String keySuffix, 
Configuration conf, String... keys) has now been changed to public.
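
Purely as an illustration of the signature quoted above; the static call shape, 
the "ns1" suffix and the key name are assumptions, not the committed API:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSUtilClient;

// Hypothetical caller inside the Hadoop project, shown only to illustrate how
// the defaultValue/keySuffix/keys arguments line up.
class GetConfValueExample {
  static String sharedEditsDirFor(Configuration conf) {
    return DFSUtilClient.getConfValue(
        "default-value", "ns1", conf, "dfs.namenode.shared.edits.dir");
  }
}
{code}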





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12486) GetConf to get journalnodeslist

2017-09-22 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-12486:
--
Fix Version/s: (was: 3.1.0)
 Release Note: 
GetConf command has an option to list journal nodes.
Usage: hdfs getconf -journalnodes

> GetConf to get journalnodeslist
> ---
>
> Key: HDFS-12486
> URL: https://issues.apache.org/jira/browse/HDFS-12486
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12486.01.patch, HDFS-12486.02.patch, 
> HDFS-12486.03.patch, HDFS-12486.04.patch, HDFS-12486.05.patch, 
> HDFS-12486.06.patch, HDFS-12486.07.patch
>
>
> GetConf command to list journal nodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12486) GetConf to get journalnodeslist

2017-09-22 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-12486:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

I have committed this to trunk. Thanks to [~bharatviswa].

> GetConf to get journalnodeslist
> ---
>
> Key: HDFS-12486
> URL: https://issues.apache.org/jira/browse/HDFS-12486
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12486.01.patch, HDFS-12486.02.patch, 
> HDFS-12486.03.patch, HDFS-12486.04.patch, HDFS-12486.05.patch, 
> HDFS-12486.06.patch, HDFS-12486.07.patch
>
>
> GetConf command to list journal nodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12486) GetConf to get journalnodeslist

2017-09-22 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDFS-12486:

Fix Version/s: 3.1.0

> GetConf to get journalnodeslist
> ---
>
> Key: HDFS-12486
> URL: https://issues.apache.org/jira/browse/HDFS-12486
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Fix For: 3.1.0
>
> Attachments: HDFS-12486.01.patch, HDFS-12486.02.patch, 
> HDFS-12486.03.patch, HDFS-12486.04.patch, HDFS-12486.05.patch, 
> HDFS-12486.06.patch, HDFS-12486.07.patch
>
>
> GetConf command to list journal nodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12529) get source for config tags from file name

2017-09-22 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177374#comment-16177374
 ] 

Ajay Kumar commented on HDFS-12529:
---

Fixed the failing test case.

> get source for config tags from file name
> -
>
> Key: HDFS-12529
> URL: https://issues.apache.org/jira/browse/HDFS-12529
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12529.01.patch, HDFS-12529.02.patch
>
>
> For tagging related properties together use resource name as source. 
> Currently it assumes source is configured in xml itself.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12529) get source for config tags from file name

2017-09-22 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12529:
--
Attachment: HDFS-12529.02.patch

> get source for config tags from file name
> -
>
> Key: HDFS-12529
> URL: https://issues.apache.org/jira/browse/HDFS-12529
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12529.01.patch, HDFS-12529.02.patch
>
>
> For tagging related properties together use resource name as source. 
> Currently it assumes source is configured in xml itself.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12064) Reuse object mapper in HDFS

2017-09-22 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177358#comment-16177358
 ] 

Anu Engineer commented on HDFS-12064:
-

[~hanishakoneru] +1 on this change. But we have 3 checkstyle warnings; could 
you please fix them? 

> Reuse object mapper in HDFS
> ---
>
> Key: HDFS-12064
> URL: https://issues.apache.org/jira/browse/HDFS-12064
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Mingliang Liu
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HDFS-12064.001.patch
>
>
> Currently there are a few places that are not following the recommended 
> pattern of using object mapper - reuse if possible. Actually we can use 
> {{ObjectReader}} or {{ObjectWriter}} to replace the object mapper in some 
> places: they are straightforward and thread safe.
> The benefit is all about performance, so in unit testing code I assume we 
> don't have to worry too much.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12455) WebHDFS - ListStatus query does not provide any information about a folder's "snapshot enabled" status

2017-09-22 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12455:
--
Attachment: HDFS-12455.02.patch

Updated the patch to fix the failed test cases. TestFSImage is failing 
irrespective of the patch; will check further.

> WebHDFS - ListStatus query does not provide any information about a folder's 
> "snapshot enabled" status
> --
>
> Key: HDFS-12455
> URL: https://issues.apache.org/jira/browse/HDFS-12455
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12455.01.patch, HDFS-12455.02.patch
>
>
> WebHDFS - ListStatus query does not provide any information about a folder's 
> "snapshot enabled" status. Since "ListStatus" lists other attributes it will 
> be good to include this attribute as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12534) Provide logical BlockLocations for EC files for better split calculation

2017-09-22 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177356#comment-16177356
 ] 

Andrew Wang commented on HDFS-12534:


cc [~HuafengWang] and [~drankye] from HDFS-1. This is basically taking 
Huafeng's 004 patch and extending it a little further. I'll try and post 
something today for discussion.

> Provide logical BlockLocations for EC files for better split calculation
> 
>
> Key: HDFS-12534
> URL: https://issues.apache.org/jira/browse/HDFS-12534
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-beta1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>  Labels: hdfs-ec-3.0-must-do
>
> I talked to [~vanzin] and [~alex.behm] some more about split calculation with 
> EC. It turns out HDFS-1 was resolved prematurely. Applications depend on 
> HDFS BlockLocation to understand where the split points are. The current 
> scheme of returning one BlockLocation per block group loses this information.
> We should change this to provide logical blocks. Divide the file length by 
> the block size and provide suitable BlockLocations to match, with virtual 
> offsets and lengths too.
> I'm not marking this as incompatible, since changing it this way would in 
> fact make it more compatible from the perspective of applications that are 
> scheduling against replicated files. Thus, it'd be good for beta1 if 
> possible, but okay for later too.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12534) Provide logical BlockLocations for EC files for better split calculation

2017-09-22 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-12534:
--

 Summary: Provide logical BlockLocations for EC files for better 
split calculation
 Key: HDFS-12534
 URL: https://issues.apache.org/jira/browse/HDFS-12534
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.0.0-beta1
Reporter: Andrew Wang
Assignee: Andrew Wang


I talked to [~vanzin] and [~alex.behm] some more about split calculation with 
EC. It turns out HDFS-1 was resolved prematurely. Applications depend on 
HDFS BlockLocation to understand where the split points are. The current scheme 
of returning one BlockLocation per block group loses this information.

We should change this to provide logical blocks. Divide the file length by the 
block size and provide suitable BlockLocations to match, with virtual offsets 
and lengths too.

I'm not marking this as incompatible, since changing it this way would in fact 
make it more compatible from the perspective of applications that are 
scheduling against replicated files. Thus, it'd be good for beta1 if possible, 
but okay for later too.
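
A rough sketch of the "divide the file length by the block size" idea above; the 
helper name and the way names/hosts are chosen are illustrative assumptions, not 
the eventual patch:

{code}
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.BlockLocation;

// Hypothetical helper: carve an EC file into logical blocks of blockSize,
// each with a virtual offset and length. Real code would derive names/hosts
// from the datanodes of the enclosing block group.
class LogicalBlockSketch {
  static List<BlockLocation> toLogicalBlocks(long fileLen, long blockSize,
      String[] names, String[] hosts) {
    List<BlockLocation> logical = new ArrayList<>();
    for (long offset = 0; offset < fileLen; offset += blockSize) {
      long length = Math.min(blockSize, fileLen - offset);
      logical.add(new BlockLocation(names, hosts, offset, length));
    }
    return logical;
  }
}
{code}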



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12516) Suppress the fsnamesystem lock warning on nn startup

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177348#comment-16177348
 ] 

Hadoop QA commented on HDFS-12516:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 43s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}127m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12516 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888576/HDFS-12516.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f735c2a86ad5 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8d29bf5 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21311/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21311/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21311/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Suppress the fsnamesystem lock warning on nn startup
> 
>
> Key: HDFS-12516
> URL: https://issues.apache.org/jira/browse/HDFS-12516
> Project: Hadoop HDFS
>  Issue Type: 

[jira] [Commented] (HDFS-12420) Add an option to disallow 'namenode format -force'

2017-09-22 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177347#comment-16177347
 ] 

Arpit Agarwal commented on HDFS-12420:
--

+1 pending Jenkins. The default behavior remains the same as today and I have 
removed the _Incompatible_ flag.

Will hold off committing until later next week.

bq. This is like linux having a sysctl controlling whether you can use rm -r.
Not too different from the {{--preserve-root}} option to 
[rm|https://linux.die.net/man/1/rm].

> Add an option to disallow 'namenode format -force'
> --
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch, 
> HDFS-12420.03.patch, HDFS-12420.04.patch, HDFS-12420.05.patch, 
> HDFS-12420.06.patch, HDFS-12420.07.patch, HDFS-12420.08.patch, 
> HDFS-12420.09.patch
>
>
> Support for disabling NameNode format to avoid accidental formatting of 
> Namenode in production cluster. If someone really wants to delete the 
> complete fsImage, they can first delete the metadata dir and then run {code} 
> hdfs namenode -format{code} manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12420) Add an option to disallow 'namenode format -force'

2017-09-22 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-12420:
-
Hadoop Flags:   (was: Incompatible change)

> Add an option to disallow 'namenode format -force'
> --
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch, 
> HDFS-12420.03.patch, HDFS-12420.04.patch, HDFS-12420.05.patch, 
> HDFS-12420.06.patch, HDFS-12420.07.patch, HDFS-12420.08.patch, 
> HDFS-12420.09.patch
>
>
> Support for disabling NameNode format to avoid accidental formatting of 
> Namenode in production cluster. If someone really wants to delete the 
> complete fsImage, they can first delete the metadata dir and then run {code} 
> hdfs namenode -format{code} manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12420) Add an option to disallow 'namenode format -force'

2017-09-22 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-12420:
-
Summary: Add an option to disallow 'namenode format -force'  (was: Add an 
optional to disallow 'namenode format -force')

> Add an option to disallow 'namenode format -force'
> --
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch, 
> HDFS-12420.03.patch, HDFS-12420.04.patch, HDFS-12420.05.patch, 
> HDFS-12420.06.patch, HDFS-12420.07.patch, HDFS-12420.08.patch, 
> HDFS-12420.09.patch
>
>
> Support for disabling NameNode format to avoid accidental formatting of 
> Namenode in production cluster. If someone really wants to delete the 
> complete fsImage, they can first delete the metadata dir and then run {code} 
> hdfs namenode -format{code} manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12420) Add an optional to disallow 'namenode format -force'

2017-09-22 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-12420:
-
Summary: Add an optional to disallow 'namenode format -force'  (was: 
Disable Namenode format for prod clusters when data already exists)

> Add an optional to disallow 'namenode format -force'
> 
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch, 
> HDFS-12420.03.patch, HDFS-12420.04.patch, HDFS-12420.05.patch, 
> HDFS-12420.06.patch, HDFS-12420.07.patch, HDFS-12420.08.patch, 
> HDFS-12420.09.patch
>
>
> Support for disabling NameNode format to avoid accidental formatting of 
> Namenode in production cluster. If someone really wants to delete the 
> complete fsImage, they can first delete the metadata dir and then run {code} 
> hdfs namenode -format{code} manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-5040) Audit log for admin commands/ logging output of all DFS admin commands

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177338#comment-16177338
 ] 

Hadoop QA commented on HDFS-5040:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 274 unchanged - 10 fixed = 275 total (was 284) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}119m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.web.TestWebHDFS |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-5040 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888566/HDFS-5040.009.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 802d3e7af4be 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4002bf0 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21310/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21310/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21310/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21310/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Audit log for admin commands/ logging output of all DFS admin commands
> 

[jira] [Commented] (HDFS-12486) GetConf to get journalnodeslist

2017-09-22 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177337#comment-16177337
 ] 

Jitendra Nath Pandey commented on HDFS-12486:
-

I think we should make the following additional improvements:
1) Declare DFSUtilClient to be {{InterfaceAudience}} Private or LimitedPrivate 
(a minimal sketch of the annotation is shown after this comment).
2) Update the documentation to add '-journalnodes' to the getconf options.
These can be done in follow-up jiras.

+1 for the patch. I will commit it shortly. Please also update the release 
notes in this jira with the usage of this new getconf option.
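
For reference, a minimal sketch of what (1) could look like; the audience list 
below is an assumption to be settled in the follow-up jira:

{code}
import org.apache.hadoop.classification.InterfaceAudience;

// Hypothetical: restrict the class to in-project consumers. The exact
// audience strings would be decided in the follow-up jira.
@InterfaceAudience.LimitedPrivate({"HDFS"})
public class DFSUtilClient {
  // existing utility methods unchanged
}
{code}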

> GetConf to get journalnodeslist
> ---
>
> Key: HDFS-12486
> URL: https://issues.apache.org/jira/browse/HDFS-12486
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HDFS-12486.01.patch, HDFS-12486.02.patch, 
> HDFS-12486.03.patch, HDFS-12486.04.patch, HDFS-12486.05.patch, 
> HDFS-12486.06.patch, HDFS-12486.07.patch
>
>
> GetConf command to list journal nodes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12482) Provide a configuration to adjust the weight of EC recovery tasks to adjust the speed of recovery

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177316#comment-16177316
 ] 

Hadoop QA commented on HDFS-12482:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 5 new + 422 unchanged - 0 fixed = 427 total (was 422) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}114m 15s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}142m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.tools.TestHdfsConfigFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12482 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888564/HDFS-12482.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dcb4d56c213a 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4002bf0 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21308/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21308/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21308/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21308/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Provide a configuration to adjust the weight of EC recovery tasks to adjust 
> the speed of recovery
> 

[jira] [Commented] (HDFS-12455) WebHDFS - ListStatus query does not provide any information about a folder's "snapshot enabled" status

2017-09-22 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177310#comment-16177310
 ] 

Ajay Kumar commented on HDFS-12455:
---

[~anu], yes, some of the test failures are related to the patch. I am looking into it.

> WebHDFS - ListStatus query does not provide any information about a folder's 
> "snapshot enabled" status
> --
>
> Key: HDFS-12455
> URL: https://issues.apache.org/jira/browse/HDFS-12455
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12455.01.patch
>
>
> WebHDFS - ListStatus query does not provide any information about a folder's 
> "snapshot enabled" status. Since "ListStatus" lists other attributes it will 
> be good to include this attribute as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12455) WebHDFS - ListStatus query does not provide any information about a folder's "snapshot enabled" status

2017-09-22 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177297#comment-16177297
 ] 

Anu Engineer commented on HDFS-12455:
-

[~ajayydv] can you please verify if the test failures are related to this 
patch? Thanks in advance.

> WebHDFS - ListStatus query does not provide any information about a folder's 
> "snapshot enabled" status
> --
>
> Key: HDFS-12455
> URL: https://issues.apache.org/jira/browse/HDFS-12455
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12455.01.patch
>
>
> WebHDFS - ListStatus query does not provide any information about a folder's 
> "snapshot enabled" status. Since "ListStatus" lists other attributes it will 
> be good to include this attribute as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12420) Disable Namenode format for prod clusters when data already exists

2017-09-22 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177273#comment-16177273
 ] 

Ajay Kumar edited comment on HDFS-12420 at 9/22/17 10:31 PM:
-

[~arpitagarwal], Thanks for review.  Uploading patch v9 with suggested changes.


was (Author: ajayydv):
[~arpitagarwal], Thanks for review.  Uploading patch v9 suggested changes.

> Disable Namenode format for prod clusters when data already exists
> --
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch, 
> HDFS-12420.03.patch, HDFS-12420.04.patch, HDFS-12420.05.patch, 
> HDFS-12420.06.patch, HDFS-12420.07.patch, HDFS-12420.08.patch, 
> HDFS-12420.09.patch
>
>
> Support for disabling NameNode format to avoid accidental formatting of 
> Namenode in production cluster. If someone really wants to delete the 
> complete fsImage, they can first delete the metadata dir and then run {code} 
> hdfs namenode -format{code} manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12420) Disable Namenode format for prod clusters when data already exists

2017-09-22 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12420:
--
Attachment: HDFS-12420.09.patch

[~arpitagarwal], thanks for the review. Uploading patch v9 with the suggested changes.

> Disable Namenode format for prod clusters when data already exists
> --
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch, 
> HDFS-12420.03.patch, HDFS-12420.04.patch, HDFS-12420.05.patch, 
> HDFS-12420.06.patch, HDFS-12420.07.patch, HDFS-12420.08.patch, 
> HDFS-12420.09.patch
>
>
> Support for disabling NameNode format to avoid accidental formatting of 
> Namenode in production cluster. If someone really wants to delete the 
> complete fsImage, they can first delete the metadata dir and then run {code} 
> hdfs namenode -format{code} manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11968) ViewFS: StoragePolicies commands fail with HDFS federation

2017-09-22 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177264#comment-16177264
 ] 

Arpit Agarwal commented on HDFS-11968:
--

Hi [~msingh], [~surendrasingh], 

What is the expected behavior now when running the getStoragePolicies command 
against a viewfs path?

> ViewFS: StoragePolicies commands fail with HDFS federation
> --
>
> Key: HDFS-11968
> URL: https://issues.apache.org/jira/browse/HDFS-11968
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.1
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Attachments: HDFS-11968.001.patch, HDFS-11968.002.patch, 
> HDFS-11968.003.patch, HDFS-11968.004.patch, HDFS-11968.005.patch, 
> HDFS-11968.006.patch, HDFS-11968.007.patch, HDFS-11968.008.patch, 
> HDFS-11968.009.patch, HDFS-11968.010.patch
>
>
> The hdfs storagepolicies command fails with HDFS federation.
> For storage policy commands, a given user path should be resolved to an HDFS 
> path, and the storage policy command should be applied to the resolved HDFS 
> path.
> {code}
>   static DistributedFileSystem getDFS(Configuration conf)
>   throws IOException {
> FileSystem fs = FileSystem.get(conf);
> if (!(fs instanceof DistributedFileSystem)) {
>   throw new IllegalArgumentException("FileSystem " + fs.getUri() +
>   " is not an HDFS file system");
> }
> return (DistributedFileSystem)fs;
>   }
> {code}
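
One possible shape of the resolution step described in the issue above, purely as 
a hedged sketch; it assumes the default filesystem's mount table can resolve the 
user path, and it is not necessarily the committed fix:

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// Hypothetical helper: resolve a (possibly viewfs) user path through the
// mount table first, then require that the resolved path lands on HDFS.
class StoragePolicyTargetSketch {
  static DistributedFileSystem getDFSForPath(Path userPath, Configuration conf)
      throws IOException {
    Path resolved = FileSystem.get(conf).resolvePath(userPath);
    FileSystem targetFs = resolved.getFileSystem(conf);
    if (!(targetFs instanceof DistributedFileSystem)) {
      throw new IllegalArgumentException("FileSystem " + targetFs.getUri()
          + " is not an HDFS file system");
    }
    return (DistributedFileSystem) targetFs;
  }
}
{code}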



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12452) TestDataNodeVolumeFailureReporting fails in trunk Jenkins runs

2017-09-22 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru reassigned HDFS-12452:
-

Assignee: Hanisha Koneru

> TestDataNodeVolumeFailureReporting fails in trunk Jenkins runs
> --
>
> Key: HDFS-12452
> URL: https://issues.apache.org/jira/browse/HDFS-12452
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Arpit Agarwal
>Assignee: Hanisha Koneru
>Priority: Critical
>  Labels: flaky-test
>
> TestDataNodeVolumeFailureReporting#testSuccessiveVolumeFailures fails 
> frequently in Jenkins runs but it passes locally on my dev machine.
> e.g. 
> https://builds.apache.org/job/PreCommit-HDFS-Build/21134/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailureReporting/testSuccessiveVolumeFailures/
> {code}
> Error Message
> test timed out after 12 milliseconds
> Stacktrace
> java.lang.Exception: test timed out after 12 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:761)
>   at 
> org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting.testSuccessiveVolumeFailures(TestDataNodeVolumeFailureReporting.java:189)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-12527) javadoc: error - class file for org.apache.http.annotation.ThreadSafe not found

2017-09-22 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-12527.
-
Resolution: Not A Problem

Trunk reverted HADOOP-14655; I have merged that change and the ozone branch now 
builds correctly. [~msingh], [~szetszwo] and [~elek], I appreciate your help in 
root-causing this and getting it fixed.


> javadoc: error - class file for org.apache.http.annotation.ThreadSafe not 
> found
> ---
>
> Key: HDFS-12527
> URL: https://issues.apache.org/jira/browse/HDFS-12527
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Mukul Kumar Singh
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:2.10.4:jar (module-javadocs) on 
> project hadoop-hdfs-client: MavenReportException: Error while generating 
> Javadoc: 
> [ERROR] Exit code: 1 - 
> /Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java:694:
>  warning - Tag @link: reference not found: StripingCell
> [ERROR] javadoc: error - class file for org.apache.http.annotation.ThreadSafe 
> not found
> [ERROR] 
> [ERROR] Command line was: 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home/jre/../bin/javadoc
>  -J-Xmx768m @options @packages
> [ERROR] 
> [ERROR] Refer to the generated Javadoc files in 
> '/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs-client/target/api' 
> dir.
> {code}
> To reproduce the error above, run
> {code}
> mvn package -Pdist -DskipTests -DskipDocs -Dtar
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12503) Ozone: some UX improvements to oz_debug

2017-09-22 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177236#comment-16177236
 ] 

Chen Liang commented on HDFS-12503:
---

Thanks [~cheersyang] for filing this! v002 patch LGTM, pending jenkins.

> Ozone: some UX improvements to oz_debug
> ---
>
> Key: HDFS-12503
> URL: https://issues.apache.org/jira/browse/HDFS-12503
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12503-HDFS-7240.001.patch, 
> HDFS-12503-HDFS-7240.002.patch
>
>
> I tried to use {{oz_debug}} to dump the KSM DB for offline analysis and found a 
> few problems that need to be fixed in order to make this tool easier to use. I 
> know this is a debug tool for admins, but it's still necessary to improve the UX 
> so new users (like me) can figure out how to use it without reading more docs.
> # Support the *--help* argument. --help is the general arg for all hdfs scripts 
> to print usage.
> # When specifying the output path {{-o}}, we need to add a description to let 
> the user know the path needs to be a file (instead of a dir). If the path is 
> specified as a dir, it ends up with a confusing error, {{unable to open the 
> database file (out of memory)}}, which is pretty misleading. It would also be 
> helpful to add a check to make sure the specified path is not an existing dir.
> # SQLCLI currently swallows exceptions.
> # We should remove the {{levelDB}} wording from the command output as we are 
> using rocksDB by default.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12533) NNThroughputBenchmark threads get stuck on UGI.getCurrentUser()

2017-09-22 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-12533:
---
Description: 
In {{NameNode#getRemoteUser()}}, it first attempts to fetch from the RPC user 
(not a synchronized operation), and if there is no RPC call, it will call 
{{UserGroupInformation#getCurrentUser()}} (which is {{synchronized}}). This 
makes it efficient for RPC operations (the bulk) so that there is not too much 
contention.

In NNThroughputBenchmark, however, there is no RPC call since we bypass that 
later, so with a high thread count many of the threads are getting stuck. At 
one point I attached a profiler and found that quite a few threads had been 
waiting for {{#getCurrentUser()}} for 2 minutes ( ! ). When taking this away I 
found some improvement in the throughput numbers I was seeing. To more closely 
emulate a real NN we should improve this issue.

  was:
In {{NameNode#getRemoteUser()}}, it first attempts to fetch from the RPC user 
(not a synchronized operation), and if there is no RPC call, it will call 
{{UserGroupInformation#getCurrentUser()}} (which is {{synchronized}}). This 
makes it efficient for RPC operations (the bulk) so that there is not too much 
contention.

In NNThroughputBenchmark, however, there is no RPC call since we bypass that 
later, so with a high thread count many of the threads are getting stuck. At 
one point I attached a profiler and found that quite a few threads had been 
waiting for {{#getCurrentUser()}} for 2 minutes (!). When taking this away I 
found some improvement in the throughput numbers I was seeing. To more closely 
emulate a real NN we should improve this issue.


> NNThroughputBenchmark threads get stuck on UGI.getCurrentUser()
> ---
>
> Key: HDFS-12533
> URL: https://issues.apache.org/jira/browse/HDFS-12533
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Erik Krogen
>
> In {{NameNode#getRemoteUser()}}, it first attempts to fetch from the RPC user 
> (not a synchronized operation), and if there is no RPC call, it will call 
> {{UserGroupInformation#getCurrentUser()}} (which is {{synchronized}}). This 
> makes it efficient for RPC operations (the bulk) so that there is not too 
> much contention.
> In NNThroughputBenchmark, however, there is no RPC call since we bypass that 
> later, so with a high thread count many of the threads are getting stuck. At 
> one point I attached a profiler and found that quite a few threads had been 
> waiting for {{#getCurrentUser()}} for 2 minutes ( ! ). When taking this away 
> I found some improvement in the throughput numbers I was seeing. To more 
> closely emulate a real NN we should improve this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12533) NNThroughputBenchmark threads get stuck on UGI.getCurrentUser()

2017-09-22 Thread Erik Krogen (JIRA)
Erik Krogen created HDFS-12533:
--

 Summary: NNThroughputBenchmark threads get stuck on 
UGI.getCurrentUser()
 Key: HDFS-12533
 URL: https://issues.apache.org/jira/browse/HDFS-12533
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Erik Krogen


In {{NameNode#getRemoteUser()}}, it first attempts to fetch from the RPC user 
(not a synchronized operation), and if there is no RPC call, it will call 
{{UserGroupInformation#getCurrentUser()}} (which is {{synchronized}}). This 
makes it efficient for RPC operations (the bulk) so that there is not too much 
contention.

In NNThroughputBenchmark, however, there is no RPC call since we bypass that 
later, so with a high thread count many of the threads are getting stuck. At 
one point I attached a profiler and found that quite a few threads had been 
waiting for {{#getCurrentUser()}} for 2 minutes (!). When taking this away I 
found some improvement in the throughput numbers I was seeing. To more closely 
emulate a real NN we should improve this issue.
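
(For illustration, a minimal sketch of the lookup order described above, simplified rather than the literal NameNode code:)
{code}
import java.io.IOException;

import org.apache.hadoop.ipc.Server;
import org.apache.hadoop.security.UserGroupInformation;

public final class RemoteUserLookup {
  /**
   * Fast path: the RPC layer knows the caller and no lock is taken.
   * Slow path: UserGroupInformation#getCurrentUser() is synchronized, so
   * non-RPC callers (such as the benchmark threads) can pile up on it.
   */
  static UserGroupInformation getRemoteUser() throws IOException {
    UserGroupInformation ugi = Server.getRemoteUser();
    return (ugi != null) ? ugi : UserGroupInformation.getCurrentUser();
  }
}
{code}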



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12532) DN Reg can Fail when principal doesn't contain hostname and floatingIP is configured.

2017-09-22 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177211#comment-16177211
 ] 

Daryn Sharp commented on HDFS-12532:


For starters, the exception has nothing to do with kerberos.  You will get the 
same exception regardless of the security setting.

bq. Configure principal without hostname (i.e hdfs/had...@hadoop.com)
That principal does have a hostname.  Did you mean h...@hadoop.com?

bq. Configure floatingIP
Is floating ip a dynamic dhcp address?  If yes, is this for testing? Anyway, I 
need to understand more about X.Y.Y.1 and X.Y.Y.100.  I'm assuming X.Y.Y.1 is 
localhost/127.0.0.1?  X.Y.Y.100 is the dhcp assigned address?

Here's the problem with your proposal: "getLocalHost" will attempt to resolve 
the system-assigned hostname, which you are assuming will always be in 
/etc/hosts, but it falls back to localhost. So suppose the hostname doesn't 
resolve, intentionally or not; that forces a bind to localhost, which isn't 
going to be able to connect to anything external. That might work well for 
testing on a single-node cluster, where it doesn't matter whether the host uses 
its public interface or localhost to connect to itself, but not for general use.

I'd rather see the NN squawking about unresolvable addresses than DNs silently 
failing to connect because they are binding to localhost.  It would also 
cripple other rpc clients whose servers don't care about dns at all.

I'm also not keen on adding more confs without a clearly stated use case.
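
(To make the failure mode concrete, a self-contained sketch using plain JDK calls; this is illustration only, not anything from HDFS:)
{code}
import java.net.InetAddress;
import java.net.ServerSocket;

public class LoopbackBindExample {
  public static void main(String[] args) throws Exception {
    // Stand-in for the fallback described above: when the local hostname cannot
    // be resolved, the proposed logic would end up binding to the loopback address.
    InetAddress addr = InetAddress.getLoopbackAddress();
    try (ServerSocket server = new ServerSocket(0, 50, addr)) {
      // Only clients on the same host can reach this socket, so remote DataNodes
      // would silently fail to connect.
      System.out.println("Bound to " + server.getLocalSocketAddress());
    }
  }
}
{code}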

> DN Reg can Fail when principal doesn't contain hostname and floatingIP is 
> configured.
> -
>
> Key: HDFS-12532
> URL: https://issues.apache.org/jira/browse/HDFS-12532
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>
> Configure principal without hostname (i.e hdfs/had...@hadoop.com)
> Configure floatingIP
> Start Cluster.
> Here DN will fail to register as it can take IP which is not in "/etc/hosts".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12420) Disable Namenode format for prod clusters when data already exists

2017-09-22 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177204#comment-16177204
 ] 

Arpit Agarwal commented on HDFS-12420:
--

Couple of comments:
# The setting description looks incorrect in hdfs-default.xml.
# In testNNFormatSuccess test case, the following lines are redundant:
{code}
FSNamesystem fsn = FSNamesystem.loadFromDisk(config);

doAnEdit(fsn, 1);
doAnEdit(fsn, 2);
assertEquals(3, fsn.getEditLog().getLastWrittenTxId());

config.setBoolean(DFSConfigKeys.DFS_REFORMAT_ENABLED, false);
DFSTestUtil.formatNameNode(config);
{code}
Instead we can disable reformat before the first format attempt; this will 
become the regression test (see the sketch after this list).
# We don't need to call {{deleteNameDirs}} since the {{@After}} method does 
that.
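
A hypothetical sketch of that simplification, reusing the identifiers from the snippet above (illustrative, not the actual patch):
{code}
// Disable reformat up front; the very first format on empty name dirs should
// still succeed, which makes this the regression test for the new flag.
config.setBoolean(DFSConfigKeys.DFS_REFORMAT_ENABLED, false);
DFSTestUtil.formatNameNode(config);
FSNamesystem fsn = FSNamesystem.loadFromDisk(config);
assertNotNull(fsn);
{code}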

> Disable Namenode format for prod clusters when data already exists
> --
>
> Key: HDFS-12420
> URL: https://issues.apache.org/jira/browse/HDFS-12420
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch, 
> HDFS-12420.03.patch, HDFS-12420.04.patch, HDFS-12420.05.patch, 
> HDFS-12420.06.patch, HDFS-12420.07.patch, HDFS-12420.08.patch
>
>
> Support for disabling NameNode format to avoid accidental formatting of 
> Namenode in production cluster. If someone really wants to delete the 
> complete fsImage, they can first delete the metadata dir and then run {code} 
> hdfs namenode -format{code} manually.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12516) Suppress the fsnamesystem lock warning on nn startup

2017-09-22 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177187#comment-16177187
 ] 

Arpit Agarwal commented on HDFS-12516:
--

+1 pending Jenkins.

> Suppress the fsnamesystem lock warning on nn startup
> 
>
> Key: HDFS-12516
> URL: https://issues.apache.org/jira/browse/HDFS-12516
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12516.01.patch, HDFS-12516.02.patch
>
>
> Whenever FsNameSystemLock is held for more than configured value of 
> {{dfs.namenode.write-lock-reporting-threshold-ms}}, we log stacktrace and an 
> entry in metrics. Loading FSImage from disk will usually cross this 
> threshold. We can suppress this FsNamesystem lock warning on NameNode startup.
> {code}
> 17/09/20 21:41:39 INFO namenode.FSNamesystem: FSNamesystem write lock held 
> for 7159 ms via
> java.lang.Thread.getStackTrace(Thread.java:1552)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1659)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1074)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:703)
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:688)
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:752)
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:992)
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:976)
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701)
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769)
> Number of suppressed write-lock reports: 0
> Longest write-lock held interval: 7159
> {code}
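
(For context, a minimal sketch of the suppression idea described above; the extra flag on writeUnlock is hypothetical and the attached patch may take a different shape:)
{code}
// Hypothetical: let the startup path tell writeUnlock() not to report the long
// hold, since loading the FSImage is expected to exceed the threshold.
writeLock();
try {
  loadFSImage(startOpt);
} finally {
  writeUnlock("loadFSImage", true /* suppressWriteLockReport, hypothetical flag */);
}
{code}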



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12455) WebHDFS - ListStatus query does not provide any information about a folder's "snapshot enabled" status

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177180#comment-16177180
 ] 

Hadoop QA commented on HDFS-12455:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
50s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
21s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
1s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}189m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestINodeFile |
|   | hadoop.hdfs.server.namenode.TestNameNodeXAttr |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
|   | hadoop.hdfs.TestEncryptionZones |
|   | hadoop.hdfs.server.namenode.TestFileContextAcl |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.security.TestPermissionSymlinks |
|   | hadoop.fs.TestSymlinkHdfsFileContext |
|   | hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.fs.TestSymlinkHdfsFileSystem |
|   | hadoop.hdfs.server.namenode.TestCheckpoint |
|   | hadoop.hdfs.TestListFilesInFileContext |
|   | 

[jira] [Commented] (HDFS-12516) Suppress the fsnamesystem lock warning on nn startup

2017-09-22 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177167#comment-16177167
 ] 

Ajay Kumar commented on HDFS-12516:
---

[~arpitagarwal] thanks for the review. Patch v2 has the suggested changes.

> Suppress the fsnamesystem lock warning on nn startup
> 
>
> Key: HDFS-12516
> URL: https://issues.apache.org/jira/browse/HDFS-12516
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12516.01.patch, HDFS-12516.02.patch
>
>
> Whenever FsNameSystemLock is held for more than configured value of 
> {{dfs.namenode.write-lock-reporting-threshold-ms}}, we log stacktrace and an 
> entry in metrics. Loading FSImage from disk will usually cross this 
> threshold. We can suppress this FsNamesystem lock warning on NameNode startup.
> {code}
> 17/09/20 21:41:39 INFO namenode.FSNamesystem: FSNamesystem write lock held 
> for 7159 ms via
> java.lang.Thread.getStackTrace(Thread.java:1552)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1659)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1074)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:703)
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:688)
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:752)
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:992)
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:976)
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701)
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769)
> Number of suppressed write-lock reports: 0
> Longest write-lock held interval: 7159
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12516) Suppress the fsnamesystem lock warning on nn startup

2017-09-22 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12516:
--
Attachment: HDFS-12516.02.patch

> Suppress the fsnamesystem lock warning on nn startup
> 
>
> Key: HDFS-12516
> URL: https://issues.apache.org/jira/browse/HDFS-12516
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12516.01.patch, HDFS-12516.02.patch
>
>
> Whenever FsNameSystemLock is held for more than configured value of 
> {{dfs.namenode.write-lock-reporting-threshold-ms}}, we log stacktrace and an 
> entry in metrics. Loading FSImage from disk will usually cross this 
> threshold. We can suppress this FsNamesystem lock warning on NameNode startup.
> {code}
> 17/09/20 21:41:39 INFO namenode.FSNamesystem: FSNamesystem write lock held 
> for 7159 ms via
> java.lang.Thread.getStackTrace(Thread.java:1552)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1659)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1074)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:703)
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:688)
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:752)
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:992)
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:976)
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701)
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769)
> Number of suppressed write-lock reports: 0
> Longest write-lock held interval: 7159
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12064) Reuse object mapper in HDFS

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177147#comment-16177147
 ] 

Hadoop QA commented on HDFS-12064:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 3 new + 29 unchanged - 0 fixed = 32 total (was 29) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12064 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888551/HDFS-12064.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bafa8f9f377e 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 08fca50 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21306/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21306/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21306/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21306/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Reuse object mapper in HDFS
> ---
>
> Key: HDFS-12064
> URL: 

[jira] [Updated] (HDFS-12516) Suppress the fsnamesystem lock warning on nn startup

2017-09-22 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-12516:
-
Description: 
Whenever FsNameSystemLock is held for more than configured value of 
{{dfs.namenode.write-lock-reporting-threshold-ms}}, we log stacktrace and an 
entry in metrics. Loading FSImage from disk will usually cross this threshold. 
We can suppress this FsNamesystem lock warning on NameNode startup.
{code}
17/09/20 21:41:39 INFO namenode.FSNamesystem: FSNamesystem write lock held for 
7159 ms via
java.lang.Thread.getStackTrace(Thread.java:1552)
org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1659)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1074)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:703)
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:688)
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:752)
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:992)
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:976)
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701)
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769)
Number of suppressed write-lock reports: 0
Longest write-lock held interval: 7159
{code}


  was:
Whenever FsNameSystemLock is held for more than configured value of 
{{dfs.lock.suppress.warning.interval}}, we log stacktrace and an entry in 
metrics. Loading FSImage from disk will usually cross this threshold. We can 
suppress this FsNamesystem lock warning on NameNode startup.
{code}
17/09/20 21:41:39 INFO namenode.FSNamesystem: FSNamesystem write lock held for 
7159 ms via
java.lang.Thread.getStackTrace(Thread.java:1552)
org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1659)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1074)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:703)
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:688)
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:752)
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:992)
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:976)
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701)
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769)
Number of suppressed write-lock reports: 0
Longest write-lock held interval: 7159
{code}



> Suppress the fsnamesystem lock warning on nn startup
> 
>
> Key: HDFS-12516
> URL: https://issues.apache.org/jira/browse/HDFS-12516
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12516.01.patch
>
>
> Whenever FsNameSystemLock is held for more than configured value of 
> {{dfs.namenode.write-lock-reporting-threshold-ms}}, we log stacktrace and an 
> entry in metrics. Loading FSImage from disk will usually cross this 
> threshold. We can suppress this FsNamesystem lock warning on NameNode startup.
> {code}
> 17/09/20 21:41:39 INFO namenode.FSNamesystem: FSNamesystem write lock held 
> for 7159 ms via
> java.lang.Thread.getStackTrace(Thread.java:1552)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1659)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1074)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:703)
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:688)
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:752)
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:992)
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:976)
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701)
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769)
> Number of suppressed write-lock reports: 0
> Longest write-lock held interval: 7159
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12529) get source for config tags from file name

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177146#comment-16177146
 ] 

Hadoop QA commented on HDFS-12529:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 2 new + 252 unchanged - 0 fixed = 254 total (was 252) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  0s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
30s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.conf.TestConfiguration |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12529 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888550/HDFS-12529.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f85df7ea8597 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 08fca50 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21307/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21307/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21307/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21307/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21307/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> get source for config tags from file name
> -
>
> 

[jira] [Commented] (HDFS-12527) javadoc: error - class file for org.apache.http.annotation.ThreadSafe not found

2017-09-22 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177144#comment-16177144
 ] 

Anu Engineer commented on HDFS-12527:
-

Thanks for root-causing this, care to post a patch?

> javadoc: error - class file for org.apache.http.annotation.ThreadSafe not 
> found
> ---
>
> Key: HDFS-12527
> URL: https://issues.apache.org/jira/browse/HDFS-12527
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Mukul Kumar Singh
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:2.10.4:jar (module-javadocs) on 
> project hadoop-hdfs-client: MavenReportException: Error while generating 
> Javadoc: 
> [ERROR] Exit code: 1 - 
> /Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java:694:
>  warning - Tag @link: reference not found: StripingCell
> [ERROR] javadoc: error - class file for org.apache.http.annotation.ThreadSafe 
> not found
> [ERROR] 
> [ERROR] Command line was: 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home/jre/../bin/javadoc
>  -J-Xmx768m @options @packages
> [ERROR] 
> [ERROR] Refer to the generated Javadoc files in 
> '/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs-client/target/api' 
> dir.
> {code}
> To reproduce the error above, run
> {code}
> mvn package -Pdist -DskipTests -DskipDocs -Dtar
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12527) javadoc: error - class file for org.apache.http.annotation.ThreadSafe not found

2017-09-22 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177140#comment-16177140
 ] 

Elek, Marton commented on HDFS-12527:
-

The problem is that the httpclient and httpcore versions are incompatible.

We have two version definitions in hadoop-project/pom.xml:
 
{code}
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpclient</artifactId>
  <version>4.5.2</version>
</dependency>
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpcore</artifactId>
  <version>4.4.4</version>
</dependency>
{code}

The problem is that the second one is a dependency of the first one; we should 
define for the second one exactly the version that the first one uses.

Both versions were bumped (HADOOP-14654 and HADOOP-14655), but HADOOP-14655 was 
reverted. Now we use an httpcore version that is not the one httpclient 4.5.2 
depends on by default, so the two jiras should be applied together or reverted 
together.

> javadoc: error - class file for org.apache.http.annotation.ThreadSafe not 
> found
> ---
>
> Key: HDFS-12527
> URL: https://issues.apache.org/jira/browse/HDFS-12527
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Mukul Kumar Singh
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:2.10.4:jar (module-javadocs) on 
> project hadoop-hdfs-client: MavenReportException: Error while generating 
> Javadoc: 
> [ERROR] Exit code: 1 - 
> /Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java:694:
>  warning - Tag @link: reference not found: StripingCell
> [ERROR] javadoc: error - class file for org.apache.http.annotation.ThreadSafe 
> not found
> [ERROR] 
> [ERROR] Command line was: 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home/jre/../bin/javadoc
>  -J-Xmx768m @options @packages
> [ERROR] 
> [ERROR] Refer to the generated Javadoc files in 
> '/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs-client/target/api' 
> dir.
> {code}
> To reproduce the error above, run
> {code}
> mvn package -Pdist -DskipTests -DskipDocs -Dtar
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12387) Ozone: Support Ratis as a first class replication mechanism

2017-09-22 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177120#comment-16177120
 ] 

Anu Engineer commented on HDFS-12387:
-

I have also filed issues HDFS-12519, HDFS-12520, HDFS-12521, HDFS-12522 to 
track some of the issues that came out during code review.

> Ozone: Support Ratis as a first class replication mechanism
> ---
>
> Key: HDFS-12387
> URL: https://issues.apache.org/jira/browse/HDFS-12387
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Critical
>  Labels: ozoneMerge
> Attachments: HDFS-12387-HDFS-7240.001.patch, 
> HDFS-12387-HDFS-7240.002.patch
>
>
> Ozone container layer supports pluggable replication policies. This JIRA 
> brings Apache Ratis based replication to Ozone.  Apache Ratis is a java 
> implementation of Raft protocol.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12387) Ozone: Support Ratis as a first class replication mechanism

2017-09-22 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-12387:

Attachment: HDFS-12387-HDFS-7240.002.patch

[~xyao] and [~nandakumar131] Thanks for the review comments. I have updated the 
patch to address most comments and rebased it to the latest tree.

bq. ContainerState Change
Had an offline chat with Nanda and moving this to another JIRA since the fix is 
large and complex.

bq. PipelineSelector#newPipelineFromNodes
At the start of the function I have added:
{code}
  Preconditions.checkNotNull(nodes);
  Preconditions.checkArgument(nodes.size() > 0);
 {code}
 Please let me know if you had anything else in mind.


bq. ChunkGroupOutputStream.java: Line 293: comment does not apply
Fixed.

bq. XceiverClientRatis.java Document and add log
Fixed.



bq. ContainerOperationClient.java: Line 98: can you add a TODO to update the 
Pipeline state from ALLOCATED to OPEN
Fixed.

bq. Line 162: NIT: commentlog can be removed. 
Fixed.

bq. Line 163: NIT:  should we change to debug log or remove it?
Fixed.

bq. Line 97/201: how do we handle if the pipeline state is not in either ALLOCATED 
or OPEN state? Can we add a precondition assert here to ensure the pipeline is 
in OPEN state?
Fixed.

 
bq. Line 260/299: we should add {{if (LOG.isDebugEnabled()) }}.
Fixed.

 
 
BlockContainerInfo.java
 
bq. Line 28: class documentation needs to be updated; the name of the class itself 
may also need to be updated, as the ContainerInfo is not only for the Block svc 
anymore with this change.
Fixed the class documentation.
 
bq. Line 64: NIT: "last used time"
Fixed.
bq. Line 80/82: the exceptions are not declared on Line 86
Both of them are RuntimeExceptions, so I am copying the class 
comments from the parent. I can remove them if you wish.

 
Pipeline.java
 
bq. Line 62: should we remove containerName from the Pipeline? This can be 
handled in a separate ticket. My understanding is that we will need a pipeline 
name for the container like we defined in Ozone.proto.
Eventually yes, I have left a comment to that effect in ozone.proto. If I 
remove it now, the code churn is too much, so I would like to do it in another JIRA.
 
bq. Line 65/251: lifeCycleStates -> lifeCycleState
Thanks, Fixed.

 
bq. BlockManagerImpl.java:Line 200: can we update the javadoc with the new 
parameters?
Thanks, Fixed.


bq. Line 252 The size/usage information is not being persisted when 
allocateBlock via BlockManager. Can we add an API on SCM to allow update the 
container usage info? For now, we can still let block manager to call this API 
to update usage before the container report is available. I found the TODO in 
ContainerMapping to update the size via container report. With that, we still 
could have over provision problem wrt. Container size. 
Let us tackle this after we have the container report. That will give us a 
better idea about what to do here.
 
 
bq. With the new container selection algorithm, how do we handle more than 20 
clients allocate block at the same time? We may not want to hand over the same 
ALLOCATED containers to different clients for them to create on the DNs? 

Great question: we need a container lease manager, which will be added 
later. The code does not regress from the current state, since we have always 
used this workaround in the client path.


bq. Otherwise, clients will need to compete for the container creation on DNs 
with some fails and retry? I see the patch change the default container 
provision size from 5 to 20. This can mitigate the above issue to some extent 
but cannot solve it completely. As we discussed offline, we might need an 
additional state to indicate that an ALLOCATED container has been assigned so 
that it will not be handed over to other clients. 
Completely agree. I will fix this issue in a follow-up patch and file a 
JIRA.


bq. Line 305: can we add WARN or ERROR when the getMachines.size() == 0
Fixed.
 
bq. Line 408-413: can we update the API to support getting OPEN containers with type 
and replication factor as parameters?
This can be done in a follow-up JIRA; I will file a JIRA for this.

 
ContainerMapping.java
Line 251: blockreports-> container report
 
bq. ContainerMapping#updateContainerState needs to be updated (missing in the 
change)
It should update the in memory map via 
containerStateManager#updateContainerState
Not sure I understand this comment clearly; skipping for now.
 
 
bq. ContainerStateManager.java Line 141: we will need to pass the 
containerStore from ContainerMapping so that the ContainerStateManager can load 
the containers from persisted store after reboot.
Filed a JIRA to track this issue.

 
bq. Line 161: NIT: can we change this OZONE_SCM_BLOCK_SIZE_KEY to 
OZONE_SCM_BLOCK_SIZE_MB
Fixed.

bq. Line 193: ContainerKey contains owner,type, replicationFactor and state.  
Will the look up of container by name with the same container key using 
priority 

[jira] [Updated] (HDFS-5040) Audit log for admin commands/ logging output of all DFS admin commands

2017-09-22 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated HDFS-5040:
--
Attachment: HDFS-5040.009.patch

Added audit logging to getDatanodeStorageReport() and added a test as well. 
Just one operationName for all kinds of reports. Let me know if that looks ok. 
Thanks a lot for the comments/review.
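
(Roughly what that could look like, shown here only as an illustration rather than a quote from the patch:)
{code}
// A single operation name shared by all report variants, logged on success.
final String operationName = "datanodeReport";
DatanodeStorageReport[] reports = getDatanodeStorageReport(type);
logAuditEvent(true, operationName, null);
{code}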

> Audit log for admin commands/ logging output of all DFS admin commands
> --
>
> Key: HDFS-5040
> URL: https://issues.apache.org/jira/browse/HDFS-5040
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Raghu C Doppalapudi
>Assignee: Kuhu Shukla
>  Labels: BB2015-05-TBR
> Attachments: HDFS-5040.001.patch, HDFS-5040.004.patch, 
> HDFS-5040.005.patch, HDFS-5040.006.patch, HDFS-5040.007.patch, 
> HDFS-5040.008.patch, HDFS-5040.009.patch, HDFS-5040.patch, HDFS-5040.patch, 
> HDFS-5040.patch
>
>
> enable audit log for all the admin commands/also provide ability to log all 
> the admin commands in separate log file, at this point all the logging is 
> displayed on the console.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12482) Provide a configuration to adjust the weight of EC recovery tasks to adjust the speed of recovery

2017-09-22 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12482:
-
Attachment: HDFS-12482.00.patch

Add a weight to adjust xmits for performance.
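
For illustration, one possible shape of such a weight; the config key and default below are made up for this sketch, not taken from the patch:
{code}
// Scale EC reconstruction xmits by a configurable coefficient so an EC task no
// longer always counts as max(# of sources, # of targets) transfers.
static int weightedXmits(Configuration conf, int numSources, int numTargets) {
  float weight = conf.getFloat("dfs.datanode.ec.reconstruction.xmits.weight", 0.5f);
  return (int) Math.ceil(weight * Math.max(numSources, numTargets));
}
{code}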

> Provide a configuration to adjust the weight of EC recovery tasks to adjust 
> the speed of recovery
> -
>
> Key: HDFS-12482
> URL: https://issues.apache.org/jira/browse/HDFS-12482
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12482.00.patch
>
>
> The relative speed of EC recovery compared to 3x replica recovery is a 
> function of the EC codec, number of sources, NIC speed, CPU speed, etc. 
> Currently EC recovery has a fixed {{xmitsInProgress}} of {{max(# of 
> sources, # of targets)}}, compared to {{1}} for 3x replica recovery, and the 
> NN uses {{xmitsInProgress}} to decide how many recovery tasks to schedule to 
> a DataNode, so we can add a coefficient for users to tune the weight of EC 
> recovery tasks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12482) Provide a configuration to adjust the weight of EC recovery tasks to adjust the speed of recovery

2017-09-22 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12482:
-
Status: Patch Available  (was: Open)

> Provide a configuration to adjust the weight of EC recovery tasks to adjust 
> the speed of recovery
> -
>
> Key: HDFS-12482
> URL: https://issues.apache.org/jira/browse/HDFS-12482
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-12482.00.patch
>
>
> The relative speed of EC recovery compared to 3x replica recovery is a 
> function of the EC codec, number of sources, NIC speed, CPU speed, etc. 
> Currently EC recovery has a fixed {{xmitsInProgress}} of {{max(# of 
> sources, # of targets)}}, compared to {{1}} for 3x replica recovery, and the 
> NN uses {{xmitsInProgress}} to decide how many recovery tasks to schedule to 
> a DataNode, so we can add a coefficient for users to tune the weight of EC 
> recovery tasks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12525) Ozone: OzoneClient: Verify bucket/volume name in create calls

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177064#comment-16177064
 ] 

Hadoop QA commented on HDFS-12525:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 0s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
47s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in HDFS-7240 
has 1 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-hdfs-client in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-hdfs in HDFS-7240 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
28s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m  6s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}128m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12525 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888534/HDFS-12525-HDFS-7240.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d0def635bdc2 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 0880d5e |
| Default 

[jira] [Updated] (HDFS-12529) get source for config tags from file name

2017-09-22 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12529:
--
Status: Patch Available  (was: Open)

> get source for config tags from file name
> -
>
> Key: HDFS-12529
> URL: https://issues.apache.org/jira/browse/HDFS-12529
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12529.01.patch
>
>
> For tagging related properties together use resource name as source. 
> Currently it assumes source is configured in xml itself.
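
(A small illustration of the existing source-tracking behaviour this builds on, using the stock Configuration API; this is not the patch itself:)
{code}
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;

public class ConfigSourceExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.addResource("core-site.xml");
    // Each property already records the resource(s) that set it; the proposal is
    // to reuse that file name as the "source" for tagged properties instead of
    // expecting a source to be configured inside the XML.
    System.out.println(Arrays.toString(conf.getPropertySources("fs.defaultFS")));
  }
}
{code}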



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12381) [Documentation] Adding configuration keys for the Router

2017-09-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177036#comment-16177036
 ] 

Íñigo Goiri commented on HDFS-12381:


Committed. Thanks [~brahmareddy], [~chris.douglas] and [~manojg] for the 
comments and review!

> [Documentation] Adding configuration keys for the Router
> 
>
> Key: HDFS-12381
> URL: https://issues.apache.org/jira/browse/HDFS-12381
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: HDFS-10467
>
> Attachments: HDFS-12381-HDFS-10467.000.patch, 
> HDFS-12381-HDFS-10467.001.patch, HDFS-12381-HDFS-10467.002.patch, 
> HDFS-12381-HDFS-10467.003.patch
>
>
> Adding configuration options in tabular format.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12381) [Documentation] Adding configuration keys for the Router

2017-09-22 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12381:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> [Documentation] Adding configuration keys for the Router
> 
>
> Key: HDFS-12381
> URL: https://issues.apache.org/jira/browse/HDFS-12381
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Minor
> Fix For: HDFS-10467
>
> Attachments: HDFS-12381-HDFS-10467.000.patch, 
> HDFS-12381-HDFS-10467.001.patch, HDFS-12381-HDFS-10467.002.patch, 
> HDFS-12381-HDFS-10467.003.patch
>
>
> Adding configuration options in tabular format.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12064) Reuse object mapper in HDFS

2017-09-22 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12064:
--
Status: Patch Available  (was: Open)

> Reuse object mapper in HDFS
> ---
>
> Key: HDFS-12064
> URL: https://issues.apache.org/jira/browse/HDFS-12064
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Mingliang Liu
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HDFS-12064.001.patch
>
>
> Currently there are a few places that are not following the recommended 
> pattern of using object mapper - reuse if possible. Actually we can use 
> {{ObjectReader}} or {{ObjectWriter}} to replace the object mapper in some 
> places: they are straightforward and thread safe.
> The benefit is all about performance, so in unit testing code I assume we 
> don't have to worry too much.
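
(A minimal sketch of the reuse pattern being suggested, assuming Jackson 2.x; the class and method names are illustrative, not the attached patch:)
{code}
import java.io.IOException;
import java.util.Map;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectReader;
import com.fasterxml.jackson.databind.ObjectWriter;

public final class JsonSupport {
  // Built once and shared: ObjectReader/ObjectWriter are immutable and
  // thread-safe, so callers no longer construct an ObjectMapper per request.
  private static final ObjectReader MAP_READER =
      new ObjectMapper().readerFor(Map.class);
  private static final ObjectWriter WRITER = new ObjectMapper().writer();

  static Map<?, ?> toMap(String json) throws IOException {
    return MAP_READER.readValue(json);
  }

  static String toJson(Object value) throws IOException {
    return WRITER.writeValueAsString(value);
  }
}
{code}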



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12064) Reuse object mapper in HDFS

2017-09-22 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12064:
--
Attachment: HDFS-12064.001.patch

> Reuse object mapper in HDFS
> ---
>
> Key: HDFS-12064
> URL: https://issues.apache.org/jira/browse/HDFS-12064
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Mingliang Liu
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HDFS-12064.001.patch
>
>
> Currently there are a few places that are not following the recommended 
> pattern of using object mapper - reuse if possible. Actually we can use 
> {{ObjectReader}} or {{ObjectWriter}} to replace the object mapper in some 
> places: they are straightforward and thread safe.
> The benefit is all about performance, so in unit testing code I assume we 
> don't have to worry too much.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12529) get source for config tags from file name

2017-09-22 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12529:
--
Attachment: HDFS-12529.01.patch

> get source for config tags from file name
> -
>
> Key: HDFS-12529
> URL: https://issues.apache.org/jira/browse/HDFS-12529
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12529.01.patch
>
>
> For tagging related properties together, use the resource name as the source. 
> Currently it assumes the source is configured in the xml itself.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12455) WebHDFS - ListStatus query does not provide any information about a folder's "snapshot enabled" status

2017-09-22 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12455:
--
Status: Patch Available  (was: Open)

> WebHDFS - ListStatus query does not provide any information about a folder's 
> "snapshot enabled" status
> --
>
> Key: HDFS-12455
> URL: https://issues.apache.org/jira/browse/HDFS-12455
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12455.01.patch
>
>
> WebHDFS - ListStatus query does not provide any information about a folder's 
> "snapshot enabled" status. Since "ListStatus" lists other attributes, it would 
> be good to include this attribute as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12455) WebHDFS - ListStatus query does not provide any information about a folder's "snapshot enabled" status

2017-09-22 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDFS-12455:
--
Attachment: HDFS-12455.01.patch

> WebHDFS - ListStatus query does not provide any information about a folder's 
> "snapshot enabled" status
> --
>
> Key: HDFS-12455
> URL: https://issues.apache.org/jira/browse/HDFS-12455
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12455.01.patch
>
>
> WebHDFS - ListStatus query does not provide any information about a folder's 
> "snapshot enabled" status. Since "ListStatus" lists other attributes, it would 
> be good to include this attribute as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12525) Ozone: OzoneClient: Verify bucket/volume name in create calls

2017-09-22 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-12525:
--
Attachment: HDFS-12525-HDFS-7240.000.patch

Uploading patch to re-trigger build.

> Ozone: OzoneClient: Verify bucket/volume name in create calls
> -
>
> Key: HDFS-12525
> URL: https://issues.apache.org/jira/browse/HDFS-12525
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Nandakumar
>Assignee: Nandakumar
>  Labels: ozoneMerge
> Attachments: HDFS-12525-HDFS-7240.000.patch, 
> HDFS-12525-HDFS-7240.000.patch
>
>
> The new OzoneClient API has to verify the bucket/volume name during the 
> creation call. Volume/bucket names shouldn't support any special characters 
> other than {{.}} and {{-}}.
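
As a rough illustration of such a check (a hedged sketch only: it assumes a 
simple regex rule and a hypothetical class name, whereas the real Ozone 
validation also covers length limits, first/last characters, etc.):

{code:java}
import java.util.regex.Pattern;

public final class ResourceNameCheck {
  // Illustrative rule: lowercase letters, digits, '.' and '-' only.
  private static final Pattern ALLOWED = Pattern.compile("[a-z0-9.\\-]+");

  static void verifyResourceName(String name) {
    if (name == null || !ALLOWED.matcher(name).matches()) {
      throw new IllegalArgumentException(
          "Bucket or volume name contains unsupported characters: " + name);
    }
  }

  public static void main(String[] args) {
    verifyResourceName("my-bucket.01");   // accepted
    verifyResourceName("My_Bucket!");     // throws IllegalArgumentException
  }
}
{code}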



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12516) Suppress the fsnamesystem lock warning on nn startup

2017-09-22 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176755#comment-16176755
 ] 

Arpit Agarwal commented on HDFS-12516:
--

The change looks great! Minor comments:
# The method {{writeUnlockFS(String opName, boolean suppressWriteLockReport)}} 
can be combined into writeUnlock, as there is no other caller.
# We can update the check in writeUnlockFS so that suppressWriteLockReport is 
rolled into needReport, e.g.
{code}
final boolean needReport =
    !suppressWriteLockReport &&
    coarseLock.getWriteHoldCount() == 1 &&
    coarseLock.isWriteLockedByCurrentThread();
{code}

Also, the test case seems to pass even without the suppressWriteLockReport check 
in writeUnlockFS; I didn't debug it further.
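
To make the suggestion concrete, a self-contained sketch (not the actual 
FSNamesystem code; field and method names are illustrative) of how the 
suppress flag could be folded into a single writeUnlock:

{code:java}
import java.util.concurrent.locks.ReentrantReadWriteLock;

class SuppressibleWriteLock {
  private final ReentrantReadWriteLock coarseLock =
      new ReentrantReadWriteLock(true);

  void writeLock() {
    coarseLock.writeLock().lock();
  }

  void writeUnlock(String opName, boolean suppressWriteLockReport) {
    // suppressWriteLockReport is rolled into needReport, as suggested above.
    final boolean needReport =
        !suppressWriteLockReport
            && coarseLock.getWriteHoldCount() == 1
            && coarseLock.isWriteLockedByCurrentThread();
    coarseLock.writeLock().unlock();
    if (needReport) {
      // In FSNamesystem this is where the long-hold warning and metrics
      // update would happen for opName; a plain print stands in for it here.
      System.out.println("released write lock for op: " + opName);
    }
  }
}
{code}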

> Suppress the fsnamesystem lock warning on nn startup
> 
>
> Key: HDFS-12516
> URL: https://issues.apache.org/jira/browse/HDFS-12516
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12516.01.patch
>
>
> Whenever FsNameSystemLock is held for more than the configured value of 
> {{dfs.lock.suppress.warning.interval}}, we log a stacktrace and an entry in 
> metrics. Loading the FSImage from disk will usually cross this threshold. We 
> can suppress this FsNamesystem lock warning on NameNode startup.
> {code}
> 17/09/20 21:41:39 INFO namenode.FSNamesystem: FSNamesystem write lock held 
> for 7159 ms via
> java.lang.Thread.getStackTrace(Thread.java:1552)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1659)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1074)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:703)
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:688)
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:752)
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:992)
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:976)
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701)
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769)
> Number of suppressed write-lock reports: 0
> Longest write-lock held interval: 7159
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12530) Processor argument in Offline Image Viewer should be case insensitive

2017-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176727#comment-16176727
 ] 

Hudson commented on HDFS-12530:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12947 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12947/])
HDFS-12530. Processor argument in Offline Image Viewer should be case 
insensitive (arp: rev 08fca508e66e8eddc5d8fd1608ec0c9cd54fc990)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/OfflineImageViewerPB.java


> Processor argument in Offline Image Viewer should be case insensitive
> -
>
> Key: HDFS-12530
> URL: https://issues.apache.org/jira/browse/HDFS-12530
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-12530.001.patch, HDFS-12530.002.patch
>
>
> Currently, the processor argument in the Offline Image Viewer (oiv) is case 
> sensitive. For example, it accepts "XML" but does not recognize "xml" as a 
> valid processor argument.
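
As a rough illustration of the fix (not the actual OfflineImageViewerPB code; 
the processor names shown are only examples), normalizing the argument once 
makes the dispatch case-insensitive:

{code:java}
import java.util.Locale;

class ProcessorDispatchExample {
  static String describe(String processor) {
    // Normalize before dispatching so "xml", "Xml" and "XML" all match.
    switch (processor.toUpperCase(Locale.ROOT)) {
      case "XML":
        return "XML processor";
      case "WEB":
        return "WebImageViewer";
      case "DELIMITED":
        return "Delimited processor";
      default:
        throw new IllegalArgumentException("Unknown processor: " + processor);
    }
  }
}
{code}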



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12530) Processor argument in Offline Image Viewer should be case insensitive

2017-09-22 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-12530:
-
Component/s: tools

> Processor argument in Offline Image Viewer should be case insensitive
> -
>
> Key: HDFS-12530
> URL: https://issues.apache.org/jira/browse/HDFS-12530
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-12530.001.patch, HDFS-12530.002.patch
>
>
> Currently, the processor argument in the Offline Image Viewer (oiv) is case 
> sensitive. For example, it accepts "XML" but does not recognize "xml" as a 
> valid processor argument.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12530) Processor argument in Offline Image Viewer should be case insensitive

2017-09-22 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-12530:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-beta1
   2.9.0
   Status: Resolved  (was: Patch Available)

I've committed this. Thanks for the improvement [~hanishakoneru] and thanks 
[~jojochuang] for the review.

> Processor argument in Offline Image Viewer should be case insensitive
> -
>
> Key: HDFS-12530
> URL: https://issues.apache.org/jira/browse/HDFS-12530
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: tools
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HDFS-12530.001.patch, HDFS-12530.002.patch
>
>
> Currently, the processor argument in the Offline Image Viewer (oiv) is case 
> sensitive. For example, it accepts "XML" but does not recognize "xml" as a 
> valid processor argument.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-12498) Journal Syncer is not started in Federated + HA cluster

2017-09-22 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-12498 started by Bharat Viswanadham.
-
> Journal Syncer is not started in Federated + HA cluster
> ---
>
> Key: HDFS-12498
> URL: https://issues.apache.org/jira/browse/HDFS-12498
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: hdfs-site.xml
>
>
> Journal Syncer is not getting started in HDFS + Federated cluster, when 
> dfs.shared.edits.dir.<> is provided, instead of 
> dfs.namenode.shared.edits.dir 
> *Log Snippet:*
> {code:java}
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Could not construct 
> Shared Edits Uri
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Other JournalNode 
> addresses not available. Journal Syncing cannot be done
> 2017-09-19 21:42:40,598 WARN 
> org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Failed to start 
> SyncJournal daemon for journal ns1
> {code}
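
For context, a hedged sketch of the kind of configuration lookup involved (the 
class and fallback logic are illustrative, not the JournalNodeSyncer code; it 
only assumes the standard Hadoop Configuration API, and the suffixed key form 
is an assumption about federated setups):

{code:java}
import org.apache.hadoop.conf.Configuration;

class SharedEditsLookupExample {
  // In a federated cluster the shared-edits key may carry a nameservice
  // suffix, so both the suffixed and the plain key should be consulted.
  static String getSharedEditsDir(Configuration conf, String nsId) {
    String suffixed = conf.get("dfs.namenode.shared.edits.dir." + nsId);
    return suffixed != null
        ? suffixed
        : conf.get("dfs.namenode.shared.edits.dir");
  }
}
{code}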



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12530) Processor argument in Offline Image Viewer should be case insensitive

2017-09-22 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176698#comment-16176698
 ] 

Arpit Agarwal commented on HDFS-12530:
--

+1 I will commit this shortly.

> Processor argument in Offline Image Viewer should be case insensitive
> -
>
> Key: HDFS-12530
> URL: https://issues.apache.org/jira/browse/HDFS-12530
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HDFS-12530.001.patch, HDFS-12530.002.patch
>
>
> Currently, the processor argument in the Offline Image Viewer (oiv) is case 
> sensitive. For example, it accepts "XML" but does not recognize "xml" as a 
> valid processor argument.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12531) Fix conflict in the javadoc of UnderReplicatedBlocks.java in branch-2

2017-09-22 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176678#comment-16176678
 ] 

Bharat Viswanadham commented on HDFS-12531:
---

The test failures are unrelated to these code changes.

> Fix conflict in the javadoc of UnderReplicatedBlocks.java in branch-2
> -
>
> Key: HDFS-12531
> URL: https://issues.apache.org/jira/browse/HDFS-12531
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Akira Ajisaka
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-12531-branch-2.01.patch
>
>
> In HDFS-9205, the following was pushed without fixing conflicts.
> {noformat}
>  * 
>  * The policy for choosing which priority to give added blocks
> <<< HEAD
>  * is implemented in {@link #getPriority(int, int, int)}.
> ===
>  * is implemented in {@link #getPriority(BlockInfo, int, int, int, int)}.
> >>> 5411dc5... HDFS-9205. Do not schedule corrupt blocks for replication. 
> >>>  (szetszwo)
>  * 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9957) HDFS's use of mlock() is not portable

2017-09-22 Thread Jan Kunigk (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176651#comment-16176651
 ] 

Jan Kunigk commented on HDFS-9957:
--

The way I understand this manpage entry is:
Any user application (C binary, JVM) that calls mlock under Linux will get a 
page-aligned buffer.
POSIX says that it is valid for other implementations of mlock (e.g. other UNIX 
systems) not to do that automatically and to leave it as a responsibility of 
the application.

But, given that the Linux implementation's alignment behavior will most likely 
not go away and that HDFS is very unlikely to officially run under any OS other 
than Linux in the near future, I don't really see a problem here.

Also, I don't understand how the mlocked region of a file used by the JVM 
affects the JVM itself. The JVM process itself, including the heap, lives in 
anonymous user-space memory, no? That is, it is not backed by a file (it may be 
swapped, but not paged out to a file). Libraries used by the JVM may indeed 
also be mmapped, but those are different invocations of mmap/mlock.

That's the way I understand it.
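
Purely as an illustration of the alignment point above (simple arithmetic, not 
Hadoop's actual native mlock path), rounding an address down to the page 
boundary looks like this:

{code:java}
final class PageAlignExample {
  // Assumes pageSize is a power of two (e.g. 4096 on most Linux systems).
  static long alignDown(long addr, long pageSize) {
    return addr & ~(pageSize - 1);
  }

  public static void main(String[] args) {
    long pageSize = 4096;
    long addr = 0x7f3a12345678L;
    System.out.printf("0x%x aligns down to 0x%x%n",
        addr, alignDown(addr, pageSize));
  }
}
{code}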

> HDFS's use of mlock() is not portable
> -
>
> Key: HDFS-9957
> URL: https://issues.apache.org/jira/browse/HDFS-9957
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: native
>Affects Versions: 2.7.2
> Environment: Any UNIX system other than Linux
>Reporter: Alan Burlison
>Assignee: Alan Burlison
>
> HDFS uses mlock() to lock in the memory used to back java.nio.Buffer. 
> Unfortunately the way it is done is not standards-compliant. As the Linux 
> manpage for mlock() says:
> {quote}
>Under Linux, mlock(), mlock2(), and munlock() automatically round
>addr down to the nearest page boundary.  However, the POSIX.1
>specification of mlock() and munlock() allows an implementation to
>require that addr is page aligned, so portable applications should
>ensure this.
> {quote}
> The HDFS code does not do any such alignment, nor is it true that the backing 
> buffers for java.nio.Buffer are necessarily page aligned. And even if the 
> address was aligned by the code, it would end up calling mlock() on other 
> random JVM data structures that shared the same page. That seems potentially 
> dangerous.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12516) Suppress the fsnamesystem lock warning on nn startup

2017-09-22 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176647#comment-16176647
 ] 

Ajay Kumar commented on HDFS-12516:
---

Test failures are unrelated. All 3 failed tests pass locally. In Jenkins, 2 
timed out while the 3rd one hit a connection-refused error.

> Suppress the fsnamesystem lock warning on nn startup
> 
>
> Key: HDFS-12516
> URL: https://issues.apache.org/jira/browse/HDFS-12516
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
> Attachments: HDFS-12516.01.patch
>
>
> Whenever FsNameSystemLock is held for more than the configured value of 
> {{dfs.lock.suppress.warning.interval}}, we log a stacktrace and an entry in 
> metrics. Loading the FSImage from disk will usually cross this threshold. We 
> can suppress this FsNamesystem lock warning on NameNode startup.
> {code}
> 17/09/20 21:41:39 INFO namenode.FSNamesystem: FSNamesystem write lock held 
> for 7159 ms via
> java.lang.Thread.getStackTrace(Thread.java:1552)
> org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:945)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1659)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1074)
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:703)
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:688)
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:752)
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:992)
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:976)
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701)
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769)
> Number of suppressed write-lock reports: 0
> Longest write-lock held interval: 7159
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11897) Ozone: KSM: Changing log level for client calls in KSM

2017-09-22 Thread Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandakumar updated HDFS-11897:
--
  Resolution: Fixed
   Fix Version/s: HDFS-7240
Target Version/s: HDFS-7240
  Status: Resolved  (was: Patch Available)

> Ozone: KSM: Changing log level for client calls in KSM
> --
>
> Key: HDFS-11897
> URL: https://issues.apache.org/jira/browse/HDFS-11897
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Nandakumar
>Assignee: Shashikant Banerjee
>  Labels: ozoneMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-11897-HDFS-7240.001.patch, 
> HDFS-11897-HDFS-7240.002.patch, HDFS-11897-HDFS-7240.003.patch
>
>
> Whenever there is no Volume/Bucket/Key found in MetadataDB for a client call, 
> KSM logs an ERROR, which is not necessary. The level of these log messages can 
> be changed to DEBUG, which will be helpful in debugging.
> Changes are to be made in the following classes
> * VolumeManagerImpl
> * BucketManagerImpl
> * KeyManagerImpl
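
A minimal sketch of the kind of change described (the class is illustrative 
and assumes an SLF4J logger, not the exact KSM code):

{code:java}
import java.util.Map;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class VolumeLookupExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(VolumeLookupExample.class);

  byte[] getVolumeInfo(Map<String, byte[]> metadataDb, String volumeName) {
    byte[] value = metadataDb.get(volumeName);
    if (value == null) {
      // Previously logged at ERROR; a missing volume is an expected
      // client-facing condition, so DEBUG is sufficient here.
      LOG.debug("volume not found: {}", volumeName);
      throw new IllegalArgumentException("Volume not found: " + volumeName);
    }
    return value;
  }
}
{code}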



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11897) Ozone: KSM: Changing log level for client calls in KSM

2017-09-22 Thread Nandakumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176564#comment-16176564
 ] 

Nandakumar commented on HDFS-11897:
---

Thanks [~shashikant] for the contribution. I've committed the patch to the 
feature branch.

> Ozone: KSM: Changing log level for client calls in KSM
> --
>
> Key: HDFS-11897
> URL: https://issues.apache.org/jira/browse/HDFS-11897
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Nandakumar
>Assignee: Shashikant Banerjee
>  Labels: ozoneMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-11897-HDFS-7240.001.patch, 
> HDFS-11897-HDFS-7240.002.patch, HDFS-11897-HDFS-7240.003.patch
>
>
> Whenever there is no Volume/Bucket/Key found in MetadataDB for a client call, 
> KSM logs an ERROR, which is not necessary. The level of these log messages can 
> be changed to DEBUG, which will be helpful in debugging.
> Changes are to be made in the following classes
> * VolumeManagerImpl
> * BucketManagerImpl
> * KeyManagerImpl



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12531) Fix conflict in the javadoc of UnderReplicatedBlocks.java in branch-2

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176493#comment-16176493
 ] 

Hadoop QA commented on HDFS-12531:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 18m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
45s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 34 unchanged - 1 fixed = 34 total (was 35) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}602m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  8m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}651m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestGetContentSummaryWithPermission |
|   | hadoop.hdfs.server.namenode.TestEditLogRace |
|   | hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots |
|   | hadoop.hdfs.server.namenode.TestStreamFile |
|   | hadoop.hdfs.TestLocalDFS |
|   | hadoop.hdfs.server.namenode.TestUpgradeDomainBlockPlacementPolicy |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication |
|   | hadoop.hdfs.server.namenode.ha.TestHAMetrics |
|   | hadoop.hdfs.server.namenode.TestAuditLogs |
|   | hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus |
|   | hadoop.hdfs.TestLeaseRecovery |
|   | hadoop.hdfs.TestDataTransferProtocol |
|   | hadoop.hdfs.server.namenode.ha.TestQuotasWithHA |
|   | hadoop.fs.contract.hdfs.TestHDFSContractRename |
|   | hadoop.security.TestRefreshUserMappings |
|   | hadoop.hdfs.TestHFlush |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogsDuringFailover |
|   | hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM |
|   | hadoop.hdfs.TestWriteConfigurationToDFS |
|   | hadoop.hdfs.server.namenode.TestFavoredNodesEndToEnd |
|   | hadoop.hdfs.TestFetchImage |
|   | hadoop.hdfs.server.namenode.TestFSNamesystemMBean |
|   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
|   | hadoop.hdfs.server.namenode.TestSecondaryWebUi |
|   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotRename |
|   

[jira] [Commented] (HDFS-12386) Add fsserver defaults call to WebhdfsFileSystem.

2017-09-22 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176487#comment-16176487
 ] 

Rushabh S Shah commented on HDFS-12386:
---

Ran all the failed tests locally; they pass on my laptop.
{noformat}
---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeUUID
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.06 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeUUID
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.893 sec - 
in org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
Running 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 88.126 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
Running org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 146.28 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
Running org.apache.hadoop.hdfs.server.datanode.TestIncrementalBlockReports
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.545 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestIncrementalBlockReports
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110
Tests run: 16, Failures: 0, Errors: 0, Skipped: 12, Time elapsed: 184.282 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180
Tests run: 16, Failures: 0, Errors: 0, Skipped: 12, Time elapsed: 190.988 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180

Results :

Tests run: 61, Failures: 0, Errors: 0, Skipped: 24
{noformat}

> Add fsserver defaults call to WebhdfsFileSystem.
> 
>
> Key: HDFS-12386
> URL: https://issues.apache.org/jira/browse/HDFS-12386
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: webhdfs
>Reporter: Rushabh S Shah
>Assignee: Rushabh S Shah
>Priority: Minor
> Attachments: HDFS-12386-1.patch, HDFS-12386-2.patch, 
> HDFS-12386-3.patch, HDFS-12386-4.patch, HDFS-12386-5.patch, HDFS-12386.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12469) Ozone: Create docker-compose definition to easily test real clusters

2017-09-22 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDFS-12469:

Labels: ozoneMerge  (was: )

> Ozone: Create docker-compose definition to easily test real clusters
> 
>
> Key: HDFS-12469
> URL: https://issues.apache.org/jira/browse/HDFS-12469
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>  Labels: ozoneMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-12469-HDFS-7240.WIP1.patch, 
> HDFS-12469-HDFS-7240.WIP2.patch, HDFS-12477-HDFS-7240.000.patch
>
>
> The goal here is to create a docker-compose definition for an ozone 
> pseudo-cluster with docker (one component per container). 
> Ideally, after a full build, the ozone cluster could be started easily with a 
> simple docker-compose up command.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12469) Ozone: Create docker-compose definition to easily test real clusters

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176449#comment-16176449
 ] 

Hadoop QA commented on HDFS-12469:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  0m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12469 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888496/HDFS-12477-HDFS-7240.000.patch
 |
| Optional Tests |  asflicense  |
| uname | Linux 1a1d289c52b5 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 90f3fb6 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/21303/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Create docker-compose definition to easily test real clusters
> 
>
> Key: HDFS-12469
> URL: https://issues.apache.org/jira/browse/HDFS-12469
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>  Labels: ozoneMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-12469-HDFS-7240.WIP1.patch, 
> HDFS-12469-HDFS-7240.WIP2.patch, HDFS-12477-HDFS-7240.000.patch
>
>
> The goal here is to create a docker-compose definition for an ozone 
> pseudo-cluster with docker (one component per container). 
> Ideally, after a full build, the ozone cluster could be started easily with a 
> simple docker-compose up command.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12469) Ozone: Create docker-compose definition to easily test real clusters

2017-09-22 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176448#comment-16176448
 ] 

Elek, Marton commented on HDFS-12469:
-

So I uploaded the first patch; it is ready to review and merge:

1. I moved every developer-specific instruction to the wiki: 
https://cwiki.apache.org/confluence/display/HADOOP/Ozone
2. This patch contains docker-compose files, which can be tested according to 
this guide: 
https://cwiki.apache.org/confluence/display/HADOOP/Dev+cluster+with+docker
3. The definition is not part of this patch, but I also created docker images 
to test ozone without building the source. Long term it will be part of 
https://issues.apache.org/jira/browse/HADOOP-14898, but for now you can test it 
according to 
https://cwiki.apache.org/confluence/display/HADOOP/Getting+Started+with+docker 
or the GettingStarted.md from this patch.
4. I moved the configuration to a separate page to a) simplify the getting 
started guide and b) create space for additional notes about the more important 
config keys (or for HDFS-12475).

To test this patch, please try to run it with docker according to the two 
guides mentioned in 2 and 3.

Guide 2 needs a dist build and uses the local version of ozone; guide 3 doesn't 
need any source at all but uses an older version of ozone (about 5 days old, 
though I will update it frequently).

> Ozone: Create docker-compose definition to easily test real clusters
> 
>
> Key: HDFS-12469
> URL: https://issues.apache.org/jira/browse/HDFS-12469
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>  Labels: ozoneMerge
> Fix For: HDFS-7240
>
> Attachments: HDFS-12469-HDFS-7240.WIP1.patch, 
> HDFS-12469-HDFS-7240.WIP2.patch, HDFS-12477-HDFS-7240.000.patch
>
>
> The goal here is to create a docker-compose definition for an ozone 
> pseudo-cluster with docker (one component per container). 
> Ideally, after a full build, the ozone cluster could be started easily with a 
> simple docker-compose up command.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


