[jira] [Commented] (HDFS-14079) RBF: RouterAdmin should have failover concept for router

2018-11-27 Thread Surendra Singh Lilhore (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701471#comment-16701471
 ] 

Surendra Singh Lilhore commented on HDFS-14079:
---

Thanks [~elgoiri] for the review.
{quote}Instead of {{admin-address.list}}, for consistency with the NN say, we 
may want to do the suffixes {{admin-address.r1}} and reuse all that logic to 
get addresses.
{quote}
The existing logic depends on the nameservice ID. If we want to reuse it, we 
have to define an nsid for the router in one property, and RouterAdmin will use 
it to get the admin address list.

Do you want me to rewrite the complete logic without the nsid and just use the 
router ID to get the addresses?
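
To make the discussion concrete, a minimal sketch of the failover loop I have 
in mind (the {{admin-address.list}} key and the exception handling below are 
assumptions, not the final patch):
{code}
// Sketch only: try each configured admin address until one router responds.
String addresses = getConf().getTrimmed(
    "dfs.federation.router.admin-address.list",   // assumed property name
    RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_DEFAULT);
IOException lastFailure = null;
for (String address : addresses.split(",")) {
  try {
    InetSocketAddress routerSocket = NetUtils.createSocketAddr(address.trim());
    client = new RouterClient(routerSocket, getConf());
    lastFailure = null;
    break;   // connected to a live router, stop failing over
  } catch (IOException e) {
    lastFailure = e;   // this router is down, try the next one
  }
}
if (lastFailure != null) {
  throw lastFailure;   // every configured router was unreachable
}
{code}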

> RBF: RouterAdmin should have failover concept for router
> 
>
> Key: HDFS-14079
> URL: https://issues.apache.org/jira/browse/HDFS-14079
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-14079-HDFS-13891.01.patch, 
> HDFS-14079-HDFS-13891.02.patch
>
>
> Currently {{RouterAdmin}} connects to only one router for admin operations; 
> if the configured router is down, the router admin command fails. It should 
> allow configuring all the router admin addresses.
> {code}
> // Initialize RouterClient
> try {
>   String address = getConf().getTrimmed(
>   RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_KEY,
>   RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_DEFAULT);
>   InetSocketAddress routerSocket = NetUtils.createSocketAddr(address);
>   client = new RouterClient(routerSocket, getConf());
> } catch (RPC.VersionMismatch v) {
>   System.err.println(
>   "Version mismatch between client and server... command aborted");
>   return exitCode;
> }
> {code}






[jira] [Commented] (HDDS-642) Add chill mode exit condition for pipeline availability

2018-11-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701461#comment-16701461
 ] 

Hadoop QA commented on HDDS-642:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} hadoop-hdds: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 28s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 27s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-642 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949794/HDDS-642.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux d8df968f08b5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 34a914b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | 

[jira] [Commented] (HDDS-876) add blockade tests for flaky network

2018-11-27 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701437#comment-16701437
 ] 

Jitendra Nath Pandey commented on HDDS-876:
---

It's a great idea to test with a flaky network.

+1 for the patch.

> add blockade tests for flaky network
> 
>
> Key: HDDS-876
> URL: https://issues.apache.org/jira/browse/HDDS-876
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-876.001.patch
>
>
> Blockade is a container utility to simulate network and node failures and 
> network partitions. https://blockade.readthedocs.io/en/latest/guide.html.
> This jira proposes to add a simple test that runs freon with a flaky network.






[jira] [Commented] (HDDS-642) Add chill mode exit condition for pipeline availability

2018-11-27 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701414#comment-16701414
 ] 

Yiqun Lin commented on HDDS-642:


Since we have introduced a new config, we don't need the 
{{setPipelineAvailable}} function anymore. Cleaned this up and attached the v07 
patch.

> Add chill mode exit condition for pipeline availability
> ---
>
> Key: HDDS-642
> URL: https://issues.apache.org/jira/browse/HDDS-642
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Arpit Agarwal
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDDS-642.001.patch, HDDS-642.002.patch, 
> HDDS-642.003.patch, HDDS-642.004.patch, HDDS-642.005.patch, 
> HDDS-642.006.patch, HDDS-642.007.patch
>
>
> SCM should not exit chill-mode until at least one write pipeline is 
> available; otherwise smoke tests are unreliable.
> This is not an issue for real clusters.






[jira] [Commented] (HDDS-642) Add chill mode exit condition for pipeline availability

2018-11-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701420#comment-16701420
 ] 

Hadoop QA commented on HDDS-642:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 22s{color} | {color:orange} hadoop-hdds: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 32s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 28s{color} 
| {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-642 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949791/HDDS-642.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 36f8f0bfe64b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 34a914b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | 

[jira] [Updated] (HDDS-642) Add chill mode exit condition for pipeline availability

2018-11-27 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-642:
---
Attachment: HDDS-642.007.patch

> Add chill mode exit condition for pipeline availability
> ---
>
> Key: HDDS-642
> URL: https://issues.apache.org/jira/browse/HDDS-642
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Arpit Agarwal
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDDS-642.001.patch, HDDS-642.002.patch, 
> HDDS-642.003.patch, HDDS-642.004.patch, HDDS-642.005.patch, 
> HDDS-642.006.patch, HDDS-642.007.patch
>
>
> SCM should not exit chill-mode until at least one write pipeline is 
> available; otherwise smoke tests are unreliable.
> This is not an issue for real clusters.






[jira] [Commented] (HDDS-642) Add chill mode exit condition for pipeline availability

2018-11-27 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701393#comment-16701393
 ] 

Yiqun Lin commented on HDDS-642:


Thanks [~ajayydv] for the review! Addressed all your comments. I added a new 
config to control this and turned it off by default. If pipelineManager is 
null, we also avoid adding the pipeline rule.
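
Roughly, the gating ends up like the sketch below (the config key and the rule 
class name are illustrative here, not necessarily the exact ones in the patch):
{code}
// Illustrative sketch -- key and rule names are assumptions.
boolean checkPipelines = conf.getBoolean(
    "hdds.scm.chillmode.pipeline.availability.check", false);  // off by default
if (checkPipelines && pipelineManager != null) {
  // Register the exit rule only when the check is enabled and a
  // PipelineManager actually exists; the rule passes once at least
  // one pipeline has been reported to SCM.
  exitRules.put("AtLeastOnePipelineRule",
      new AtLeastOnePipelineRule(pipelineManager, this));
}
{code}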

> Add chill mode exit condition for pipeline availability
> ---
>
> Key: HDDS-642
> URL: https://issues.apache.org/jira/browse/HDDS-642
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Arpit Agarwal
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDDS-642.001.patch, HDDS-642.002.patch, 
> HDDS-642.003.patch, HDDS-642.004.patch, HDDS-642.005.patch, HDDS-642.006.patch
>
>
> SCM should not exit chill-mode until at least one write pipeline is 
> available; otherwise smoke tests are unreliable.
> This is not an issue for real clusters.






[jira] [Updated] (HDDS-642) Add chill mode exit condition for pipeline availability

2018-11-27 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-642:
---
Attachment: HDDS-642.006.patch

> Add chill mode exit condition for pipeline availability
> ---
>
> Key: HDDS-642
> URL: https://issues.apache.org/jira/browse/HDDS-642
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Arpit Agarwal
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDDS-642.001.patch, HDDS-642.002.patch, 
> HDDS-642.003.patch, HDDS-642.004.patch, HDDS-642.005.patch, HDDS-642.006.patch
>
>
> SCM should not exit chill-mode until at least one write pipeline is 
> available; otherwise smoke tests are unreliable.
> This is not an issue for real clusters.






[jira] [Commented] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2018-11-27 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701372#comment-16701372
 ] 

Akira Ajisaka commented on HDFS-14085:
--

LGTM, +1. Thanks [~ayushtkn].

> RBF: LS command for root shows wrong owner and permission information.
> --
>
> Key: HDFS-14085
> URL: https://issues.apache.org/jira/browse/HDFS-14085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14085-HDFS-13891-01.patch, 
> HDFS-14085-HDFS-13891-02.patch, HDFS-14085-HDFS-13891-03.patch, 
> HDFS-14085-HDFS-13891-04.patch
>
>
> The LS command for / lists all the mount entries, but the permission 
> displayed is the default permission (777) and the owner and group info are 
> those of the user calling it, whereas they should actually be the same as 
> those of the destination of the mount point.






[jira] [Commented] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2018-11-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701365#comment-16701365
 ] 

Hadoop QA commented on HDFS-14085:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
23s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
50s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14085 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949784/HDFS-14085-HDFS-13891-04.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c37e6b4cd52b 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 99621b6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25656/testReport/ |
| Max. process+thread count | 1340 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25656/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: LS command for root shows wrong owner and permission information.
> 

[jira] [Commented] (HDDS-858) Start a Standalone Ratis Server on OM

2018-11-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701356#comment-16701356
 ] 

Hadoop QA commented on HDDS-858:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m  
3s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 51s{color} | {color:orange} root: The patch generated 5 new + 0 unchanged - 
1 fixed = 5 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 38s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 38s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 35s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 37s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-858 |
| JIRA Patch URL | 

[jira] [Commented] (HDFS-14081) hdfs dfsadmin -metasave metasave_test results NPE

2018-11-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701326#comment-16701326
 ] 

Hadoop QA commented on HDFS-14081:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}133m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14081 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949774/HDFS-14081.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux fc924dec2653 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f657a2a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25654/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25654/testReport/ |
| Max. process+thread count | 4180 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25654/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was 

[jira] [Updated] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2018-11-27 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14085:

Attachment: HDFS-14085-HDFS-13891-04.patch

> RBF: LS command for root shows wrong owner and permission information.
> --
>
> Key: HDFS-14085
> URL: https://issues.apache.org/jira/browse/HDFS-14085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14085-HDFS-13891-01.patch, 
> HDFS-14085-HDFS-13891-02.patch, HDFS-14085-HDFS-13891-03.patch, 
> HDFS-14085-HDFS-13891-04.patch
>
>
> The LS command for / lists all the mount entries, but the permission 
> displayed is the default permission (777) and the owner and group info are 
> those of the user calling it, whereas they should actually be the same as 
> those of the destination of the mount point.






[jira] [Commented] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2018-11-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701313#comment-16701313
 ] 

Ayush Saxena commented on HDFS-14085:
-

Thanks [~ajisakaa] & [~elgoiri].

Uploaded v4 with the said changes.

Please review!

> RBF: LS command for root shows wrong owner and permission information.
> --
>
> Key: HDFS-14085
> URL: https://issues.apache.org/jira/browse/HDFS-14085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14085-HDFS-13891-01.patch, 
> HDFS-14085-HDFS-13891-02.patch, HDFS-14085-HDFS-13891-03.patch, 
> HDFS-14085-HDFS-13891-04.patch
>
>
> The LS command for / lists all the mount entries, but the permission 
> displayed is the default permission (777) and the owner and group info are 
> those of the user calling it, whereas they should actually be the same as 
> those of the destination of the mount point.






[jira] [Commented] (HDDS-851) Provide official apache docker image for Ozone

2018-11-27 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701307#comment-16701307
 ] 

Anu Engineer commented on HDDS-851:
---

+1. Feel free to commit. I have committed HDDS-839, please verify that docker 
hub has the right image before committing this.

> Provide official apache docker image for Ozone 
> ---
>
> Key: HDDS-851
> URL: https://issues.apache.org/jira/browse/HDDS-851
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: docker-ozone-latest.tar.gz
>
>
> Similar to the apache/hadoop:2 and apache/hadoop:3 images, I propose to 
> provide apache/ozone docker images which include the voted release binaries.
> The image can follow all the conventions from HADOOP-14898
> 1. BRANCHING
> I propose to create new docker branches:
> docker-ozone-0.3.0-alpha
> docker-ozone-latest
> And ask INFRA to register docker-ozone-(.*) in the dockerhub to create 
> apache/ozone: images
> 2. RUNNING
> I propose to create a default runner script which starts om + scm + datanode 
> + s3g all together. With this approach you can start a full ozone cluster as 
> easily as:
> {code}
> docker run -p 9878:9878 -p 9876:9876 -p 9874:9874 -d apache/ozone
> {code}
> That's all. This is an all-in-one docker image which is ready to try out.
> 3. RUNNING with compose
> I propose to include a default docker-compose + config file in the image. To 
> start a multi-node pseudo cluster it will be enough to execute:
> {code}
> docker run apache/ozone cat docker-compose.yaml > docker-compose.yaml
> docker run apache/ozone cat docker-config > docker-config
> docker-compose up -d
> {code}
> That's all, and you have a multi-(pseudo)node ozone cluster which could be 
> scaled up and down with ozone.
> 4. k8s
> Later we can also provide k8s resource files with the same approach:
> {code}
> docker run apache/ozone cat k8s.yaml | kubectl apply -f -
> {code}






[jira] [Commented] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2018-11-27 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701304#comment-16701304
 ] 

Akira Ajisaka commented on HDFS-14085:
--

Thanks [~ayushtkn] and [~elgoiri].
Minor nit: The first argument of {{assertEquals}} is {{expected}} and the 
second is {{actual}}, so
{code}
assertEquals(finfo.getOwner(), "owner1");
assertEquals(finfo1[0].getOwner(), "owner1");
assertEquals(finfo.getGroup(), "group1");
assertEquals(finfo1[0].getGroup(), "group1");
{code}
should be
{code}
assertEquals("owner1", finfo.getOwner())
assertEquals("owner1", finfo1[0].getOwner());
assertEquals("group1", finfo.getGroup())
assertEquals("group1", finfo1[0].getGroup());
{code}

> RBF: LS command for root shows wrong owner and permission information.
> --
>
> Key: HDFS-14085
> URL: https://issues.apache.org/jira/browse/HDFS-14085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14085-HDFS-13891-01.patch, 
> HDFS-14085-HDFS-13891-02.patch, HDFS-14085-HDFS-13891-03.patch
>
>
> The LS command for / lists all the mount entries, but the permission 
> displayed is the default permission (777) and the owner and group info are 
> those of the user calling it, whereas they should actually be the same as 
> those of the destination of the mount point.






[jira] [Commented] (HDDS-851) Provide official apache docker image for Ozone

2018-11-27 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701303#comment-16701303
 ] 

Anu Engineer commented on HDDS-851:
---

{quote}docker run -d -p 9878:9878 -p 9876:9876 -p 9874:9874 apache/ozone {quote}

For that command, port 9878 returns a 500 error ("Request failed"), but SCM 
and OM came up, so I will commit this soon.


> Provide official apache docker image for Ozone 
> ---
>
> Key: HDDS-851
> URL: https://issues.apache.org/jira/browse/HDDS-851
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: docker-ozone-latest.tar.gz
>
>
> Similar to the apache/hadoop:2 and apache/hadoop:3 images, I propose to 
> provide apache/ozone docker images which include the voted release binaries.
> The image can follow all the conventions from HADOOP-14898
> 1. BRANCHING
> I propose to create new docker branches:
> docker-ozone-0.3.0-alpha
> docker-ozone-latest
> And ask INFRA to register docker-ozone-(.*) in the dockerhub to create 
> apache/ozone: images
> 2. RUNNING
> I propose to create a default runner script which starts om + scm + datanode 
> + s3g all together. With this approach you can start a full ozone cluster as 
> easily as:
> {code}
> docker run -p 9878:9878 -p 9876:9876 -p 9874:9874 -d apache/ozone
> {code}
> That's all. This is an all-in-one docker image which is ready to try out.
> 3. RUNNING with compose
> I propose to include a default docker-compose + config file in the image. To 
> start a multi-node pseudo cluster it will be enough to execute:
> {code}
> docker run apache/ozone cat docker-compose.yaml > docker-compose.yaml
> docker run apache/ozone cat docker-config > docker-config
> docker-compose up -d
> {code}
> That's all, and you have a multi-(pseudo)node ozone cluster which could be 
> scaled up and down with ozone.
> 4. k8s
> Later we can also provide k8s resource files with the same approach:
> {code}
> docker run apache/ozone cat k8s.yaml | kubectl apply -f -
> {code}






[jira] [Commented] (HDFS-13870) WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT

2018-11-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701302#comment-16701302
 ] 

Hadoop QA commented on HDFS-13870:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
32m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13870 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949778/HDFS-13870.001.patch |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux 0fe731d38db9 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f657a2a |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 401 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25655/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT
> 
>
> Key: HDFS-13870
> URL: https://issues.apache.org/jira/browse/HDFS-13870
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation, webhdfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
> Attachments: HDFS-13870.001.patch
>
>
> Adding ALLOWSNAPSHOT and DISALLOWSNAPSHOT (since 2.8.0, HDFS-9057) to WebHDFS 
> REST API 
> [doc|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html].
> Below are my examples of the APIs:
> {code:bash}
> # ALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://:/webhdfs/v1/snaptest/?op=ALLOWSNAPSHOT=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> {code:bash}
> # DISALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://:/webhdfs/v1/snaptest/?op=DISALLOWSNAPSHOT=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> Note: GETSNAPSHOTDIFF and GETSNAPSHOTTABLEDIRECTORYLIST are already 
> documented.
> {code:bash}
> # GETSNAPSHOTDIFF uses GET.
> curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap1=snap2"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshotDiffReport":{"diffList":[{"sourcePath":"","type":"MODIFY"},{"sourcePath":"newfile.txt","type":"CREATE"}],"fromSnapshot":"snapOld","snapshotRoot":"/snaptest","toSnapshot":"snapNew"}}
> {code}
> {code:bash}
> # GETSNAPSHOTTABLEDIRECTORYLIST uses GET.
> curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTTABLEDIRECTORYLIST=hdfs"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> 

[jira] [Commented] (HDDS-846) Exports ozone metrics to prometheus

2018-11-27 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701301#comment-16701301
 ] 

Hudson commented on HDDS-846:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15513 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15513/])
HDDS-846. Exports ozone metrics to prometheus. Contributed by Elek, (aengineer: 
rev 34a914be03b507ac287e0bbdc9485c0e041a5387)
* (edit) hadoop-hdds/common/src/main/resources/ozone-default.xml
* (add) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusMetricsSink.java
* (delete) hadoop-ozone/dist/src/main/compose/ozoneperf/compose-all.sh
* (edit) hadoop-ozone/dist/src/main/compose/ozoneperf/docker-compose.yaml
* (edit) hadoop-hdds/docs/config.yaml
* (edit) hadoop-ozone/dist/src/main/compose/ozoneperf/README.md
* (edit) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/BaseHttpServer.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/HddsConfigKeys.java
* (delete) hadoop-ozone/dist/src/main/compose/ozoneperf/init.sh
* (add) 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/server/PrometheusServlet.java
* (edit) hadoop-ozone/dist/src/main/compose/ozoneperf/docker-config
* (edit) hadoop-ozone/dist/src/main/compose/ozoneperf/prometheus.yml
* (add) 
hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestPrometheusMetricsSink.java
* (delete) 
hadoop-ozone/dist/src/main/compose/ozoneperf/docker-compose-freon.yaml
* (add) hadoop-hdds/docs/content/Prometheus.md


> Exports ozone metrics to prometheus
> ---
>
> Key: HDDS-846
> URL: https://issues.apache.org/jira/browse/HDDS-846
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-846.001.patch
>
>
> As described in the parent issue, as of now we use a java agent based dark 
> magic to get all the hadoop metrics values in prometheus http format. 
> The format is very simple 
> (https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md)
>  and it would be easy to implement a simple form of prometheus servlet.
> By publishing the metrics with an included servlet, the k8s deployment could 
> be simpler and it would be easier to run ozone in cloud native environments.






[jira] [Updated] (HDDS-839) Wait for other services in the started script of hadoop-runner base docker image

2018-11-27 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-839:
--
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

[~arpitagarwal] Thanks for the review. [~elek] Thanks for the contribution. I 
have committed this patch to docker-hadoop-runner.

> Wait for other services in the started script of hadoop-runner base docker 
> image
> 
>
> Key: HDDS-839
> URL: https://issues.apache.org/jira/browse/HDDS-839
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-839-docker-hadoop-runner.001.patch, 
> HDDS-839-docker-hadoop-runner.002.patch
>
>
> As described in the parent issue, we need a simple method to handle service 
> dependencies in kubernetes clusters (usually as a workaround when some 
> clients can't re-try with renewed dns information).
> But it also could be useful to minimize the wait time in the docker-compose 
> clusters.
> The easiest implementation is to modify the started script of the 
> apache/hadoop-runner base image and add a bash loop which checks the 
> availability of the TCP port (with netcat).






[jira] [Updated] (HDDS-846) Exports ozone metrics to prometheus

2018-11-27 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-846:
--
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

[~elek] Thanks for the fix, I have committed this to trunk.
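
For anyone curious, the text exposition format needs very little code. A 
stripped-down sketch of such a sink (not the committed PrometheusMetricsSink, 
just the idea, built on the standard metrics2 {{MetricsSink}} plugin interface):
{code}
import org.apache.commons.configuration2.SubsetConfiguration;
import org.apache.hadoop.metrics2.AbstractMetric;
import org.apache.hadoop.metrics2.MetricsRecord;
import org.apache.hadoop.metrics2.MetricsSink;
import org.apache.hadoop.metrics2.MetricsTag;

/** Sketch: renders each metric as "name{labels} value" for Prometheus. */
public class PrometheusSinkSketch implements MetricsSink {
  private final StringBuilder buffer = new StringBuilder();

  @Override
  public void init(SubsetConfiguration conf) {
    // Nothing to configure in this sketch.
  }

  @Override
  public void putMetrics(MetricsRecord record) {
    for (AbstractMetric metric : record.metrics()) {
      // Prometheus metric names are conventionally lower_snake_case.
      String name = (record.name() + "_" + metric.name())
          .toLowerCase().replaceAll("[^a-z0-9]+", "_");
      StringBuilder labels = new StringBuilder();
      for (MetricsTag tag : record.tags()) {
        if (labels.length() > 0) {
          labels.append(',');
        }
        labels.append(tag.name().toLowerCase())
            .append("=\"").append(tag.value()).append('"');
      }
      buffer.append(name).append('{').append(labels).append("} ")
          .append(metric.value()).append('\n');
    }
  }

  @Override
  public void flush() {
    // A servlet registered on the HTTP server would serve
    // buffer.toString() as text/plain from here.
  }
}
{code}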

> Exports ozone metrics to prometheus
> ---
>
> Key: HDDS-846
> URL: https://issues.apache.org/jira/browse/HDDS-846
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager, SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-846.001.patch
>
>
> As described in the parent issue, as of now we use a java agent based dark 
> magic to get all the hadoop metrics values in prometheus http format. 
> The format is very simple 
> (https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md)
>  and it would be easy to implement a simple form of prometheus servlet.
> By publishing the metrics with an included servlet, the k8s deployment could 
> be simpler and it would be easier to run ozone in cloud native environments.






[jira] [Commented] (HDFS-13870) WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT

2018-11-27 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701253#comment-16701253
 ] 

Siyao Meng commented on HDFS-13870:
---

[~linyiqun] Sorry I must have missed the email notification. Just worked on 
this and submitted a patch. Thanks!

> WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT
> 
>
> Key: HDFS-13870
> URL: https://issues.apache.org/jira/browse/HDFS-13870
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation, webhdfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
> Attachments: HDFS-13870.001.patch
>
>
> Adding ALLOWSNAPSHOT and DISALLOWSNAPSHOT (since 2.8.0, HDFS-9057) to WebHDFS 
> REST API 
> [doc|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html].
> Below are my examples of the APIs:
> {code:bash}
> # ALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://:/webhdfs/v1/snaptest/?op=ALLOWSNAPSHOT=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> {code:bash}
> # DISALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://:/webhdfs/v1/snaptest/?op=DISALLOWSNAPSHOT=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> Note: GETSNAPSHOTDIFF and GETSNAPSHOTTABLEDIRECTORYLIST are already 
> documented.
> {code:bash}
> # GETSNAPSHOTDIFF uses GET.
> curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap1=snap2"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshotDiffReport":{"diffList":[{"sourcePath":"","type":"MODIFY"},{"sourcePath":"newfile.txt","type":"CREATE"}],"fromSnapshot":"snapOld","snapshotRoot":"/snaptest","toSnapshot":"snapNew"}}
> {code}
> {code:bash}
> # GETSNAPSHOTTABLEDIRECTORYLIST uses GET.
> curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTTABLEDIRECTORYLIST=hdfs"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshottableDirectoryList":[{"dirStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16392,"group":"supergroup","length":0,"modificationTime":1535151813500,"owner":"hdfs","pathSuffix":"snaptest","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},"parentFullPath":"/","snapshotNumber":2,"snapshotQuota":65536}]}
> {code}






[jira] [Updated] (HDFS-13870) WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT

2018-11-27 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13870:
--
Attachment: HDFS-13870.001.patch
Status: Patch Available  (was: Open)

> WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT
> 
>
> Key: HDFS-13870
> URL: https://issues.apache.org/jira/browse/HDFS-13870
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation, webhdfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
> Attachments: HDFS-13870.001.patch
>
>
> Adding ALLOWSNAPSHOT and DISALLOWSNAPSHOT (since 2.8.0, HDFS-9057) to WebHDFS 
> REST API 
> [doc|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html].
> Below are my examples of the APIs:
> {code:bash}
> # ALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://:/webhdfs/v1/snaptest/?op=ALLOWSNAPSHOT=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> {code:bash}
> # DISALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://:/webhdfs/v1/snaptest/?op=DISALLOWSNAPSHOT=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> Note: GETSNAPSHOTDIFF and GETSNAPSHOTTABLEDIRECTORYLIST are already 
> documented.
> {code:bash}
> # GETSNAPSHOTDIFF uses GET.
> curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap1=snap2"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshotDiffReport":{"diffList":[{"sourcePath":"","type":"MODIFY"},{"sourcePath":"newfile.txt","type":"CREATE"}],"fromSnapshot":"snapOld","snapshotRoot":"/snaptest","toSnapshot":"snapNew"}}
> {code}
> {code:bash}
> # GETSNAPSHOTTABLEDIRECTORYLIST uses GET.
> curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTTABLEDIRECTORYLIST=hdfs"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshottableDirectoryList":[{"dirStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16392,"group":"supergroup","length":0,"modificationTime":1535151813500,"owner":"hdfs","pathSuffix":"snaptest","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},"parentFullPath":"/","snapshotNumber":2,"snapshotQuota":65536}]}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13870) WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT

2018-11-27 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13870:
--
Description: 
Adding ALLOWSNAPSHOT and DISALLOWSNAPSHOT (since 2.8.0, HDFS-9057) to WebHDFS 
REST API 
[doc|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html].

Below are my examples of the APIs:

{code:bash}
# ALLOWSNAPSHOT uses http method PUT.
curl -X "PUT" 
"http://:/webhdfs/v1/snaptest/?op=ALLOWSNAPSHOT=hdfs"

Response on success:

HTTP/1.1 200 OK
Content-Type: application/octet-stream
{code}

{code:bash}
# DISALLOWSNAPSHOT uses http method PUT.
curl -X "PUT" 
"http://:/webhdfs/v1/snaptest/?op=DISALLOWSNAPSHOT=hdfs"

Response on success:

HTTP/1.1 200 OK
Content-Type: application/octet-stream
{code}

Note: GETSNAPSHOTDIFF and GETSNAPSHOTTABLEDIRECTORYLIST are already documented.

{code:bash}
# GETSNAPSHOTDIFF uses GET.
curl 
"http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap1=snap2"

Response on success (example):

HTTP/1.1 200 OK
Content-Type: application/json

{"SnapshotDiffReport":{"diffList":[{"sourcePath":"","type":"MODIFY"},{"sourcePath":"newfile.txt","type":"CREATE"}],"fromSnapshot":"snapOld","snapshotRoot":"/snaptest","toSnapshot":"snapNew"}}
{code}

{code:bash}
# GETSNAPSHOTTABLEDIRECTORYLIST uses GET.
curl 
"http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTTABLEDIRECTORYLIST=hdfs"

Response on success (example):

HTTP/1.1 200 OK
Content-Type: application/json

{"SnapshottableDirectoryList":[{"dirStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16392,"group":"supergroup","length":0,"modificationTime":1535151813500,"owner":"hdfs","pathSuffix":"snaptest","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},"parentFullPath":"/","snapshotNumber":2,"snapshotQuota":65536}]}
{code}


  was:
ALLOWSNAPSHOT, DISALLOWSNAPSHOT (since 2.8.0, HDFS-9057),

GETSNAPSHOTDIFF (since 3.0.3, HDFS-13052), GETSNAPSHOTTABLEDIRECTORYLIST 
(HDFS-13141) don't have their API usage documentation in the [official 
doc|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html]
 yet.

 

Below are my examples of those undocumented APIs:

{code:bash}
# ALLOWSNAPSHOT uses http method PUT.
curl -X "PUT" 
"http://:/webhdfs/v1/snaptest/?op=ALLOWSNAPSHOT=hdfs"

Response on success:

HTTP/1.1 200 OK
Content-Type: application/octet-stream
{code}

{code:bash}
# DISALLOWSNAPSHOT uses http method PUT.
curl -X "PUT" 
"http://:/webhdfs/v1/snaptest/?op=DISALLOWSNAPSHOT=hdfs"

Response on success:

HTTP/1.1 200 OK
Content-Type: application/octet-stream
{code}

{code:bash}
# GETSNAPSHOTDIFF uses GET.
curl 
"http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap1=snap2"

Response on success (example):

HTTP/1.1 200 OK
Content-Type: application/json

{"SnapshotDiffReport":{"diffList":[{"sourcePath":"","type":"MODIFY"},{"sourcePath":"newfile.txt","type":"CREATE"}],"fromSnapshot":"snapOld","snapshotRoot":"/snaptest","toSnapshot":"snapNew"}}
{code}

{code:bash}
# GETSNAPSHOTTABLEDIRECTORYLIST uses GET.
curl 
"http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTTABLEDIRECTORYLIST=hdfs"

Response on success (example):

HTTP/1.1 200 OK
Content-Type: application/json

{"SnapshottableDirectoryList":[{"dirStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16392,"group":"supergroup","length":0,"modificationTime":1535151813500,"owner":"hdfs","pathSuffix":"snaptest","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},"parentFullPath":"/","snapshotNumber":2,"snapshotQuota":65536}]}
{code}



> WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT
> 
>
> Key: HDFS-13870
> URL: https://issues.apache.org/jira/browse/HDFS-13870
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation, webhdfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
>
> Adding ALLOWSNAPSHOT and DISALLOWSNAPSHOT (since 2.8.0, HDFS-9057) to WebHDFS 
> REST API 
> [doc|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html].
> Below are my examples of the APIs:
> {code:bash}
> # ALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://:/webhdfs/v1/snaptest/?op=ALLOWSNAPSHOT=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> {code:bash}
> # DISALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://:/webhdfs/v1/snaptest/?op=DISALLOWSNAPSHOT=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> Note: GETSNAPSHOTDIFF and GETSNAPSHOTTABLEDIRECTORYLIST are already 
> documented.
> {code:bash}
> # GETSNAPSHOTDIFF uses GET.
> curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap1=snap2"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: 

[jira] [Commented] (HDDS-284) CRC for ChunksData

2018-11-27 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701249#comment-16701249
 ] 

Hanisha Koneru commented on HDDS-284:
-

Thanks [~shashikant]. I will wait till tomorrow to commit patch v06.

> CRC for ChunksData
> --
>
> Key: HDDS-284
> URL: https://issues.apache.org/jira/browse/HDDS-284
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: CRC and Error Detection for Containers.pdf, 
> HDDS-284.00.patch, HDDS-284.005.patch, HDDS-284.006.patch, HDDS-284.01.patch, 
> HDDS-284.02.patch, HDDS-284.03.patch, HDDS-284.04.patch, Interleaving CRC and 
> Error Detection for Containers.pdf
>
>
> This Jira is to add CRC for chunks data.
>  Right now a Chunk Info structure looks like this:
> message ChunkInfo {
>   required string chunkName = 1;
>   required uint64 offset = 2;
>   required uint64 len = 3;
>   optional string checksum = 4;
>   repeated KeyValue metadata = 5;
> }
>  
> Proposal is to change ChunkInfo structure as below: 
> message ChunkInfo {
>   required string chunkName = 1;
>   required uint64 offset = 2;
>   required uint64 len = 3;
>   repeated KeyValue metadata = 4;
>   required ChecksumData checksumData = 5;
> }
>  
> The ChecksumData structure would be as follows: 
> message ChecksumData {
>   required ChecksumType type = 1;
>   required uint32 bytesPerChecksum = 2;
>   repeated bytes checksums = 3;
> }
>  
> Instead of changing disk format, we put the checksum into chunkInfo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-858) Start a Standalone Ratis Server on OM

2018-11-27 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-858:

Status: Patch Available  (was: Open)

> Start a Standalone Ratis Server on OM
> -
>
> Key: HDDS-858
> URL: https://issues.apache.org/jira/browse/HDDS-858
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS_858.001.patch
>
>
> We propose implementing a standalone Ratis server on OM, as a start. Once the 
> Ratis server and state machine are integrated into OM, then the replicated 
> Ratis state machine can be implemented for OM.
> This Jira aims to just start a Ratis server on OM start. The client-OM 
> communication and OM state would not be changed in this Jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-858) Start a Standalone Ratis Server on OM

2018-11-27 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701248#comment-16701248
 ] 

Hanisha Koneru commented on HDDS-858:
-

Uploaded a patch which starts a Ratis server on the OM if enabled (default is 
off). This patch does not make any change to the functioning of the OM.

cc. [~msingh]
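
A config gate of the kind described could look like this sketch (the key name
is an assumption, not necessarily the one in the patch; the actual Ratis server
construction lives in the patch):

{code:java}
import java.io.IOException;

import org.apache.hadoop.hdds.conf.OzoneConfiguration;

/** Sketch of the on/off gate only; the key name is assumed. */
public final class OmRatisGateSketch {
  static final String OZONE_OM_RATIS_ENABLE_KEY = "ozone.om.ratis.enable";
  static final boolean OZONE_OM_RATIS_ENABLE_DEFAULT = false; // off by default

  static void maybeStartOmRatisServer(OzoneConfiguration conf)
      throws IOException {
    if (!conf.getBoolean(OZONE_OM_RATIS_ENABLE_KEY,
        OZONE_OM_RATIS_ENABLE_DEFAULT)) {
      return; // flag off: OM keeps its existing, non-replicated behavior
    }
    // ... build and start the standalone Ratis server here; the state
    // machine stays a no-op until OM state is actually replicated ...
  }
}
{code}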

> Start a Standalone Ratis Server on OM
> -
>
> Key: HDDS-858
> URL: https://issues.apache.org/jira/browse/HDDS-858
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS_858.001.patch
>
>
> We propose implementing a standalone Ratis server on OM, as a start. Once the 
> Ratis server and state machine are integrated into OM, then the replicated 
> Ratis state machine can be implemented for OM.
> This Jira aims to just start a Ratis server on OM start. The client-OM 
> communication and OM state would not be changed in this Jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13870) WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT

2018-11-27 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDFS-13870:
--
Summary: WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT  (was: 
WebHDFS: Document new APIs)

> WebHDFS: Document ALLOWSNAPSHOT and DISALLOWSNAPSHOT
> 
>
> Key: HDFS-13870
> URL: https://issues.apache.org/jira/browse/HDFS-13870
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation, webhdfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
>
> ALLOWSNAPSHOT, DISALLOWSNAPSHOT (since 2.8.0, HDFS-9057),
> GETSNAPSHOTDIFF (since 3.0.3, HDFS-13052), GETSNAPSHOTTABLEDIRECTORYLIST 
> (HDFS-13141) don't have their API usage documentation in the [official 
> doc|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html]
>  yet.
>  
> Below are my examples of those undocumented APIs:
> {code:bash}
> # ALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://:/webhdfs/v1/snaptest/?op=ALLOWSNAPSHOT=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> {code:bash}
> # DISALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://:/webhdfs/v1/snaptest/?op=DISALLOWSNAPSHOT=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> {code:bash}
> # GETSNAPSHOTDIFF uses GET.
> curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap1=snap2"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshotDiffReport":{"diffList":[{"sourcePath":"","type":"MODIFY"},{"sourcePath":"newfile.txt","type":"CREATE"}],"fromSnapshot":"snapOld","snapshotRoot":"/snaptest","toSnapshot":"snapNew"}}
> {code}
> {code:bash}
> # GETSNAPSHOTTABLEDIRECTORYLIST uses GET.
> curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTTABLEDIRECTORYLIST=hdfs"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshottableDirectoryList":[{"dirStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16392,"group":"supergroup","length":0,"modificationTime":1535151813500,"owner":"hdfs","pathSuffix":"snaptest","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},"parentFullPath":"/","snapshotNumber":2,"snapshotQuota":65536}]}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-858) Start a Standalone Ratis Server on OM

2018-11-27 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-858:

Attachment: HDDS_858.001.patch

> Start a Standalone Ratis Server on OM
> -
>
> Key: HDDS-858
> URL: https://issues.apache.org/jira/browse/HDDS-858
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS_858.001.patch
>
>
> We propose implementing a standalone Ratis server on OM, as a start. Once the 
> Ratis server and state machine are integrated into OM, then the replicated 
> Ratis state machine can be implemented for OM.
> This Jira aims to just start a Ratis server on OM start. The client-OM 
> communication and OM state would not be changed in this Jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13870) WebHDFS: Document new APIs

2018-11-27 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng reassigned HDFS-13870:
-

Assignee: Siyao Meng

> WebHDFS: Document new APIs
> --
>
> Key: HDFS-13870
> URL: https://issues.apache.org/jira/browse/HDFS-13870
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: documentation, webhdfs
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
>
> ALLOWSNAPSHOT, DISALLOWSNAPSHOT (since 2.8.0, HDFS-9057),
> GETSNAPSHOTDIFF (since 3.0.3, HDFS-13052), GETSNAPSHOTTABLEDIRECTORYLIST 
> (HDFS-13141) don't have their API usage documentation in the [official 
> doc|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/WebHDFS.html]
>  yet.
>  
> Below are my examples of those undocumented APIs:
> {code:bash}
> # ALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://:/webhdfs/v1/snaptest/?op=ALLOWSNAPSHOT=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> {code:bash}
> # DISALLOWSNAPSHOT uses http method PUT.
> curl -X "PUT" 
> "http://:/webhdfs/v1/snaptest/?op=DISALLOWSNAPSHOT=hdfs"
> Response on success:
> HTTP/1.1 200 OK
> Content-Type: application/octet-stream
> {code}
> {code:bash}
> # GETSNAPSHOTDIFF uses GET.
> curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTDIFF=hdfs=snap1=snap2"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshotDiffReport":{"diffList":[{"sourcePath":"","type":"MODIFY"},{"sourcePath":"newfile.txt","type":"CREATE"}],"fromSnapshot":"snapOld","snapshotRoot":"/snaptest","toSnapshot":"snapNew"}}
> {code}
> {code:bash}
> # GETSNAPSHOTTABLEDIRECTORYLIST uses GET.
> curl 
> "http://:/webhdfs/v1/snaptest/?op=GETSNAPSHOTTABLEDIRECTORYLIST=hdfs"
> Response on success (example):
> HTTP/1.1 200 OK
> Content-Type: application/json
> {"SnapshottableDirectoryList":[{"dirStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16392,"group":"supergroup","length":0,"modificationTime":1535151813500,"owner":"hdfs","pathSuffix":"snaptest","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},"parentFullPath":"/","snapshotNumber":2,"snapshotQuota":65536}]}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14092) Remove two-step create/append in WebHdfsFileSystem

2018-11-27 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701179#comment-16701179
 ] 

Siyao Meng edited comment on HDFS-14092 at 11/28/18 12:50 AM:
--

I've taken a look at the client.
{code:java}
  /** Expects HTTP response 307 "Temporary Redirect". */
  public static class TemporaryRedirectOp implements Op {
static final TemporaryRedirectOp CREATE = new TemporaryRedirectOp(
PutOpParam.Op.CREATE);
static final TemporaryRedirectOp APPEND = new TemporaryRedirectOp(
PostOpParam.Op.APPEND);
static final TemporaryRedirectOp OPEN = new TemporaryRedirectOp(
GetOpParam.Op.OPEN);
static final TemporaryRedirectOp GETFILECHECKSUM = new TemporaryRedirectOp(
GetOpParam.Op.GETFILECHECKSUM);
...
{code}
Only for these 4 operations does the WebHDFS client: 1. send a request to the 
server expecting a 307 Temporary Redirect; 2. grab the new URL from the HTTP 
Location header and send a second request to it (which points to a DN). 
WebHdfsFileSystem#connect(URL) performs this logic.
-So my understanding for the problem is that, the Java 6 library was faulty so 
it couldn't redirect the request automatically (like user browsers normally do 
when encountered HTTP 3xx). Therefore, we had to do it manually. CMIIW-
I just read HDFS-2540, which added this two-step code, and I am a bit confused: 
what is the expected logic if we want to remove the two-step process?
Any comments?
CC [~szetszwo]
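
For reference, a self-contained sketch of the manual two-step flow described
above (simplified; the real client also threads auth, timeouts, and retries
through this, so this is illustrative, not the WebHdfsFileSystem code):

{code:java}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public final class TwoStepPutSketch {
  static void twoStepPut(URL namenodeUrl, byte[] data) throws Exception {
    // Step 1: no data, no auto-redirect -- we want to handle the 307 ourselves.
    HttpURLConnection nnConn = (HttpURLConnection) namenodeUrl.openConnection();
    nnConn.setRequestMethod("PUT");
    nnConn.setInstanceFollowRedirects(false);
    nnConn.connect();
    if (nnConn.getResponseCode() != 307) { // 307 Temporary Redirect
      throw new IllegalStateException("Expected 307, got "
          + nnConn.getResponseCode());
    }
    URL datanodeUrl = new URL(nnConn.getHeaderField("Location"));
    nnConn.disconnect();

    // Step 2: send the data to the DN URL taken from the Location header.
    HttpURLConnection dnConn = (HttpURLConnection) datanodeUrl.openConnection();
    dnConn.setRequestMethod("PUT");
    dnConn.setDoOutput(true);
    try (OutputStream out = dnConn.getOutputStream()) {
      out.write(data);
    }
    System.out.println("DN response: " + dnConn.getResponseCode());
    dnConn.disconnect();
  }
}
{code}

A one-step variant would presumably send the data on the first request with an
"Expect: 100-continue" header and let the HTTP stack follow the redirect,
which is exactly the behavior the javadoc says old Jetty/Java 6 clients got
wrong.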


was (Author: smeng):
I've taken a look at the client.
{code:java}
  /** Expects HTTP response 307 "Temporary Redirect". */
  public static class TemporaryRedirectOp implements Op {
static final TemporaryRedirectOp CREATE = new TemporaryRedirectOp(
PutOpParam.Op.CREATE);
static final TemporaryRedirectOp APPEND = new TemporaryRedirectOp(
PostOpParam.Op.APPEND);
static final TemporaryRedirectOp OPEN = new TemporaryRedirectOp(
GetOpParam.Op.OPEN);
static final TemporaryRedirectOp GETFILECHECKSUM = new TemporaryRedirectOp(
GetOpParam.Op.GETFILECHECKSUM);
...
{code}
Only for those 4 operations the WebHDFS client would: 1. Send a request to 
server and expect 307 Temporary Redirect; 2. Grab the new URL from HTTP header 
Location key and send a second request to it (which points to a DN). 
WebHdfsFileSystem#connect(URL) performs the logic.

 

So my understanding for the problem is that, the Java 6 library was faulty so 
it couldn't redirect the request automatically (like user browsers normally do 
when encountered HTTP 3xx). Therefore, we had to do it manually. CMIIW

> Remove two-step create/append in WebHdfsFileSystem
> --
>
> Key: HDFS-14092
> URL: https://issues.apache.org/jira/browse/HDFS-14092
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 3.2.0
>Reporter: Daniel Templeton
>Assignee: Siyao Meng
>Priority: Major
>
> Per the javadoc on the {{WebHdfsFileSystem.connect()}} method:
> {code}/**
>  * Two-step requests redirected to a DN
>  *
>  * Create/Append:
>  * Step 1) Submit a Http request with neither auto-redirect nor data.
>  * Step 2) Submit another Http request with the URL from the Location 
> header
>  * with data.
>  *
>  * The reason of having two-step create/append is for preventing clients 
> to
>  * send out the data before the redirect. This issue is addressed by the
>  * "Expect: 100-continue" header in HTTP/1.1; see RFC 2616, Section 8.2.3.
>  * Unfortunately, there are software library bugs (e.g. Jetty 6 http 
> server
>  * and Java 6 http client), which do not correctly implement "Expect:
>  * 100-continue". The two-step create/append is a temporary workaround for
>  * the software library bugs.
>  *
>  * Open/Checksum
>  * Also implements two-step connects for other operations redirected to
>  * a DN such as open and checksum
>  */{code}
> We should validate that it's safe to remove the two-step process and do so.  
> FYI, [~smeng].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14081) hdfs dfsadmin -metasave metasave_test results NPE

2018-11-27 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701207#comment-16701207
 ] 

Shweta commented on HDFS-14081:
---

Thanks [~xiaochen] for the suggestion on the fix version and the explanation, 
and [~hgadre] for the review.

I have updated the code and uploaded the patch to address the checkstyle 
issues. Also, the failing unit tests pass locally on my machine.

Please review my latest patch and suggest any changes if needed. Thanks.
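
For readers following along, the fix presumably needs a guard of roughly this
shape before the block metadata is dumped (hypothetical helper for
illustration; see the attached patch for the real change):

{code:java}
import java.io.PrintWriter;

import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo;

/** Shape of the null guard only: metasave should skip blocks removed after
 *  being postponed instead of hitting an NPE in chooseSourceDatanodes(). */
final class MetasaveGuardSketch {
  static boolean dumpIfPresent(BlockInfo storedBlock, Block block,
      PrintWriter out) {
    if (storedBlock == null) {
      out.println("Block " + block + " is Null. It may have been deleted"
          + " after being postponed for misreplication.");
      return false;
    }
    // ... continue with the existing dumpBlockMeta() logic ...
    return true;
  }
}
{code}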

> hdfs dfsadmin -metasave metasave_test results NPE
> -
>
> Key: HDFS-14081
> URL: https://issues.apache.org/jira/browse/HDFS-14081
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-14081.001.patch, HDFS-14081.002.patch
>
>
> A race condition is encountered while adding a Block to 
> postponedMisreplicatedBlocks, which in turn tries to retrieve the Block from 
> the BlockManager, where it may not be present. 
> This happens in HA: metasave succeeded on the first NN but failed on the 
> second NN. The stack trace showing the NPE is as follows:
> {code}
> 2018-07-12 21:39:09,783 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 24 on 8020, call Call#1 Retry#0 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.metaSave from 
> 172.26.9.163:602342018-07-12 21:39:09,783 WARN org.apache.hadoop.ipc.Server: 
> IPC Server handler 24 on 8020, call Call#1 Retry#0 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.metaSave from 
> 172.26.9.163:60234java.lang.NullPointerException at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseSourceDatanodes(BlockManager.java:2175)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.dumpBlockMeta(BlockManager.java:830)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.metaSave(BlockManager.java:762)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.metaSave(FSNamesystem.java:1782)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.metaSave(FSNamesystem.java:1766)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.metaSave(NameNodeRpcServer.java:1320)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.metaSave(ClientNamenodeProtocolServerSideTranslatorPB.java:928)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675) {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14081) hdfs dfsadmin -metasave metasave_test results NPE

2018-11-27 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-14081:
--
Attachment: HDFS-14081.002.patch

> hdfs dfsadmin -metasave metasave_test results NPE
> -
>
> Key: HDFS-14081
> URL: https://issues.apache.org/jira/browse/HDFS-14081
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
>Reporter: Shweta
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-14081.001.patch, HDFS-14081.002.patch
>
>
> A race condition is encountered while adding a Block to 
> postponedMisreplicatedBlocks, which in turn tries to retrieve the Block from 
> the BlockManager, where it may not be present. 
> This happens in HA: metasave succeeded on the first NN but failed on the 
> second NN. The stack trace showing the NPE is as follows:
> {code}
> 2018-07-12 21:39:09,783 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 24 on 8020, call Call#1 Retry#0 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.metaSave from 
> 172.26.9.163:602342018-07-12 21:39:09,783 WARN org.apache.hadoop.ipc.Server: 
> IPC Server handler 24 on 8020, call Call#1 Retry#0 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.metaSave from 
> 172.26.9.163:60234java.lang.NullPointerException at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseSourceDatanodes(BlockManager.java:2175)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.dumpBlockMeta(BlockManager.java:830)
>  at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.metaSave(BlockManager.java:762)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.metaSave(FSNamesystem.java:1782)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.metaSave(FSNamesystem.java:1766)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.metaSave(NameNodeRpcServer.java:1320)
>  at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.metaSave(ClientNamenodeProtocolServerSideTranslatorPB.java:928)
>  at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675) {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-877) Ensure correct surefire version is being used for HDDS-4

2018-11-27 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701198#comment-16701198
 ] 

Xiaoyu Yao commented on HDDS-877:
-

Now I understand why this is happening: HDDS-702 pinned the hadoop jar version 
to 3.2.1-SNAPSHOT (surefire version 2.21.0), which does not contain the fix 
from HADOOP-15916 that changed the surefire version to 3.0.0-M1. 

 

> Ensure correct surefire version is being used for HDDS-4
> 
>
> Key: HDDS-877
> URL: https://issues.apache.org/jira/browse/HDDS-877
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-877-HDDS-4.001.patch
>
>
> Currently all tests are failing because a buggy version of surefire is being 
> used even after HADOOP-15916.  This ticket is opened to fix this in HDDS-4 or 
> trunk. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-877) Ensure correct surefire version is being used for HDDS-4

2018-11-27 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-877:

Status: Open  (was: Patch Available)

> Ensure correct surefire version is being used for HDDS-4
> 
>
> Key: HDDS-877
> URL: https://issues.apache.org/jira/browse/HDDS-877
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-877-HDDS-4.001.patch
>
>
> Currently all tests are failing because a buggy version of surefire is being 
> used even after HADOOP-15916.  This ticket is opened to fix this in HDDS-4 or 
> trunk. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-539) ozone datanode ignores the invalid options

2018-11-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701188#comment-16701188
 ] 

Hadoop QA commented on HDDS-539:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
48s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m  5s{color} | {color:orange} root: The patch generated 3 new + 1 unchanged - 
0 fixed = 4 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 45s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 41s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-539 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949757/HDDS-539.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4b337ace4e58 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 

[jira] [Commented] (HDFS-14092) Remove two-step create/append in WebHdfsFileSystem

2018-11-27 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701179#comment-16701179
 ] 

Siyao Meng commented on HDFS-14092:
---

I've taken a look at the client.
{code:java}
  /** Expects HTTP response 307 "Temporary Redirect". */
  public static class TemporaryRedirectOp implements Op {
static final TemporaryRedirectOp CREATE = new TemporaryRedirectOp(
PutOpParam.Op.CREATE);
static final TemporaryRedirectOp APPEND = new TemporaryRedirectOp(
PostOpParam.Op.APPEND);
static final TemporaryRedirectOp OPEN = new TemporaryRedirectOp(
GetOpParam.Op.OPEN);
static final TemporaryRedirectOp GETFILECHECKSUM = new TemporaryRedirectOp(
GetOpParam.Op.GETFILECHECKSUM);
...
{code}
Only for these 4 operations does the WebHDFS client: 1. send a request to the 
server expecting a 307 Temporary Redirect; 2. grab the new URL from the HTTP 
Location header and send a second request to it (which points to a DN). 
WebHdfsFileSystem#connect(URL) performs this logic.

 

So my understanding of the problem is that the Java 6 library was faulty, so 
it couldn't redirect the request automatically (like browsers normally do when 
they encounter HTTP 3xx). Therefore, we had to do it manually. CMIIW

> Remove two-step create/append in WebHdfsFileSystem
> --
>
> Key: HDFS-14092
> URL: https://issues.apache.org/jira/browse/HDFS-14092
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 3.2.0
>Reporter: Daniel Templeton
>Assignee: Siyao Meng
>Priority: Major
>
> Per the javadoc on the {{WebHdfsFileSystem.connect()}} method:
> {code}/**
>  * Two-step requests redirected to a DN
>  *
>  * Create/Append:
>  * Step 1) Submit a Http request with neither auto-redirect nor data.
>  * Step 2) Submit another Http request with the URL from the Location 
> header
>  * with data.
>  *
>  * The reason of having two-step create/append is for preventing clients 
> to
>  * send out the data before the redirect. This issue is addressed by the
>  * "Expect: 100-continue" header in HTTP/1.1; see RFC 2616, Section 8.2.3.
>  * Unfortunately, there are software library bugs (e.g. Jetty 6 http 
> server
>  * and Java 6 http client), which do not correctly implement "Expect:
>  * 100-continue". The two-step create/append is a temporary workaround for
>  * the software library bugs.
>  *
>  * Open/Checksum
>  * Also implements two-step connects for other operations redirected to
>  * a DN such as open and checksum
>  */{code}
> We should validate that it's safe to remove the two-step process and do so.  
> FYI, [~smeng].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14108) BlockManager Data Structures

2018-11-27 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701172#comment-16701172
 ] 

BELUGA BEHR commented on HDFS-14108:


The failed tests are unrelated to this patch.  They all belong to the 
{{org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts}} test suite, and this seems 
to be a known issue.  I've seen it on several different runs now across various 
JIRAs.  Please consider accepting this patch into the project.

> BlockManager Data Structures
> 
>
> Key: HDFS-14108
> URL: https://issues.apache.org/jira/browse/HDFS-14108
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14108.1.patch
>
>
> # Prefer {{ArrayList}} to {{LinkedList}} when simply adding/iterating
> # Prefer {{HashSet}} to {{TreeSet}} when no ordering is required
> # Other performance improvements
> # Checkstyle fixes
> https://stackoverflow.com/questions/322715/when-to-use-linkedlist-over-arraylist-in-java
> {code:java}
> final Set excludedNodes = new HashSet<>();
> for(BlockReconstructionWork rw : reconWork){
>   // Do not bother wasting time clearing out the collection, let GC do 
> that work later
>   excludedNodes.clear();
>   // use {{addAll}} here
>   for (DatanodeDescriptor dn : rw.getContainingNodes()) {
> excludedNodes.add(dn);
>   }
> {code}
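
Combining the two suggestions in the snippet above, the loop body might become
something like this sketch (types are simplified stand-ins for the
BlockManager ones, not the attached patch):

{code:java}
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** Illustrative only; DatanodeDescriptor et al. are stood in by generics. */
final class ExcludedNodesSketch {
  static <D> void process(List<? extends Collection<D>> containingNodesPerWork) {
    for (Collection<D> containingNodes : containingNodesPerWork) {
      // Fresh set per iteration (the old one becomes garbage for GC); the
      // HashSet copy constructor performs the bulk addAll internally.
      Set<D> excludedNodes = new HashSet<>(containingNodes);
      // ... choose reconstruction targets, excluding excludedNodes ...
    }
  }
}
{code}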



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-877) Ensure correct surefire version is being used for HDDS-4

2018-11-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701166#comment-16701166
 ] 

Hadoop QA commented on HDDS-877:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  7m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 19m 
45s{color} | {color:red} root in HDDS-4 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 14m 
50s{color} | {color:red} root in HDDS-4 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-hdds in HDDS-4 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-ozone in HDDS-4 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
55m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-hdds in HDDS-4 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-ozone in HDDS-4 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 13m 
57s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 57s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 35s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 31s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 26s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-877 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949756/HDDS-877-HDDS-4.001.patch
 |
| Optional Tests |  asflicense  

[jira] [Work started] (HDFS-14092) Remove two-step create/append in WebHdfsFileSystem

2018-11-27 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-14092 started by Siyao Meng.
-
> Remove two-step create/append in WebHdfsFileSystem
> --
>
> Key: HDFS-14092
> URL: https://issues.apache.org/jira/browse/HDFS-14092
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 3.2.0
>Reporter: Daniel Templeton
>Assignee: Siyao Meng
>Priority: Major
>
> Per the javadoc on the {{WebHdfsFileSystem.connect()}} method:
> {code}/**
>  * Two-step requests redirected to a DN
>  *
>  * Create/Append:
>  * Step 1) Submit a Http request with neither auto-redirect nor data.
>  * Step 2) Submit another Http request with the URL from the Location 
> header
>  * with data.
>  *
>  * The reason of having two-step create/append is for preventing clients 
> to
>  * send out the data before the redirect. This issue is addressed by the
>  * "Expect: 100-continue" header in HTTP/1.1; see RFC 2616, Section 8.2.3.
>  * Unfortunately, there are software library bugs (e.g. Jetty 6 http 
> server
>  * and Java 6 http client), which do not correctly implement "Expect:
>  * 100-continue". The two-step create/append is a temporary workaround for
>  * the software library bugs.
>  *
>  * Open/Checksum
>  * Also implements two-step connects for other operations redirected to
>  * a DN such as open and checksum
>  */{code}
> We should validate that it's safe to remove the two-step process and do so.  
> FYI, [~smeng].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14108) BlockManager Data Structures

2018-11-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701154#comment-16701154
 ] 

Hadoop QA commented on HDFS-14108:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14108 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949751/HDFS-14108.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1d5e6f3521a5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4c106fc |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25653/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25653/testReport/ |
| Max. process+thread count | 5848 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 

[jira] [Assigned] (HDFS-14092) Remove two-step create/append in WebHdfsFileSystem

2018-11-27 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng reassigned HDFS-14092:
-

Assignee: Siyao Meng

> Remove two-step create/append in WebHdfsFileSystem
> --
>
> Key: HDFS-14092
> URL: https://issues.apache.org/jira/browse/HDFS-14092
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 3.2.0
>Reporter: Daniel Templeton
>Assignee: Siyao Meng
>Priority: Major
>
> Per the javadoc on the {{WebHdfsFileSystem.connect()}} method:
> {code}/**
>  * Two-step requests redirected to a DN
>  *
>  * Create/Append:
>  * Step 1) Submit a Http request with neither auto-redirect nor data.
>  * Step 2) Submit another Http request with the URL from the Location 
> header
>  * with data.
>  *
>  * The reason of having two-step create/append is for preventing clients 
> to
>  * send out the data before the redirect. This issue is addressed by the
>  * "Expect: 100-continue" header in HTTP/1.1; see RFC 2616, Section 8.2.3.
>  * Unfortunately, there are software library bugs (e.g. Jetty 6 http 
> server
>  * and Java 6 http client), which do not correctly implement "Expect:
>  * 100-continue". The two-step create/append is a temporary workaround for
>  * the software library bugs.
>  *
>  * Open/Checksum
>  * Also implements two-step connects for other operations redirected to
>  * a DN such as open and checksum
>  */{code}
> We should validate that it's safe to remove the two-step process and do so.  
> FYI, [~smeng].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-696) Bootstrap genesis SCM(CA) with self-signed certificate.

2018-11-27 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-696:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~ajayydv], [~xyao] Thanks for the reviews. I have committed this to the 
feature branch.

> Bootstrap genesis SCM(CA) with self-signed certificate.
> ---
>
> Key: HDDS-696
> URL: https://issues.apache.org/jira/browse/HDDS-696
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-696-HDDS-4.001.patch, HDDS-696-HDDS-4.002.patch, 
> HDDS-696-HDDS-4.003.patch, HDDS-696-HDDS-4.004.patch
>
>
> If security is enabled, SCM will generate the CA certs and bootstrap a CA. If 
> it is already bootstrapped, the keys and root certificates are read from 
> the secure store; if not, they are generated.
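
For context, bootstrapping a self-signed root certificate with BouncyCastle
looks roughly like the sketch below (the subject name, key size, and validity
are arbitrary choices here, not the committed patch):

{code:java}
import java.math.BigInteger;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.cert.X509Certificate;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.Date;

import org.bouncycastle.asn1.x500.X500Name;
import org.bouncycastle.cert.X509v3CertificateBuilder;
import org.bouncycastle.cert.jcajce.JcaX509CertificateConverter;
import org.bouncycastle.cert.jcajce.JcaX509v3CertificateBuilder;
import org.bouncycastle.operator.ContentSigner;
import org.bouncycastle.operator.jcajce.JcaContentSignerBuilder;

/** Minimal self-signed root cert, to illustrate the bootstrap step only. */
final class SelfSignedCaSketch {
  static X509Certificate newRootCert() throws Exception {
    KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
    gen.initialize(2048);
    KeyPair keyPair = gen.generateKeyPair();

    X500Name subject = new X500Name("CN=scm-root-ca");
    X509v3CertificateBuilder builder = new JcaX509v3CertificateBuilder(
        subject,                                 // issuer == subject: self-signed
        BigInteger.valueOf(System.currentTimeMillis()),
        new Date(),
        Date.from(Instant.now().plus(365, ChronoUnit.DAYS)),
        subject,
        keyPair.getPublic());
    ContentSigner signer =
        new JcaContentSignerBuilder("SHA256withRSA").build(keyPair.getPrivate());
    return new JcaX509CertificateConverter()
        .getCertificate(builder.build(signer));
  }
}
{code}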



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-642) Add chill mode exit condition for pipeline availability

2018-11-27 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701147#comment-16701147
 ] 

Ajay Kumar commented on HDDS-642:
-

[~linyiqun] thanks for updating the patch. The patch looks very good to me. A 
few NITs:
 # SCMChillModeManager
 ** L168, shall we mention in the javadoc that this function is only for 
testing purposes?
 ** L79, we should either avoid instantiating PipelineChillModeRule or fail if 
pipelineManager is null.
 ** Since in a real cluster we may not have an open pipeline in the beginning, 
this rule should be turned off by default. How about a config to turn it 
on/off? (Default value off; we can enable it explicitly for smoke tests. See 
the sketch after this list.)
 # TestScmChillMode
 ** L254, by default chill mode is turned on, so this should be true. Shall we 
replace this wait logic with an assert?
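
A minimal sketch of the config-gated registration, assuming a hypothetical 
config key and rule map (neither name is taken from the actual HDDS code):
{code:java}
// Hypothetical sketch: register the pipeline rule only when explicitly
// enabled. The config key and exitRules map are illustrative assumptions.
boolean pipelineRuleEnabled = conf.getBoolean(
    "hdds.scm.chillmode.pipeline-availability.check", false);
if (pipelineRuleEnabled && pipelineManager != null) {
  exitRules.put("pipelineAvailabilityRule",
      new PipelineChillModeRule(pipelineManager));
}
{code}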

> Add chill mode exit condition for pipeline availability
> ---
>
> Key: HDDS-642
> URL: https://issues.apache.org/jira/browse/HDDS-642
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Arpit Agarwal
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HDDS-642.001.patch, HDDS-642.002.patch, 
> HDDS-642.003.patch, HDDS-642.004.patch, HDDS-642.005.patch
>
>
> SCM should not exit chill-mode until at least 1 write pipeline is available. 
> Else smoke tests are unreliable.
> This is not an issue for real clusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-696) Bootstrap genesis SCM(CA) with self-signed certificate.

2018-11-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701144#comment-16701144
 ] 

Hadoop QA commented on HDDS-696:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
22s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
52s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
21s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} HDDS-4 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} HDDS-4 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
11s{color} | {color:green} root: The patch generated 0 new + 3 unchanged - 3 
fixed = 3 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 47s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 42s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-696 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949753/HDDS-696-HDDS-4.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7ee9a2e6f2f1 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 

[jira] [Commented] (HDDS-839) Wait for other services in the started script of hadoop-runner base docker image

2018-11-27 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701135#comment-16701135
 ] 

Anu Engineer commented on HDDS-839:
---

+1, Thanks for the patch.

> Wait for other services in the started script of hadoop-runner base docker 
> image
> 
>
> Key: HDDS-839
> URL: https://issues.apache.org/jira/browse/HDDS-839
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Attachments: HDDS-839-docker-hadoop-runner.001.patch, 
> HDDS-839-docker-hadoop-runner.002.patch
>
>
> As described in the parent issue, we need a simple method to handle service 
> dependencies in kubernetes clusters (usually as a workaround when some 
> clients can't retry with renewed DNS information).
> It could also be useful to minimize the wait time in the docker-compose 
> clusters.
> The easiest implementation is modifying the started script of the 
> apache/hadoop-runner base image and adding a bash loop which checks the 
> availability of the TCP port (with netcat).
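
The same wait loop expressed in Java, as a rough sketch (the patch itself adds 
a bash loop with netcat to the started script; the method below is only an 
illustration):
{code:java}
import java.io.IOException;
import java.net.Socket;

// Block until the given TCP port accepts connections, polling once a second.
static void waitForTcpPort(String host, int port) throws InterruptedException {
  while (true) {
    try (Socket ignored = new Socket(host, port)) {
      return;  // dependency is reachable
    } catch (IOException e) {
      Thread.sleep(1000L);  // not up yet; retry
    }
  }
}
{code}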



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14102) verifyBlockPlacement

2018-11-27 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701116#comment-16701116
 ] 

BELUGA BEHR commented on HDFS-14102:


A different set of unit tests failed.  Please consider this patch for inclusion.

> verifyBlockPlacement
> 
>
> Key: HDFS-14102
> URL: https://issues.apache.org/jira/browse/HDFS-14102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14102.1.patch, HDFS-14102.2.patch, 
> HDFS-14102.3.patch
>
>
>  
> {code:java|title=BlockPlacementPolicyDefault.java}
> // 1. Check that all locations are different.
> // 2. Count locations on different racks.
> Set<String> racks = new TreeSet<>();
> for (DatanodeInfo dn : locs)
>   racks.add(dn.getNetworkLocation());
> ...
> racks.size(){code}
>  
>  Here, the code is counting the number of distinct Network Locations. 
> However, it is using a TreeSet which has overhead to maintain item order and 
> uses a linked structure internally. This overhead is unneeded since all that 
> is required here is a count.
> {quote}A NavigableSet implementation based on a TreeMap. The elements are 
> ordered using their natural ordering, or by a Comparator provided at set 
> creation time, depending on which constructor is used.
>  This implementation provides guaranteed log(n) time cost for the basic 
> operations (add, remove and contains).
> [https://docs.oracle.com/javase/7/docs/api/java/util/TreeSet.html]
> {quote}
>  
>  Use Java streams for readability and because {{distinct()}} uses a 
> {{HashSet}} under the covers to perform the distinct action. {{HashSet}} 
> uses an array internally and has constant-time performance for the {{add}} 
> method.
> [https://github.com/apache/hadoop/blob/27978bcb66a9130cbf26d37ec454c0b7fcdc2530/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java#L1042]
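
A minimal sketch of the suggested streams version (assuming {{locs}} is a 
{{DatanodeInfo[]}}):
{code:java}
import java.util.Arrays;

// distinct() collects into a HashSet internally, so this counts the
// distinct network locations without maintaining any ordering.
long numRacks = Arrays.stream(locs)
    .map(DatanodeInfo::getNetworkLocation)
    .distinct()
    .count();
{code}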



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2018-11-27 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701114#comment-16701114
 ] 

Íñigo Goiri commented on HDFS-14085:


[^HDFS-14085-HDFS-13891-03.patch] LGTM.
A minor nit: instead of:
{code}
assertTrue(finfo.getOwner().equals("owner1")
&& finfo1[0].getOwner().equals("owner1"));
assertTrue(finfo.getGroup().equals("group1")
&& finfo1[0].getGroup().equals("group1"));
{code}
I would do:
{code}
assertEquals("owner1", finfo.getOwner());
assertEquals("owner1", finfo1[0].getOwner());
assertEquals("group1", finfo.getGroup());
assertEquals("group1", finfo1[0].getGroup());
{code}

> RBF: LS command for root shows wrong owner and permission information.
> --
>
> Key: HDFS-14085
> URL: https://issues.apache.org/jira/browse/HDFS-14085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14085-HDFS-13891-01.patch, 
> HDFS-14085-HDFS-13891-02.patch, HDFS-14085-HDFS-13891-03.patch
>
>
> The LS command for / lists all the mount entries, but the permission 
> displayed is the default permission (777), and the owner and group info is 
> the same as that of the user calling it. It actually should be the same as 
> that of the destination of the mount point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-539) ozone datanode ignores the invalid options

2018-11-27 Thread Vinicius Higa Murakami (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinicius Higa Murakami updated HDDS-539:

Attachment: HDDS-539.007.patch

> ozone datanode ignores the invalid options
> --
>
> Key: HDDS-539
> URL: https://issues.apache.org/jira/browse/HDDS-539
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Namit Maheshwari
>Assignee: Vinicius Higa Murakami
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-539.003.patch, HDDS-539.004.patch, 
> HDDS-539.005.patch, HDDS-539.006.patch, HDDS-539.007.patch, HDDS-539.patch
>
>
> The ozone datanode command starts the datanode and ignores invalid options, 
> apart from help:
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone datanode -help
> Starts HDDS Datanode
> {code}
> For all other invalid options, it just ignores them and starts the DN, like 
> below:
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone datanode -ABC
> 2018-09-22 00:59:34,462 [main] INFO - STARTUP_MSG:
> /
> STARTUP_MSG: Starting HddsDatanodeService
> STARTUP_MSG: host = 
> ctr-e138-1518143905142-481027-01-02.hwx.site/172.27.54.20
> STARTUP_MSG: args = [-ABC]
> STARTUP_MSG: version = 3.2.0-SNAPSHOT
> STARTUP_MSG: classpath = 
> 

[jira] [Commented] (HDFS-14106) Improve NamenodeFsck copyBlock

2018-11-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701105#comment-16701105
 ] 

Hadoop QA commented on HDFS-14106:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 90 unchanged - 2 fixed = 91 total (was 92) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 43s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14106 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949741/HDFS-14106.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8d44fc0f8955 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 300f772 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25649/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25649/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDDS-696) Bootstrap genesis SCM(CA) with self-signed certificate.

2018-11-27 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701102#comment-16701102
 ] 

Xiaoyu Yao commented on HDDS-696:
-

Thanks [~anu] for the update. Patch v4 looks good to me. +1, pending Jenkins.

> Bootstrap genesis SCM(CA) with self-signed certificate.
> ---
>
> Key: HDDS-696
> URL: https://issues.apache.org/jira/browse/HDDS-696
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-696-HDDS-4.001.patch, HDDS-696-HDDS-4.002.patch, 
> HDDS-696-HDDS-4.003.patch, HDDS-696-HDDS-4.004.patch
>
>
> If security is enabled, SCM will generate the CA certs and bootstrap a CA. If 
> it is already bootstrapped, the keys and root certificates are read from the 
> secure store; if not, they are generated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14102) verifyBlockPlacement

2018-11-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701103#comment-16701103
 ] 

Hadoop QA commented on HDFS-14102:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 35 unchanged - 2 fixed = 35 total (was 37) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 16s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14102 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949742/HDFS-14102.3.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3a973f770e7c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 300f772 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25650/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25650/testReport/ |
| Max. process+thread count | 4319 (vs. 

[jira] [Updated] (HDDS-877) Ensure correct surefire version is being used for HDDS-4

2018-11-27 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-877:

Status: Patch Available  (was: Open)

> Ensure correct surefire version is being used for HDDS-4
> 
>
> Key: HDDS-877
> URL: https://issues.apache.org/jira/browse/HDDS-877
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-877-HDDS-4.001.patch
>
>
> Currently all tests are failing because a buggy version of surefire is being 
> used even after HADOOP-15916.  This ticket is opened to fix this in HDDS-4 or 
> trunk. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-877) Ensure correct surefire version is being used for HDDS-4

2018-11-27 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-877:

Attachment: HDDS-877-HDDS-4.001.patch

> Ensure correct surefire version is being used for HDDS-4
> 
>
> Key: HDDS-877
> URL: https://issues.apache.org/jira/browse/HDDS-877
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-877-HDDS-4.001.patch
>
>
> Currently all tests are failing because a buggy version of surefire is being 
> used even after HADOOP-15916.  This ticket is opened to fix this in HDDS-4 or 
> trunk. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-877) Ensure correct surefire version is being used for HDDS-4

2018-11-27 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-877:
---

 Summary: Ensure correct surefire version is being used for HDDS-4
 Key: HDDS-877
 URL: https://issues.apache.org/jira/browse/HDDS-877
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


Currently all tests are failing because a buggy version of surefire is being 
used even after HADOOP-15916.  This ticket is opened to fix this in HDDS-4 or 
trunk. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2018-11-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701086#comment-16701086
 ] 

Hadoop QA commented on HDFS-14085:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
56s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 
26s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 88m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14085 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949744/HDFS-14085-HDFS-13891-03.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ece5c40e2fd2 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / 99621b6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25651/testReport/ |
| Max. process+thread count | 1046 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25651/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: LS command for root shows wrong owner and permission information.
> 

[jira] [Commented] (HDFS-14107) FileContext Delete on Exit Improvements

2018-11-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701067#comment-16701067
 ] 

Hadoop QA commented on HDFS-14107:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
40s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m  
2s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m  2s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 53 unchanged - 2 fixed = 53 total (was 55) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  0m 
39s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 44s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14107 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949745/HDFS-14107.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3b93a2e27ab1 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 300f772 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25652/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25652/artifact/out/patch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25652/artifact/out/patch-compile-root.txt
 |
| mvnsite | 

[jira] [Commented] (HDDS-696) Bootstrap genesis SCM(CA) with self-signed certificate.

2018-11-27 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701045#comment-16701045
 ] 

Anu Engineer commented on HDDS-696:
---

[~xyao] Thanks for the review and comments. Patch v4 fixes those issues. 
Please see more detailed comments below.
{quote}BlockTokenException.java#Line 26: NIT: accidental change can be removed.
{quote}
Fixed.
{quote}CertificateCodec.java - Files.setPosixFilePermissions already has it 
covered.
{quote}
You are absolutely right. Thanks for pointing this out. Removed this code. In 
the KeyCodec, this function is used in test cases. I did not repeat the same 
test for certificates, even though that was the idea.
{quote}static JcaX509CertificateConverter: this will be useful for the CA. 
Also, we need to call setProvider() to honor the "BC"
{quote}
Fixed. For the provider, we want to use the default Java class here. When we 
use the BC provider, we get a parse error. I can investigate this more.
{quote}Line 201: basePath is not honored in the code. (Same on Line 248)
{quote}
Fixed.
{quote}Line 255: need to use the getInstance with provider name parameter to 
honor "BC" provider from security config.
{quote}
I am sorry, did you mean the CertificateHolder? That is a BC class, not from 
the JCA.
{quote}CertificateServer.java#Line 56: SCMSecurityException can be removed.
{quote}
Fixed.
{quote}CertificateSignRequest.java. The file location does not match the 
package declaration
{quote}
Moved all files to certificates.utils.
{quote}DefaultCAServer.java# Line 63: NIT: can we start a new line for "1. 
Success…", Line 84: NIT: typo: "success"
{quote}
Fixed.
{quote}Line 227/245: should we remove the securityConfig parameter and use the 
member variable config instead if we could
{quote}
Fixed.
{quote}it has been initialized outside the DefaultCAServer anyway?
{quote}
The init call does that. Do you want this to be passed via ctor?
{quote}Line 65-68: NIT: let's be consistent with the order of "final static"
{quote}
Fixed.
{quote}Line 324 will throw if it is not posix, do we still need a separate 
check here?
{quote}
I use this in tests to simulate failure as if the file system is not posix.
{quote}SelfSignedCertificate.java# Line 20: file need to be moved under 
certificate.utils with the package name change.
{quote}
Fixed.
{quote}I think we should simply use endDate.atTime(LocalTime.MAX) to indicate 
proper end time or
{quote}
Thanks, I converted both beginDate and endDate to use LocalTime.MIN and 
LocalTime.MAX respectively.
{quote}Line 216: do we need to +1 considering we allow the certificate to be 
valid from the begin
{quote}
Fixed.
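
For illustration, the begin/end conversion might look like this (the variable 
names and zone choice are assumptions, not the patch itself):
{code:java}
import java.time.LocalDate;
import java.time.LocalTime;
import java.time.ZoneId;
import java.util.Date;

LocalDate beginDate = LocalDate.now();
LocalDate endDate = beginDate.plusDays(365);

// Valid from the first instant of beginDate to the last instant of endDate.
Date notBefore = Date.from(
    beginDate.atTime(LocalTime.MIN).atZone(ZoneId.systemDefault()).toInstant());
Date notAfter = Date.from(
    endDate.atTime(LocalTime.MAX).atZone(ZoneId.systemDefault()).toInstant());
{code}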

> Bootstrap genesis SCM(CA) with self-signed certificate.
> ---
>
> Key: HDDS-696
> URL: https://issues.apache.org/jira/browse/HDDS-696
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-696-HDDS-4.001.patch, HDDS-696-HDDS-4.002.patch, 
> HDDS-696-HDDS-4.003.patch, HDDS-696-HDDS-4.004.patch
>
>
> If security is enabled, SCM will generate the CA certs and bootstrap a CA. If 
> it is already bootstrapped, the keys and root certificates are read from the 
> secure store; if not, they are generated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14103) Review Logging of BlockPlacementPolicyDefault

2018-11-27 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701036#comment-16701036
 ] 

BELUGA BEHR commented on HDFS-14103:


Example: no 'null' guard:

https://github.com/apache/hadoop/blob/27978bcb66a9130cbf26d37ec454c0b7fcdc2530/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java#L895-L904

But yes, if debug logging is enabled, there will be a builder there, so there 
is no reason to double-check:

{code}
StringBuilder builder = null;
if (LOG.isDebugEnabled()) {
  builder = debugLoggingBuilder.get();
  builder.setLength(0);
  builder.append("[");
}
{code}

> Review Logging of BlockPlacementPolicyDefault
> -
>
> Key: HDFS-14103
> URL: https://issues.apache.org/jira/browse/HDFS-14103
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14103.1.patch
>
>
> Review use of SLF4J in {{BlockPlacementPolicyDefault.java}}
> Other minor logging improvements.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-696) Bootstrap genesis SCM(CA) with self-signed certificate.

2018-11-27 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-696:
--
Attachment: HDDS-696-HDDS-4.004.patch

> Bootstrap genesis SCM(CA) with self-signed certificate.
> ---
>
> Key: HDDS-696
> URL: https://issues.apache.org/jira/browse/HDDS-696
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-696-HDDS-4.001.patch, HDDS-696-HDDS-4.002.patch, 
> HDDS-696-HDDS-4.003.patch, HDDS-696-HDDS-4.004.patch
>
>
> If security is enabled, SCM will generate the CA certs and bootstrap a CA. If 
> it is already bootstrapped, the keys and root certificates are read from the 
> secure store; if not, they are generated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14108) BlockManager Data Structures

2018-11-27 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14108:
---
Attachment: HDFS-14108.1.patch

> BlockManager Data Structures
> 
>
> Key: HDFS-14108
> URL: https://issues.apache.org/jira/browse/HDFS-14108
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14108.1.patch
>
>
> # Prefer {{ArrayList}} to {{LinkedList}} when simply adding/iterating
> # Prefer {{HashSet}} to {{TreeSet}} when no ordering is required
> # Other performance improvements
> # Checkstyle fixes
> https://stackoverflow.com/questions/322715/when-to-use-linkedlist-over-arraylist-in-java
> {code:java}
> final Set<DatanodeDescriptor> excludedNodes = new HashSet<>();
> for (BlockReconstructionWork rw : reconWork) {
>   // Do not bother wasting time clearing out the collection; let GC do
>   // that work later
>   excludedNodes.clear();
>   // use {{addAll}} here
>   for (DatanodeDescriptor dn : rw.getContainingNodes()) {
>     excludedNodes.add(dn);
>   }
> }
> {code}
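
As a sketch, the {{addAll}} suggestion from the snippet above reduces the 
inner loop to a single call:
{code:java}
// Bulk-add the containing nodes instead of copying them one by one.
excludedNodes.addAll(rw.getContainingNodes());
{code}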



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14108) BlockManager Data Structures

2018-11-27 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HDFS-14108:
--

 Summary: BlockManager Data Structures
 Key: HDFS-14108
 URL: https://issues.apache.org/jira/browse/HDFS-14108
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Affects Versions: 3.2.0
Reporter: BELUGA BEHR
Assignee: BELUGA BEHR
 Attachments: HDFS-14108.1.patch

# Prefer {{ArrayList}} to {{LinkedList}} when simply adding/iterating
# Prefer {{HashSet}} to {{TreeSet}} when no ordering is required
# Other performance improvements
# Checkstyle fixes

https://stackoverflow.com/questions/322715/when-to-use-linkedlist-over-arraylist-in-java

{code:java}
final Set<DatanodeDescriptor> excludedNodes = new HashSet<>();
for (BlockReconstructionWork rw : reconWork) {
  // Do not bother wasting time clearing out the collection; let GC do that
  // work later
  excludedNodes.clear();
  // use {{addAll}} here
  for (DatanodeDescriptor dn : rw.getContainingNodes()) {
    excludedNodes.add(dn);
  }
}
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14103) Review Logging of BlockPlacementPolicyDefault

2018-11-27 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701031#comment-16701031
 ] 

BELUGA BEHR commented on HDFS-14103:


The unit test is:

{code}
java.lang.AssertionError: Wrong number of PendingReplication blocks 
expected:<2> but was:<1>
{code}

I reviewed my work again and I only changed logging, so I would not expect any 
functional changes.

The {{builder}} object can never be 'null'.  There are several other instances 
in the code where {{builder}} is accessed without checking for a 'null' value, 
so I am simply unifying the code.

> Review Logging of BlockPlacementPolicyDefault
> -
>
> Key: HDFS-14103
> URL: https://issues.apache.org/jira/browse/HDFS-14103
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14103.1.patch
>
>
> Review use of SLF4J in {{BlockPlacementPolicyDefault.java}}
> Other minor logging improvements.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14105) NamenodeFsck HashSet

2018-11-27 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HDFS-14105:

Status: Patch Available  (was: Open)

> NamenodeFsck HashSet
> 
>
> Key: HDFS-14105
> URL: https://issues.apache.org/jira/browse/HDFS-14105
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HDFS-14105.1.patch
>
>
> Use {{HashSet}} instead of {{TreeSet}}.  {{TreeSet}} has the overhead of 
> keeping elements in order even though ordering is not taken into 
> consideration in this class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-876) add blockade tests for flaky network

2018-11-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701030#comment-16701030
 ] 

Hadoop QA commented on HDDS-876:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
16s{color} | {color:red} dist in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
1s{color} | {color:orange} The patch generated 50 new + 0 unchanged - 0 fixed = 
50 total (was 0) {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
17s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} dist in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
24s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-876 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949743/HDDS-876.001.patch |
| Optional Tests |  asflicense  shellcheck  shelldocs  compile  javac  javadoc  
mvninstall  mvnsite  unit  shadedclient  pylint  |
| uname | Linux d4c033fa24c2 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 300f772 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| shellcheck | v0.4.6 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1813/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
| pylint | v1.9.2 |
| pylint | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1813/artifact/out/diff-patch-pylint.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1813/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1813/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 446 (vs. ulimit of 1) |
| modules | C: 

[jira] [Updated] (HDFS-14105) NamenodeFsck HashSet

2018-11-27 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HDFS-14105:

Status: Open  (was: Patch Available)

> NamenodeFsck HashSet
> 
>
> Key: HDFS-14105
> URL: https://issues.apache.org/jira/browse/HDFS-14105
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HDFS-14105.1.patch
>
>
> Use {{HashSet}} instead of {{TreeSet}}.  {{TreeSet}} has the overhead of 
> keeping elements in order even though ordering is not taken into 
> consideration in this class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14105) NamenodeFsck HashSet

2018-11-27 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701027#comment-16701027
 ] 

Giovanni Matteo Fumarola commented on HDFS-14105:
-

Thanks [~belugabehr]. We should re-run the unit tests for this patch.

> NamenodeFsck HashSet
> 
>
> Key: HDFS-14105
> URL: https://issues.apache.org/jira/browse/HDFS-14105
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Trivial
> Attachments: HDFS-14105.1.patch
>
>
> Use {{HashSet}} instead of {{TreeSet}}.  {{TreeSet}} has the overhead of 
> keeping elements in order even though ordering is not taken into 
> consideration in this class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14108) BlockManager Data Structures

2018-11-27 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14108:
---
Status: Patch Available  (was: Open)

> BlockManager Data Structures
> 
>
> Key: HDFS-14108
> URL: https://issues.apache.org/jira/browse/HDFS-14108
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14108.1.patch
>
>
> # Prefer {{ArrayList}} to {{LinkedList}} when simply adding/iterating
> # Prefer {{HashSet}} to {{TreeSet}} when no ordering is required
> # Other performance improvements
> # Checkstyle fixes
> https://stackoverflow.com/questions/322715/when-to-use-linkedlist-over-arraylist-in-java
> {code:java}
> final Set<DatanodeDescriptor> excludedNodes = new HashSet<>();
> for (BlockReconstructionWork rw : reconWork) {
>   // Do not bother wasting time clearing out the collection; let GC do
>   // that work later
>   excludedNodes.clear();
>   // use {{addAll}} here
>   for (DatanodeDescriptor dn : rw.getContainingNodes()) {
>     excludedNodes.add(dn);
>   }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14103) Review Logging of BlockPlacementPolicyDefault

2018-11-27 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701021#comment-16701021
 ] 

Giovanni Matteo Fumarola edited comment on HDFS-14103 at 11/27/18 8:58 PM:
---

Thanks [~belugabehr]. Is the failed test related to the change?

 

I don't understand why you removed: "&& builder != null"


was (Author: giovanni.fumarola):
Thanks [~belugabehr]. Is the failed test related to the change?

> Review Logging of BlockPlacementPolicyDefault
> -
>
> Key: HDFS-14103
> URL: https://issues.apache.org/jira/browse/HDFS-14103
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14103.1.patch
>
>
> Review use of SLF4J in {{BlockPlacementPolicyDefault.java}}
> Other minor logging improvements.
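
For readers following along, a generic sketch of the kind of SLF4J cleanup such a review typically targets (illustrative only, not lines from the patch):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Slf4jSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(Slf4jSketch.class);

  public static void main(String[] args) {
    String node = "dn-1";
    // Parameterized logging defers message construction until the level
    // is enabled, so simple messages need no isDebugEnabled() guard.
    LOG.debug("Skipping node {} for placement", node);        // preferred
    // LOG.debug("Skipping node " + node + " for placement"); // avoid
  }
}
{code}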



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14103) Review Logging of BlockPlacementPolicyDefault

2018-11-27 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16701021#comment-16701021
 ] 

Giovanni Matteo Fumarola commented on HDFS-14103:
-

Thanks [~belugabehr]. Is the failed test related to the change?

> Review Logging of BlockPlacementPolicyDefault
> -
>
> Key: HDFS-14103
> URL: https://issues.apache.org/jira/browse/HDFS-14103
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14103.1.patch
>
>
> Review use of SLF4J in {{BlockPlacementPolicyDefault.java}}
> Other minor logging improvements.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14107) FileContext Delete on Exit Improvements

2018-11-27 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14107:
---
Attachment: HDFS-14107.1.patch

> FileContext Delete on Exit Improvements
> ---
>
> Key: HDFS-14107
> URL: https://issues.apache.org/jira/browse/HDFS-14107
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14107.1.patch
>
>
> {code:java|title=FileContext.java}
> synchronized (DELETE_ON_EXIT) {
>   Set<Entry<FileContext, Set<Path>>> set = DELETE_ON_EXIT.entrySet();
>   for (Entry<FileContext, Set<Path>> entry : set) {
>     FileContext fc = entry.getKey();
>     Set<Path> paths = entry.getValue();
>     for (Path path : paths) {
>       try {
>         fc.delete(path, true);
>       } catch (IOException e) {
>         LOG.warn("Ignoring failure to deleteOnExit for path " + path);
>       }
>     }
>   }
>   DELETE_ON_EXIT.clear();
> {code}
> # Include the {{IOException}} in the logging so that admins can know why the 
> file was not deleted
> # Do not bother clearing out the data structure.  This code is only called if 
> the JVM is going down.  Better to spend the time allowing another shutdown 
> hook to run than to spend time cleaning this thing up.
> # Use Guava {{Multimap}} for readability
> # Paths are currently stored in a {{TreeSet}}.  This set implementation 
> orders the files by name.  It does not seem worth much to order the files.  
> Use a faster {{HashSet}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-804) Block token: Add secret token manager

2018-11-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16700972#comment-16700972
 ] 

Hadoop QA commented on HDDS-804:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  7m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 35m 
30s{color} | {color:red} root in HDDS-4 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 15m 
41s{color} | {color:red} root in HDDS-4 failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
25s{color} | {color:green} HDDS-4 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
24s{color} | {color:red} common in HDDS-4 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
28s{color} | {color:red} common in HDDS-4 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
28s{color} | {color:red} ozone-manager in HDDS-4 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} common in HDDS-4 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} common in HDDS-4 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} ozone-manager in HDDS-4 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} common in HDDS-4 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
22s{color} | {color:red} common in HDDS-4 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} ozone-manager in HDDS-4 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
10s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
10s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 15m 
15s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 15m 15s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
22s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
23s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
22s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} ozone-manager in 

[jira] [Updated] (HDFS-14107) FileContext Delete on Exit Improvements

2018-11-27 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14107:
---
Status: Patch Available  (was: Open)

> FileContext Delete on Exit Improvements
> ---
>
> Key: HDFS-14107
> URL: https://issues.apache.org/jira/browse/HDFS-14107
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14107.1.patch
>
>
> {code:java|title=FileContext.java}
> synchronized (DELETE_ON_EXIT) {
>   Set<Entry<FileContext, Set<Path>>> set = DELETE_ON_EXIT.entrySet();
>   for (Entry<FileContext, Set<Path>> entry : set) {
>     FileContext fc = entry.getKey();
>     Set<Path> paths = entry.getValue();
>     for (Path path : paths) {
>       try {
>         fc.delete(path, true);
>       } catch (IOException e) {
>         LOG.warn("Ignoring failure to deleteOnExit for path " + path);
>       }
>     }
>   }
>   DELETE_ON_EXIT.clear();
> {code}
> # Include the {{IOException}} in the logging so that admins can know why the 
> file was not deleted
> # Do not bother clearing out the data structure.  This code is only called if 
> the JVM is going down.  Better to spend the time allowing another shutdown 
> hook to run than to spend time cleaning this thing up.
> # Use Guava {{Multimap}} for readability
> # Paths are currently stored in a {{TreeSet}}.  This set implementation 
> orders the files by name.  It does not seem worth much to order the files.  
> Use a faster {{HashSet}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14107) FileContext Delete on Exit Improvements

2018-11-27 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HDFS-14107:
--

 Summary: FileContext Delete on Exit Improvements
 Key: HDFS-14107
 URL: https://issues.apache.org/jira/browse/HDFS-14107
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.2.0
Reporter: BELUGA BEHR


{code:java|title=FileContext.java}
synchronized (DELETE_ON_EXIT) {
  Set<Entry<FileContext, Set<Path>>> set = DELETE_ON_EXIT.entrySet();
  for (Entry<FileContext, Set<Path>> entry : set) {
    FileContext fc = entry.getKey();
    Set<Path> paths = entry.getValue();
    for (Path path : paths) {
      try {
        fc.delete(path, true);
      } catch (IOException e) {
        LOG.warn("Ignoring failure to deleteOnExit for path " + path);
      }
    }
  }
  DELETE_ON_EXIT.clear();
{code}

# Include the {{IOException}} in the logging so that admins can know why the 
file was not deleted
# Do not bother clearing out the data structure.  This code is only called if 
the JVM is going down.  Better to spend the time allowing another shutdown hook 
to run than to spend time cleaning this thing up.
# Use Guava {{Multimap}} for readability
# Paths are currently stored in a {{TreeSet}}.  This set implementation orders 
the files by name.  It does not seem worth much to order the files.  Use a 
faster {{HashSet}}.
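
A minimal sketch of the shape items 1-3 suggest, using Guava's {{Multimap}} and plain strings in place of {{FileContext}}/{{Path}} (an illustration of the idea, not the attached patch):

{code:java}
import com.google.common.collect.HashMultimap;
import com.google.common.collect.Multimap;
import java.io.IOException;
import java.util.Map;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DeleteOnExitSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(DeleteOnExitSketch.class);
  // A Multimap replaces the nested Map<FileContext, Set<Path>> for readability.
  private static final Multimap<String, String> DELETE_ON_EXIT =
      HashMultimap.create();

  static void processDeleteOnExit() {
    for (Map.Entry<String, String> e : DELETE_ON_EXIT.entries()) {
      try {
        delete(e.getKey(), e.getValue());
      } catch (IOException ex) {
        // Pass the exception itself so admins can see why the delete failed.
        LOG.warn("Ignoring failure to deleteOnExit for path " + e.getValue(), ex);
      }
    }
    // No DELETE_ON_EXIT.clear(): the JVM is shutting down anyway.
  }

  private static void delete(String fc, String path) throws IOException {
    // stand-in for fc.delete(path, true)
  }
}
{code}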



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14107) FileContext Delete on Exit Improvements

2018-11-27 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR reassigned HDFS-14107:
--

Assignee: BELUGA BEHR

> FileContext Delete on Exit Improvements
> ---
>
> Key: HDFS-14107
> URL: https://issues.apache.org/jira/browse/HDFS-14107
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
>
> {code:java|title=FileContext.java}
> synchronized (DELETE_ON_EXIT) {
>   Set<Entry<FileContext, Set<Path>>> set = DELETE_ON_EXIT.entrySet();
>   for (Entry<FileContext, Set<Path>> entry : set) {
>     FileContext fc = entry.getKey();
>     Set<Path> paths = entry.getValue();
>     for (Path path : paths) {
>       try {
>         fc.delete(path, true);
>       } catch (IOException e) {
>         LOG.warn("Ignoring failure to deleteOnExit for path " + path);
>       }
>     }
>   }
>   DELETE_ON_EXIT.clear();
> {code}
> # Include the {{IOException}} in the logging so that admins can know why the 
> file was not deleted
> # Do not bother clearing out the data structure.  This code is only called if 
> the JVM is going down.  Better to spend the time allowing another shutdown 
> hook to run than to spend time cleaning this thing up.
> # Use Guava {{Multimap}} for readability
> # Paths are currently stored in a {{TreeSet}}.  This set implementation 
> orders the files by name.  It does not seem worth much to order the files.  
> Use a faster {{HashSet}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14105) NamenodeFsck HashSet

2018-11-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16700968#comment-16700968
 ] 

Hadoop QA commented on HDFS-14105:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 91 unchanged - 1 fixed = 91 total (was 92) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 18s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14105 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949730/HDFS-14105.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 70fe14443b6a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 96c104d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25648/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25648/testReport/ |
| Max. process+thread count | 4288 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2018-11-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16700962#comment-16700962
 ] 

Ayush Saxena commented on HDFS-14085:
-

Uploaded v3 with the added test case checking resolution of the mount point in 
both scenarios, and fixed the checkstyle issues.

> RBF: LS command for root shows wrong owner and permission information.
> --
>
> Key: HDFS-14085
> URL: https://issues.apache.org/jira/browse/HDFS-14085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14085-HDFS-13891-01.patch, 
> HDFS-14085-HDFS-13891-02.patch, HDFS-14085-HDFS-13891-03.patch
>
>
> The LS command for / lists all the mount entries but the permission displayed 
> is the default permission (777) and the owner and group info same as that of 
> the user calling it; Which actually should be the same as that of the 
> destination of the mount point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2018-11-27 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14085:

Attachment: HDFS-14085-HDFS-13891-03.patch

> RBF: LS command for root shows wrong owner and permission information.
> --
>
> Key: HDFS-14085
> URL: https://issues.apache.org/jira/browse/HDFS-14085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14085-HDFS-13891-01.patch, 
> HDFS-14085-HDFS-13891-02.patch, HDFS-14085-HDFS-13891-03.patch
>
>
> The LS command for / lists all the mount entries but the permission displayed 
> is the default permission (777) and the owner and group info same as that of 
> the user calling it; Which actually should be the same as that of the 
> destination of the mount point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14104) Review getImageTxIdToRetain

2018-11-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16700935#comment-16700935
 ] 

Hadoop QA commented on HDFS-14104:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 5 unchanged - 1 fixed = 5 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}132m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14104 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949725/HDFS-14104.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 26474ed0f44e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 96c104d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25647/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25647/testReport/ |
| Max. process+thread count | 3860 (vs. ulimit of 1) |
| modules | C: 

[jira] [Updated] (HDDS-876) add blockade tests for flaky network

2018-11-27 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-876:
---
Status: Patch Available  (was: Open)

> add blockade tests for flaky network
> 
>
> Key: HDDS-876
> URL: https://issues.apache.org/jira/browse/HDDS-876
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-876.001.patch
>
>
> Blockade is a container utility to simulate network and node failures and 
> network partitions. https://blockade.readthedocs.io/en/latest/guide.html.
> This jira proposes to add a simple test that runs freon with a flaky network.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-876) add blockade tests for flaky network

2018-11-27 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-876:
---
Attachment: HDDS-876.001.patch

> add blockade tests for flaky network
> 
>
> Key: HDDS-876
> URL: https://issues.apache.org/jira/browse/HDDS-876
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-876.001.patch
>
>
> Blockade is a container utility to simulate network and node failures and 
> network partitions. https://blockade.readthedocs.io/en/latest/guide.html.
> This jira proposes to add a simple test that runs freon with a flaky network.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-876) add blockade tests for flaky network

2018-11-27 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-876:
--

 Summary: add blockade tests for flaky network
 Key: HDDS-876
 URL: https://issues.apache.org/jira/browse/HDDS-876
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Affects Versions: 0.4.0
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: 0.4.0


Blockade is a container utility to simulate network and node failures and 
network partitions. https://blockade.readthedocs.io/en/latest/guide.html.

This jira proposes to add a simple test that runs freon with a flaky network.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14102) verifyBlockPlacement

2018-11-27 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14102:
---
Status: Open  (was: Patch Available)

> verifyBlockPlacement
> 
>
> Key: HDFS-14102
> URL: https://issues.apache.org/jira/browse/HDFS-14102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14102.1.patch, HDFS-14102.2.patch
>
>
>  
> {code:java|title=BlockPlacementPolicyDefault.java}
> // 1. Check that all locations are different.
> // 2. Count locations on different racks.
> Set<String> racks = new TreeSet<>();
> for (DatanodeInfo dn : locs)
>   racks.add(dn.getNetworkLocation());
> ...
> racks.size()
> {code}
>  
>  Here, the code is counting the number of distinct Network Locations. 
> However, it is using a TreeSet which has overhead to maintain item order and 
> uses a linked structure internally. This overhead is unneeded since all that 
> is required here is a count.
> {quote}A NavigableSet implementation based on a TreeMap. The elements are 
> ordered using their natural ordering, or by a Comparator provided at set 
> creation time, depending on which constructor is used.
>  This implementation provides guaranteed log(n) time cost for the basic 
> operations (add, remove and contains).
> [https://docs.oracle.com/javase/7/docs/api/java/util/TreeSet.html]
> {quote}
>  
>  Use Java streams for readability and because it uses a {{HashSet}} under the 
> covers to perform the distinct action. {{HashSet}} uses an array internally 
> and has constant time performance for the {{add}} method.
> [https://github.com/apache/hadoop/blob/27978bcb66a9130cbf26d37ec454c0b7fcdc2530/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java#L1042]
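
As a self-contained illustration of the stream-based count the description proposes (plain strings stand in for {{DatanodeInfo#getNetworkLocation()}} values; this is not the attached patch):

{code:java}
import java.util.Arrays;

public class DistinctRackCount {
  public static void main(String[] args) {
    String[] locs = {"/rack1", "/rack1", "/rack2"};
    // distinct() deduplicates with hashing (per the description above),
    // and only the count is retained -- no ordered set is built.
    long racks = Arrays.stream(locs).distinct().count();
    System.out.println(racks); // 2
  }
}
{code}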



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14106) Improve NamenodeFsck copyBlock

2018-11-27 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR reassigned HDFS-14106:
--

Assignee: BELUGA BEHR

> Improve NamenodeFsck copyBlock
> --
>
> Key: HDFS-14106
> URL: https://issues.apache.org/jira/browse/HDFS-14106
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.2.0
> Environment: # Code is performing copy with a 1K buffer.  8K is the 
> standard these days
> # Improve code design; do not catch one's own exception, do not log and throw 
> (only do one or the other, never both)
> # Refactor to make a new method for copy
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14106.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14102) verifyBlockPlacement

2018-11-27 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14102:
---
Attachment: HDFS-14102.3.patch

> verifyBlockPlacement
> 
>
> Key: HDFS-14102
> URL: https://issues.apache.org/jira/browse/HDFS-14102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14102.1.patch, HDFS-14102.2.patch, 
> HDFS-14102.3.patch
>
>
>  
> {code:java|title=BlockPlacementPolicyDefault.java}
> // 1. Check that all locations are different.
> // 2. Count locations on different racks.
> Set<String> racks = new TreeSet<>();
> for (DatanodeInfo dn : locs)
>   racks.add(dn.getNetworkLocation());
> ...
> racks.size()
> {code}
>  
>  Here, the code is counting the number of distinct Network Locations. 
> However, it is using a TreeSet which has overhead to maintain item order and 
> uses a linked structure internally. This overhead is unneeded since all that 
> is required here is a count.
> {quote}A NavigableSet implementation based on a TreeMap. The elements are 
> ordered using their natural ordering, or by a Comparator provided at set 
> creation time, depending on which constructor is used.
>  This implementation provides guaranteed log(n) time cost for the basic 
> operations (add, remove and contains).
> [https://docs.oracle.com/javase/7/docs/api/java/util/TreeSet.html]
> {quote}
>  
>  Use Java streams for readability and because it uses a {{HashSet}} under the 
> covers to perform the distinct action. {{HashSet}} uses an array internally 
> and has constant time performance for the {{add}} method.
> [https://github.com/apache/hadoop/blob/27978bcb66a9130cbf26d37ec454c0b7fcdc2530/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java#L1042]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14102) verifyBlockPlacement

2018-11-27 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14102:
---
Status: Patch Available  (was: Open)

Attaching the same patch again to see if the unit tests pass on a second attempt

> verifyBlockPlacement
> 
>
> Key: HDFS-14102
> URL: https://issues.apache.org/jira/browse/HDFS-14102
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14102.1.patch, HDFS-14102.2.patch, 
> HDFS-14102.3.patch
>
>
>  
> {code:java|title=BlockPlacementPolicyDefault.java}
> // 1. Check that all locations are different.
> // 2. Count locations on different racks.
> Set<String> racks = new TreeSet<>();
> for (DatanodeInfo dn : locs)
>   racks.add(dn.getNetworkLocation());
> ...
> racks.size()
> {code}
>  
>  Here, the code is counting the number of distinct Network Locations. 
> However, it is using a TreeSet which has overhead to maintain item order and 
> uses a linked structure internally. This overhead is unneeded since all that 
> is required here is a count.
> {quote}A NavigableSet implementation based on a TreeMap. The elements are 
> ordered using their natural ordering, or by a Comparator provided at set 
> creation time, depending on which constructor is used.
>  This implementation provides guaranteed log(n) time cost for the basic 
> operations (add, remove and contains).
> [https://docs.oracle.com/javase/7/docs/api/java/util/TreeSet.html]
> {quote}
>  
>  Use Java streams for readability and because it uses a {{HashSet}} under the 
> covers to perform the distinct action. {{HashSet}} uses an array internally 
> and has constant time performance for the {{add}} method.
> [https://github.com/apache/hadoop/blob/27978bcb66a9130cbf26d37ec454c0b7fcdc2530/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java#L1042]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14106) Improve NamenodeFsck copyBlock

2018-11-27 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14106:
---
Description: 
# Code is performing copy with a 1K buffer.  8K is the standard these days
# Improve code design; do not catch one's own exception, do not log and throw 
(only do one or the other, never both)
# Refactor to make a new method for copy

  was:
# Code is performing copy with a 1K buffer.  8K is the standard these ways
# Improve code design; do not catch one's own exception, do not log and throw 
(only do one or the other, never both)
# Refactor to make a new method for copy


> Improve NamenodeFsck copyBlock
> --
>
> Key: HDFS-14106
> URL: https://issues.apache.org/jira/browse/HDFS-14106
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14106.1.patch
>
>
> # Code is performing copy with a 1K buffer.  8K is the standard these days
> # Improve code design; do not catch one's own exception, do not log and throw 
> (only do one or the other, never both)
> # Refactor to make a new method for copy
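
A minimal sketch of what items 1 and 3 amount to, an extracted copy helper with an 8 KB buffer (illustrative; the attached patch may differ):

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopySketch {
  static void copy(InputStream in, OutputStream out) throws IOException {
    byte[] buf = new byte[8192]; // was a 1 KB buffer in the code under review
    int n;
    while ((n = in.read(buf)) != -1) {
      out.write(buf, 0, n);
    }
  }
}
{code}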



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14106) Improve NamenodeFsck copyBlock

2018-11-27 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14106:
---
Status: Patch Available  (was: Open)

> Improve NamenodeFsck copyBlock
> --
>
> Key: HDFS-14106
> URL: https://issues.apache.org/jira/browse/HDFS-14106
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14106.1.patch
>
>
> # Code is performing copy with a 1K buffer.  8K is the standard these days
> # Improve code design; do not catch one's own exception, do not log and throw 
> (only do one or the other, never both)
> # Refactor to make a new method for copy



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14106) Improve NamenodeFsck copyBlock

2018-11-27 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14106:
---
Attachment: HDFS-14106.1.patch

> Improve NamenodeFsck copyBlock
> --
>
> Key: HDFS-14106
> URL: https://issues.apache.org/jira/browse/HDFS-14106
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14106.1.patch
>
>
> # Code is performing copy with a 1K buffer.  8K is the standard these days
> # Improve code design; do not catch one's own exception, do not log and throw 
> (only do one or the other, never both)
> # Refactor to make a new method for copy



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14106) Improve NamenodeFsck copyBlock

2018-11-27 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14106:
---
Environment: (was: # Code is performing copy with a 1K buffer.  8K is 
the standard these ways
# Improve code design; do not catch one's own exception, do not log and throw 
(only do one or the other, never both)
# Refactor to make a new method for copy)

> Improve NamenodeFsck copyBlock
> --
>
> Key: HDFS-14106
> URL: https://issues.apache.org/jira/browse/HDFS-14106
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14106.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14106) Improve NamenodeFsck copyBlock

2018-11-27 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HDFS-14106:
--

 Summary: Improve NamenodeFsck copyBlock
 Key: HDFS-14106
 URL: https://issues.apache.org/jira/browse/HDFS-14106
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.2.0
 Environment: # Code is performing copy with a 1K buffer.  8K is the 
standard these days
# Improve code design; do not catch one's own exception, do not log and throw 
(only do one or the other, never both)
# Refactor to make a new method for copy
Reporter: BELUGA BEHR
 Attachments: HDFS-14106.1.patch





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14106) Improve NamenodeFsck copyBlock

2018-11-27 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14106:
---
Description: 
# Code is performing copy with a 1K buffer.  8K is the standard these days
# Improve code design; do not catch one's own exception, do not log and throw 
(only do one or the other, never both)
# Refactor to make a new method for copy

> Improve NamenodeFsck copyBlock
> --
>
> Key: HDFS-14106
> URL: https://issues.apache.org/jira/browse/HDFS-14106
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-14106.1.patch
>
>
> # Code is performing copy with a 1K buffer.  8K is the standard these days
> # Improve code design; do not catch one's own exception, do not log and throw 
> (only do one or the other, never both)
> # Refactor to make a new method for copy



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2018-11-27 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16700903#comment-16700903
 ] 

Íñigo Goiri commented on HDFS-14085:


OK, let's also add a test that fails by itself.

> RBF: LS command for root shows wrong owner and permission information.
> --
>
> Key: HDFS-14085
> URL: https://issues.apache.org/jira/browse/HDFS-14085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14085-HDFS-13891-01.patch, 
> HDFS-14085-HDFS-13891-02.patch
>
>
> The LS command for / lists all the mount entries but the permission displayed 
> is the default permission (777) and the owner and group info same as that of 
> the user calling it; Which actually should be the same as that of the 
> destination of the mount point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14102) verifyBlockPlacement

2018-11-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16700899#comment-16700899
 ] 

Hadoop QA commented on HDFS-14102:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 
0 new + 35 unchanged - 2 fixed = 35 total (was 37) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockManager |
|   | hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14102 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949722/HDFS-14102.2.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 775e93e8f1e6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 96c104d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25646/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-14103) Review Logging of BlockPlacementPolicyDefault

2018-11-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16700889#comment-16700889
 ] 

Hadoop QA commented on HDFS-14103:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  7s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 20s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}167m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14103 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949714/HDFS-14103.1.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f7b8b35e9578 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 96c104d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25645/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25645/testReport/ |
| Max. process+thread count | 2614 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console 

[jira] [Commented] (HDFS-12946) Add a tool to check rack configuration against EC policies

2018-11-27 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16700874#comment-16700874
 ] 

Hadoop QA commented on HDFS-12946:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  0s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
362 unchanged - 0 fixed = 363 total (was 362) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 17s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
12s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
28s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-12946 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949712/HDFS-12946.10.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 683a5dd6d93a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git 

[jira] [Commented] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2018-11-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16700866#comment-16700866
 ] 

Ayush Saxena commented on HDFS-14085:
-

[~elgoiri]

Since we are using getListing() in the tests, all four test cases will fail if 
this is not there. You can verify by running them after changing the said line 
to the following (which drops the "/" prefix):
String mName = name.startsWith("/") ? name : name;
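
For clarity, a sketch with both variants side by side (the first is the line 
from the patch, as quoted elsewhere in this thread; the second is the 
deliberately broken variant used only for verification; they are alternatives, 
not meant to compile together):
{code}
// Patch version: prefix relative mount entry names with "/".
String mName = name.startsWith("/") ? name : "/" + name;

// Broken variant for verification: drops the prefix, so the four
// getListing()-based tests should fail.
String mName = name.startsWith("/") ? name : name;
{code}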

> RBF: LS command for root shows wrong owner and permission information.
> --
>
> Key: HDFS-14085
> URL: https://issues.apache.org/jira/browse/HDFS-14085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14085-HDFS-13891-01.patch, 
> HDFS-14085-HDFS-13891-02.patch
>
>
> The LS command for / lists all the mount entries, but the permission 
> displayed is the default permission (777) and the owner and group info are 
> the same as those of the calling user; these should instead match the 
> destination of the mount point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2018-11-27 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16700856#comment-16700856
 ] 

Íñigo Goiri commented on HDFS-14085:


I'm fine with either of the two:
* Add an After method that cleans up everything left behind.
* Fix the checkstyle issue in the current approach.

For the / in the start, I just want to have a unit test that fails if this is 
not there:
{code}
String mName = name.startsWith("/") ? name : "/" + name;
{code}
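
For reference, a minimal sketch of the kind of regression test being asked 
for, assuming the usual RBF test scaffolding (routerFs and nnFs as FileSystem 
handles for the Router and a downstream NameNode, and MOUNT_POINT as the mount 
entry name; all of these are assumed names, not from the actual patch):
{code}
@Test
public void testMountPointOwnerFromListing() throws Exception {
  // Owner/group/permission of the mount destination in the downstream
  // namespace; this is what the Router listing should report.
  FileStatus expected = nnFs.getFileStatus(new Path("/testdir"));

  // Listing "/" goes through getListing() -> getMountPointStatus(),
  // the code path that strips the leading "/" from the entry name.
  for (FileStatus status : routerFs.listStatus(new Path("/"))) {
    if (status.getPath().getName().equals(MOUNT_POINT)) {
      // Without the "/" prefix fix, the mount table lookup misses and
      // these fall back to the calling user and 777.
      assertEquals(expected.getOwner(), status.getOwner());
      assertEquals(expected.getPermission(), status.getPermission());
    }
  }
}
{code}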

> RBF: LS command for root shows wrong owner and permission information.
> --
>
> Key: HDFS-14085
> URL: https://issues.apache.org/jira/browse/HDFS-14085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14085-HDFS-13891-01.patch, 
> HDFS-14085-HDFS-13891-02.patch
>
>
> The LS command for / lists all the mount entries, but the permission 
> displayed is the default permission (777) and the owner and group info are 
> the same as those of the calling user; these should instead match the 
> destination of the mount point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14085) RBF: LS command for root shows wrong owner and permission information.

2018-11-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16700846#comment-16700846
 ] 

Ayush Saxena commented on HDFS-14085:
-

Thanks [~elgoiri] for the review!
{quote}I think we could have an After method to cleanup instead of having to do 
the finally.
{quote}
I had the same intent, but there is one problem with going that way: the name 
and number of files differ between the tests, and one test, I believe, doesn't 
create a file at all. We can't delete by name unconditionally, because if a 
file was not created in a given test the delete would throw a 
FileNotFoundException. The only way around that is to call ls on the root of 
both namespaces and iterate over the results, deleting any file found. I tried 
that, but it takes a little more time, which is why I came back to this 
approach. No doubt we can move it into the After method if you confirm.
{quote}We were going through that part of the code and we were able to list the 
paths with no issues?
{quote}
If I understand your concern correctly, you are asking whether we cover the 
path where the "/" was removed. If so, then yes: all the tests go through the 
same path, which removes the "/".

The actual processing happens in getMountPointStatus(), which is called from 
two places: getListing(), used here, which removes the "/", and getFileInfo(), 
which keeps it. The previous test used getFileInfo(), which is why no 
discrepancy was encountered.

I will upload the next patch with the checkstyle fixed once the After method 
approach is confirmed. :)
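
A minimal sketch of what that After method could look like, assuming the test 
class keeps FileSystem handles for both downstream namespaces (nnFs0 and nnFs1 
are assumed names, not from the actual patch); it simply clears whatever a 
test left behind, so a test that created nothing is harmless:
{code}
@After
public void cleanup() throws IOException {
  for (FileSystem fs : new FileSystem[] {nnFs0, nnFs1}) {
    // Iterate over the root listing and delete any leftover test files;
    // an empty listing means the loop body never runs, so there is no
    // FileNotFoundException for tests that created nothing.
    for (FileStatus status : fs.listStatus(new Path("/"))) {
      fs.delete(status.getPath(), true);
    }
  }
}
{code}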

> RBF: LS command for root shows wrong owner and permission information.
> --
>
> Key: HDFS-14085
> URL: https://issues.apache.org/jira/browse/HDFS-14085
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14085-HDFS-13891-01.patch, 
> HDFS-14085-HDFS-13891-02.patch
>
>
> The LS command for / lists all the mount entries, but the permission 
> displayed is the default permission (777) and the owner and group info are 
> the same as those of the calling user; these should instead match the 
> destination of the mount point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14101) Random failure of testListCorruptFilesCorruptedBlock

2018-11-27 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16700853#comment-16700853
 ] 

Ayush Saxena commented on HDFS-14101:
-

Thanks [~kihwal] and [~zvenczel] for the analysis.

I too think that is the only case causing the failure. The probability of 
selecting 1 out of the 512 possible values is quite low; I guess that is why 
we didn't see it very frequently.

Would you mind adding a small comment there as well, mentioning the reason for 
choosing two instead of one? It might be a little helpful for someone going 
through it in the future. :)
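
For illustration only, a hedged sketch of the guard and the comment being 
suggested; the variable names and the exact corruption logic are assumptions 
reconstructed from the stack trace, not the actual patch:
{code}
// The file length is chosen at random. If it could be 1, then
// (channel.size() - 2) below would be negative and FileChannel.write()
// would throw "IllegalArgumentException: Negative position"; hence the
// minimum of two bytes, not one.
long fileLen = 2 + rand.nextInt(511);  // 2..512 inclusive

try (RandomAccessFile raf = new RandomAccessFile(blockFile, "rw");
     FileChannel channel = raf.getChannel()) {
  long position = channel.size() - 2;  // >= 0 because fileLen >= 2
  channel.write(ByteBuffer.wrap(new byte[] {0, 0}), position);
}
{code}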

> Random failure of testListCorruptFilesCorruptedBlock
> 
>
> Key: HDFS-14101
> URL: https://issues.apache.org/jira/browse/HDFS-14101
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.2.0, 3.0.3, 2.8.5
>Reporter: Kihwal Lee
>Assignee: Zsolt Venczel
>Priority: Major
>  Labels: newbie
> Attachments: HDFS-14101.01.patch
>
>
> We've seen this occasionally.
> {noformat}
> java.lang.IllegalArgumentException: Negative position
>   at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:755)
>   at org.apache.hadoop.hdfs.server.namenode.
>  
> TestListCorruptFileBlocks.testListCorruptFilesCorruptedBlock(TestListCorruptFileBlocks.java:105)
> {noformat}
> The test has a flaw.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


