[jira] [Commented] (HDFS-13852) RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys.

2018-11-12 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684829#comment-16684829
 ] 

Brahma Reddy Battula commented on HDFS-13852:
-

FYI, the HDFS-13891 branch has been rebased, so it now includes HADOOP-15916.

> RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured 
> in RBFConfigKeys.
> -
>
> Key: HDFS-13852
> URL: https://issues.apache.org/jira/browse/HDFS-13852
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.1.0, 2.9.1, 3.0.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13852-HDFS-13891.0.patch, HDFS-13852.001.patch, 
> HDFS-13852.002.patch, HDFS-13852.003.patch, HDFS-13852.004.patch
>
>
> In NamenodeBeanMetrics the router invokes 'getDataNodeReport' periodically, 
> and we can set dfs.federation.router.dn-report.time-out and 
> dfs.federation.router.dn-report.cache-expire to avoid timeouts. But when we 
> start the router, FederationMetrics also invokes this method to get node 
> usage, and if a timeout error happens there, we cannot adjust the timeout 
> parameter. The timeout in FederationMetrics and NamenodeBeanMetrics should be 
> the same.
>  
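For readers skimming the thread, the change under discussion amounts to defining these two report keys in RBFConfigKeys so that FederationMetrics and NamenodeBeanMetrics read the same values. A minimal sketch of that idea follows; the key strings come from the description above, but the constant names and default values are placeholders, not the actual patch.

{code}
// Hypothetical sketch only: constant names and defaults are placeholders.
public final class RBFConfigKeysSketch {
  public static final String FEDERATION_ROUTER_PREFIX =
      "dfs.federation.router.";

  public static final String DN_REPORT_TIME_OUT =
      FEDERATION_ROUTER_PREFIX + "dn-report.time-out";
  public static final long DN_REPORT_TIME_OUT_MS_DEFAULT = 1000L;

  public static final String DN_REPORT_CACHE_EXPIRE =
      FEDERATION_ROUTER_PREFIX + "dn-report.cache-expire";
  public static final long DN_REPORT_CACHE_EXPIRE_MS_DEFAULT = 10_000L;

  private RBFConfigKeysSketch() {
    // Constants only; both metrics beans would read the same keys,
    // which is what keeps the two timeouts consistent.
  }
}
{code}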






[jira] [Updated] (HDFS-14070) Refactor NameNodeWebHdfsMethods to allow better extensibility

2018-11-12 Thread Brahma Reddy Battula (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HDFS-14070:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.1
   3.3.0
   3.1.2
   3.0.4
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-3.2, branch-3.1, and branch-3.0. [~crh], thanks for 
the contribution.

[~elgoiri], thanks for the additional review.

> Refactor NameNodeWebHdfsMethods to allow better extensibility
> -
>
> Key: HDFS-14070
> URL: https://issues.apache.org/jira/browse/HDFS-14070
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: HDFS-14070.001.patch
>
>
> Router extends NamenodeWebHdfsMethods; methods such as renewDelegationToken, 
> cancelDelegationToken, and generateDelegationTokens should be extensible so 
> that the Router can then have its own implementation.






[jira] [Commented] (HDDS-675) Add blocking buffer and use watchApi for flush/close in OzoneClient

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684819#comment-16684819
 ] 

Hadoop QA commented on HDDS-675:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
24s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 13s{color} | {color:orange} root: The patch generated 9 new + 17 unchanged - 
1 fixed = 26 total (was 18) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
40s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 44s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 43s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 35s{color} 
| {color:red} client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 31s{color} 
| {color:red} objectstore-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 39s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit 

[jira] [Commented] (HDDS-675) Add blocking buffer and use watchApi for flush/close in OzoneClient

2018-11-12 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684796#comment-16684796
 ] 

Jitendra Nath Pandey commented on HDDS-675:
---

+1 for the patch, pending jenkins.

> Add blocking buffer and use watchApi for flush/close in OzoneClient
> ---
>
> Key: HDDS-675
> URL: https://issues.apache.org/jira/browse/HDDS-675
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-675.000.patch, HDDS-675.001.patch, 
> HDDS-675.002.patch, HDDS-675.003.patch, HDDS-675.004.patch, 
> HDDS-675.005.patch, HDDS-675.006.patch
>
>
> For handling 2 node failures, a blocking buffer will be used which will wait 
> for the flush commit index to get updated on all replicas of a container via 
> Ratis.
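To make the one-line design above concrete, the blocking behaviour can be pictured as a watcher that holds up flush/close until every replica's committed index reaches the flush index. The sketch below is a self-contained toy of that idea only; it does not use the actual OzoneClient or Ratis watch API.

{code}
// Toy model: block the writer until all replicas have committed 'flushIndex'.
public final class CommitWatcherSketch {
  private final long[] replicaCommitIndex; // last committed index per replica
  private final Object lock = new Object();

  public CommitWatcherSketch(int replicaCount) {
    this.replicaCommitIndex = new long[replicaCount];
  }

  // Invoked when a replica acknowledges entries up to 'index'.
  public void onReplicaCommit(int replica, long index) {
    synchronized (lock) {
      replicaCommitIndex[replica] =
          Math.max(replicaCommitIndex[replica], index);
      lock.notifyAll();
    }
  }

  // Called from flush()/close(); returns only when every replica caught up.
  public void awaitAllReplicas(long flushIndex) throws InterruptedException {
    synchronized (lock) {
      while (!allAtLeast(flushIndex)) {
        lock.wait();
      }
    }
  }

  private boolean allAtLeast(long index) {
    for (long committed : replicaCommitIndex) {
      if (committed < index) {
        return false;
      }
    }
    return true;
  }
}
{code}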






[jira] [Updated] (HDDS-675) Add blocking buffer and use watchApi for flush/close in OzoneClient

2018-11-12 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-675:
-
Attachment: HDDS-675.006.patch

> Add blocking buffer and use watchApi for flush/close in OzoneClient
> ---
>
> Key: HDDS-675
> URL: https://issues.apache.org/jira/browse/HDDS-675
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-675.000.patch, HDDS-675.001.patch, 
> HDDS-675.002.patch, HDDS-675.003.patch, HDDS-675.004.patch, 
> HDDS-675.005.patch, HDDS-675.006.patch
>
>
> For handling 2 node failures, a blocking buffer will be used which will wait 
> for the flush commit index to get updated on all replicas of a container via 
> Ratis.






[jira] [Updated] (HDDS-675) Add blocking buffer and use watchApi for flush/close in OzoneClient

2018-11-12 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-675:
-
Attachment: (was: HDDS-675.006.patch)

> Add blocking buffer and use watchApi for flush/close in OzoneClient
> ---
>
> Key: HDDS-675
> URL: https://issues.apache.org/jira/browse/HDDS-675
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-675.000.patch, HDDS-675.001.patch, 
> HDDS-675.002.patch, HDDS-675.003.patch, HDDS-675.004.patch, HDDS-675.005.patch
>
>
> For handling 2 node failures, a blocking buffer will be used which will wait 
> for the flush commit index to get updated on all replicas of a container via 
> Ratis.






[jira] [Commented] (HDFS-13852) RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys.

2018-11-12 Thread yanghuafeng (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684781#comment-16684781
 ] 

yanghuafeng commented on HDFS-13852:


OK [~ajisakaa], when HADOOP-15916 is backported to the HDFS-13891 branch, I will 
submit this patch again. [~elgoiri]

> RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured 
> in RBFConfigKeys.
> -
>
> Key: HDFS-13852
> URL: https://issues.apache.org/jira/browse/HDFS-13852
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.1.0, 2.9.1, 3.0.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13852-HDFS-13891.0.patch, HDFS-13852.001.patch, 
> HDFS-13852.002.patch, HDFS-13852.003.patch, HDFS-13852.004.patch
>
>
> In NamenodeBeanMetrics the router invokes 'getDataNodeReport' periodically, 
> and we can set dfs.federation.router.dn-report.time-out and 
> dfs.federation.router.dn-report.cache-expire to avoid timeouts. But when we 
> start the router, FederationMetrics also invokes this method to get node 
> usage, and if a timeout error happens there, we cannot adjust the timeout 
> parameter. The timeout in FederationMetrics and NamenodeBeanMetrics should be 
> the same.
>  






[jira] [Commented] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-12 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684778#comment-16684778
 ] 

Konstantin Shvachko commented on HDFS-14035:


+1 on v13. Let's commit it then.

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch, 
> HDFS-14035-HDFS-12943.002.patch, HDFS-14035-HDFS-12943.003.patch, 
> HDFS-14035-HDFS-12943.004.patch, HDFS-14035-HDFS-12943.005.patch, 
> HDFS-14035-HDFS-12943.006.patch, HDFS-14035-HDFS-12943.007.patch, 
> HDFS-14035-HDFS-12943.008.patch, HDFS-14035-HDFS-12943.009.patch, 
> HDFS-14035-HDFS-12943.010.patch, HDFS-14035-HDFS-12943.011.patch, 
> HDFS-14035-HDFS-12943.012.patch, HDFS-14035-HDFS-12943.013.patch
>
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However, 
> {{HAServiceProtocol}} does not leverage delegation tokens, so when an 
> application runs on YARN and the YARN NodeManager makes this getServiceStatus 
> call, token authentication will fail, causing the application to fail.






[jira] [Commented] (HDFS-14070) Refactor NameNodeWebHdfsMethods to allow better extensibility

2018-11-12 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684764#comment-16684764
 ] 

Brahma Reddy Battula commented on HDFS-14070:
-

bq.Router will extend the new methods and have its own implementation w.r.t 
webhdfs token management.

Yes, this refactoring is required. Thanks for reporting it.

+1 on HDFS-14070.001.patch.

Will commit.

> Refactor NameNodeWebHdfsMethods to allow better extensibility
> -
>
> Key: HDFS-14070
> URL: https://issues.apache.org/jira/browse/HDFS-14070
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14070.001.patch
>
>
> Router extends NamenodeWebHdfsMethods; methods such as renewDelegationToken, 
> cancelDelegationToken, and generateDelegationTokens should be extensible so 
> that the Router can then have its own implementation.






[jira] [Commented] (HDFS-13852) RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys.

2018-11-12 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684759#comment-16684759
 ] 

Akira Ajisaka commented on HDFS-13852:
--

The test failure is related to HADOOP-15916, which should be backported to the 
HDFS-13891 branch.

> RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured 
> in RBFConfigKeys.
> -
>
> Key: HDFS-13852
> URL: https://issues.apache.org/jira/browse/HDFS-13852
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.1.0, 2.9.1, 3.0.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13852-HDFS-13891.0.patch, HDFS-13852.001.patch, 
> HDFS-13852.002.patch, HDFS-13852.003.patch, HDFS-13852.004.patch
>
>
> In NamenodeBeanMetrics the router invokes 'getDataNodeReport' periodically, 
> and we can set dfs.federation.router.dn-report.time-out and 
> dfs.federation.router.dn-report.cache-expire to avoid timeouts. But when we 
> start the router, FederationMetrics also invokes this method to get node 
> usage, and if a timeout error happens there, we cannot adjust the timeout 
> parameter. The timeout in FederationMetrics and NamenodeBeanMetrics should be 
> the same.
>  






[jira] [Commented] (HDDS-675) Add blocking buffer and use watchApi for flush/close in OzoneClient

2018-11-12 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684743#comment-16684743
 ] 

Shashikant Banerjee commented on HDDS-675:
--

Thanks [~jnp] for the review. Patch v6 addresses the comments as per our 
discussion.

> Add blocking buffer and use watchApi for flush/close in OzoneClient
> ---
>
> Key: HDDS-675
> URL: https://issues.apache.org/jira/browse/HDDS-675
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-675.000.patch, HDDS-675.001.patch, 
> HDDS-675.002.patch, HDDS-675.003.patch, HDDS-675.004.patch, 
> HDDS-675.005.patch, HDDS-675.006.patch
>
>
> For handling 2 node failures, a blocking buffer will be used which will wait 
> for the flush commit index to get updated on all replicas of a container via 
> Ratis.






[jira] [Updated] (HDDS-675) Add blocking buffer and use watchApi for flush/close in OzoneClient

2018-11-12 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-675:
-
Attachment: HDDS-675.006.patch

> Add blocking buffer and use watchApi for flush/close in OzoneClient
> ---
>
> Key: HDDS-675
> URL: https://issues.apache.org/jira/browse/HDDS-675
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-675.000.patch, HDDS-675.001.patch, 
> HDDS-675.002.patch, HDDS-675.003.patch, HDDS-675.004.patch, 
> HDDS-675.005.patch, HDDS-675.006.patch
>
>
> For handling 2 node failures, a blocking buffer will be used which will wait 
> for the flush commit index to get updated on all replicas of a container via 
> Ratis.






[jira] [Commented] (HDFS-14065) Failed Storage Locations shows nothing in the Datanode Volume Failures

2018-11-12 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684740#comment-16684740
 ] 

Brahma Reddy Battula commented on HDFS-14065:
-

Linking the JIRA that broke this. Nice catch, [~ayushtkn].

> Failed Storage Locations shows nothing in the Datanode Volume Failures
> --
>
> Key: HDFS-14065
> URL: https://issues.apache.org/jira/browse/HDFS-14065
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: AfterChange.png, BeforeChange.png, HDFS-14065.patch
>
>
> The failed storage locations in the *DataNode Volume Failures* UI show 
> nothing, despite there being failed storages.






[jira] [Resolved] (HDDS-734) Remove create container logic from OzoneClient

2018-11-12 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain resolved HDDS-734.
--
Resolution: Duplicate

This issue has been fixed via HDDS-733.

> Remove create container logic from OzoneClient
> --
>
> Key: HDDS-734
> URL: https://issues.apache.org/jira/browse/HDDS-734
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Nanda kumar
>Assignee: Shashikant Banerjee
>Priority: Major
>
> After HDDS-733, the container is created as part of the first chunk write, so 
> we don't need explicit container creation code in {{OzoneClient}} anymore.






[jira] [Resolved] (HDDS-735) Remove ALLOCATED and CREATING state from ContainerStateManager

2018-11-12 Thread Lokesh Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain resolved HDDS-735.
--
Resolution: Duplicate

This issue has been fixed via HDDS-733.

> Remove ALLOCATED and CREATING state from ContainerStateManager
> --
>
> Key: HDDS-735
> URL: https://issues.apache.org/jira/browse/HDDS-735
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Lokesh Jain
>Priority: Major
>
> After HDDS-733 and HDDS-734, we don't need the ALLOCATED and CREATING states 
> for containers in SCM. The container will move to the OPEN state as soon as it 
> is allocated in SCM. Since container creation happens as part of the first 
> chunk write and the container creation operation on the datanode is 
> idempotent, we don't have to worry about giving out the same container to 
> multiple clients as soon as it is allocated.
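The idempotency argument in the description can be illustrated with a small toy: if "create container" is a no-op when the container already exists, handing the same freshly allocated container to several clients is harmless. This is a sketch of the reasoning only, not the actual datanode code.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of idempotent create-on-first-write.
public final class ContainerTableSketch {
  private final Map<Long, String> containers = new ConcurrentHashMap<>();

  // Implicitly triggered by the first chunk write for a container.
  public void createIfAbsent(long containerId) {
    containers.putIfAbsent(containerId, "OPEN"); // no-op if already created
  }

  public void writeChunk(long containerId, byte[] chunk) {
    createIfAbsent(containerId); // safe even if many clients race here
    // ... append the chunk to the container's data (omitted) ...
  }
}
{code}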






[jira] [Updated] (HDFS-14048) DFSOutputStream close() throws exception on subsequent call after DataNode restart

2018-11-12 Thread Akira Ajisaka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-14048:
-
Fix Version/s: (was: 2.9.2)
   2.9.3

> DFSOutputStream close() throws exception on subsequent call after DataNode 
> restart
> --
>
> Key: HDFS-14048
> URL: https://issues.apache.org/jira/browse/HDFS-14048
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.3.0
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Fix For: 2.10.0, 3.0.4, 3.1.2, 3.3.0, 3.2.1, 2.9.3
>
> Attachments: HDFS-14048-branch-2.000.patch, HDFS-14048.000.patch
>
>
> We recently discovered an issue in which, during a rolling upgrade, some jobs 
> were failing with exceptions like (sadly this is the whole stack trace):
> {code}
> java.io.IOException: A datanode is restarting: 
> DatanodeInfoWithStorage[1.1.1.1:71,BP-,DISK]
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:877)
> {code}
> with an earlier statement in the log like:
> {code}
> INFO [main] org.apache.hadoop.hdfs.DFSClient: A datanode is restarting: 
> DatanodeInfoWithStorage[1.1.1.1:71,BP-,DISK]
> {code}
> Strangely we did not see any other logs about the {{DFSOutputStream}} failing 
> after waiting for the DataNode restart. We eventually realized that in some 
> cases {{DFSOutputStream#close()}} may be called more than once, and that if 
> so, the {{IOException}} above is thrown on the _second_ call to {{close()}} 
> (this is even with HDFS-5335; prior to this it would have been thrown on all 
> calls to {{close()}} besides the first).
> The problem is that in {{DataStreamer#createBlockOutputStream()}}, after the 
> new output stream is created, it resets the error states:
> {code}
> errorState.resetInternalError();
> // remove all restarting nodes from failed nodes list
> failed.removeAll(restartingNodes);
> restartingNodes.clear(); 
> {code}
> But it forgets to clear {{lastException}}. When 
> {{DFSOutputStream#closeImpl()}} is called a second time, this block is 
> triggered:
> {code}
> if (isClosed()) {
>   LOG.debug("Closing an already closed stream. [Stream:{}, streamer:{}]",
>   closed, getStreamer().streamerClosed());
>   try {
> getStreamer().getLastException().check(true);
> {code}
> The second time, {{isClosed()}} is true, so the exception checking occurs and 
> the "Datanode is restarting" exception is thrown even though the stream has 
> already been successfully closed.
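The failure mode described above is easy to reproduce in miniature: once an error is recorded, a second close() on an already closed stream rethrows it unless the recovery path also clears it. The class below is a self-contained toy of that hazard, not the DFSOutputStream/DataStreamer code, and the method names are illustrative.

{code}
import java.io.IOException;

// Toy model of the stale-lastException hazard on a double close().
public final class StaleExceptionSketch {
  private IOException lastException;
  private boolean closed;

  void onDatanodeRestarting(IOException e) {
    lastException = e; // recorded while waiting out the restart
  }

  void onStreamRecovered() {
    // The description says the real code forgets this step; without it the
    // recorded exception survives a successful recovery.
    lastException = null;
  }

  void close() throws IOException {
    if (closed) {
      // Mirrors the isClosed() branch quoted above: a stale lastException
      // would be rethrown here on the second call.
      if (lastException != null) {
        throw lastException;
      }
      return;
    }
    closed = true;
  }
}
{code}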






[jira] [Commented] (HDFS-14070) Refactor NameNodeWebHdfsMethods to allow better extensibility

2018-11-12 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684701#comment-16684701
 ] 

CR Hota commented on HDFS-14070:


The test failures are unrelated to this change.

> Refactor NameNodeWebHdfsMethods to allow better extensibility
> -
>
> Key: HDFS-14070
> URL: https://issues.apache.org/jira/browse/HDFS-14070
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14070.001.patch
>
>
> Router extends NamenodeWebHdfsMethods; methods such as renewDelegationToken, 
> cancelDelegationToken, and generateDelegationTokens should be extensible so 
> that the Router can then have its own implementation.






[jira] [Commented] (HDFS-13852) RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys.

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684696#comment-16684696
 ] 

Hadoop QA commented on HDFS-13852:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
14s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-13891 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m 
56s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 12m 
14s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 38s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13852 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947935/HDFS-13852-HDFS-13891.0.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 1d80e4bd901c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / f311303 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25498/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25498/testReport/ |
| Max. process+thread count | 99 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 

[jira] [Commented] (HDFS-14070) Refactor NameNodeWebHdfsMethods to allow better extensibility

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684680#comment-16684680
 ] 

Hadoop QA commented on HDFS-14070:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 53s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.TestDFSClientRetries |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14070 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947924/HDFS-14070.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d6deccf561fa 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b6d4e19 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25496/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25496/testReport/ |
| Max. process+thread count | 2981 (vs. ulimit of 1) |
| modules | 

[jira] [Commented] (HDDS-576) Move ContainerWithPipeline creation to RPC endpoint

2018-11-12 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684659#comment-16684659
 ] 

Yiqun Lin commented on HDDS-576:


[~nandakumar131], as we have removed the ALLOCATED and CREATING states for 
containers, we can reword the javadoc of {{ContainerStateManager}} in a separate JIRA.
{noformat}
 * This is how a create container happens: 1. When a container is created, the
 * Server(or SCM) marks that Container as ALLOCATED state. In this state, SCM
 * has chosen a pipeline for container to live on. However, the container is not
 * created yet. This container along with the pipeline is returned to the
 * client.
 * 
 * 2. The client when it sees the Container state as ALLOCATED understands that
 * container needs to be created on the specified pipeline. The client lets the
 * SCM know that saw this flag and is initiating the on the data nodes.
 * 

{noformat}

> Move ContainerWithPipeline creation to RPC endpoint
> ---
>
> Key: HDDS-576
> URL: https://issues.apache.org/jira/browse/HDDS-576
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Mukul Kumar Singh
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-576.000.patch
>
>
> With independent Pipeline and Container Managers in SCM, the creation of 
> ContainerWithPipeline can be moved to the RPC endpoint. This will ensure a 
> clear separation between the Pipeline Manager and the Container Manager.






[jira] [Commented] (HDFS-14017) ObserverReadProxyProviderWithIPFailover should work with HA configuration

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684658#comment-16684658
 ] 

Hadoop QA commented on HDFS-14017:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
41s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-client: The 
patch generated 10 new + 3 unchanged - 3 fixed = 13 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
39s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14017 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947928/HDFS-14017-HDFS-12943.009.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 614a7ac5efb0 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12943 / 8b5277f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25497/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25497/testReport/ |
| Max. process+thread count | 307 (vs. ulimit of 1) |
| modules | C: 

[jira] [Updated] (HDFS-13852) RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured in RBFConfigKeys.

2018-11-12 Thread yanghuafeng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yanghuafeng updated HDFS-13852:
---
Attachment: HDFS-13852-HDFS-13891.0.patch

> RBF: The DN_REPORT_TIME_OUT and DN_REPORT_CACHE_EXPIRE should be configured 
> in RBFConfigKeys.
> -
>
> Key: HDFS-13852
> URL: https://issues.apache.org/jira/browse/HDFS-13852
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, hdfs
>Affects Versions: 3.1.0, 2.9.1, 3.0.1
>Reporter: yanghuafeng
>Assignee: yanghuafeng
>Priority: Major
> Attachments: HDFS-13852-HDFS-13891.0.patch, HDFS-13852.001.patch, 
> HDFS-13852.002.patch, HDFS-13852.003.patch, HDFS-13852.004.patch
>
>
> In NamenodeBeanMetrics the router invokes 'getDataNodeReport' periodically, 
> and we can set dfs.federation.router.dn-report.time-out and 
> dfs.federation.router.dn-report.cache-expire to avoid timeouts. But when we 
> start the router, FederationMetrics also invokes this method to get node 
> usage, and if a timeout error happens there, we cannot adjust the timeout 
> parameter. The timeout in FederationMetrics and NamenodeBeanMetrics should be 
> the same.
>  






[jira] [Commented] (HDDS-831) TestOzoneShell in integration-test is flaky

2018-11-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684649#comment-16684649
 ] 

Hudson commented on HDDS-831:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15414 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15414/])
HDDS-831. TestOzoneShell in integration-test is flaky. Contributed by (yqlin: 
rev f8713f8adea9d69330933a2cde594ed11ed9520c)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/ozShell/TestOzoneShell.java


> TestOzoneShell in integration-test is flaky
> ---
>
> Key: HDDS-831
> URL: https://issues.apache.org/jira/browse/HDDS-831
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-831.000.patch
>
>
> TestOzoneShell in integration-test is flaky; it fails in a few Jenkins runs.
> https://builds.apache.org/job/PreCommit-HDDS-Build/1685/artifact/out/patch-unit-hadoop-ozone_integration-test.txt






[jira] [Commented] (HDDS-831) TestOzoneShell in integration-test is flaky

2018-11-12 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684640#comment-16684640
 ] 

Yiqun Lin commented on HDDS-831:


Good catch! LGTM, +1.
Committing this.

> TestOzoneShell in integration-test is flaky
> ---
>
> Key: HDDS-831
> URL: https://issues.apache.org/jira/browse/HDDS-831
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-831.000.patch
>
>
> TestOzoneShell in integration-test is flaky; it fails in a few Jenkins runs.
> https://builds.apache.org/job/PreCommit-HDDS-Build/1685/artifact/out/patch-unit-hadoop-ozone_integration-test.txt






[jira] [Commented] (HDDS-825) Code cleanup based on messages from ErrorProne

2018-11-12 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684643#comment-16684643
 ] 

Hanisha Koneru commented on HDDS-825:
-

Thanks [~anu] for cleaning up the code base.
LGTM overall. 
{{TestOzoneVolumes#testGetVolumesOfAnotherUserShouldFail}} is failing locally 
too for me. The other two tests are passing locally.
+1 with that addressed (This particular test was not running before, so I think 
we can skip enabling it in this patch and fix it later).

> Code cleanup based on messages from ErrorProne
> --
>
> Key: HDDS-825
> URL: https://issues.apache.org/jira/browse/HDDS-825
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.3.0
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-825.001.patch, HDDS-825.002.patch, 
> HDDS-825.003.patch
>
>
> I ran ErrorProne (http://errorprone.info/) on the Ozone/HDDS code base and it 
> reported lots of errors. This patch fixes many issues pointed out by ErrorProne.
> The main classes of errors fixed in this patch are:
> * http://errorprone.info/bugpattern/DefaultCharset
> * http://errorprone.info/bugpattern/ComparableType
> * http://errorprone.info/bugpattern/StringSplitter
> * http://errorprone.info/bugpattern/IntLongMath
> * http://errorprone.info/bugpattern/JavaLangClash
> * http://errorprone.info/bugpattern/CatchFail
> * http://errorprone.info/bugpattern/JdkObsolete
> * http://errorprone.info/bugpattern/AssertEqualsArgumentOrderChecker
> * http://errorprone.info/bugpattern/CatchAndPrintStackTrace
> It is quite instructive to read through these errors and see the mistakes we 
> made.
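For readers unfamiliar with the listed bug patterns, here are generic illustrations of two of them (DefaultCharset and StringSplitter). These are not excerpts from the HDDS patch.

{code}
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

public final class ErrorProneExamples {
  // DefaultCharset: relying on the platform default charset is fragile.
  static byte[] bad(String s)  { return s.getBytes(); }
  static byte[] good(String s) { return s.getBytes(StandardCharsets.UTF_8); }

  // StringSplitter: String.split uses regex semantics and silently drops
  // trailing empty strings; passing an explicit limit keeps them.
  static List<String> splitCsv(String line) {
    return Arrays.asList(line.split(",", -1));
  }
}
{code}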






[jira] [Updated] (HDDS-831) TestOzoneShell in integration-test is flaky

2018-11-12 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-831:
---
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk.
Thanks [~nandakumar131] for fixing this.

> TestOzoneShell in integration-test is flaky
> ---
>
> Key: HDDS-831
> URL: https://issues.apache.org/jira/browse/HDDS-831
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-831.000.patch
>
>
> TestOzoneShell in integration-test is flaky; it fails in a few Jenkins runs.
> https://builds.apache.org/job/PreCommit-HDDS-Build/1685/artifact/out/patch-unit-hadoop-ozone_integration-test.txt






[jira] [Commented] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2018-11-12 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684637#comment-16684637
 ] 

Weiwei Yang commented on HDFS-6874:
---

Hi [~elgoiri]

I have corrected the logging and removed GET_BLOCK_LOCATIONS from 
HttpFSParametersProvider in the v10 patch. HttpFS only needs to support the 
GETFILEBLOCKLOCATIONS API. Please take a look, thanks.

> Add GETFILEBLOCKLOCATIONS operation to HttpFS
> -
>
> Key: HDFS-6874
> URL: https://issues.apache.org/jira/browse/HDFS-6874
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 2.4.1, 2.7.3
>Reporter: Gao Zhong Liang
>Assignee: Weiwei Yang
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6874-1.patch, HDFS-6874-branch-2.6.0.patch, 
> HDFS-6874.02.patch, HDFS-6874.03.patch, HDFS-6874.04.patch, 
> HDFS-6874.05.patch, HDFS-6874.06.patch, HDFS-6874.07.patch, 
> HDFS-6874.08.patch, HDFS-6874.09.patch, HDFS-6874.10.patch, HDFS-6874.patch
>
>
> The GETFILEBLOCKLOCATIONS operation, which is already supported in WebHDFS, is 
> missing in HttpFS. For a GETFILEBLOCKLOCATIONS request, 
> org.apache.hadoop.fs.http.server.HttpFSServer currently returns BAD_REQUEST:
> ...
>   case GETFILEBLOCKLOCATIONS: {
>     response = Response.status(Response.Status.BAD_REQUEST).build();
>     break;
>   }
>  






[jira] [Commented] (HDFS-14070) Refactor NameNodeWebHdfsMethods to allow better extensibility

2018-11-12 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684633#comment-16684633
 ] 

CR Hota commented on HDFS-14070:


[~elgoiri] Thanks for reviewing.

In RouterWebHDFSMethods (which extends NamenodeWebHdfsMethods), I plan to 
override three methods: getDelegationToken, cancelDelegationToken, and 
renewDelegationToken. In the overrides, we can use RouterRpcServer instead of 
NamenodeRpcServer. With this refactoring, the NameNode's WebHDFS can continue 
to use NamenodeRpcServer, which now becomes an implementation detail instead of 
an up-front dependency on the NameNode as an input parameter. This way we can 
reuse a lot of the current NameNode code.
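Stripped of WebHDFS details, the extensibility pattern described here is: the base class exposes a hook for the RPC backend, and the Router subclass overrides it. The names and signatures below are purely illustrative, not the actual NamenodeWebHdfsMethods/RouterWebHdfsMethods code.

{code}
// Illustrative pattern only.
class NamenodeWebHdfsMethodsSketch {
  // Hook point: by default, talk to the NameNode's RPC server.
  protected Object getRpcServer() {
    return new Object(); // stand-in for NamenodeRpcServer
  }

  public String renewDelegationToken(String token) {
    Object rpc = getRpcServer();
    // ... renew the token via 'rpc' (omitted) ...
    return token;
  }
}

class RouterWebHdfsMethodsSketch extends NamenodeWebHdfsMethodsSketch {
  @Override
  protected Object getRpcServer() {
    return new Object(); // stand-in for RouterRpcServer
  }
}
{code}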

 

> Refactor NameNodeWebHdfsMethods to allow better extensibility
> -
>
> Key: HDFS-14070
> URL: https://issues.apache.org/jira/browse/HDFS-14070
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14070.001.patch
>
>
> Router extends NamenodeWebHdfsMethods; methods such as renewDelegationToken, 
> cancelDelegationToken, and generateDelegationTokens should be extensible so 
> that the Router can then have its own implementation.






[jira] [Commented] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-12 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684621#comment-16684621
 ] 

Chen Liang commented on HDFS-14035:
---

I ran the tests locally, none of TestEditLogTailer, TestNamenodeCapacityReport 
or TestBPOfferService failed. The failed CTEST are irrelevant.

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch, 
> HDFS-14035-HDFS-12943.002.patch, HDFS-14035-HDFS-12943.003.patch, 
> HDFS-14035-HDFS-12943.004.patch, HDFS-14035-HDFS-12943.005.patch, 
> HDFS-14035-HDFS-12943.006.patch, HDFS-14035-HDFS-12943.007.patch, 
> HDFS-14035-HDFS-12943.008.patch, HDFS-14035-HDFS-12943.009.patch, 
> HDFS-14035-HDFS-12943.010.patch, HDFS-14035-HDFS-12943.011.patch, 
> HDFS-14035-HDFS-12943.012.patch, HDFS-14035-HDFS-12943.013.patch
>
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However, 
> {{HAServiceProtocol}} does not leverage delegation tokens, so when an 
> application runs on YARN and the YARN NodeManager makes this getServiceStatus 
> call, token authentication will fail, causing the application to fail.






[jira] [Comment Edited] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-12 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684621#comment-16684621
 ] 

Chen Liang edited comment on HDFS-14035 at 11/13/18 2:06 AM:
-

I ran the tests locally, none of TestEditLogTailer, TestNamenodeCapacityReport 
or TestBPOfferService failed. The failed CTEST are unrelated.


was (Author: vagarychen):
I ran the tests locally, none of TestEditLogTailer, TestNamenodeCapacityReport 
or TestBPOfferService failed. The failed CTEST are irrelevant.

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch, 
> HDFS-14035-HDFS-12943.002.patch, HDFS-14035-HDFS-12943.003.patch, 
> HDFS-14035-HDFS-12943.004.patch, HDFS-14035-HDFS-12943.005.patch, 
> HDFS-14035-HDFS-12943.006.patch, HDFS-14035-HDFS-12943.007.patch, 
> HDFS-14035-HDFS-12943.008.patch, HDFS-14035-HDFS-12943.009.patch, 
> HDFS-14035-HDFS-12943.010.patch, HDFS-14035-HDFS-12943.011.patch, 
> HDFS-14035-HDFS-12943.012.patch, HDFS-14035-HDFS-12943.013.patch
>
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However 
> {{HAServiceProtocol}} does not leverage delegation tokens. So when an 
> application runs on YARN and the YARN node manager makes this getServiceStatus 
> call, token authentication will fail, causing the application to fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14045) Use different metrics in DataNode to better measure latency of heartbeat/blockReports/incrementalBlockReports of Active/Standby NN

2018-11-12 Thread Jiandan Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684619#comment-16684619
 ] 

Jiandan Yang  commented on HDFS-14045:
--

Hi [~xkrogen],
I have updated the patch according to your review comments. Please help review 
it again.

> Use different metrics in DataNode to better measure latency of 
> heartbeat/blockReports/incrementalBlockReports of Active/Standby NN
> --
>
> Key: HDFS-14045
> URL: https://issues.apache.org/jira/browse/HDFS-14045
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Jiandan Yang 
>Assignee: Jiandan Yang 
>Priority: Major
> Attachments: HDFS-14045.001.patch, HDFS-14045.002.patch, 
> HDFS-14045.003.patch, HDFS-14045.004.patch, HDFS-14045.005.patch, 
> HDFS-14045.006.patch, HDFS-14045.007.patch, HDFS-14045.008.patch
>
>
> Currently the DataNode uses the same metrics to measure the RPC latency of 
> each NameNode, but the Active and Standby usually perform differently at the 
> same time, especially in a large cluster. For example, the RPC latency of the 
> Standby is very long when the Standby is catching up on the edit log. We may 
> misunderstand the state of HDFS. Using different metrics for the Active and 
> Standby can help us obtain more precise metric data.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14017) ObserverReadProxyProviderWithIPFailover should work with HA configuration

2018-11-12 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684614#comment-16684614
 ] 

Chen Liang commented on HDFS-14017:
---

Posted v009 patch.

Had some offline discussion with [~shv] and [~xkrogen]. The main point of the 
v009 patch is to not use all the configured physical addresses, but only the 
physical addresses of one arbitrary name service. If there is only one name 
service, there is no difference. Ideally we would like to resolve the 
inconsistency between the virtual IP and the name services and behave more 
reasonably under federation; we still need to come up with a plan for that. 
This patch is only meant to be the current temporary solution.
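
To make that concrete, here is a tiny self-contained sketch of "use only the physical addresses of one arbitrary name service". The map layout and the helper below are assumptions for illustration only; they are not the code in the patch.

{code:java}
// Illustration only: pick the NN addresses of a single (arbitrary) name service
// out of an address map grouped by name service, as a federated config would be.
import java.net.InetSocketAddress;
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.Map;

final class OneNameServiceExample {
  // Returns the physical NN addresses of the first name service in the map.
  static Collection<InetSocketAddress> pickOneNameService(
      Map<String, Map<String, InetSocketAddress>> addressesByNameService) {
    if (addressesByNameService.isEmpty()) {
      throw new IllegalArgumentException("No name service configured");
    }
    return addressesByNameService.values().iterator().next().values();
  }

  public static void main(String[] args) {
    Map<String, Map<String, InetSocketAddress>> conf = new LinkedHashMap<>();
    Map<String, InetSocketAddress> ns1 = new LinkedHashMap<>();
    ns1.put("nn1", InetSocketAddress.createUnresolved("nn1.xyz.com", 8020));
    ns1.put("nn2", InetSocketAddress.createUnresolved("nn2.xyz.com", 8020));
    conf.put("ns1", ns1);
    // Only ns1's physical addresses are used; other name services are ignored.
    System.out.println(pickOneNameService(conf));
  }
}
{code}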

> ObserverReadProxyProviderWithIPFailover should work with HA configuration
> -
>
> Key: HDFS-14017
> URL: https://issues.apache.org/jira/browse/HDFS-14017
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14017-HDFS-12943.001.patch, 
> HDFS-14017-HDFS-12943.002.patch, HDFS-14017-HDFS-12943.003.patch, 
> HDFS-14017-HDFS-12943.004.patch, HDFS-14017-HDFS-12943.005.patch, 
> HDFS-14017-HDFS-12943.006.patch, HDFS-14017-HDFS-12943.008.patch, 
> HDFS-14017-HDFS-12943.009.patch
>
>
> Currently {{ObserverReadProxyProviderWithIPFailover}} extends 
> {{ObserverReadProxyProvider}}, and the only difference is changing the proxy 
> factory to use {{IPFailoverProxyProvider}}. However, this is not enough, 
> because when the constructor of {{ObserverReadProxyProvider}} is called via 
> super(...), the following line:
> {code:java}
> nameNodeProxies = getProxyAddresses(uri,
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
> {code}
> will try to resolve all the configured NN addresses to do configured 
> failover. But in the case of IPFailover, this does not really apply.
>  
> A second, closely related issue is about the delegation token. For example, in 
> the current IPFailover setup, say we have a virtual host nn.xyz.com, which 
> points to either of two physical nodes, nn1.xyz.com or nn2.xyz.com. In current 
> HDFS, there is always only one DT being exchanged, which has hostname 
> nn.xyz.com. The server only issues this DT, and the client only knows the host 
> nn.xyz.com, so all is good. But with Observer reads, even with IPFailover, the 
> client no longer contacts nn.xyz.com; it actively reaches out to nn1.xyz.com 
> and nn2.xyz.com. During this process, the current code will look for a DT 
> associated with hostname nn1.xyz.com or nn2.xyz.com, which is different from 
> the DT given by the NN, causing token authentication to fail. This happens in 
> {{AbstractDelegationTokenSelector#selectToken}}. The new IPFailover proxy 
> provider will need to resolve this as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684613#comment-16684613
 ] 

Hadoop QA commented on HDFS-14035:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
14s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
43s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
10s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
20s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 11s{color} | {color:orange} hadoop-hdfs-project: The patch generated 27 new 
+ 183 unchanged - 0 fixed = 210 total (was 183) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
47s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 20s{color} 
| {color:red} hadoop-hdfs-native-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
47s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | 

[jira] [Updated] (HDFS-14017) ObserverReadProxyProviderWithIPFailover should work with HA configuration

2018-11-12 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14017:
--
Attachment: HDFS-14017-HDFS-12943.009.patch

> ObserverReadProxyProviderWithIPFailover should work with HA configuration
> -
>
> Key: HDFS-14017
> URL: https://issues.apache.org/jira/browse/HDFS-14017
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14017-HDFS-12943.001.patch, 
> HDFS-14017-HDFS-12943.002.patch, HDFS-14017-HDFS-12943.003.patch, 
> HDFS-14017-HDFS-12943.004.patch, HDFS-14017-HDFS-12943.005.patch, 
> HDFS-14017-HDFS-12943.006.patch, HDFS-14017-HDFS-12943.008.patch, 
> HDFS-14017-HDFS-12943.009.patch
>
>
> Currently {{ObserverReadProxyProviderWithIPFailover}} extends 
> {{ObserverReadProxyProvider}}, and the only difference is changing the proxy 
> factory to use {{IPFailoverProxyProvider}}. However, this is not enough, 
> because when the constructor of {{ObserverReadProxyProvider}} is called via 
> super(...), the following line:
> {code:java}
> nameNodeProxies = getProxyAddresses(uri,
> HdfsClientConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY);
> {code}
> will try to resolve all the configured NN addresses to do configured 
> failover. But in the case of IPFailover, this does not really apply.
>  
> A second, closely related issue is about the delegation token. For example, in 
> the current IPFailover setup, say we have a virtual host nn.xyz.com, which 
> points to either of two physical nodes, nn1.xyz.com or nn2.xyz.com. In current 
> HDFS, there is always only one DT being exchanged, which has hostname 
> nn.xyz.com. The server only issues this DT, and the client only knows the host 
> nn.xyz.com, so all is good. But with Observer reads, even with IPFailover, the 
> client no longer contacts nn.xyz.com; it actively reaches out to nn1.xyz.com 
> and nn2.xyz.com. During this process, the current code will look for a DT 
> associated with hostname nn1.xyz.com or nn2.xyz.com, which is different from 
> the DT given by the NN, causing token authentication to fail. This happens in 
> {{AbstractDelegationTokenSelector#selectToken}}. The new IPFailover proxy 
> provider will need to resolve this as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-832) Docs folder is missing from the Ozone distribution package

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684604#comment-16684604
 ] 

Hadoop QA commented on HDDS-832:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} ozone-0.3 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
45s{color} | {color:green} ozone-0.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
30s{color} | {color:green} ozone-0.3 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 17m 
43s{color} | {color:green} ozone-0.3 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m  
9s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
52s{color} | {color:green} ozone-0.3 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 15m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
13s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
7s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m 
19s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 33s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-832 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947910/HDDS-832-ozone-0.3.001.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  compile  
javac  javadoc  mvninstall  shadedclient  xml  |
| uname | Linux 13363fda069b 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | ozone-0.3 / 612236b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| shellcheck | v0.4.6 |
| whitespace | 

[jira] [Commented] (HDDS-825) Code cleanup based on messages from ErrorProne

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684586#comment-16684586
 ] 

Hadoop QA commented on HDDS-825:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 49 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
24m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  5m 
 6s{color} | {color:green} root: The patch generated 0 new + 0 unchanged - 18 
fixed = 0 total (was 18) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} framework in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
49s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | 

[jira] [Commented] (HDFS-14070) Refactor NameNodeWebHdfsMethods to allow better extensibility

2018-11-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684582#comment-16684582
 ] 

Íñigo Goiri commented on HDFS-14070:


Thanks [~crh] for the patch.
This looks reasonable; for more context, can you specify which methods you 
would override and how?

> Refactor NameNodeWebHdfsMethods to allow better extensibility
> -
>
> Key: HDFS-14070
> URL: https://issues.apache.org/jira/browse/HDFS-14070
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14070.001.patch
>
>
> Router extends NamenodeWebHdfsMethods; methods such as renewDelegationToken, 
> cancelDelegationToken and generateDelegationTokens should be extensible. 
> Router can then have its own implementation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-675) Add blocking buffer and use watchApi for flush/close in OzoneClient

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684580#comment-16684580
 ] 

Hadoop QA commented on HDDS-675:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m  
3s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
6s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
21s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 15s{color} | {color:orange} root: The patch generated 8 new + 17 unchanged - 
1 fixed = 25 total (was 18) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
33s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
26s{color} | {color:red} tools in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 41s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 43s{color} 
| {color:red} container-service in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 32s{color} 
| {color:red} client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 30s{color} 
| {color:red} objectstore-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 38s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit 

[jira] [Commented] (HDFS-14070) Refactor NameNodeWebHdfsMethods to allow better extensibility

2018-11-12 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684569#comment-16684569
 ] 

CR Hota commented on HDFS-14070:


[~elgoiri]   [~brahmareddy]

Could you help review and commit this? The Router will extend the new methods 
and have its own implementation w.r.t. WebHDFS token management.

> Refactor NameNodeWebHdfsMethods to allow better extensibility
> -
>
> Key: HDFS-14070
> URL: https://issues.apache.org/jira/browse/HDFS-14070
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14070.001.patch
>
>
> Router extends NamenodeWebHdfsMethods; methods such as renewDelegationToken, 
> cancelDelegationToken and generateDelegationTokens should be extensible. 
> Router can then have its own implementation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-675) Add blocking buffer and use watchApi for flush/close in OzoneClient

2018-11-12 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684573#comment-16684573
 ] 

Jitendra Nath Pandey commented on HDDS-675:
---

# The purpose of {{overWriteFlag}} in {{ChunkOutputStream}} is not clear to me. 
Are you using it for a retry upon an exception? Why wouldn't it work if we just 
rely on {{lastSuccessfulFlushIndex}}?
 # The default {{watch.request.timeout}} of 5 seconds is too aggressive; we 
should make it at least 30 seconds (see the sketch after this list).
 # The change in {{XceiverClientManager}} seems unnecessary. If it is a cleanup, 
we should rather do it in a separate jira, because if Ratis is not relevant in 
this class, that should be removed as well.
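
For item 2, a hedged example of what raising that timeout could look like on the client configuration. The property key below is a placeholder derived from the shorthand "watch.request.timeout" above, not necessarily the exact key introduced by the patch.

{code:java}
// Illustration only: the property name is a placeholder; check the patch for the real key.
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class WatchTimeoutExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Raise the watch request timeout from the proposed 5 s default to 30 s.
    conf.setTimeDuration("ozone.client.watch.request.timeout", 30, TimeUnit.SECONDS);
    long millis = conf.getTimeDuration(
        "ozone.client.watch.request.timeout", 5000, TimeUnit.MILLISECONDS);
    System.out.println("watch request timeout = " + millis + " ms");
  }
}
{code}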

> Add blocking buffer and use watchApi for flush/close in OzoneClient
> ---
>
> Key: HDDS-675
> URL: https://issues.apache.org/jira/browse/HDDS-675
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-675.000.patch, HDDS-675.001.patch, 
> HDDS-675.002.patch, HDDS-675.003.patch, HDDS-675.004.patch, HDDS-675.005.patch
>
>
> For handling 2 node failures, a blocking buffer will be used which will wait 
> for the flush commit index to get updated on all replicas of a container via 
> Ratis.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14070) Refactor NameNodeWebHdfsMethods to allow better extensibility

2018-11-12 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14070:
---
Attachment: HDFS-14070.001.patch
Status: Patch Available  (was: Open)

> Refactor NameNodeWebHdfsMethods to allow better extensibility
> -
>
> Key: HDFS-14070
> URL: https://issues.apache.org/jira/browse/HDFS-14070
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14070.001.patch
>
>
> Router extends NamenodeWebHdfsMethods; methods such as renewDelegationToken, 
> cancelDelegationToken and generateDelegationTokens should be extensible. 
> Router can then have its own implementation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-819) Match OzoneFileSystem behavior with S3AFileSystem

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684562#comment-16684562
 ] 

Hadoop QA commented on HDDS-819:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
59s{color} | {color:green} ozonefs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-819 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947917/HDDS-819.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a18e0a924b36 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / b6d4e19 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1692/testReport/ |
| Max. process+thread count | 2628 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/ozonefs U: hadoop-ozone/ozonefs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/1692/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Match OzoneFileSystem behavior with S3AFileSystem
> -
>
> Key: HDDS-819
> URL: https://issues.apache.org/jira/browse/HDDS-819
>  

[jira] [Commented] (HDFS-14067) Allow manual failover between standby and observer

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684555#comment-16684555
 ] 

Hadoop QA commented on HDFS-14067:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
55s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
59s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
20s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
30s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
26s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}104m 
55s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 4s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}222m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14067 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947893/HDFS-14067-HDFS-12943.000.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 295749d2ed5c 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-12943 / 8b5277f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25492/testReport/ |
| Max. process+thread count | 

[jira] [Created] (HDFS-14070) Refactor NameNodeWebHdfsMethods to allow better extensibility

2018-11-12 Thread CR Hota (JIRA)
CR Hota created HDFS-14070:
--

 Summary: Refactor NameNodeWebHdfsMethods to allow better 
extensibility
 Key: HDFS-14070
 URL: https://issues.apache.org/jira/browse/HDFS-14070
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: CR Hota
Assignee: CR Hota


Router extends NamenodeWebHdfsMethods; methods such as renewDelegationToken, 
cancelDelegationToken and generateDelegationTokens should be extensible. Router 
can then have its own implementation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14067) Allow manual failover between standby and observer

2018-11-12 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684545#comment-16684545
 ] 

Erik Krogen commented on HDFS-14067:


{quote}
Is the concern about that the states could be cached somewhere? or potential 
conflicts between manual and auto failover, where a standby could be involved 
in both?
{quote}
I was thinking more along the lines of the latter. Maybe let me rephrase my question: for what 
reason are manual transitions between active and standby disallowed, and what 
is different about the standby/observer transition that makes it allowed? 
Intuitively it makes sense, but we should be careful about any assumptions that 
we might break.

> Allow manual failover between standby and observer
> --
>
> Key: HDFS-14067
> URL: https://issues.apache.org/jira/browse/HDFS-14067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14067-HDFS-12943.000.patch
>
>
> Currently if automatic failover is enabled in a HA environment, transition 
> from standby to observer would be blocked:
> {code}
> [hdfs@*** hadoop-3.3.0-SNAPSHOT]$ bin/hdfs haadmin -transitionToObserver ha2
> Automatic failover is enabled for NameNode at 
> Refusing to manually manage HA state, since it may cause
> a split-brain scenario or other incorrect state.
> If you are very sure you know what you are doing, please
> specify the --forcemanual flag.
> {code}
> We should allow manual transition between standby and observer in this case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14067) Allow manual failover between standby and observer

2018-11-12 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684539#comment-16684539
 ] 

Chao Sun commented on HDFS-14067:
-

Thanks [~xkrogen] and [~shv]!

bq. but is the current state of each NN tracked somewhere that could become 
confused if a standby suddenly appears or disappears because of a manual 
transition to/from observer?

Is the concern that the states could be cached somewhere, or that there could be 
conflicts between manual and auto failover, where a standby could be involved 
in both?

bq. Also, you sometimes use HAServiceState.STATE_NAME, and sometimes refer 
directly to the state name via the static import, can you use one or the other 
throughout the patch?

Sure will fix. 

bq. We should wait for HDFS-14035 here, since it adds 
ClientProtocol.getHAServiceState(), which should be used here instead of 
HAServiceProtocol. Otherwise we will have the same problems with delegation 
token as in HDFS-14035.

Hmm, why should we use {{ClientProtocol.getHAServiceState()}}? We are already 
calling {{HAServiceProtocol}} methods in the state transition, so whoever calls 
it should already be authenticated, is that correct?


> Allow manual failover between standby and observer
> --
>
> Key: HDFS-14067
> URL: https://issues.apache.org/jira/browse/HDFS-14067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14067-HDFS-12943.000.patch
>
>
> Currently if automatic failover is enabled in a HA environment, transition 
> from standby to observer would be blocked:
> {code}
> [hdfs@*** hadoop-3.3.0-SNAPSHOT]$ bin/hdfs haadmin -transitionToObserver ha2
> Automatic failover is enabled for NameNode at 
> Refusing to manually manage HA state, since it may cause
> a split-brain scenario or other incorrect state.
> If you are very sure you know what you are doing, please
> specify the --forcemanual flag.
> {code}
> We should allow manual transition between standby and observer in this case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14069) Better debuggability for datanode decommissioning

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684527#comment-16684527
 ] 

Hadoop QA commented on HDFS-14069:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
34s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 8 new + 649 unchanged - 0 fixed = 657 total (was 649) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
39s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14069 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947912/HDFS-14069.000.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux ab208045e22f 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e269c3f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25495/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| compile | 

[jira] [Commented] (HDFS-14063) Support noredirect param for CREATE/APPEND/OPEN in HttpFS

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684522#comment-16684522
 ] 

Hadoop QA commented on HDFS-14063:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
24s{color} | {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 23s{color} 
| {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-httpfs: The 
patch generated 22 new + 312 unchanged - 4 fixed = 334 total (was 316) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 25s{color} 
| {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14063 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947913/HDFS-14063.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b9e77234f487 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e269c3f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25494/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25494/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/25494/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
 |
| 

[jira] [Commented] (HDFS-14065) Failed Storage Locations shows nothing in the Datanode Volume Failures

2018-11-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684494#comment-16684494
 ] 

Hudson commented on HDFS-14065:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15412 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15412/])
HDFS-14065. Failed Storage Locations shows nothing in the Datanode (arp: rev 
b6d4e19f34f474ea8068ebb374f55e0db2f714da)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/dfshealth.html


> Failed Storage Locations shows nothing in the Datanode Volume Failures
> --
>
> Key: HDFS-14065
> URL: https://issues.apache.org/jira/browse/HDFS-14065
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: AfterChange.png, BeforeChange.png, HDFS-14065.patch
>
>
> The failed storage locations in the *DataNode Volume Failures* UI show 
> nothing, despite there being failed storages.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14065) Failed Storage Locations shows nothing in the Datanode Volume Failures

2018-11-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-14065:
-
Hadoop Flags: Reviewed

> Failed Storage Locations shows nothing in the Datanode Volume Failures
> --
>
> Key: HDFS-14065
> URL: https://issues.apache.org/jira/browse/HDFS-14065
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: AfterChange.png, BeforeChange.png, HDFS-14065.patch
>
>
> The failed storage locations in the *DataNode Volume Failures* UI show 
> nothing, despite there being failed storages.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14065) Failed Storage Locations shows nothing in the Datanode Volume Failures

2018-11-12 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-14065:
-
   Resolution: Fixed
Fix Version/s: 3.2.1
   3.3.0
   3.1.2
   3.0.4
   Status: Resolved  (was: Patch Available)

+1 I've committed this.

Thanks for reporting and fixing this [~ayushtkn].

> Failed Storage Locations shows nothing in the Datanode Volume Failures
> --
>
> Key: HDFS-14065
> URL: https://issues.apache.org/jira/browse/HDFS-14065
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: AfterChange.png, BeforeChange.png, HDFS-14065.patch
>
>
> The failed storage locations in the *DataNode Volume Failures* UI show 
> nothing, despite there being failed storages.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-832) Docs folder is missing from the Ozone distribution package

2018-11-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684480#comment-16684480
 ] 

Anu Engineer commented on HDDS-832:
---

I have committed this to ozone-0.3. I will leave it to you whether you want to 
bring this into trunk as well. Thanks for fixing it so quickly.

> Docs folder is missing from the Ozone distribution package
> --
>
> Key: HDDS-832
> URL: https://issues.apache.org/jira/browse/HDDS-832
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Attachments: HDDS-832-ozone-0.3.001.patch
>
>
> After the 0.2.1 release, the dist package creation (together with the 
> classpath generation) was changed. 
> Problems: 
> 1. /docs folder is missing from the dist package
> 2. /docs is missing from the scm/om ui



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-819) Match OzoneFileSystem behavior with S3AFileSystem

2018-11-12 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684478#comment-16684478
 ] 

Hanisha Koneru commented on HDDS-819:
-

Thank you [~arpitagarwal] for the review. I have updated the patch to replace 
ListStatusIterator#subDirPaths and added javadocs for the new functions.

> Match OzoneFileSystem behavior with S3AFileSystem
> -
>
> Key: HDDS-819
> URL: https://issues.apache.org/jira/browse/HDDS-819
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-819.001.patch, HDDS-819.002.patch, 
> HDDS-819.003.patch
>
>
> To match the behavior of o3fs with that of the S3AFileSystem, following 
> changes need to be made to OzoneFileSystem.
>  # When creating files, we should add only 1 key. Keys corresponding to the 
> parent directories should not be created.
>  # {{GetFileStatus}} should return the status for fake directories 
> (directories which do not actually exist as a key but there exists a key 
> which is a child of this directory). For example, if there exists a key 
> _/dir1/dir2/file2_, {{GetFileStatus("/dir1/")}} should return _/dir1/_ as a 
> directory.
>  # {{ListStatus}} on a directory should list fake sub-directories also along 
> with files.
>  # {{ListStatus}} on a directory should also list files and sub-directories 
> with the same name.
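
To make the fake-directory semantics above concrete, here is a minimal,
self-contained sketch that infers directories and direct children from flat
key names. All class and method names are illustrative only; this is not the
actual OzoneFileSystem or S3AFileSystem code.

{code:java}
import java.util.*;

// Sketch only: infers "fake" directories from flat object-store keys.
public class FakeDirSketch {
  enum EntryType { FILE, DIRECTORY, MISSING }

  private final NavigableSet<String> keys = new TreeSet<>();

  void addKey(String key) { keys.add(key); }

  // GetFileStatus-like check: a path counts as a directory if any key lives
  // under it, even when no key for the directory itself was ever created.
  EntryType status(String path) {
    String p = stripSlash(path);
    if (keys.contains(p)) {
      return EntryType.FILE;
    }
    String prefix = p.isEmpty() ? "" : p + "/";
    String next = keys.ceiling(prefix);
    return (next != null && next.startsWith(prefix))
        ? EntryType.DIRECTORY : EntryType.MISSING;
  }

  // ListStatus-like listing: direct children only, fake sub-dirs included.
  SortedMap<String, EntryType> list(String dir) {
    String base = stripSlash(dir);
    String prefix = base.isEmpty() ? "" : base + "/";
    SortedMap<String, EntryType> children = new TreeMap<>();
    for (String key : keys.tailSet(prefix)) {
      if (!key.startsWith(prefix)) {
        break;                                      // left the sub-tree
      }
      String rest = key.substring(prefix.length());
      int slash = rest.indexOf('/');
      if (slash < 0) {
        children.put(rest, EntryType.FILE);         // direct file
      } else {
        children.putIfAbsent(rest.substring(0, slash), EntryType.DIRECTORY);
      }
    }
    return children;
  }

  private static String stripSlash(String path) {
    String p = path.startsWith("/") ? path.substring(1) : path;
    return p.endsWith("/") ? p.substring(0, p.length() - 1) : p;
  }

  public static void main(String[] args) {
    FakeDirSketch fs = new FakeDirSketch();
    fs.addKey("dir1/dir2/file2");              // only one key is created
    System.out.println(fs.status("/dir1/"));   // DIRECTORY (fake)
    System.out.println(fs.list("/dir1"));      // {dir2=DIRECTORY}
  }
}
{code}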



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14069) Better debuggability for datanode decommissioning

2018-11-12 Thread Danny Becker (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Becker updated HDFS-14069:

Description: 
Currently, we don't provide any debugging info for decommissioning DN, it is 
difficult to determine which blocks are on their last replica. We have two 
design options:
 # Add block info for blocks with low replication (configurable)
 ** Advantages:
 *** Initial debugging information would be more thorough
 *** Easier initial implementation
 ** Disadvantages:
 *** Add load to normal NN operation by checking every time a DN is 
decommissioned
 *** More difficult to add debugging information later on
 # Create a new api for querying more detailed info about one DN
 ** Advantages:
 *** We wouldnt be adding more load to the NN in normal operation
 *** Much easier to extend in the future with more info
 ** Disadvantages:
 *** Getting the info on demand for this case will be much more expensive 
actually, cause we will have to find all the blocks on that DN, and then go 
through all the blocks again and count how many replicas we have etc.

  was:
Currently, we only provide "minLiveReplicas" per DN that is being 
decommissioned, this is not enough info because it is difficult to determine 
which blocks are on their last replica. We have two design options:
 # Add it to the existing report, on top of minLiveReplicas
 ** Advantages:
 *** Initial debugging information would be more thorough
 *** Easier initial implementation
 ** Disadvantages:
 *** Add load to normal NN operation by checking every time a DN is 
decommissioned
 *** More difficult to add debugging information later on
 # Create a new api for querying more detailed info about one DN
 ** Advantages:
 *** We wouldnt be adding more load to the NN in normal operation
 *** Much easier to extend in the future with more info
 ** Disadvantages:
 *** Getting the info on demand for this case will be much more expensive 
actually, cause we will have to find all the blocks on that DN, and then go 
through all the blocks again and count how many replicas we have etc.


> Better debuggability for datanode decommissioning
> -
>
> Key: HDFS-14069
> URL: https://issues.apache.org/jira/browse/HDFS-14069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, namenode
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Major
> Attachments: HDFS-14069.000.patch
>
>
> Currently, we don't provide any debugging info for decommissioning DN, it is 
> difficult to determine which blocks are on their last replica. We have two 
> design options:
>  # Add block info for blocks with low replication (configurable)
>  ** Advantages:
>  *** Initial debugging information would be more thorough
>  *** Easier initial implementation
>  ** Disadvantages:
>  *** Add load to normal NN operation by checking every time a DN is 
> decommissioned
>  *** More difficult to add debugging information later on
>  # Create a new api for querying more detailed info about one DN
>  ** Advantages:
>  *** We wouldn't be adding more load to the NN in normal operation
>  *** Much easier to extend in the future with more info
>  ** Disadvantages:
>  *** Getting the info on demand for this case will be much more expensive, 
> because we will have to find all the blocks on that DN and then go through 
> all the blocks again to count how many replicas we have, etc.
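
As a rough illustration of option 1, the sketch below shows how a decommission
scan could remember the blocks that are down to their last live replicas so
they can be surfaced in a report. The names (LowReplicaTracker,
maxReplicasTracked, maxBlocksTracked) are assumptions for illustration, not
the actual NameNode code.

{code:java}
import java.util.*;

// Sketch: while the decommission monitor scans a datanode's blocks, remember
// the IDs of blocks that are down to their last few live replicas.
public class LowReplicaTracker {
  private final int maxReplicasTracked;   // e.g. 1 == "on its last replica"
  private final int maxBlocksTracked;     // cap memory used per datanode
  private final Map<Long, Integer> lowReplicaBlocks = new LinkedHashMap<>();

  LowReplicaTracker(int maxReplicasTracked, int maxBlocksTracked) {
    this.maxReplicasTracked = maxReplicasTracked;
    this.maxBlocksTracked = maxBlocksTracked;
  }

  // Called once per block during the periodic decommission scan.
  void onBlockScanned(long blockId, int liveReplicas) {
    if (liveReplicas <= maxReplicasTracked
        && lowReplicaBlocks.size() < maxBlocksTracked) {
      lowReplicaBlocks.put(blockId, liveReplicas);
    }
  }

  // What a debugging report or JMX bean could expose per decommissioning node.
  Map<Long, Integer> getLowReplicaBlocks() {
    return Collections.unmodifiableMap(lowReplicaBlocks);
  }

  public static void main(String[] args) {
    LowReplicaTracker tracker = new LowReplicaTracker(1, 1000);
    tracker.onBlockScanned(1001L, 3);   // healthy, ignored
    tracker.onBlockScanned(1002L, 1);   // on its last replica, tracked
    System.out.println(tracker.getLowReplicaBlocks());   // {1002=1}
  }
}
{code}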



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-819) Match OzoneFileSystem behavior with S3AFileSystem

2018-11-12 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-819:

Attachment: HDDS-819.003.patch

> Match OzoneFileSystem behavior with S3AFileSystem
> -
>
> Key: HDDS-819
> URL: https://issues.apache.org/jira/browse/HDDS-819
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-819.001.patch, HDDS-819.002.patch, 
> HDDS-819.003.patch
>
>
> To match the behavior of o3fs with that of the S3AFileSystem, following 
> changes need to be made to OzoneFileSystem.
>  # When creating files, we should add only 1 key. Keys corresponding to the 
> parent directories should not be created.
>  # {{GetFileStatus}} should return the status for fake directories 
> (directories which do not actually exist as a key but there exists a key 
> which is a child of this directory). For example, if there exists a key 
> _/dir1/dir2/file2_, {{GetFileStatus("/dir1/")}} should return _/dir1/_ as a 
> directory.
>  # {{ListStatus}} on a directory should list fake sub-directories also along 
> with files.
>  # {{ListStatus}} on a directory should also list files and sub-directories 
> with the same name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14069) Better debuggability for datanode decomissioning

2018-11-12 Thread Danny Becker (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Becker updated HDFS-14069:

Description: 
Currently, we only provide "minLiveReplicas" per DN that is being 
decommissioned, this is not enough info because it is difficult to determine 
which blocks are on their last replica. We have two design options:
 # Add it to the existing report, on top of minLiveReplicas
 ** Advantages:
 *** Initial debugging information would be more thorough
 ** Disadvantages:
 *** Add load to normal NN operation by checking every time a DN is 
decommissioned
 # Create a new api for querying more detailed info about one DN
 ** Advantages:
 *** We wouldnt be adding more load to the NN in normal operation
 *** Much easier to extend in the future with more info
 ** Disadvantages:
 *** Getting the info on demand for this case will be much more expensive 
actually, cause we will have to find all the blocks on that DN, and then go 
through all the blocks again and count how many replicas we have etc.

  was:
Currently, we only provide "minLiveReplicas" per DN that is being 
decommissioned, this is not enough info because it is difficult to determine 
which blocks are on their last replica. We have two design options:
 # Add it to the existing report, on top of minLiveReplicas
 ** Advantages:
 *** Initial debugging information would be more thorough
 ** Disadvantages:
 *** 
 # Create a new api for querying more detailed info about one DN
 ** Advantages:
 *** We wouldnt be adding more load to the NN in normal operation
 *** Much easier to extend in the future with more info
 ** Disadvantages:
 *** Getting the info on demand for this case will be much more expensive 
actually, cause we will have to find all the blocks on that DN, and then go 
through all the blocks again and count how many replicas we have etc.


> Better debuggability for datanode decomissioning
> 
>
> Key: HDFS-14069
> URL: https://issues.apache.org/jira/browse/HDFS-14069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, namenode
>Reporter: Danny Becker
>Priority: Major
> Attachments: HDFS-14069.000.patch
>
>
> Currently, we only provide "minLiveReplicas" per DN that is being 
> decommissioned, this is not enough info because it is difficult to determine 
> which blocks are on their last replica. We have two design options:
>  # Add it to the existing report, on top of minLiveReplicas
>  ** Advantages:
>  *** Initial debugging information would be more thorough
>  ** Disadvantages:
>  *** Add load to normal NN operation by checking every time a DN is 
> decommissioned
>  # Create a new api for querying more detailed info about one DN
>  ** Advantages:
>  *** We wouldn't be adding more load to the NN in normal operation
>  *** Much easier to extend in the future with more info
>  ** Disadvantages:
>  *** Getting the info on demand for this case will be much more expensive, 
> because we will have to find all the blocks on that DN and then go through 
> all the blocks again to count how many replicas we have, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-819) Match OzoneFileSystem behavior with S3AFileSystem

2018-11-12 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-819:

Attachment: HDDS-819.003.patch

> Match OzoneFileSystem behavior with S3AFileSystem
> -
>
> Key: HDDS-819
> URL: https://issues.apache.org/jira/browse/HDDS-819
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-819.001.patch, HDDS-819.002.patch, 
> HDDS-819.003.patch
>
>
> To match the behavior of o3fs with that of the S3AFileSystem, following 
> changes need to be made to OzoneFileSystem.
>  # When creating files, we should add only 1 key. Keys corresponding to the 
> parent directories should not be created.
>  # {{GetFileStatus}} should return the status for fake directories 
> (directories which do not actually exist as a key but there exists a key 
> which is a child of this directory). For example, if there exists a key 
> _/dir1/dir2/file2_, {{GetFileStatus("/dir1/")}} should return _/dir1/_ as a 
> directory.
>  # {{ListStatus}} on a directory should list fake sub-directories also along 
> with files.
>  # {{ListStatus}} on a directory should also list files and sub-directories 
> with the same name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-832) Docs folder is missing from the Ozone distribution package

2018-11-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684474#comment-16684474
 ] 

Anu Engineer commented on HDDS-832:
---

+1. I will commit this shortly.

> Docs folder is missing from the Ozone distribution package
> --
>
> Key: HDDS-832
> URL: https://issues.apache.org/jira/browse/HDDS-832
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Attachments: HDDS-832-ozone-0.3.001.patch
>
>
> After the 0.2.1 release, the dist package creation (together with the 
> classpath generation) was changed. 
> Problems: 
> 1. /docs folder is missing from the dist package
> 2. /docs is missing from the scm/om ui



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14069) Better debuggability for datanode decommissioning

2018-11-12 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14069:
---
Assignee: Danny Becker
  Status: Patch Available  (was: Open)

> Better debuggability for datanode decommissioning
> -
>
> Key: HDFS-14069
> URL: https://issues.apache.org/jira/browse/HDFS-14069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, namenode
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Major
> Attachments: HDFS-14069.000.patch
>
>
> Currently, we only provide "minLiveReplicas" per DN that is being 
> decommissioned, this is not enough info because it is difficult to determine 
> which blocks are on their last replica. We have two design options:
>  # Add it to the existing report, on top of minLiveReplicas
>  ** Advantages:
>  *** Initial debugging information would be more thorough
>  *** Easier initial implementation
>  ** Disadvantages:
>  *** Add load to normal NN operation by checking every time a DN is 
> decommissioned
>  *** More difficult to add debugging information later on
>  # Create a new api for querying more detailed info about one DN
>  ** Advantages:
>  *** We wouldn't be adding more load to the NN in normal operation
>  *** Much easier to extend in the future with more info
>  ** Disadvantages:
>  *** Getting the info on demand for this case will be much more expensive, 
> because we will have to find all the blocks on that DN and then go through 
> all the blocks again to count how many replicas we have, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14063) Support noredirect param for CREATE/APPEND/OPEN in HttpFS

2018-11-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684467#comment-16684467
 ] 

Íñigo Goiri commented on HDFS-14063:


[^HDFS-14063.002.patch] adds a unit test.
I cannot reproduce the failed unit tests right now; let's see what Yetus says 
this time.

> Support noredirect param for CREATE/APPEND/OPEN in HttpFS
> -
>
> Key: HDFS-14063
> URL: https://issues.apache.org/jira/browse/HDFS-14063
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14063.000.patch, HDFS-14063.001.patch, 
> HDFS-14063.002.patch
>
>
> Currently HttpFS always redirects the URI. However, the WebUI uses 
> noredirect=true which means it only wants a response with the location. This 
> is properly done in {{NamenodeWebHDFSMethods}}. HttpFS should do the same.
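
A minimal sketch of the noredirect contract, assuming the usual WebHDFS
convention of answering with the would-be redirect target in a JSON body
instead of an HTTP 307; this is an illustration, not the HttpFS code.

{code:java}
// Sketch of the noredirect behavior: with noredirect=true the server returns
// 200 plus a JSON body carrying the location instead of a 307 redirect.
public class NoRedirectSketch {

  static final class HttpAnswer {
    final int status;
    final String locationHeader;   // set for the redirect case
    final String jsonBody;         // set for the noredirect case
    HttpAnswer(int status, String locationHeader, String jsonBody) {
      this.status = status;
      this.locationHeader = locationHeader;
      this.jsonBody = jsonBody;
    }
    @Override public String toString() {
      return status + " location=" + locationHeader + " body=" + jsonBody;
    }
  }

  // redirectUrl is whatever URL the CREATE/APPEND/OPEN call would normally
  // redirect the client to (typically a datanode address).
  static HttpAnswer answer(String redirectUrl, boolean noredirect) {
    if (noredirect) {
      return new HttpAnswer(200, null,
          "{\"Location\":\"" + redirectUrl + "\"}");
    }
    return new HttpAnswer(307, redirectUrl, null);
  }

  public static void main(String[] args) {
    String url = "http://datanode:9864/webhdfs/v1/tmp/f?op=CREATE";
    System.out.println(answer(url, false));   // classic redirect
    System.out.println(answer(url, true));    // what the WebUI expects
  }
}
{code}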



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-819) Match OzoneFileSystem behavior with S3AFileSystem

2018-11-12 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-819:

Attachment: (was: HDDS-819.003.patch)

> Match OzoneFileSystem behavior with S3AFileSystem
> -
>
> Key: HDDS-819
> URL: https://issues.apache.org/jira/browse/HDDS-819
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Attachments: HDDS-819.001.patch, HDDS-819.002.patch
>
>
> To match the behavior of o3fs with that of the S3AFileSystem, following 
> changes need to be made to OzoneFileSystem.
>  # When creating files, we should add only 1 key. Keys corresponding to the 
> parent directories should not be created.
>  # {{GetFileStatus}} should return the status for fake directories 
> (directories which do not actually exist as a key but there exists a key 
> which is a child of this directory). For example, if there exists a key 
> _/dir1/dir2/file2_, {{GetFileStatus("/dir1/")}} should return _/dir1/_ as a 
> directory.
>  # {{ListStatus}} on a directory should list fake sub-directories also along 
> with files.
>  # {{ListStatus}} on a directory should also list files and sub-directories 
> with the same name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14069) Better debuggability for datanode decommissioning

2018-11-12 Thread Danny Becker (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Becker updated HDFS-14069:

Description: 
Currently, we only provide "minLiveReplicas" per DN that is being 
decommissioned, this is not enough info because it is difficult to determine 
which blocks are on their last replica. We have two design options:
 # Add it to the existing report, on top of minLiveReplicas
 ** Advantages:
 *** Initial debugging information would be more thorough
 *** Easier initial implementation
 ** Disadvantages:
 *** Add load to normal NN operation by checking every time a DN is 
decommissioned
 *** More difficult to add debugging information later on
 # Create a new api for querying more detailed info about one DN
 ** Advantages:
 *** We wouldnt be adding more load to the NN in normal operation
 *** Much easier to extend in the future with more info
 ** Disadvantages:
 *** Getting the info on demand for this case will be much more expensive 
actually, cause we will have to find all the blocks on that DN, and then go 
through all the blocks again and count how many replicas we have etc.

  was:
Currently, we only provide "minLiveReplicas" per DN that is being 
decommissioned, this is not enough info because it is difficult to determine 
which blocks are on their last replica. We have two design options:
 # Add it to the existing report, on top of minLiveReplicas
 ** Advantages:
 *** Initial debugging information would be more thorough
 ** Disadvantages:
 *** Add load to normal NN operation by checking every time a DN is 
decommissioned
 # Create a new api for querying more detailed info about one DN
 ** Advantages:
 *** We wouldnt be adding more load to the NN in normal operation
 *** Much easier to extend in the future with more info
 ** Disadvantages:
 *** Getting the info on demand for this case will be much more expensive 
actually, cause we will have to find all the blocks on that DN, and then go 
through all the blocks again and count how many replicas we have etc.


> Better debuggability for datanode decommissioning
> -
>
> Key: HDFS-14069
> URL: https://issues.apache.org/jira/browse/HDFS-14069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, namenode
>Reporter: Danny Becker
>Priority: Major
> Attachments: HDFS-14069.000.patch
>
>
> Currently, we only provide "minLiveReplicas" per DN that is being 
> decommissioned, this is not enough info because it is difficult to determine 
> which blocks are on their last replica. We have two design options:
>  # Add it to the existing report, on top of minLiveReplicas
>  ** Advantages:
>  *** Initial debugging information would be more thorough
>  *** Easier initial implementation
>  ** Disadvantages:
>  *** Add load to normal NN operation by checking every time a DN is 
> decommissioned
>  *** More difficult to add debugging information later on
>  # Create a new api for querying more detailed info about one DN
>  ** Advantages:
>  *** We wouldn't be adding more load to the NN in normal operation
>  *** Much easier to extend in the future with more info
>  ** Disadvantages:
>  *** Getting the info on demand for this case will be much more expensive, 
> because we will have to find all the blocks on that DN and then go through 
> all the blocks again to count how many replicas we have, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14063) Support noredirect param for CREATE/APPEND/OPEN in HttpFS

2018-11-12 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14063:
---
Attachment: HDFS-14063.002.patch

> Support noredirect param for CREATE/APPEND/OPEN in HttpFS
> -
>
> Key: HDFS-14063
> URL: https://issues.apache.org/jira/browse/HDFS-14063
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14063.000.patch, HDFS-14063.001.patch, 
> HDFS-14063.002.patch
>
>
> Currently HttpFS always redirects the URI. However, the WebUI uses 
> noredirect=true which means it only wants a response with the location. This 
> is properly done in {{NamenodeWebHDFSMethods}}. HttpFS should do the same.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14069) Better debuggability for datanode decommissioning

2018-11-12 Thread Danny Becker (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Becker updated HDFS-14069:

Summary: Better debuggability for datanode decommissioning  (was: Better 
debuggability for datanode decomissioning)

> Better debuggability for datanode decommissioning
> -
>
> Key: HDFS-14069
> URL: https://issues.apache.org/jira/browse/HDFS-14069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, namenode
>Reporter: Danny Becker
>Priority: Major
> Attachments: HDFS-14069.000.patch
>
>
> Currently, we only provide "minLiveReplicas" per DN that is being 
> decommissioned, this is not enough info because it is difficult to determine 
> which blocks are on their last replica. We have two design options:
>  # Add it to the existing report, on top of minLiveReplicas
>  ** Advantages:
>  *** Initial debugging information would be more thorough
>  ** Disadvantages:
>  *** Add load to normal NN operation by checking every time a DN is 
> decommissioned
>  # Create a new api for querying more detailed info about one DN
>  ** Advantages:
>  *** We wouldn't be adding more load to the NN in normal operation
>  *** Much easier to extend in the future with more info
>  ** Disadvantages:
>  *** Getting the info on demand for this case will be much more expensive, 
> because we will have to find all the blocks on that DN and then go through 
> all the blocks again to count how many replicas we have, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14069) Better debuggability for datanode decomissioning

2018-11-12 Thread Danny Becker (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Becker updated HDFS-14069:

Description: 
Currently, we only provide "minLiveReplicas" per DN that is being 
decommissioned, this is not enough info because it is difficult to determine 
which blocks are on their last replica. We have two design options:
 # Add it to the existing report, on top of minLiveReplicas
 ** Advantages:
 *** Initial debugging information would be more thorough
 ** Disadvantages:
 *** 
 # Create a new api for querying more detailed info about one DN
 ** Advantages:
 *** We wouldnt be adding more load to the NN in normal operation
 *** Much easier to extend in the future with more info
 ** Disadvantages:
 *** Getting the info on demand for this case will be much more expensive 
actually, cause we will have to find all the blocks on that DN, and then go 
through all the blocks again and count how many replicas we have etc.

  was:
Currently, we only provide "minLiveReplicas" per DN that is being 
decommissioned, this is not enough info because it is difficult to determine 
which blocks are on their last replica. We have two design options:
 # Add it to the existing report, on top of minLiveReplicas
 ** Advantages:
 *** 
 ** Disadvantages:
 *** 
 # Create a new api for querying more detailed info about one DN
 ** Advantages:
 *** We wouldnt be adding more load to the NN in normal operation
 *** Much easier to extend in the future with more info
 ** Disadvantages:
 *** Getting the info on demand for this case will be much more expensive 
actually, cause we will have to find all the blocks on that DN, and then go 
through all the blocks again and count how many replicas we have etc.


> Better debuggability for datanode decomissioning
> 
>
> Key: HDFS-14069
> URL: https://issues.apache.org/jira/browse/HDFS-14069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, namenode
>Reporter: Danny Becker
>Priority: Major
> Attachments: HDFS-14069.000.patch
>
>
> Currently, we only provide "minLiveReplicas" per DN that is being 
> decommissioned, this is not enough info because it is difficult to determine 
> which blocks are on their last replica. We have two design options:
>  # Add it to the existing report, on top of minLiveReplicas
>  ** Advantages:
>  *** Initial debugging information would be more thorough
>  ** Disadvantages:
>  *** 
>  # Create a new api for querying more detailed info about one DN
>  ** Advantages:
>  *** We wouldn't be adding more load to the NN in normal operation
>  *** Much easier to extend in the future with more info
>  ** Disadvantages:
>  *** Getting the info on demand for this case will be much more expensive, 
> because we will have to find all the blocks on that DN and then go through 
> all the blocks again to count how many replicas we have, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14069) Better debuggability for datanode decomissioning

2018-11-12 Thread Danny Becker (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Becker updated HDFS-14069:

Description: 
Currently, we only provide "minLiveReplicas" per DN that is being 
decommissioned, this is not enough info because it is difficult to determine 
which blocks are on their last replica. We have two design options:
 # Add it to the existing report, on top of minLiveReplicas
 ** Advantages:
 *** 
 ** Disadvantages:
 *** 
 # Create a new api for querying more detailed info about one DN
 ** Advantages:
 *** We wouldnt be adding more load to the NN in normal operation
 *** Much easier to extend in the future with more info
 ** Disadvantages:
 *** Getting the info on demand for this case will be much more expensive 
actually, cause we will have to find all the blocks on that DN, and then go 
through all the blocks again and count how many replicas we have etc.

  was:
Currently, we only provide "minLiveReplicas" per DN that is being 
decommissioned, this is not enough info because it is difficult to determine 
which blocks are on their last replica. We have two design options:
 # Add it to the existing report, on top of minLiveReplicas
 ** Advantages:
 ***
 ** Disadvantages:
 *** 
 # Create a new api for querying more detailed info about one DN
 ** Advantages:
 *** We wouldnt be adding more load to the NN in normal operation
 *** Much easier to extend in the future with more info
 ** Disadvantages:
 *** Getting the info on demand for this case will be much more expensive 
actually, cause we will have to find all the blocks on that DN, and then go 
through all the blocks again and count how many replicas we have etc.


> Better debuggability for datanode decomissioning
> 
>
> Key: HDFS-14069
> URL: https://issues.apache.org/jira/browse/HDFS-14069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, namenode
>Reporter: Danny Becker
>Priority: Major
> Attachments: HDFS-14069.000.patch
>
>
> Currently, we only provide "minLiveReplicas" per DN that is being 
> decommissioned, this is not enough info because it is difficult to determine 
> which blocks are on their last replica. We have two design options:
>  # Add it to the existing report, on top of minLiveReplicas
>  ** Advantages:
>  *** 
>  ** Disadvantages:
>  *** 
>  # Create a new api for querying more detailed info about one DN
>  ** Advantages:
>  *** We wouldn't be adding more load to the NN in normal operation
>  *** Much easier to extend in the future with more info
>  ** Disadvantages:
>  *** Getting the info on demand for this case will be much more expensive, 
> because we will have to find all the blocks on that DN and then go through 
> all the blocks again to count how many replicas we have, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14069) Better debuggability for datanode decomissioning

2018-11-12 Thread Danny Becker (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Becker updated HDFS-14069:

Description: 
Currently, we only provide "minLiveReplicas" per DN that is being 
decommissioned, this is not enough info because it is difficult to determine 
which blocks are on their last replica. We have two design options:
 # Add it to the existing report, on top of minLiveReplicas
 # Create a new api for querying more detailed info about one DN
 ** Advantages:
 *** We wouldnt be adding more load to the NN in normal operation
 *** Much easier to extend in the future with more info
 ** Disadvantages:
 *** Getting the info on demand for this case will be much more expensive 
actually, cause we will have to find all the blocks on that DN, and then go 
through all the blocks again and count how many replicas we have etc.

  was:
Currently, we only provide "minLiveReplicas" per DN that is being 
decommissioned, this is not enough info because it is difficult to determine 
which blocks are on their last replica. We have two design options:
 # Add it to the existing report, on top of minLiveReplicas
 # Create a new api for querying more detailed info about one DN
 ** Advantages:
 *** we wouldnt be adding more load to the NN in normal operation
 *** much easier to extend in the future with more info
 ** Disadvantages:


> Better debuggability for datanode decomissioning
> 
>
> Key: HDFS-14069
> URL: https://issues.apache.org/jira/browse/HDFS-14069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, namenode
>Reporter: Danny Becker
>Priority: Major
> Attachments: HDFS-14069.000.patch
>
>
> Currently, we only provide "minLiveReplicas" per DN that is being 
> decommissioned, this is not enough info because it is difficult to determine 
> which blocks are on their last replica. We have two design options:
>  # Add it to the existing report, on top of minLiveReplicas
>  # Create a new api for querying more detailed info about one DN
>  ** Advantages:
>  *** We wouldn't be adding more load to the NN in normal operation
>  *** Much easier to extend in the future with more info
>  ** Disadvantages:
>  *** Getting the info on demand for this case will be much more expensive, 
> because we will have to find all the blocks on that DN and then go through 
> all the blocks again to count how many replicas we have, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14069) Better debuggability for datanode decomissioning

2018-11-12 Thread Danny Becker (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Becker updated HDFS-14069:

Summary: Better debuggability for datanode decomissioning  (was: Better 
debuggability for datanode decommissioning)

> Better debuggability for datanode decomissioning
> 
>
> Key: HDFS-14069
> URL: https://issues.apache.org/jira/browse/HDFS-14069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, namenode
>Reporter: Danny Becker
>Priority: Major
> Attachments: HDFS-14069.000.patch
>
>
> Currently, we only provide "minLiveReplicas" per DN that is being decomission
> Add totalAccessibleBlocks to NumberReplicas
>  Add logic to track blocks that have less than the maxReplicasTracked
>  Add Map of low replica blockids to DatanodeDescriptor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14069) Better debuggability for datanode decomissioning

2018-11-12 Thread Danny Becker (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Becker updated HDFS-14069:

Description: 
Currently, we only provide "minLiveReplicas" per DN that is being 
decommissioned, this is not enough info because it is difficult to determine 
which blocks are on their last replica. We have two design options:
 # Add it to the existing report, on top of minLiveReplicas
 # Create a new api for querying more detailed info about one DN
 ** Advantages:
 *** we wouldnt be adding more load to the NN in normal operation
 *** much easier to extend in the future with more info
 ** Disadvantages:

  was:
Currently, we only provide "minLiveReplicas" per DN that is being 
decommissioned, this is not enough info because it is difficult to determine 
which blocks are on their last replica. We have two design options:
\t


> Better debuggability for datanode decomissioning
> 
>
> Key: HDFS-14069
> URL: https://issues.apache.org/jira/browse/HDFS-14069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, namenode
>Reporter: Danny Becker
>Priority: Major
> Attachments: HDFS-14069.000.patch
>
>
> Currently, we only provide "minLiveReplicas" per DN that is being 
> decommissioned, this is not enough info because it is difficult to determine 
> which blocks are on their last replica. We have two design options:
>  # Add it to the existing report, on top of minLiveReplicas
>  # Create a new api for querying more detailed info about one DN
>  ** Advantages:
>  *** we wouldn't be adding more load to the NN in normal operation
>  *** much easier to extend in the future with more info
>  ** Disadvantages:



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-709) Modify Close Container handling sequence on datanodes

2018-11-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684464#comment-16684464
 ] 

Hudson commented on HDDS-709:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15410 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15410/])
HDDS-709. Modify Close Container handling sequence on datanodes. (jitendra: rev 
f944f3383246450a1aa2b34f55f99a9e86e10c42)
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/TestCSMMetrics.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/HddsDispatcher.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestCloseContainerHandlingByClient.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/ContainerNotOpenException.java
* (add) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/InvalidContainerStateException.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
* (edit) hadoop-hdds/common/src/main/proto/DatanodeContainerProtocol.proto
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CloseContainerCommandHandler.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/ContainerDispatcher.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestContainerStateMachineFailures.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/server/TestContainerServer.java


> Modify Close Container handling sequence on datanodes
> -
>
> Key: HDDS-709
> URL: https://issues.apache.org/jira/browse/HDDS-709
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-709.000.patch, HDDS-709.001.patch, 
> HDDS-709.002.patch, HDDS-709.003.patch, HDDS-709.004.patch, 
> HDDS-709.005.patch, HDDS-709.006.patch, HDDS-709.007.patch, HDDS-709.008.patch
>
>
> With the quasi-closed container state for handling majority node failures, the 
> close container handling sequence on Datanodes needs to change. Once the 
> datanodes receive a close container command from SCM, the open container 
> replicas are individually marked as being in the CLOSING state. In the CLOSING 
> state, only the transactions coming from the Ratis leader are allowed; all 
> other write transactions will fail. A close container transaction will be 
> queued via Ratis on the leader and replayed on the followers, which makes the 
> replica transition to the CLOSED/QUASI_CLOSED state.
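
A simplified sketch of that closing sequence; the states and checks below are
illustrative only and do not reflect the actual HddsDispatcher or
KeyValueHandler code.

{code:java}
// Sketch of the container-closing sequence on a datanode replica.
public class CloseSequenceSketch {
  enum State { OPEN, CLOSING, CLOSED, QUASI_CLOSED }

  static final class ContainerReplica {
    State state = State.OPEN;
  }

  static final class RejectedException extends Exception {
    RejectedException(String msg) { super(msg); }
  }

  // Step 1: SCM's close command only marks the local replica as CLOSING.
  static void onCloseCommandFromScm(ContainerReplica replica) {
    if (replica.state == State.OPEN) {
      replica.state = State.CLOSING;
    }
  }

  // Step 2: while CLOSING, only transactions arriving through the Ratis
  // leader are applied; any other write is rejected.
  static void onWrite(ContainerReplica replica, boolean viaRatisLeader)
      throws RejectedException {
    if (replica.state == State.OPEN
        || (replica.state == State.CLOSING && viaRatisLeader)) {
      return;   // apply the write
    }
    throw new RejectedException("container not open: " + replica.state);
  }

  // Step 3: the close transaction itself is replicated via Ratis and replayed
  // on every follower, finally moving the replica out of CLOSING.
  static void onCloseTransaction(ContainerReplica replica,
      boolean quorumHealthy) {
    replica.state = quorumHealthy ? State.CLOSED : State.QUASI_CLOSED;
  }

  public static void main(String[] args) throws Exception {
    ContainerReplica replica = new ContainerReplica();
    onCloseCommandFromScm(replica);
    onWrite(replica, true);               // leader-driven write still allowed
    onCloseTransaction(replica, true);
    System.out.println(replica.state);    // CLOSED
  }
}
{code}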



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14069) Better debuggability for datanode decomissioning

2018-11-12 Thread Danny Becker (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Becker updated HDFS-14069:

Description: 
Currently, we only provide "minLiveReplicas" per DN that is being 
decommissioned, this is not enough info because it is difficult to determine 
which blocks are on their last replica. We have two design options:
\t

  was:
Currently, we only provide "minLiveReplicas" per DN that is being decomission

Add totalAccessibleBlocks to NumberReplicas
 Add logic to track blocks that have less than the maxReplicasTracked
 Add Map of low replica blockids to DatanodeDescriptor


> Better debuggability for datanode decomissioning
> 
>
> Key: HDFS-14069
> URL: https://issues.apache.org/jira/browse/HDFS-14069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, namenode
>Reporter: Danny Becker
>Priority: Major
> Attachments: HDFS-14069.000.patch
>
>
> Currently, we only provide "minLiveReplicas" per DN that is being 
> decommissioned, this is not enough info because it is difficult to determine 
> which blocks are on their last replica. We have two design options:
> \t



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-675) Add blocking buffer and use watchApi for flush/close in OzoneClient

2018-11-12 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-675:
--
Status: Patch Available  (was: Open)

> Add blocking buffer and use watchApi for flush/close in OzoneClient
> ---
>
> Key: HDDS-675
> URL: https://issues.apache.org/jira/browse/HDDS-675
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-675.000.patch, HDDS-675.001.patch, 
> HDDS-675.002.patch, HDDS-675.003.patch, HDDS-675.004.patch, HDDS-675.005.patch
>
>
> For handling 2 node failures, a blocking buffer will be used which will wait 
> for the flush commit index to get updated on all replicas of a container via 
> Ratis.
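
A minimal sketch of the blocking-buffer idea, assuming a watchForCommit-style
call that completes once a log index is committed on all replicas; the
interface below is an illustration, not the real Ozone client or Ratis API.

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.CompletableFuture;

// Sketch: writes are buffered until the corresponding log index is confirmed
// replicated on all replicas, so flush/close can survive 2-node failures.
public class BlockingBufferSketch {

  interface CommitWatcher {
    // Completes once the given log index is committed on ALL replicas.
    CompletableFuture<Void> watchForCommit(long logIndex);
  }

  private final CommitWatcher watcher;
  private final Deque<byte[]> unacknowledged = new ArrayDeque<>();
  private long lastWrittenIndex;

  BlockingBufferSketch(CommitWatcher watcher) {
    this.watcher = watcher;
  }

  // Writes are buffered until the corresponding index is fully replicated.
  void write(byte[] chunk, long logIndex) {
    unacknowledged.add(chunk);
    lastWrittenIndex = logIndex;
  }

  // flush()/close() block until every replica has caught up, so data that was
  // acknowledged to the caller cannot be lost by a subsequent node failure.
  void flush() {
    watcher.watchForCommit(lastWrittenIndex).join();   // blocking wait
    unacknowledged.clear();                            // safe to drop buffers
  }

  public static void main(String[] args) {
    CommitWatcher fakeWatcher = index -> CompletableFuture.completedFuture(null);
    BlockingBufferSketch out = new BlockingBufferSketch(fakeWatcher);
    out.write(new byte[]{1, 2, 3}, 7L);
    out.flush();
    System.out.println("flushed up to index 7 on all replicas");
  }
}
{code}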



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14069) Better debuggability for datanode decommissioning

2018-11-12 Thread Danny Becker (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Becker updated HDFS-14069:

Description: 
Currently, we only provide "minLiveReplicas" per DN that is being decomission

Add totalAccessibleBlocks to NumberReplicas
 Add logic to track blocks that have less than the maxReplicasTracked
 Add Map of low replica blockids to DatanodeDescriptor

  was:
 

Add totalAccessibleBlocks to NumberReplicas
 Add logic to track blocks that have less than the maxReplicasTracked
 Add Map of low replica blockids to DatanodeDescriptor


> Better debuggability for datanode decommissioning
> -
>
> Key: HDFS-14069
> URL: https://issues.apache.org/jira/browse/HDFS-14069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, namenode
>Reporter: Danny Becker
>Priority: Major
> Attachments: HDFS-14069.000.patch
>
>
> Currently, we only provide "minLiveReplicas" per DN that is being decomission
> Add totalAccessibleBlocks to NumberReplicas
>  Add logic to track blocks that have less than the maxReplicasTracked
>  Add Map of low replica blockids to DatanodeDescriptor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14069) Add BlockIds to JMX info

2018-11-12 Thread Danny Becker (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Becker updated HDFS-14069:

Description: 
 

Add totalAccessibleBlocks to NumberReplicas
 Add logic to track blocks that have less than the maxReplicasTracked
 Add Map of low replica blockids to DatanodeDescriptor

  was:
Add totalAccessibleBlocks to NumberReplicas
Add logic to track blocks that have less than the maxReplicasTracked
Add Map of low replica blockids to DatanodeDescriptor


> Add BlockIds to JMX info
> 
>
> Key: HDFS-14069
> URL: https://issues.apache.org/jira/browse/HDFS-14069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, namenode
>Reporter: Danny Becker
>Priority: Major
> Attachments: HDFS-14069.000.patch
>
>
>  
> Add totalAccessibleBlocks to NumberReplicas
>  Add logic to track blocks that have less than the maxReplicasTracked
>  Add Map of low replica blockids to DatanodeDescriptor
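
A small sketch of how such a per-datanode map could be rendered into the JMX
output; the JSON field names below are assumptions for illustration only.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: render the tracked blockId -> live-replica-count map for one
// datanode as a JSON fragment suitable for a JMX/metrics endpoint.
public class DecomJmxSketch {

  static String toJson(String datanode, Map<Long, Integer> lowReplicaBlocks) {
    StringBuilder sb = new StringBuilder();
    sb.append("{\"datanode\":\"").append(datanode)
      .append("\",\"lowReplicaBlocks\":{");
    boolean first = true;
    for (Map.Entry<Long, Integer> e : lowReplicaBlocks.entrySet()) {
      if (!first) {
        sb.append(',');
      }
      sb.append('"').append(e.getKey()).append("\":").append(e.getValue());
      first = false;
    }
    return sb.append("}}").toString();
  }

  public static void main(String[] args) {
    Map<Long, Integer> tracked = new LinkedHashMap<>();
    tracked.put(1073741825L, 1);   // block down to its last accessible replica
    System.out.println(toJson("dn1:9866", tracked));
    // {"datanode":"dn1:9866","lowReplicaBlocks":{"1073741825":1}}
  }
}
{code}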



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14069) Better debuggability for datanode decommissioning

2018-11-12 Thread Danny Becker (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Becker updated HDFS-14069:

Summary: Better debuggability for datanode decommissioning  (was: Add 
BlockIds to JMX info)

> Better debuggability for datanode decommissioning
> -
>
> Key: HDFS-14069
> URL: https://issues.apache.org/jira/browse/HDFS-14069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, namenode
>Reporter: Danny Becker
>Priority: Major
> Attachments: HDFS-14069.000.patch
>
>
>  
> Add totalAccessibleBlocks to NumberReplicas
>  Add logic to track blocks that have less than the maxReplicasTracked
>  Add Map of low replica blockids to DatanodeDescriptor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-832) Docs folder is missing from the distribution package

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-832:
--
Attachment: HDDS-832-ozone-0.3.001.patch

> Docs folder is missing from the distribution package
> 
>
> Key: HDDS-832
> URL: https://issues.apache.org/jira/browse/HDDS-832
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Attachments: HDDS-832-ozone-0.3.001.patch
>
>
> After the 0.2.1 release, the dist package creation (together with the 
> classpath generation) was changed. 
> Problems: 
> 1. /docs folder is missing from the dist package
> 2. /docs is missing from the scm/om ui



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14069) Add BlockIds to JMX info

2018-11-12 Thread Danny Becker (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Becker updated HDFS-14069:

Description: 
Add totalAccessibleBlocks to NumberReplicas
Add logic to track blocks that have less than the maxReplicasTracked
Add Map of low replica blockids to DatanodeDescriptor

> Add BlockIds to JMX info
> 
>
> Key: HDFS-14069
> URL: https://issues.apache.org/jira/browse/HDFS-14069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, namenode
>Reporter: Danny Becker
>Priority: Major
> Attachments: HDFS-14069.000.patch
>
>
> Add totalAccessibleBlocks to NumberReplicas
> Add logic to track blocks that have less than the maxReplicasTracked
> Add Map of low replica blockids to DatanodeDescriptor



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14069) Add BlockIds to JMX info

2018-11-12 Thread Danny Becker (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Becker updated HDFS-14069:

Attachment: HDFS-14069.000.patch

> Add BlockIds to JMX info
> 
>
> Key: HDFS-14069
> URL: https://issues.apache.org/jira/browse/HDFS-14069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, hdfs, namenode
>Reporter: Danny Becker
>Priority: Major
> Attachments: HDFS-14069.000.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14069) Add BlockIds to JMX info

2018-11-12 Thread Danny Becker (JIRA)
Danny Becker created HDFS-14069:
---

 Summary: Add BlockIds to JMX info
 Key: HDFS-14069
 URL: https://issues.apache.org/jira/browse/HDFS-14069
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, hdfs, namenode
Reporter: Danny Becker






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-709) Modify Close Container handling sequence on datanodes

2018-11-12 Thread Jitendra Nath Pandey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HDDS-709:
--
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

I have committed this to trunk. Thanks [~shashikant].

> Modify Close Container handling sequence on datanodes
> -
>
> Key: HDDS-709
> URL: https://issues.apache.org/jira/browse/HDDS-709
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-709.000.patch, HDDS-709.001.patch, 
> HDDS-709.002.patch, HDDS-709.003.patch, HDDS-709.004.patch, 
> HDDS-709.005.patch, HDDS-709.006.patch, HDDS-709.007.patch, HDDS-709.008.patch
>
>
> With the quasi-closed container state for handling majority node failures, the 
> close container handling sequence on datanodes needs to change. Once the 
> datanodes receive a close container command from SCM, the open container 
> replicas are individually marked as being in the closing state. In the closing 
> state, only transactions coming from the Ratis leader are allowed; all other 
> write transactions will fail. A close container transaction is queued via 
> Ratis on the leader and replayed to the followers, which makes the replica 
> transition to the CLOSED/QUASI_CLOSED state.
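
To make the sequence above concrete, here is a minimal, hypothetical sketch of the closing-state rule; the type and method names below are made up for illustration and are not the actual datanode code.

{code}
// Hypothetical sketch (not the actual datanode classes): once a replica is
// CLOSING, only transactions coming from the Ratis leader are applied; all
// other writes are rejected until the close transaction moves the replica to
// CLOSED/QUASI_CLOSED.
enum ReplicaState { OPEN, CLOSING, QUASI_CLOSED, CLOSED }

class ClosingStateRule {
  boolean allowWrite(ReplicaState state, boolean fromRatisLeader) {
    switch (state) {
      case OPEN:
        return true;                 // normal writes accepted
      case CLOSING:
        return fromRatisLeader;      // only leader-driven transactions allowed
      default:
        return false;                // CLOSED/QUASI_CLOSED reject all writes
    }
  }
}
{code}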



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-832) Docs folder is missing from the Ozone distribution package

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-832:
--
Summary: Docs folder is missing from the Ozone distribution package  (was: 
Docs folder is missing from the distribution package)

> Docs folder is missing from the Ozone distribution package
> --
>
> Key: HDDS-832
> URL: https://issues.apache.org/jira/browse/HDDS-832
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Attachments: HDDS-832-ozone-0.3.001.patch
>
>
> After the 0.2.1 release, the dist package creation (together with the classpath 
> generation) was changed. 
> Problems: 
> 1. The /docs folder is missing from the dist package
> 2. /docs is missing from the SCM/OM web UI



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14067) Allow manual failover between standby and observer

2018-11-12 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684452#comment-16684452
 ] 

Konstantin Shvachko commented on HDFS-14067:


We should wait for HDFS-14035 here, since it adds 
{{ClientProtocol.getHAServiceState()}}, which should be used here instead of 
{{HAServiceProtocol}}. Otherwise we will have the same delegation token 
problems as in HDFS-14035.

> Allow manual failover between standby and observer
> --
>
> Key: HDFS-14067
> URL: https://issues.apache.org/jira/browse/HDFS-14067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14067-HDFS-12943.000.patch
>
>
> Currently, if automatic failover is enabled in an HA environment, the transition 
> from standby to observer is blocked:
> {code}
> [hdfs@*** hadoop-3.3.0-SNAPSHOT]$ bin/hdfs haadmin -transitionToObserver ha2
> Automatic failover is enabled for NameNode at 
> Refusing to manually manage HA state, since it may cause
> a split-brain scenario or other incorrect state.
> If you are very sure you know what you are doing, please
> specify the --forcemanual flag.
> {code}
> We should allow manual transition between standby and observer in this case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-832) Docs folder is missing from the distribution package

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-832:
--
Status: Patch Available  (was: Open)

Patch is uploaded.

Test is simple.

1.) Do a full build and 
2.) check "firefox hadoop-ozone/dist/target/ozone-0.3.0-SNAPSHOT/docs/index.html"
3.) start a docker-compose cluster and check the docs menu in the OM/SCM web UI.

> Docs folder is missing from the distribution package
> 
>
> Key: HDDS-832
> URL: https://issues.apache.org/jira/browse/HDDS-832
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
> Attachments: HDDS-832-ozone-0.3.001.patch
>
>
> After the 0.2.1 release, the dist package creation (together with the classpath 
> generation) was changed. 
> Problems: 
> 1. The /docs folder is missing from the dist package
> 2. /docs is missing from the SCM/OM web UI



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-12 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684433#comment-16684433
 ] 

Chen Liang commented on HDFS-14035:
---

Thanks for the review [~shv]! The failed test TestConsistentReadsObserver is 
related. It turns out a side effect of using the client protocol to discover 
server state is that the call to {{changeProxy}} could potentially update the 
client alignment context state id to the most recent value if it talked to the 
active, introducing a race condition in {{testMsyncSimple}}. Posted the v013 
patch to resolve this.

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch, 
> HDFS-14035-HDFS-12943.002.patch, HDFS-14035-HDFS-12943.003.patch, 
> HDFS-14035-HDFS-12943.004.patch, HDFS-14035-HDFS-12943.005.patch, 
> HDFS-14035-HDFS-12943.006.patch, HDFS-14035-HDFS-12943.007.patch, 
> HDFS-14035-HDFS-12943.008.patch, HDFS-14035-HDFS-12943.009.patch, 
> HDFS-14035-HDFS-12943.010.patch, HDFS-14035-HDFS-12943.011.patch, 
> HDFS-14035-HDFS-12943.012.patch, HDFS-14035-HDFS-12943.013.patch
>
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However, 
> {{HAServiceProtocol}} does not leverage delegation tokens. So when running an 
> application on YARN, when the YARN node manager makes this getServiceStatus 
> call, token authentication will fail, causing the application to fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-825) Code cleanup based on messages from ErrorProne

2018-11-12 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684432#comment-16684432
 ] 

Anu Engineer commented on HDDS-825:
---

Opportunistically cleaned up some FindBugs and CheckStyle issues too in Patch 
v3.


> Code cleanup based on messages from ErrorProne
> --
>
> Key: HDDS-825
> URL: https://issues.apache.org/jira/browse/HDDS-825
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.3.0
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-825.001.patch, HDDS-825.002.patch, 
> HDDS-825.003.patch
>
>
> I ran ErrorProne (http://errorprone.info/) on the Ozone/HDDS code base and it 
> threw lots of errors. This patch fixes many issues pointed out by ErrorProne.
> The main classes of errors fixed in this patch are:
> * http://errorprone.info/bugpattern/DefaultCharset
> * http://errorprone.info/bugpattern/ComparableType
> * http://errorprone.info/bugpattern/StringSplitter
> * http://errorprone.info/bugpattern/IntLongMath
> * http://errorprone.info/bugpattern/JavaLangClash
> * http://errorprone.info/bugpattern/CatchFail
> * http://errorprone.info/bugpattern/JdkObsolete
> * http://errorprone.info/bugpattern/AssertEqualsArgumentOrderChecker
> * http://errorprone.info/bugpattern/CatchAndPrintStackTrace
> It is pretty educative to read through these errors and see the mistakes we 
> made.
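
For readers unfamiliar with these bug patterns, below is a small illustrative sketch of the kind of change two of the listed patterns (DefaultCharset and StringSplitter) ask for. It is an assumed example, not code from the patch; the Splitter case uses Guava, which is already on the Hadoop classpath.

{code}
// Illustrative only, not taken from the patch: typical fixes for two of the
// ErrorProne bug patterns listed above.
import java.nio.charset.StandardCharsets;
import java.util.List;
import com.google.common.base.Splitter;

public class ErrorProneExamples {

  // DefaultCharset: s.getBytes() depends on the platform default charset;
  // the fix is to name the charset explicitly.
  static byte[] encode(String s) {
    return s.getBytes(StandardCharsets.UTF_8);
  }

  // StringSplitter: String.split(",") silently drops trailing empty strings;
  // Guava's Splitter keeps them and has predictable semantics.
  static List<String> parts(String csv) {
    return Splitter.on(',').splitToList(csv);
  }
}
{code}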



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-825) Code cleanup based on messages from ErrorProne

2018-11-12 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-825:
--
Attachment: HDDS-825.003.patch

> Code cleanup based on messages from ErrorProne
> --
>
> Key: HDDS-825
> URL: https://issues.apache.org/jira/browse/HDDS-825
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.3.0
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-825.001.patch, HDDS-825.002.patch, 
> HDDS-825.003.patch
>
>
> I ran ErrorProne (http://errorprone.info/) on the Ozone/HDDS code base and it 
> threw lots of errors. This patch fixes many issues pointed out by ErrorProne.
> The main classes of errors fixed in this patch are:
> * http://errorprone.info/bugpattern/DefaultCharset
> * http://errorprone.info/bugpattern/ComparableType
> * http://errorprone.info/bugpattern/StringSplitter
> * http://errorprone.info/bugpattern/IntLongMath
> * http://errorprone.info/bugpattern/JavaLangClash
> * http://errorprone.info/bugpattern/CatchFail
> * http://errorprone.info/bugpattern/JdkObsolete
> * http://errorprone.info/bugpattern/AssertEqualsArgumentOrderChecker
> * http://errorprone.info/bugpattern/CatchAndPrintStackTrace
> It is pretty educative to read through these errors and see the mistakes we 
> made.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14035) NN status discovery does not leverage delegation token

2018-11-12 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-14035:
--
Attachment: HDFS-14035-HDFS-12943.013.patch

> NN status discovery does not leverage delegation token
> --
>
> Key: HDFS-14035
> URL: https://issues.apache.org/jira/browse/HDFS-14035
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-14035-HDFS-12943.001.patch, 
> HDFS-14035-HDFS-12943.002.patch, HDFS-14035-HDFS-12943.003.patch, 
> HDFS-14035-HDFS-12943.004.patch, HDFS-14035-HDFS-12943.005.patch, 
> HDFS-14035-HDFS-12943.006.patch, HDFS-14035-HDFS-12943.007.patch, 
> HDFS-14035-HDFS-12943.008.patch, HDFS-14035-HDFS-12943.009.patch, 
> HDFS-14035-HDFS-12943.010.patch, HDFS-14035-HDFS-12943.011.patch, 
> HDFS-14035-HDFS-12943.012.patch, HDFS-14035-HDFS-12943.013.patch
>
>
> Currently ObserverReadProxyProvider uses 
> {{HAServiceProtocol#getServiceStatus}} to get the status of each NN. However, 
> {{HAServiceProtocol}} does not leverage delegation tokens. So when running an 
> application on YARN, when the YARN node manager makes this getServiceStatus 
> call, token authentication will fail, causing the application to fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-832) Docs folder is missing from the distribution package

2018-11-12 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684429#comment-16684429
 ] 

Elek, Marton commented on HDDS-832:
---

The problem is in HttpServer2:

{code}
 resourceUrl =
  getClass().getClassLoader().getResource("webapps/" + appName);
{code}

Here the resourceUrl is defined. This is the base prefix for all the web 
resources. But in the new layout this is not a directory any more but a jar 
file location (jar:///..hadoop-hdds-server-scm.jar:/webapps/scm).

So it's not enough to put something on the classpath. It has to be in exactly 
the same jar file (or directory) where the original web folder is, and the web 
folders are part of the jar files (server-scm/ozone-manager).

As the easier workaround, we need to put the docs folder (which is generated by 
hadoop-ozone/docs) into the jar files of hadoop-hdds/server-scm/ and 
hadoop-ozone/ozone-manager/. To avoid circular dependencies we need to move 
hadoop-ozone/docs to hadoop-hdds/docs (an hdds project (e.g. server-scm) 
shouldn't depend on an ozone project).
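
As a small illustration (not part of the patch), the sketch below just prints what that classloader lookup resolves to. The point is that with the new layout the base is a jar URL, and HttpServer2 serves the web content relative to it, so anything meant to appear under the web UI must be packaged into the same jar (or directory).

{code}
// Minimal sketch (not part of the patch): demonstrates what the lookup in the
// snippet above returns with the jar-based layout.
import java.net.URL;

public class WebappBaseCheck {
  public static void main(String[] args) {
    ClassLoader cl = WebappBaseCheck.class.getClassLoader();
    // Old layout: file:/.../webapps/scm (a plain directory)
    // New layout: jar:file:/.../hadoop-hdds-server-scm.jar!/webapps/scm
    URL base = cl.getResource("webapps/scm");
    System.out.println("web resource base = " + base);
  }
}
{code}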





> Docs folder is missing from the distribution package
> 
>
> Key: HDDS-832
> URL: https://issues.apache.org/jira/browse/HDDS-832
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>
> After the 0.2.1 release, the dist package creation (together with the classpath 
> generation) was changed. 
> Problems: 
> 1. The /docs folder is missing from the dist package
> 2. /docs is missing from the SCM/OM web UI



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-709) Modify Close Container handling sequence on datanodes

2018-11-12 Thread Jitendra Nath Pandey (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684420#comment-16684420
 ] 

Jitendra Nath Pandey commented on HDDS-709:
---

+1 for the patch. I will commit shortly.

> Modify Close Container handling sequence on datanodes
> -
>
> Key: HDDS-709
> URL: https://issues.apache.org/jira/browse/HDDS-709
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-709.000.patch, HDDS-709.001.patch, 
> HDDS-709.002.patch, HDDS-709.003.patch, HDDS-709.004.patch, 
> HDDS-709.005.patch, HDDS-709.006.patch, HDDS-709.007.patch, HDDS-709.008.patch
>
>
> With the quasi-closed container state for handling majority node failures, the 
> close container handling sequence on datanodes needs to change. Once the 
> datanodes receive a close container command from SCM, the open container 
> replicas are individually marked as being in the closing state. In the closing 
> state, only transactions coming from the Ratis leader are allowed; all other 
> write transactions will fail. A close container transaction is queued via 
> Ratis on the leader and replayed to the followers, which makes the replica 
> transition to the CLOSED/QUASI_CLOSED state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-8) Add OzoneManager Delegation Token support

2018-11-12 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-8?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684423#comment-16684423
 ] 

Ajay Kumar commented on HDDS-8:
---

Will address the Jenkins issues along with the review comments.

> Add OzoneManager Delegation Token support
> -
>
> Key: HDDS-8
> URL: https://issues.apache.org/jira/browse/HDDS-8
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-8-HDDS-4.00.patch, HDDS-8-HDDS-4.01.patch, 
> HDDS-8-HDDS-4.02.patch, HDDS-8-HDDS-4.03.patch, HDDS-8-HDDS-4.04.patch, 
> HDDS-8-HDDS-4.05.patch, HDDS-8-HDDS-4.06.patch, HDDS-8-HDDS-4.07.patch, 
> HDDS-8-HDDS-4.08.patch, HDDS-8-HDDS-4.09.patch, HDDS-8-HDDS-4.10.patch, 
> HDDS-8-HDDS-4.11.patch, HDDS-8-HDDS-4.12.patch, HDDS-8-HDDS-4.13.patch
>
>
> Add delegation token functionality to the Ozone layer. We will re-use the 
> Hadoop RPC layer TOKEN authentication.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-709) Modify Close Container handling sequence on datanodes

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684419#comment-16684419
 ] 

Hadoop QA commented on HDDS-709:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
9s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
51s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 31s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 35s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.ozShell.TestOzoneShell |
\\
\\
|| Subsystem || Report/Notes ||

[jira] [Created] (HDDS-832) Docs folder is missing from the distribution package

2018-11-12 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-832:
-

 Summary: Docs folder is missing from the distribution package
 Key: HDDS-832
 URL: https://issues.apache.org/jira/browse/HDDS-832
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Elek, Marton
Assignee: Elek, Marton


After the 0.2.1 release, the dist package creation (together with the classpath 
generation) was changed. 

Problems: 
1. The /docs folder is missing from the dist package
2. /docs is missing from the SCM/OM web UI



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-832) Docs folder is missing from the distribution package

2018-11-12 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-832:
--
Priority: Blocker  (was: Major)

> Docs folder is missing from the distribution package
> 
>
> Key: HDDS-832
> URL: https://issues.apache.org/jira/browse/HDDS-832
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>
> After the 0.2.1 release, the dist package creation (together with the classpath 
> generation) was changed. 
> Problems: 
> 1. The /docs folder is missing from the dist package
> 2. /docs is missing from the SCM/OM web UI



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14067) Allow manual failover between standby and observer

2018-11-12 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684354#comment-16684354
 ] 

Erik Krogen commented on HDFS-14067:


Hey [~csun], I'm not too familiar with automatic failover, so I'm not sure if 
this is a valid concern, but is the current state of each NN tracked somewhere 
that could become confused if a standby suddenly appears or disappears because 
of a manual transition to/from observer?

Also, you sometimes use {{HAServiceState.STATE_NAME}} and sometimes refer 
directly to the state name via the static import; can you use one or the other 
throughout the patch?

> Allow manual failover between standby and observer
> --
>
> Key: HDFS-14067
> URL: https://issues.apache.org/jira/browse/HDFS-14067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14067-HDFS-12943.000.patch
>
>
> Currently, if automatic failover is enabled in an HA environment, the transition 
> from standby to observer is blocked:
> {code}
> [hdfs@*** hadoop-3.3.0-SNAPSHOT]$ bin/hdfs haadmin -transitionToObserver ha2
> Automatic failover is enabled for NameNode at 
> Refusing to manually manage HA state, since it may cause
> a split-brain scenario or other incorrect state.
> If you are very sure you know what you are doing, please
> specify the --forcemanual flag.
> {code}
> We should allow manual transition between standby and observer in this case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14063) Support noredirect param for CREATE in HttpFS

2018-11-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684349#comment-16684349
 ] 

Íñigo Goiri commented on HDFS-14063:


OPEN also has the {{noredirect}} parameter; I added it in 
[^HDFS-14063.001.patch] and changed the JIRA title to reflect this.
The checkstyle errors are about indentation and I don't think we should tweak 
those.

> Support noredirect param for CREATE in HttpFS
> -
>
> Key: HDFS-14063
> URL: https://issues.apache.org/jira/browse/HDFS-14063
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14063.000.patch, HDFS-14063.001.patch
>
>
> Currently HttpFS always redirects the URI. However, the WebUI uses 
> noredirect=true which means it only wants a response with the location. This 
> is properly done in {{NamenodeWebHDFSMethods}}. HttpFS should do the same.
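
For context, a hedged sketch of the client-side difference the parameter makes; the endpoint, path, and user below are made up, and only the noredirect behaviour itself is what the description asks HttpFS to mirror from WebHDFS.

{code}
// Minimal sketch (hypothetical host/path): with noredirect=true the server is
// expected to answer with a JSON body containing the target location instead
// of issuing a 307 redirect.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class NoRedirectCreate {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://httpfs.example.com:14000/webhdfs/v1/tmp/file1"
        + "?op=CREATE&user.name=hdfs&noredirect=true");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setInstanceFollowRedirects(false);

    // Without noredirect: a 307 with a Location header.
    // With noredirect:    a 200 and a body like {"Location":"http://..."}
    System.out.println("status = " + conn.getResponseCode());
    try (BufferedReader reader = new BufferedReader(new InputStreamReader(
        conn.getInputStream(), StandardCharsets.UTF_8))) {
      System.out.println("body   = " + reader.readLine());
    }
  }
}
{code}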



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14063) Support noredirect param for CREATE/APPEND/OPEN in HttpFS

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684357#comment-16684357
 ] 

Hadoop QA commented on HDFS-14063:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-httpfs: The 
patch generated 22 new + 312 unchanged - 4 fixed = 334 total (was 316) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 36s{color} 
| {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem |
|   | hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem |
|   | hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem |
|   | hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem |
|   | hadoop.fs.http.server.TestHttpFSServer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14063 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947887/HDFS-14063.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 42f80c26220a 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1f9c4f3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Updated] (HDFS-14063) Support noredirect param for CREATE/APPEND/OPEN in HttpFS

2018-11-12 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14063:
---
Summary: Support noredirect param for CREATE/APPEND/OPEN in HttpFS  (was: 
Support noredirect param for CREATE in HttpFS)

> Support noredirect param for CREATE/APPEND/OPEN in HttpFS
> -
>
> Key: HDFS-14063
> URL: https://issues.apache.org/jira/browse/HDFS-14063
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14063.000.patch, HDFS-14063.001.patch
>
>
> Currently HttpFS always redirects the URI. However, the WebUI uses 
> noredirect=true which means it only wants a response with the location. This 
> is properly done in {{NamenodeWebHDFSMethods}}. HttpFS should do the same.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6874) Add GETFILEBLOCKLOCATIONS operation to HttpFS

2018-11-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16684345#comment-16684345
 ] 

Hadoop QA commented on HDFS-6874:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  6s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
511 unchanged - 0 fixed = 512 total (was 511) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
34s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 94m 
38s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
21s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}176m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-6874 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947865/HDFS-6874.10.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 59922c5da270 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 42f3a70 |
| maven | version: Apache Maven 

[jira] [Updated] (HDFS-14067) Allow manual failover between standby and observer

2018-11-12 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-14067:

Status: Patch Available  (was: Open)

Submitted patch v0.

> Allow manual failover between standby and observer
> --
>
> Key: HDFS-14067
> URL: https://issues.apache.org/jira/browse/HDFS-14067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14067-HDFS-12943.000.patch
>
>
> Currently, if automatic failover is enabled in an HA environment, the transition 
> from standby to observer is blocked:
> {code}
> [hdfs@*** hadoop-3.3.0-SNAPSHOT]$ bin/hdfs haadmin -transitionToObserver ha2
> Automatic failover is enabled for NameNode at 
> Refusing to manually manage HA state, since it may cause
> a split-brain scenario or other incorrect state.
> If you are very sure you know what you are doing, please
> specify the --forcemanual flag.
> {code}
> We should allow manual transition between standby and observer in this case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


