[jira] [Commented] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-02-26 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16779006#comment-16779006
 ] 

He Xiaoqiao commented on HDFS-14305:


 [^HDFS-14305.006.patch] fixes the code style issue. I tried running the failed
tests TestBPOfferService#testTrySendErrorReportWhenNNThrowsIOException and
TestEditLogTailer#testRollEditLogIOExceptionForRemoteNN locally and they
passed; please help to double-check.
The other failing unit test, TestJournalNodeSync, is, I believe, not related to
this patch. FYI.

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Reporter: Chao Sun
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14305.001.patch, HDFS-14305.002.patch, 
> HDFS-14305.003.patch, HDFS-14305.004.patch, HDFS-14305.005.patch, 
> HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the
> initial serial number, and then uses this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> where {{numNNs}} is the total number of NameNodes in the cluster, and
> {{nnIndex}} is the index of the current NameNode specified in the
> configuration {{dfs.ha.namenodes.<nameservice>}}.
> However, with this approach, different NameNodes can have overlapping serial
> number ranges. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100,
> and we have 2 NameNodes {{nn1}} and {{nn2}} in the configuration. Then the
> ranges for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number can be any integer, including a
> negative one, and Java's {{%}} operator keeps the sign of its left operand.
> Moreover, when the keys are updated, the serial number is again updated with
> the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number can land in a range that belongs to a
> different NameNode, increasing the chance of collision again.
> When a collision happens, DataNodes can overwrite an existing key, which
> causes clients to fail with an {{InvalidToken}} error.
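
Below is a minimal, self-contained sketch of the overlap (using a stand-in of
100 for {{Integer.MAX_VALUE}}, matching the example above; illustrative only,
not HDFS code):

{code:java}
// Reproduces the overlapping ranges from the description: Java's %
// keeps the sign of its left operand, so a negative random serialNo
// maps below nnRangeStart.
public class SerialRangeOverlap {
  public static void main(String[] args) {
    final int maxValue = 100;                // stand-in for Integer.MAX_VALUE
    final int numNNs = 2;
    final int intRange = maxValue / numNNs;  // 50
    for (int nnIndex = 0; nnIndex < numNNs; nnIndex++) {
      int nnRangeStart = intRange * nnIndex; // 0 for nn1, 50 for nn2
      // Extreme cases for a random serialNo in [-(maxValue-1), maxValue-1]:
      int lo = (-(maxValue - 1) % intRange) + nnRangeStart;
      int hi = ((maxValue - 1) % intRange) + nnRangeStart;
      System.out.println("nn" + (nnIndex + 1) + " -> [" + lo + ", " + hi + "]");
    }
  }
}
{code}

Running it prints {{nn1 -> [-49, 49]}} and {{nn2 -> [1, 99]}}, the overlapping
ranges quoted above.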






[jira] [Updated] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-02-26 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HDFS-14305:
---
Attachment: HDFS-14305.006.patch

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Reporter: Chao Sun
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14305.001.patch, HDFS-14305.002.patch, 
> HDFS-14305.003.patch, HDFS-14305.004.patch, HDFS-14305.005.patch, 
> HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the
> initial serial number, and then uses this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> where {{numNNs}} is the total number of NameNodes in the cluster, and
> {{nnIndex}} is the index of the current NameNode specified in the
> configuration {{dfs.ha.namenodes.<nameservice>}}.
> However, with this approach, different NameNodes can have overlapping serial
> number ranges. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100,
> and we have 2 NameNodes {{nn1}} and {{nn2}} in the configuration. Then the
> ranges for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number can be any integer, including a
> negative one, and Java's {{%}} operator keeps the sign of its left operand.
> Moreover, when the keys are updated, the serial number is again updated with
> the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number can land in a range that belongs to a
> different NameNode, increasing the chance of collision again.
> When a collision happens, DataNodes can overwrite an existing key, which
> causes clients to fail with an {{InvalidToken}} error.






[jira] [Commented] (HDFS-14118) Support using DNS to resolve nameservices to IP addresses

2019-02-26 Thread Fengnan Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778981#comment-16778981
 ] 

Fengnan Li commented on HDFS-14118:
---

[~yzhangal] Added release note. Sorry for the late catch!

> Support using DNS to resolve nameservices to IP addresses
> -
>
> Key: HDFS-14118
> URL: https://issues.apache.org/jira/browse/HDFS-14118
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: DNS testing log, HDFS design doc_ Single domain name for 
> clients - Google Docs-1.pdf, HDFS design doc_ Single domain name for clients 
> - Google Docs.pdf, HDFS-14118.001.patch, HDFS-14118.002.patch, 
> HDFS-14118.003.patch, HDFS-14118.004.patch, HDFS-14118.005.patch, 
> HDFS-14118.006.patch, HDFS-14118.007.patch, HDFS-14118.008.patch, 
> HDFS-14118.009.patch, HDFS-14118.010.patch, HDFS-14118.011.patch, 
> HDFS-14118.012.patch, HDFS-14118.013.patch, HDFS-14118.014.patch, 
> HDFS-14118.015.patch, HDFS-14118.016.patch, HDFS-14118.017.patch, 
> HDFS-14118.018.patch, HDFS-14118.019.patch, HDFS-14118.020.patch, 
> HDFS-14118.021.patch, HDFS-14118.022.patch, HDFS-14118.023.patch, 
> HDFS-14118.024.patch, HDFS-14118.patch
>
>
> In router-based federation (RBF), clients need to know about routers to talk
> to the HDFS cluster, and any router update (adding/removing) requires a config
> change in every client, which is a painful process.
> DNS can be used here to resolve a single domain name, known to all clients, to
> the list of routers in the current config. However, DNS cannot by itself
> resolve only to healthy routers based on certain health thresholds.
> There are several ways this can be solved. One way is to have a separate
> script regularly check the status of each router and update the DNS records
> when a router fails the health thresholds; security would need to be carefully
> considered for this approach. Another way is to have the client do the normal
> connecting/failover after it gets the list of routers, which requires changing
> the current failover proxy provider.
> See the attached design document for details about the proposed solution.
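
For context, the core mechanism the proposal relies on is that a single DNS
name can resolve to multiple A records, one per router. A minimal sketch (the
domain name {{routers.example.com}} is hypothetical):

{code:java}
// Resolves one domain name to every address behind it, the way a client
// could discover all routers without listing them in its config.
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveRouters {
  public static void main(String[] args) throws UnknownHostException {
    InetAddress[] routers = InetAddress.getAllByName("routers.example.com");
    for (InetAddress router : routers) {
      System.out.println(router.getHostAddress());
    }
  }
}
{code}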






[jira] [Comment Edited] (HDFS-14118) Support using DNS to resolve nameservices to IP addresses

2019-02-26 Thread Fengnan Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778981#comment-16778981
 ] 

Fengnan Li edited comment on HDFS-14118 at 2/27/19 7:05 AM:


[~yzhangal] Added release note. Sorry for the late catch!


was (Author: fengnanli):
[~yzhangal] Added release not. Sorry for the late catch!

> Support using DNS to resolve nameservices to IP addresses
> -
>
> Key: HDFS-14118
> URL: https://issues.apache.org/jira/browse/HDFS-14118
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: DNS testing log, HDFS design doc_ Single domain name for 
> clients - Google Docs-1.pdf, HDFS design doc_ Single domain name for clients 
> - Google Docs.pdf, HDFS-14118.001.patch, HDFS-14118.002.patch, 
> HDFS-14118.003.patch, HDFS-14118.004.patch, HDFS-14118.005.patch, 
> HDFS-14118.006.patch, HDFS-14118.007.patch, HDFS-14118.008.patch, 
> HDFS-14118.009.patch, HDFS-14118.010.patch, HDFS-14118.011.patch, 
> HDFS-14118.012.patch, HDFS-14118.013.patch, HDFS-14118.014.patch, 
> HDFS-14118.015.patch, HDFS-14118.016.patch, HDFS-14118.017.patch, 
> HDFS-14118.018.patch, HDFS-14118.019.patch, HDFS-14118.020.patch, 
> HDFS-14118.021.patch, HDFS-14118.022.patch, HDFS-14118.023.patch, 
> HDFS-14118.024.patch, HDFS-14118.patch
>
>
> In router-based federation (RBF), clients need to know about routers to talk
> to the HDFS cluster, and any router update (adding/removing) requires a config
> change in every client, which is a painful process.
> DNS can be used here to resolve a single domain name, known to all clients, to
> the list of routers in the current config. However, DNS cannot by itself
> resolve only to healthy routers based on certain health thresholds.
> There are several ways this can be solved. One way is to have a separate
> script regularly check the status of each router and update the DNS records
> when a router fails the health thresholds; security would need to be carefully
> considered for this approach. Another way is to have the client do the normal
> connecting/failover after it gets the list of routers, which requires changing
> the current failover proxy provider.
> See the attached design document for details about the proposed solution.






[jira] [Updated] (HDFS-14118) Support using DNS to resolve nameservices to IP addresses

2019-02-26 Thread Fengnan Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengnan Li updated HDFS-14118:
--
Release Note: HDFS clients can use a single domain name to discover servers 
(namenodes/routers/observers) instead of explicitly listing out all hosts in 
the config

> Support using DNS to resolve nameservices to IP addresses
> -
>
> Key: HDFS-14118
> URL: https://issues.apache.org/jira/browse/HDFS-14118
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: DNS testing log, HDFS design doc_ Single domain name for 
> clients - Google Docs-1.pdf, HDFS design doc_ Single domain name for clients 
> - Google Docs.pdf, HDFS-14118.001.patch, HDFS-14118.002.patch, 
> HDFS-14118.003.patch, HDFS-14118.004.patch, HDFS-14118.005.patch, 
> HDFS-14118.006.patch, HDFS-14118.007.patch, HDFS-14118.008.patch, 
> HDFS-14118.009.patch, HDFS-14118.010.patch, HDFS-14118.011.patch, 
> HDFS-14118.012.patch, HDFS-14118.013.patch, HDFS-14118.014.patch, 
> HDFS-14118.015.patch, HDFS-14118.016.patch, HDFS-14118.017.patch, 
> HDFS-14118.018.patch, HDFS-14118.019.patch, HDFS-14118.020.patch, 
> HDFS-14118.021.patch, HDFS-14118.022.patch, HDFS-14118.023.patch, 
> HDFS-14118.024.patch, HDFS-14118.patch
>
>
> In router-based federation (RBF), clients need to know about routers to talk
> to the HDFS cluster, and any router update (adding/removing) requires a config
> change in every client, which is a painful process.
> DNS can be used here to resolve a single domain name, known to all clients, to
> the list of routers in the current config. However, DNS cannot by itself
> resolve only to healthy routers based on certain health thresholds.
> There are several ways this can be solved. One way is to have a separate
> script regularly check the status of each router and update the DNS records
> when a router fails the health thresholds; security would need to be carefully
> considered for this approach. Another way is to have the client do the normal
> connecting/failover after it gets the list of routers, which requires changing
> the current failover proxy provider.
> See the attached design document for details about the proposed solution.






[jira] [Created] (HDDS-1183) OzoneFileSystem needs to override delegation token APIs

2019-02-26 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-1183:


 Summary: OzoneFileSystem needs to override delegation token APIs
 Key: HDDS-1183
 URL: https://issues.apache.org/jira/browse/HDDS-1183
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This includes addDelegationToken/renewDelegationToken/cancelDelegationToken so
that MR jobs can collect tokens correctly at job submission time.
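
A hedged sketch of the general shape of such an override on a {{FileSystem}}
subclass, using {{FileSystem#getDelegationToken(String)}}, the base hook that
MR's token collection ultimately drives ({{fetchOzoneToken}} is a hypothetical
stand-in, not the actual patch or the real Ozone client API):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;

public abstract class SketchOzoneFileSystem extends FileSystem {
  /** Hypothetical hook into the Ozone client; not the real API. */
  protected abstract Token<?> fetchOzoneToken(Text renewer) throws IOException;

  // FileSystem#getDelegationToken(String) is what the MR job client calls
  // when collecting tokens at submission time.
  @Override
  public Token<?> getDelegationToken(String renewer) throws IOException {
    return fetchOzoneToken(new Text(renewer));
  }
}
{code}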






[jira] [Updated] (HDDS-1176) Fix storage of certificate in scm db

2019-02-26 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1176:
-
Status: Patch Available  (was: Open)

> Fix storage of certificate in scm db
> 
>
> Key: HDDS-1176
> URL: https://issues.apache.org/jira/browse/HDDS-1176
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1176.001.patch
>
>
> The put operation in {{SCMCertStore}} is failing due to an unregistered codec
> format. Either we can add a new codec format or store the PEM-encoded cert.
> The second option is preferable, I think, as everywhere else we are using the
> PEM-encoded string.






[jira] [Updated] (HDDS-1176) Fix storage of certificate in scm db

2019-02-26 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1176:
-
Attachment: HDDS-1176.001.patch

> Fix storage of certificate in scm db
> 
>
> Key: HDDS-1176
> URL: https://issues.apache.org/jira/browse/HDDS-1176
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1176.001.patch
>
>
> The put operation in {{SCMCertStore}} is failing due to an unregistered codec
> format. Either we can add a new codec format or store the PEM-encoded cert.
> The second option is preferable, I think, as everywhere else we are using the
> PEM-encoded string.






[jira] [Commented] (HDDS-1176) Fix storage of certificate in scm db

2019-02-26 Thread Xiaoyu Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778967#comment-16778967
 ] 

Xiaoyu Yao commented on HDDS-1176:
--

I think the problem is that the CodecRegistry is not flexible enough to handle
typed data with a class/subclass hierarchy. Here the base class is registered
with the CodecRegistry, but a subclass is passed in to serialize/deserialize,
where getClass() returns only the subclass.

Posted a simple patch that allows a subclass to use the base class registration
for serialization/deserialization.
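
To illustrate the mismatch, a minimal sketch (the registry shape and {{Codec}}
interface here are assumptions, not the actual Ozone CodecRegistry API):

{code:java}
// Codecs are keyed by the registered base class, but getClass() on a value
// yields its runtime subclass, so a plain map lookup misses. Walking up the
// superclass chain finds the base class registration.
import java.util.HashMap;
import java.util.Map;

public class CodecLookupSketch {
  interface Codec<T> { byte[] toPersistedFormat(T value); }

  private final Map<Class<?>, Codec<?>> codecs = new HashMap<>();

  <T> void register(Class<T> clazz, Codec<T> codec) {
    codecs.put(clazz, codec);
  }

  @SuppressWarnings("unchecked")
  <T> Codec<T> lookup(Class<?> runtimeClass) {
    // Start at the runtime (sub)class and walk toward the registered base.
    for (Class<?> c = runtimeClass; c != null; c = c.getSuperclass()) {
      Codec<?> codec = codecs.get(c);
      if (codec != null) {
        return (Codec<T>) codec;
      }
    }
    throw new IllegalStateException("No codec registered for " + runtimeClass);
  }
}
{code}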

> Fix storage of certificate in scm db
> 
>
> Key: HDDS-1176
> URL: https://issues.apache.org/jira/browse/HDDS-1176
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>
> The put operation in {{SCMCertStore}} is failing due to an unregistered codec
> format. Either we can add a new codec format or store the PEM-encoded cert.
> The second option is preferable, I think, as everywhere else we are using the
> PEM-encoded string.






[jira] [Assigned] (HDDS-1176) Fix storage of certificate in scm db

2019-02-26 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-1176:


Assignee: Xiaoyu Yao  (was: Anu Engineer)

> Fix storage of certificate in scm db
> 
>
> Key: HDDS-1176
> URL: https://issues.apache.org/jira/browse/HDDS-1176
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>
> The put operation in {{SCMCertStore}} is failing due to an unregistered codec
> format. Either we can add a new codec format or store the PEM-encoded cert.
> The second option is preferable, I think, as everywhere else we are using the
> PEM-encoded string.






[jira] [Commented] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778957#comment-16778957
 ] 

Hadoop QA commented on HDFS-14305:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 54s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 22 unchanged - 0 fixed = 23 total (was 22) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 32s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14305 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960285/HDFS-14305.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 73feb0720ef9 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 625e937 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26345/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26345/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-14111) hdfsOpenFile on HDFS causes unnecessary IO from file offset 0

2019-02-26 Thread Todd Lipcon (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778955#comment-16778955
 ] 

Todd Lipcon commented on HDFS-14111:


BTW does this also end up fixing HDFS-14083 along the way? It seems it would. 
Maybe we should resolve that one and point it here since the patch available 
there has some issues.

> hdfsOpenFile on HDFS causes unnecessary IO from file offset 0
> -
>
> Key: HDFS-14111
> URL: https://issues.apache.org/jira/browse/HDFS-14111
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client, libhdfs
>Affects Versions: 3.2.0
>Reporter: Todd Lipcon
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HDFS-14111.001.patch, HDFS-14111.002.patch
>
>
> hdfsOpenFile() calls readDirect() with a 0-length argument in order to check 
> whether the underlying stream supports bytebuffer reads. With DFSInputStream, 
> the read(0) isn't short circuited, and results in the DFSClient opening a 
> block reader. In the case of a remote block, the block reader will actually 
> issue a read of the whole block, causing the datanode to perform unnecessary 
> IO and network transfers in order to fill up the client's TCP buffers. This 
> causes performance degradation.






[jira] [Commented] (HDFS-14292) Introduce Java ExecutorService to DataXceiverServer

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778943#comment-16778943
 ] 

Hadoop QA commented on HDFS-14292:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
52s{color} | {color:green} hadoop-hdfs-project generated 0 new + 537 unchanged 
- 3 fixed = 537 total (was 540) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  9s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 
645 unchanged - 8 fixed = 649 total (was 653) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
52s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}146m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14292 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960287/HDFS-14292.8.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | 

[jira] [Updated] (HDFS-14322) RBF: Security manager should not load if security is disabled

2019-02-26 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14322:
---
Description: The security manager tries to instantiate the secret manager
irrespective of whether security is on or off. A simple optimization should be
made to avoid loading the secret manager when security is turned off.

> RBF: Security manager should not load if security is disabled
> -
>
> Key: HDFS-14322
> URL: https://issues.apache.org/jira/browse/HDFS-14322
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
>
> The security manager tries to instantiate the secret manager irrespective of
> whether security is on or off. A simple optimization should be made to avoid
> loading the secret manager when security is turned off.
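
A minimal sketch of the guard being proposed ({{createSecretManager}} is a
hypothetical stand-in, not the actual Router code; only
{{UserGroupInformation.isSecurityEnabled()}} is the real Hadoop API):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class SecretManagerGuardSketch {
  // Only create the secret manager when Hadoop security is enabled.
  static Object createSecretManagerIfNeeded(Configuration conf) {
    if (!UserGroupInformation.isSecurityEnabled()) {
      return null;  // security off: nothing to instantiate
    }
    return createSecretManager(conf);
  }

  // Hypothetical factory standing in for the real one.
  private static Object createSecretManager(Configuration conf) {
    throw new UnsupportedOperationException("stand-in for the real factory");
  }
}
{code}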






[jira] [Commented] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations

2019-02-26 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778926#comment-16778926
 ] 

CR Hota commented on HDFS-14316:


[~elgoiri] Thanks for your quick comment. Created HDFS-14322. I wonder why this
optimization isn't present in the current NameNode; it always loads the secret
manager even if security is turned off.

> RBF: Support unavailable subclusters for mount points with multiple 
> destinations
> 
>
> Key: HDFS-14316
> URL: https://issues.apache.org/jira/browse/HDFS-14316
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14316-HDFS-13891.000.patch, 
> HDFS-14316-HDFS-13891.001.patch
>
>
> Currently mount points with multiple destinations (e.g., HASH_ALL) fail 
> writes when the destination subcluster is down. We need an option to allow 
> writing in other subclusters when one is down.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14322) RBF: Security manager should not load if security is disabled

2019-02-26 Thread CR Hota (JIRA)
CR Hota created HDFS-14322:
--

 Summary: RBF: Security manager should not load if security is 
disabled
 Key: HDFS-14322
 URL: https://issues.apache.org/jira/browse/HDFS-14322
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: CR Hota
Assignee: CR Hota









[jira] [Commented] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations

2019-02-26 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778920#comment-16778920
 ] 

Íñigo Goiri commented on HDFS-14316:


Thanks [~crh], do you mind opening a JIRA to prevent the secret manager from 
loading if security is off? 

> RBF: Support unavailable subclusters for mount points with multiple 
> destinations
> 
>
> Key: HDFS-14316
> URL: https://issues.apache.org/jira/browse/HDFS-14316
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14316-HDFS-13891.000.patch, 
> HDFS-14316-HDFS-13891.001.patch
>
>
> Currently mount points with multiple destinations (e.g., HASH_ALL) fail 
> writes when the destination subcluster is down. We need an option to allow 
> writing in other subclusters when one is down.






[jira] [Commented] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations

2019-02-26 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778919#comment-16778919
 ] 

CR Hota commented on HDFS-14316:


[~elgoiri] Thanks for tagging me here.

In HDFS-13358, verbosity isn't suppressed; it just handles, in a cleaner way,
the NullPointerException thrown by hadoop-common
({{ZKDelegationTokenSecretManager}}). That exception stack is needed when users
have security enabled but haven't configured ZooKeeper correctly.

Re-looking at this ticket, it seems the best way to handle it is to not even
instantiate the secret manager if security is turned off. I can make a change
based on your thoughts.

> RBF: Support unavailable subclusters for mount points with multiple 
> destinations
> 
>
> Key: HDFS-14316
> URL: https://issues.apache.org/jira/browse/HDFS-14316
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14316-HDFS-13891.000.patch, 
> HDFS-14316-HDFS-13891.001.patch
>
>
> Currently mount points with multiple destinations (e.g., HASH_ALL) fail 
> writes when the destination subcluster is down. We need an option to allow 
> writing in other subclusters when one is down.






[jira] [Commented] (HDFS-14314) fullBlockReportLeaseId should be reset after registering to NN

2019-02-26 Thread star (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778907#comment-16778907
 ] 

star commented on HDFS-14314:
-

Still an unexpected error! What's next?

> fullBlockReportLeaseId should be reset after registering to NN
> --
>
> Key: HDFS-14314
> URL: https://issues.apache.org/jira/browse/HDFS-14314
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.8.4
> Environment:  
>  
>  
>Reporter: star
>Priority: Critical
> Fix For: 2.8.4
>
> Attachments: HDFS-14314-trunk.001.patch, HDFS-14314-trunk.001.patch, 
> HDFS-14314-trunk.002.patch, HDFS-14314.0.patch, HDFS-14314.2.patch, 
> HDFS-14314.patch
>
>
>   Since HDFS-7923, to rate-limit DN block reports, a DN asks the active NN
> for a full block report lease id before sending a full block report, and then
> sends the report together with that lease id. If the lease id is invalid, the
> NN rejects the full block report and logs "not in the pending set".
>   Consider the case where a DN is doing a full block report while the NN
> restarts. The DN will later send a full block report with a lease id acquired
> from the previous NN instance, which is invalid to the new NN instance. Though
> the DN recognizes the new NN instance via heartbeat and re-registers itself,
> it does not reset the lease id from the previous instance.
>   This issue may cause DNs to temporarily go dead, making it unsafe to
> restart the NN, especially in Hadoop clusters with a large number of DNs.
> HDFS-12914 reported the issue without any clues as to why it occurred, and it
> remains unsolved.
>   To make it clear, look at the code below, taken from the method
> offerService of class BPServiceActor (some code is elided to focus on the
> current issue). fullBlockReportLeaseId is a local variable holding the lease
> id from the NN. An exception occurs at the blockReport call when the NN is
> restarting and is caught by the catch block in the while loop, so
> fullBlockReportLeaseId is not reset to 0. After the NN restarts, the DN sends
> a full block report that the new NN instance rejects, and it will not send
> another until the next scheduled full block report, about an hour later.
>   The solution is simple: just reset fullBlockReportLeaseId to 0 after any
> exception or after registering to the NN, so that the DN asks the new NN
> instance for a valid fullBlockReportLeaseId.
> {code:java}
> private void offerService() throws Exception {
>   long fullBlockReportLeaseId = 0;
>   //
>   // Now loop for a long time
>   //
>   while (shouldRun()) {
> try {
>   final long startTime = scheduler.monotonicNow();
>   //
>   // Every so often, send heartbeat or block-report
>   //
>   final boolean sendHeartbeat = scheduler.isHeartbeatDue(startTime);
>   HeartbeatResponse resp = null;
>   if (sendHeartbeat) {
>   
> boolean requestBlockReportLease = (fullBlockReportLeaseId == 0) &&
> scheduler.isBlockReportDue(startTime);
> scheduler.scheduleNextHeartbeat();
> if (!dn.areHeartbeatsDisabledForTests()) {
>   resp = sendHeartBeat(requestBlockReportLease);
>   assert resp != null;
>   if (resp.getFullBlockReportLeaseId() != 0) {
> if (fullBlockReportLeaseId != 0) {
>   LOG.warn(nnAddr + " sent back a full block report lease " +
>   "ID of 0x" +
>   Long.toHexString(resp.getFullBlockReportLeaseId()) +
>   ", but we already have a lease ID of 0x" +
>   Long.toHexString(fullBlockReportLeaseId) + ". " +
>   "Overwriting old lease ID.");
> }
> fullBlockReportLeaseId = resp.getFullBlockReportLeaseId();
>   }
>  
> }
>   }
>
>  
>   if ((fullBlockReportLeaseId != 0) || forceFullBr) {
> //Exception occurred here when NN restarting
> cmds = blockReport(fullBlockReportLeaseId);
> fullBlockReportLeaseId = 0;
>   }
>   
> } catch(RemoteException re) {
>   
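>   // Proposed fix, per the description above: reset the lease id here
>   // after the exception, so the DN asks the new NN for a fresh one:
>   // fullBlockReportLeaseId = 0;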
>   } // while (shouldRun())
> } // offerService{code}
>  






[jira] [Commented] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778906#comment-16778906
 ] 

Hadoop QA commented on HDFS-14316:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 5s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
8s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
31s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
7s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
39s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
36s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  4m  
6s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  4m  6s{color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  4m  6s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 17s{color} | {color:orange} root: The patch generated 4 new + 376 unchanged 
- 0 fixed = 380 total (was 376) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
56s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 43s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}213m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14316 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960267/HDFS-14316-HDFS-13891.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  

[jira] [Commented] (HDFS-14205) Backport HDFS-6440 to branch-2

2019-02-26 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778876#comment-16778876
 ] 

Chao Sun commented on HDFS-14205:
-

Thanks [~vagarychen]. Jenkins is working now. I'll investigate the failed unit 
tests.

> Backport HDFS-6440 to branch-2
> --
>
> Key: HDFS-14205
> URL: https://issues.apache.org/jira/browse/HDFS-14205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14205-branch-2.001.patch, 
> HDFS-14205-branch-2.002.patch, HDFS-14205-branch-2.003.patch
>
>
> Currently, support for more than 2 NameNodes (HDFS-6440) is only in branch-3.
> This JIRA aims to backport it to branch-2, as it is required by the HDFS-12943
> (consistent read from standby) backport to branch-2.






[jira] [Updated] (HDDS-1162) Data Scrubbing for Ozone Containers

2019-02-26 Thread Arpit Agarwal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-1162:

Description: 
General Reference:
 [https://en.wikipedia.org/wiki/Data_scrubbing]

  was:
General Reference:
https://en.wikipedia.org/wiki/Data_scrubbing

The internal Design Document for Ozone Container Scanning is attached. Also 
link below:
https://docs.google.com/document/d/1LLGCuGq7n3TVlJcxyUR1LgUhwqC5WB_OwRqQkRw3Eng/edit#

 


> Data Scrubbing for Ozone Containers
> ---
>
> Key: HDDS-1162
> URL: https://issues.apache.org/jira/browse/HDDS-1162
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Ozone Datanode, SCM
>Reporter: Supratim Deka
>Priority: Major
> Attachments: Container Scanner Redux.docx
>
>
> General Reference:
>  [https://en.wikipedia.org/wiki/Data_scrubbing]






[jira] [Updated] (HDDS-453) OM and SCM should use picocli to parse arguments

2019-02-26 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-453:
-
Summary: OM and SCM should use picocli to parse arguments  (was: OM and SCM 
should use piccoli to parse arguments)

> OM and SCM should use picocli to parse arguments
> 
>
> Key: HDDS-453
> URL: https://issues.apache.org/jira/browse/HDDS-453
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager, SCM
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: alpha2, newbie
>
> SCM and OM can use picocli to parse command-line arguments.
> Suggested in HDDS-415 by [~anu].






[jira] [Commented] (HDDS-453) OM and SCM should use piccoli to parse arguments

2019-02-26 Thread Remko Popma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778860#comment-16778860
 ] 

Remko Popma commented on HDDS-453:
--

Would you mind changing the title to fix the typo in ‘picocli’?

> OM and SCM should use piccoli to parse arguments
> 
>
> Key: HDDS-453
> URL: https://issues.apache.org/jira/browse/HDDS-453
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager, SCM
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: alpha2, newbie
>
> SCM and OM can use picocli to parse command-line arguments.
> Suggested in HDDS-415 by [~anu].






[jira] [Updated] (HDFS-14292) Introduce Java ExecutorService to DataXceiverServer

2019-02-26 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14292:
---
Attachment: HDFS-14292.8.patch

> Introduce Java ExecutorService to DataXceiverServer
> ---
>
> Key: HDFS-14292
> URL: https://issues.apache.org/jira/browse/HDFS-14292
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-14292.1.patch, HDFS-14292.2.patch, 
> HDFS-14292.3.patch, HDFS-14292.4.patch, HDFS-14292.5.patch, 
> HDFS-14292.6.patch, HDFS-14292.7.patch, HDFS-14292.8.patch, 
> HDFS-14292.8.patch, HDFS-14295.8.patch
>
>
> I wanted to investigate {{dfs.datanode.max.transfer.threads}} from 
> {{hdfs-site.xml}}.  It is described as "Specifies the maximum number of 
> threads to use for transferring data in and out of the DN."   The default 
> value is 4096.  I found it interesting because 4096 threads sounds like a lot 
> to me.  I'm not sure how a system with 8-16 cores would react to this large a 
> thread count.  Intuitively, I would say that the overhead of context 
> switching would be immense.
> During my investigation, I discovered the 
> [following|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java#L203-L216]
>  setup in the {{DataXceiverServer}} class:
> # A peer connects to a DataNode
> # A new thread is spun up to service this connection
> # The thread runs to completion
> # The thread dies
> It would perhaps be better if we used a thread pool to better manage the 
> lifecycle of the service threads and to allow the DataNode to re-use existing 
> threads, saving on the need to create and spin-up threads on demand.
> In this JIRA, I have added a couple of things:
> # Added a thread pool to the {{DataXceiverServer}} class that, on demand, will
> create up to {{dfs.datanode.max.transfer.threads}} threads. A thread that has
> completed its prior duties will stay idle for up to 60 seconds (configurable);
> it is retired if no new work has arrived (see the sketch below).
> # Added new methods to the {{Peer}} Interface to allow for better logging and 
> less code within each Thread ({{DataXceiver}}).
> # Updated the Thread code ({{DataXceiver}}) regarding its interactions with 
> {{blockReceiver}} instance variable
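
As a reference for item 1 of the description, here is a minimal sketch of that
pooling scheme (assumed values; not the actual {{DataXceiverServer}} change):

{code:java}
// No core threads, a cap matching dfs.datanode.max.transfer.threads
// (default 4096), and a 60-second idle timeout after which workers retire.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class XceiverPoolSketch {
  public static ExecutorService newXceiverPool(int maxTransferThreads) {
    return new ThreadPoolExecutor(
        0,                        // no threads kept when fully idle
        maxTransferThreads,       // e.g. dfs.datanode.max.transfer.threads
        60L, TimeUnit.SECONDS,    // idle threads retired after 60 seconds
        new SynchronousQueue<Runnable>());  // hand tasks directly to threads
  }
}
{code}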






[jira] [Updated] (HDFS-14292) Introduce Java ExecutorService to DataXceiverServer

2019-02-26 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14292:
---
Status: Open  (was: Patch Available)

> Introduce Java ExecutorService to DataXceiverServer
> ---
>
> Key: HDFS-14292
> URL: https://issues.apache.org/jira/browse/HDFS-14292
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-14292.1.patch, HDFS-14292.2.patch, 
> HDFS-14292.3.patch, HDFS-14292.4.patch, HDFS-14292.5.patch, 
> HDFS-14292.6.patch, HDFS-14292.7.patch, HDFS-14292.8.patch, 
> HDFS-14292.8.patch, HDFS-14295.8.patch
>
>
> I wanted to investigate {{dfs.datanode.max.transfer.threads}} from 
> {{hdfs-site.xml}}.  It is described as "Specifies the maximum number of 
> threads to use for transferring data in and out of the DN."   The default 
> value is 4096.  I found it interesting because 4096 threads sounds like a lot 
> to me.  I'm not sure how a system with 8-16 cores would react to this large a 
> thread count.  Intuitively, I would say that the overhead of context 
> switching would be immense.
> During my investigation, I discovered the 
> [following|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java#L203-L216]
>  setup in the {{DataXceiverServer}} class:
> # A peer connects to a DataNode
> # A new thread is spun up to service this connection
> # The thread runs to completion
> # The thread dies
> It would perhaps be better if we used a thread pool to better manage the 
> lifecycle of the service threads and to allow the DataNode to re-use existing 
> threads, saving on the need to create and spin-up threads on demand.
> In this JIRA, I have added a couple of things:
> # Added a thread pool to the {{DataXceiverServer}} class that, on demand, will
> create up to {{dfs.datanode.max.transfer.threads}} threads. A thread that has
> completed its prior duties will stay idle for up to 60 seconds (configurable);
> it is retired if no new work has arrived.
> # Added new methods to the {{Peer}} Interface to allow for better logging and 
> less code within each Thread ({{DataXceiver}}).
> # Updated the Thread code ({{DataXceiver}}) regarding its interactions with 
> {{blockReceiver}} instance variable






[jira] [Updated] (HDFS-14292) Introduce Java ExecutorService to DataXceiverServer

2019-02-26 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HDFS-14292:
---
Status: Patch Available  (was: Open)

> Introduce Java ExecutorService to DataXceiverServer
> ---
>
> Key: HDFS-14292
> URL: https://issues.apache.org/jira/browse/HDFS-14292
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-14292.1.patch, HDFS-14292.2.patch, 
> HDFS-14292.3.patch, HDFS-14292.4.patch, HDFS-14292.5.patch, 
> HDFS-14292.6.patch, HDFS-14292.7.patch, HDFS-14292.8.patch, 
> HDFS-14292.8.patch, HDFS-14295.8.patch
>
>
> I wanted to investigate {{dfs.datanode.max.transfer.threads}} from 
> {{hdfs-site.xml}}.  It is described as "Specifies the maximum number of 
> threads to use for transferring data in and out of the DN."   The default 
> value is 4096.  I found it interesting because 4096 threads sounds like a lot 
> to me.  I'm not sure how a system with 8-16 cores would react to this large a 
> thread count.  Intuitively, I would say that the overhead of context 
> switching would be immense.
> During my investigation, I discovered the 
> [following|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java#L203-L216]
>  setup in the {{DataXceiverServer}} class:
> # A peer connects to a DataNode
> # A new thread is spun up to service this connection
> # The thread runs to completion
> # The thread dies
> It would perhaps be better if we used a thread pool to better manage the 
> lifecycle of the service threads and to allow the DataNode to re-use existing 
> threads, saving on the need to create and spin-up threads on demand.
> In this JIRA, I have added a few things:
> # Added a thread pool to the {{DataXceiverServer}} class that, on demand, will 
> create up to {{dfs.datanode.max.transfer.threads}} threads.  A thread that has 
> completed its prior duties will stay idle for up to 60 seconds 
> (configurable); it will be retired if no new work has arrived.
> # Added new methods to the {{Peer}} Interface to allow for better logging and 
> less code within each Thread ({{DataXceiver}}).
> # Updated the Thread code ({{DataXceiver}}) regarding its interactions with 
> the {{blockReceiver}} instance variable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14295) Add Threadpool for DataTransfers

2019-02-26 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778844#comment-16778844
 ] 

BELUGA BEHR commented on HDFS-14295:


[~elgoiri]

So, with this patch, the threads are still Daemon threads.

{code:java}
this.xferService =
    Executors.newCachedThreadPool(new Daemon.DaemonFactory());
{code}

I think it is most polite to give the threads some time to finish.  Yes, 
they are daemon threads and therefore will not prevent the JVM from exiting, 
but it still puts some stress on the active clients, as they will have to 
perform a retry once their service thread is terminated.  I think as-is is the 
best approach: block new requests but allow existing client tasks some glimmer 
of hope of succeeding.
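
The usual JDK idiom for that kind of two-phase shutdown is roughly the 
following (a sketch only; the 30-second grace period here is illustrative, 
not necessarily the value used in the patch):
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

// Stop accepting new transfers, but give in-flight ones a grace period
// before interrupting whatever is left.
pool.shutdown();
try {
  if (!pool.awaitTermination(30, TimeUnit.SECONDS)) {
    pool.shutdownNow();  // interrupt the stragglers
  }
} catch (InterruptedException e) {
  pool.shutdownNow();
  Thread.currentThread().interrupt();
}
{code}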

> Add Threadpool for DataTransfers
> 
>
> Key: HDFS-14295
> URL: https://issues.apache.org/jira/browse/HDFS-14295
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-14295.1.patch, HDFS-14295.2.patch, 
> HDFS-14295.3.patch, HDFS-14295.4.patch, HDFS-14295.5.patch, 
> HDFS-14295.6.patch, HDFS-14295.7.patch, HDFS-14295.8.patch
>
>
> When a DataNode transfers a block, it spins up a new thread for each 
> transfer.  
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339]
>  and 
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022].
>    Instead, submit the work to a {{CachedThreadPool}} so that when a thread 
> completes a transfer, it can be re-used for another transfer. This 
> should save resources spent on creating and spinning up transfer threads.
> One thing I'll point out that's a bit off, which I address in this patch, ...
> There are two places in the code where a {{DataTransfer}} thread is started. 
> In [one 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339-L2341],
>  it's started in a default thread group. In [another 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022],
>  it's started in the 
> [dataXceiverServer|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L1164]
>  thread group.
> I do not think it's correct to include any of these threads in the 
> {{dataXceiverServer}} thread group. Anything submitted to the 
> {{dataXceiverServer}} should probably be tied to the 
> {{dfs.datanode.max.transfer.threads}} configuration, and neither of these 
> methods is. Instead, they should be submitted into the same thread pool with 
> its own thread group (probably the default thread group, unless someone 
> suggests otherwise), which is what I have included in this patch.
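> 
> As a rough illustration of the submission side (a sketch only; the names 
> {{xferService}} and {{dataTransfer}} are illustrative, not the exact fields 
> in the patch):
> {code:java}
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> 
> // One shared cached pool for all DataTransfer work, so a worker that
> // finishes a transfer can be re-used for the next one instead of a
> // fresh thread being spawned per transfer.
> ExecutorService xferService = Executors.newCachedThreadPool();
> 
> // At each of the two call sites, instead of starting a new thread:
> xferService.execute(dataTransfer);  // the existing DataTransfer Runnable
> {code}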



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-02-26 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778835#comment-16778835
 ] 

He Xiaoqiao commented on HDFS-14305:


Thanks [~vagarychen], [~csun], [~xkrogen] for your comments. I have updated and 
uploaded a new patch, [^HDFS-14305.005.patch]; pending Jenkins.

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Reporter: Chao Sun
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14305.001.patch, HDFS-14305.002.patch, 
> HDFS-14305.003.patch, HDFS-14305.004.patch, HDFS-14305.005.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then uses this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> where {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNodes could have overlapping ranges 
> for serial number. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key, which 
> will cause clients to fail with an {{InvalidToken}} error.
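> 
> A minimal sketch that reproduces the overlap with the toy value above 
> (MAX = 100 standing in for {{Integer.MAX_VALUE}}):
> {code:java}
> int max = 100, numNNs = 2;
> int intRange = max / numNNs;               // 50
> for (int nnIndex = 0; nnIndex < numNNs; nnIndex++) {
>   int nnRangeStart = intRange * nnIndex;   // 0 for nn1, 50 for nn2
>   // serialNo % intRange lies in [-49, 49] because serialNo may be negative,
>   // so nn1 covers [-49, 49] and nn2 covers [1, 99]: they overlap in [1, 49].
>   System.out.println("nn" + (nnIndex + 1) + " -> ["
>       + (nnRangeStart - (intRange - 1)) + ", "
>       + (nnRangeStart + (intRange - 1)) + "]");
> }
> {code}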



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-02-26 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HDFS-14305:
---
Attachment: HDFS-14305.005.patch

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Reporter: Chao Sun
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14305.001.patch, HDFS-14305.002.patch, 
> HDFS-14305.003.patch, HDFS-14305.004.patch, HDFS-14305.005.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then uses this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> where {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNodes could have overlapping ranges 
> for serial number. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key, which 
> will cause clients to fail with an {{InvalidToken}} error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1177) Add validation to AuthorizationHeaderV4

2019-02-26 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778829#comment-16778829
 ] 

Hudson commented on HDDS-1177:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16075 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16075/])
HDDS-1177. Add validation to AuthorizationHeaderV4. Contributed by Ajay (ajay: 
rev 625e93713b5f8e0ec87db8bd8e855d5ab0f6e6d5)
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/header/AuthorizationHeaderV4.java
* (edit) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/header/Credential.java
* (add) 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/header/AWSConstants.java
* (edit) 
hadoop-ozone/s3gateway/src/test/java/org/apache/hadoop/ozone/s3/header/TestAuthorizationHeaderV4.java


> Add validation to AuthorizationHeaderV4
> ---
>
> Key: HDDS-1177
> URL: https://issues.apache.org/jira/browse/HDDS-1177
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1177.00.patch, HDDS-1177.01.patch, 
> HDDS-1177.02.patch
>
>
> Add validation to AuthorizationHeaderV4. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14205) Backport HDFS-6440 to branch-2

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778822#comment-16778822
 ] 

Hadoop QA commented on HDFS-14205:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 25 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
10s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
8s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
35s{color} | {color:green} branch-2 passed with JDK v1.8.0_191 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
38s{color} | {color:green} branch-2 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
48s{color} | {color:red} hadoop-common-project/hadoop-common in branch-2 has 1 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
10s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in branch-2 has 1 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
46s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} branch-2 passed with JDK v1.8.0_191 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
29s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 29s{color} 
| {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 1 new + 1442 
unchanged - 1 fixed = 1443 total (was 1443) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
18s{color} | {color:green} the patch passed with JDK v1.8.0_191 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 59s{color} | {color:orange} root: The patch generated 226 new + 1142 
unchanged - 51 fixed = 1368 total (was 1193) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed with JDK v1.8.0_191 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
15s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 50s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 11s{color} 
| {color:red} bkjournal in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense 

[jira] [Updated] (HDDS-1177) Add validation to AuthorizationHeaderV4

2019-02-26 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1177:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~anu], thanks for the reviews.

> Add validation to AuthorizationHeaderV4
> ---
>
> Key: HDDS-1177
> URL: https://issues.apache.org/jira/browse/HDDS-1177
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1177.00.patch, HDDS-1177.01.patch, 
> HDDS-1177.02.patch
>
>
> Add validation to AuthorizationHeaderV4. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1177) Add validation to AuthorizationHeaderV4

2019-02-26 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HDDS-1177:


Assignee: Ajay Kumar  (was: Anu Engineer)

> Add validation to AuthorizationHeaderV4
> ---
>
> Key: HDDS-1177
> URL: https://issues.apache.org/jira/browse/HDDS-1177
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1177.00.patch, HDDS-1177.01.patch, 
> HDDS-1177.02.patch
>
>
> Add validation to AuthorizationHeaderV4. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14272) [SBN read] ObserverReadProxyProvider should sync with active txnID on startup

2019-02-26 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778808#comment-16778808
 ] 

Konstantin Shvachko edited comment on HDFS-14272 at 2/27/19 2:53 AM:
-

I see now that you use {{failoverProxy}} as a substitute for active NN, which 
brings us to my second concern in the [comment 
above|https://issues.apache.org/jira/browse/HDFS-14272?focusedCommentId=16773554=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16773554]
 - the "side note", which seems to be the main issue.
 I argue that {{failoverProxy}} should be used only in {{performFailover()}}, 
and not as a substitute for ANN anywhere else. In your approach both writes and 
{{msync()}} are called on {{failoverProxy}}, which triggers failover in the 
underlying proxy CFPP or IPFPP. I think ORPP should enforce its autonomous 
knowledge of the states of NameNodes rather than mixing it with the states 
known to underlying proxies. If ORPP knows that n1 is active, it should call a 
write or an {{msync()}} on n1. If n1 is not active anymore, then it will throw 
an exception, which triggers failover - the ORPP failover.
 Your approach mixes two failovers: the underlying proxy failover is used for 
writes, but for reads it uses ORPP failover. This doesn't seem consistent to me.
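
In pseudo-code, the behavior I am arguing for is roughly the following 
(hypothetical names, just to illustrate; these are not the actual ORPP fields):
{code:java}
try {
  activeProxy.msync();           // the NN that ORPP itself believes is active
} catch (IOException e) {
  performFailover(activeProxy);  // ORPP's own failover, not the underlying proxy's
  getActiveProxy().msync();      // retry against the new active
}
{code}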


was (Author: shv):
I see now that you use {{failoverProxy}} as a substitute for active NN, which 
brings us to my second concern in the [comment 
above|https://issues.apache.org/jira/browse/HDFS-14272?focusedCommentId=16773554=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16773554]
 - the "side note", which seems to be the main issue.
 I argue that {{failoverProxy}} should be used only in {{performFailover()}}, 
and not as a substitute for ANN anywhere else. In your approach both writes and 
{{msync()}} are called on {{failoverProxy}}, which triggers failover in the 
underlying proxy CFPP or IPFPP. I think ORPP should enforce its autonomous 
knowledge of the states of NameNodes rather than mixing it with the states 
known to underlying proxies. If ORPP knows that n1 is active, it should call a 
write or an {{msync()}} on n1. If n1 is not active anymore, then it will throw 
an exception, which triggers failover.
 Your approach mixes two failovers: the underlying proxy failover is used for 
writes, but for reads it uses ORPP failover. This doesn't seem consistent to me.

> [SBN read] ObserverReadProxyProvider should sync with active txnID on startup
> -
>
> Key: HDFS-14272
> URL: https://issues.apache.org/jira/browse/HDFS-14272
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
> Environment: CDH6.1 (Hadoop 3.0.x) + Consistency Reads from Standby + 
> SSL + Kerberos + RPC encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-14272.000.patch, HDFS-14272.001.patch, 
> HDFS-14272.002.patch
>
>
> It is typical for integration tests to create some files and then check their 
> existence. For example, consider the following simple bash script:
> {code:java}
> # hdfs dfs -touchz /tmp/abc
> # hdfs dfs -ls /tmp/abc
> {code}
> The test executes the HDFS bash commands sequentially, but it may fail with 
> Consistent Standby Read because the -ls does not find the file.
> Analysis: the second bash command, while launched sequentially after the 
> first one, is not aware of the state id returned from the first bash command. 
> So the ObserverNode wouldn't wait for the edits to get propagated, and the 
> read thus fails.
> I've got a cluster where the Observer has tens of seconds of RPC latency, and 
> this becomes very annoying. (I am still trying to figure out why this 
> Observer has such a long RPC latency. But that's another story.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-453) om and scm should use piccoli to parse arguments

2019-02-26 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDDS-453:
-

Assignee: Anu Engineer  (was: Vinicius Higa Murakami)

> om and scm should use piccoli to parse arguments
> 
>
> Key: HDDS-453
> URL: https://issues.apache.org/jira/browse/HDDS-453
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager, SCM
>Reporter: Arpit Agarwal
>Assignee: Anu Engineer
>Priority: Major
>  Labels: alpha2, newbie
>
> SCM and OM can use picocli to parse command-line arguments.
> Suggested in HDDS-415 by [~anu].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14272) [SBN read] ObserverReadProxyProvider should sync with active txnID on startup

2019-02-26 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778808#comment-16778808
 ] 

Konstantin Shvachko edited comment on HDFS-14272 at 2/27/19 2:49 AM:
-

I see now that you use {{failoverProxy}} as a substitute for active NN, which 
brings us to my second concern in the [comment 
above|https://issues.apache.org/jira/browse/HDFS-14272?focusedCommentId=16773554=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16773554]
 - the "side note", which seems to be the main issue.
 I argue that {{failoverProxy}} should be used only in {{performFailover()}}, 
and not as a substitute for ANN anywhere else. In your approach both writes and 
{{msync()}} are called on {{failoverProxy}}, which triggers failover in the 
underlying proxy CFPP or IPFPP. I think ORPP should enforce its autonomous 
knowledge of the states of NameNodes rather than mixing it with the states 
known to underlying proxies. If ORPP knows that n1 is active, it should call a 
write or an {{msync()}} on n1. If n1 is not active anymore, then it will throw 
an exception, which triggers failover.
 Your approach mixes two failovers: the underlying proxy failover is used for 
writes, but for reads it uses ORPP failover. This doesn't seem consistent to me.


was (Author: shv):
I see now that you use {{failoverProxy}} as a substitute for active NN, which 
brings us to my second concern in the comment above - the "side note", which 
seems to be the main issue.
 I argue that {{failoverProxy}} should be used only in {{performFailover()}}, 
and not as a substitute for ANN anywhere else. In your approach both writes and 
{{msync()}} are called on {{failoverProxy}}, which triggers failover in the 
underlying proxy CFPP or IPFPP. I think ORPP should enforce its autonomous 
knowledge of the states of NameNodes rather than mixing it with the states 
known to underlying proxies. If ORPP knows that n1 is active, it should call a 
write or an {{msync()}} on n1. If n1 is not active anymore, then it will throw 
an exception, which triggers failover.
 Your approach mixes two failovers: the underlying proxy failover is used for 
writes, but for reads it uses ORPP failover. This doesn't seem consistent to me.

> [SBN read] ObserverReadProxyProvider should sync with active txnID on startup
> -
>
> Key: HDFS-14272
> URL: https://issues.apache.org/jira/browse/HDFS-14272
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
> Environment: CDH6.1 (Hadoop 3.0.x) + Consistency Reads from Standby + 
> SSL + Kerberos + RPC encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-14272.000.patch, HDFS-14272.001.patch, 
> HDFS-14272.002.patch
>
>
> It is typical for integration tests to create some files and then check their 
> existence. For example, consider the following simple bash script:
> {code:java}
> # hdfs dfs -touchz /tmp/abc
> # hdfs dfs -ls /tmp/abc
> {code}
> The test executes the HDFS bash commands sequentially, but it may fail with 
> Consistent Standby Read because the -ls does not find the file.
> Analysis: the second bash command, while launched sequentially after the 
> first one, is not aware of the state id returned from the first bash command. 
> So the ObserverNode wouldn't wait for the edits to get propagated, and the 
> read thus fails.
> I've got a cluster where the Observer has tens of seconds of RPC latency, and 
> this becomes very annoying. (I am still trying to figure out why this 
> Observer has such a long RPC latency. But that's another story.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14272) [SBN read] ObserverReadProxyProvider should sync with active txnID on startup

2019-02-26 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778808#comment-16778808
 ] 

Konstantin Shvachko commented on HDFS-14272:


I see now that you use {{failoverProxy}} as a substitute for active NN, which 
brings us to my second concern in the comment above - the "side note", which 
seems to be the main issue.
 I argue that {{failoverProxy}} should be used only in {{performFailover()}}, 
and not as a substitute for ANN anywhere else. In your approach both writes and 
{{msync()}} are called on {{failoverProxy}}, which triggers failover in the 
underlying proxy CFPP or IPFPP. I think ORPP should enforce its autonomous 
knowledge of the states of NameNodes rather than mixing it with the states 
known to underlying proxies. If ORPP knows that n1 is active, it should call a 
write or an {{msync()}} on n1. If n1 is not active anymore, then it will throw 
an exception, which triggers failover.
 Your approach mixes two failovers: the underlying proxy failover is used for 
writes, but for reads it uses ORPP failover. This doesn't seem consistent to me.

> [SBN read] ObserverReadProxyProvider should sync with active txnID on startup
> -
>
> Key: HDFS-14272
> URL: https://issues.apache.org/jira/browse/HDFS-14272
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
> Environment: CDH6.1 (Hadoop 3.0.x) + Consistency Reads from Standby + 
> SSL + Kerberos + RPC encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-14272.000.patch, HDFS-14272.001.patch, 
> HDFS-14272.002.patch
>
>
> It is typical for integration tests to create some files and then check their 
> existence. For example, consider the following simple bash script:
> {code:java}
> # hdfs dfs -touchz /tmp/abc
> # hdfs dfs -ls /tmp/abc
> {code}
> The test executes the HDFS bash commands sequentially, but it may fail with 
> Consistent Standby Read because the -ls does not find the file.
> Analysis: the second bash command, while launched sequentially after the 
> first one, is not aware of the state id returned from the first bash command. 
> So the ObserverNode wouldn't wait for the edits to get propagated, and the 
> read thus fails.
> I've got a cluster where the Observer has tens of seconds of RPC latency, and 
> this becomes very annoying. (I am still trying to figure out why this 
> Observer has such a long RPC latency. But that's another story.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-453) Om and Scm should use piccoli to parse arguments

2019-02-26 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-453:
-
Summary: Om and Scm should use piccoli to parse arguments  (was: om and scm 
should use piccoli to parse arguments)

> Om and Scm should use piccoli to parse arguments
> 
>
> Key: HDDS-453
> URL: https://issues.apache.org/jira/browse/HDDS-453
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager, SCM
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: alpha2, newbie
>
> SCM and OM can use picocli to parse command-line arguments.
> Suggested in HDDS-415 by [~anu].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-453) OM and SCM should use piccoli to parse arguments

2019-02-26 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-453:
-
Summary: OM and SCM should use piccoli to parse arguments  (was: Om and Scm 
should use piccoli to parse arguments)

> OM and SCM should use piccoli to parse arguments
> 
>
> Key: HDDS-453
> URL: https://issues.apache.org/jira/browse/HDDS-453
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager, SCM
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: alpha2, newbie
>
> SCM and OM can use picocli to parse command-line arguments.
> Suggested in HDDS-415 by [~anu].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-792) Use md5 hash as ETag for Ozone S3 objects

2019-02-26 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778797#comment-16778797
 ] 

Anu Engineer commented on HDDS-792:
---

We already have checksums on blocks; do you think we might be able to reuse them?

> Use md5 hash as ETag for Ozone S3 objects
> -
>
> Key: HDDS-792
> URL: https://issues.apache.org/jira/browse/HDDS-792
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Elek, Marton
>Priority: Major
>
> AWS S3 uses the md5 hash of a file as its ETag. 
> Not a strict requirement, but the s3 tests (https://github.com/gaul/s3-tests/) 
> cannot be executed without it.
> It requires support for custom key/value annotations on key objects.
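> 
> A minimal sketch of the single-part case (an assumption on my side: for 
> single-part uploads, S3's ETag is just the MD5 of the object bytes rendered 
> as lowercase hex; multipart ETags are computed differently):
> {code:java}
> import java.security.MessageDigest;
> import java.security.NoSuchAlgorithmException;
> 
> static String etagOf(byte[] data) throws NoSuchAlgorithmException {
>   byte[] digest = MessageDigest.getInstance("MD5").digest(data);
>   StringBuilder hex = new StringBuilder();
>   for (byte b : digest) {
>     hex.append(String.format("%02x", b & 0xff));  // unsigned, two hex digits
>   }
>   return hex.toString();
> }
> {code}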



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14182) Datanode usage histogram is clicked to show ip list

2019-02-26 Thread fengchuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fengchuang updated HDFS-14182:
--
Attachment: scroll.jpeg

> Datanode usage histogram is clicked to show ip list
> ---
>
> Key: HDFS-14182
> URL: https://issues.apache.org/jira/browse/HDFS-14182
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fengchuang
>Assignee: fengchuang
>Priority: Major
> Attachments: HDFS-14182.001.patch, HDFS-14182.002.patch, 
> HDFS-14182.003.patch, scroll.jpeg, showip.jpeg
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14182) Datanode usage histogram is clicked to show ip list

2019-02-26 Thread fengchuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778796#comment-16778796
 ] 

fengchuang commented on HDFS-14182:
---

[~linyiqun] I uploaded a scrolling picture of more than two thousand nodes 
on Mac OS.

> Datanode usage histogram is clicked to show ip list
> ---
>
> Key: HDFS-14182
> URL: https://issues.apache.org/jira/browse/HDFS-14182
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fengchuang
>Assignee: fengchuang
>Priority: Major
> Attachments: HDFS-14182.001.patch, HDFS-14182.002.patch, 
> HDFS-14182.003.patch, scroll.jpeg, showip.jpeg
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14182) Datanode usage histogram is clicked to show ip list

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778795#comment-16778795
 ] 

Hadoop QA commented on HDFS-14182:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 18s{color} 
| {color:red} HDFS-14182 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14182 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26344/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Datanode usage histogram is clicked to show ip list
> ---
>
> Key: HDFS-14182
> URL: https://issues.apache.org/jira/browse/HDFS-14182
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: fengchuang
>Assignee: fengchuang
>Priority: Major
> Attachments: HDFS-14182.001.patch, HDFS-14182.002.patch, 
> HDFS-14182.003.patch, scroll.jpeg, showip.jpeg
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14247) Repeat adding node description into network topology

2019-02-26 Thread HuangTao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778789#comment-16778789
 ] 

HuangTao edited comment on HDFS-14247 at 2/27/19 2:18 AM:
--

HDFS-14245 failed on TestBPOfferService too.


was (Author: marvelrock):
#HDFS-14245 failed on TestBPOfferService too.

> Repeat adding node description into network topology
> 
>
> Key: HDFS-14247
> URL: https://issues.apache.org/jira/browse/HDFS-14247
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Priority: Minor
> Attachments: HDFS-14247.001.patch
>
>
> I found duplicate code that adds nodeDescr to the network topology in 
> DatanodeManager.java#registerDatanode.
> It first calls networktopology.add(nodeDescr), and then calls 
> addDatanode(nodeDescr), which adds nodeDescr again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-453) om and scm should use piccoli to parse arguments

2019-02-26 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDDS-453:
-

Assignee: Siddharth Wagle  (was: Anu Engineer)

> om and scm should use piccoli to parse arguments
> 
>
> Key: HDDS-453
> URL: https://issues.apache.org/jira/browse/HDDS-453
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager, SCM
>Reporter: Arpit Agarwal
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: alpha2, newbie
>
> SCM and OM can use the picocli to parse command-line arguments.
> Suggested in HDDS-415 by [~anu].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14247) Repeat adding node description into network topology

2019-02-26 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778793#comment-16778793
 ] 

Íñigo Goiri commented on HDFS-14247:


Thanks [~marvelrock] for pointing that one out.
As mentioned before, the addition to the tree is done in {{addDatanode()}}, 
which is called 4 lines later and has no issue with the lines in between:
{code}
nodeDescr.setSoftwareVersion(nodeReg.getSoftwareVersion());
resolveUpgradeDomain(nodeDescr);
{code}
+1 on  [^HDFS-14247.001.patch] 
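
For reference, the shape of the change under review, as a rough sketch based 
on the description and the lines quoted above (the surrounding context in 
{{registerDatanode()}} is abbreviated):
{code:java}
// Before (sketch): nodeDescr is inserted into the topology twice.
networktopology.add(nodeDescr);       // redundant first insertion
nodeDescr.setSoftwareVersion(nodeReg.getSoftwareVersion());
resolveUpgradeDomain(nodeDescr);
addDatanode(nodeDescr);               // inserts nodeDescr into the topology again

// After (sketch): drop the first call and let addDatanode() do the insertion.
nodeDescr.setSoftwareVersion(nodeReg.getSoftwareVersion());
resolveUpgradeDomain(nodeDescr);
addDatanode(nodeDescr);
{code}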

> Repeat adding node description into network topology
> 
>
> Key: HDFS-14247
> URL: https://issues.apache.org/jira/browse/HDFS-14247
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Priority: Minor
> Attachments: HDFS-14247.001.patch
>
>
> I found duplicate code that adds nodeDescr to the network topology in 
> DatanodeManager.java#registerDatanode.
> It first calls networktopology.add(nodeDescr), and then calls 
> addDatanode(nodeDescr), which adds nodeDescr again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14295) Add Threadpool for DataTransfers

2019-02-26 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778790#comment-16778790
 ] 

Íñigo Goiri commented on HDFS-14295:


Thanks [~belugabehr] for the clarification.
I agree that there is no point in actually tracking the 30 seconds, as it is an 
arbitrary number.
My concern is the possible delay for tasks that used to start right away.
Can we do the shutdown() at the beginning and then shutdownNow() at the end 
without waiting at all?
The old threads were just Daemons and they were fine to be killed.
Is there an equivalent to make the execution service spawn daemon threads?
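
On the last question, a minimal sketch of one way to get daemon workers out of 
an ExecutorService (the patch gets the same effect via Hadoop's 
{{Daemon.DaemonFactory}}):
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A ThreadFactory that marks each thread as a daemon before returning it.
ExecutorService pool = Executors.newCachedThreadPool(r -> {
  Thread t = new Thread(r);
  t.setDaemon(true);
  return t;
});
{code}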

> Add Threadpool for DataTransfers
> 
>
> Key: HDFS-14295
> URL: https://issues.apache.org/jira/browse/HDFS-14295
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HDFS-14295.1.patch, HDFS-14295.2.patch, 
> HDFS-14295.3.patch, HDFS-14295.4.patch, HDFS-14295.5.patch, 
> HDFS-14295.6.patch, HDFS-14295.7.patch, HDFS-14295.8.patch
>
>
> When a DataNode transfers a block, it spins up a new thread for each 
> transfer.  
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339]
>  and 
> [Here|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022].
>    Instead, submit the work to a {{CachedThreadPool}} so that when a thread 
> completes a transfer, it can be re-used for another transfer. This 
> should save resources spent on creating and spinning up transfer threads.
> One thing I'll point out that's a bit off, which I address in this patch, ...
> There are two places in the code where a {{DataTransfer}} thread is started. 
> In [one 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L2339-L2341],
>  it's started in a default thread group. In [another 
> place|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L3019-L3022],
>  it's started in the 
> [dataXceiverServer|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java#L1164]
>  thread group.
> I do not think it's correct to include any of these threads in the 
> {{dataXceiverServer}} thread group. Anything submitted to the 
> {{dataXceiverServer}} should probably be tied to the 
> {{dfs.datanode.max.transfer.threads}} configuration, and neither of these 
> methods is. Instead, they should be submitted into the same thread pool with 
> its own thread group (probably the default thread group, unless someone 
> suggests otherwise), which is what I have included in this patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations

2019-02-26 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14316:
---
Attachment: HDFS-14316-HDFS-13891.001.patch

> RBF: Support unavailable subclusters for mount points with multiple 
> destinations
> 
>
> Key: HDFS-14316
> URL: https://issues.apache.org/jira/browse/HDFS-14316
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14316-HDFS-13891.000.patch, 
> HDFS-14316-HDFS-13891.001.patch
>
>
> Currently mount points with multiple destinations (e.g., HASH_ALL) fail 
> writes when the destination subcluster is down. We need an option to allow 
> writing in other subclusters when one is down.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14247) Repeat adding node description into network topology

2019-02-26 Thread HuangTao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778789#comment-16778789
 ] 

HuangTao commented on HDFS-14247:
-

#HDFS-14245 failed on TestBPOfferService too.

> Repeat adding node description into network topology
> 
>
> Key: HDFS-14247
> URL: https://issues.apache.org/jira/browse/HDFS-14247
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Priority: Minor
> Attachments: HDFS-14247.001.patch
>
>
> I found duplicate code that adds nodeDescr to the network topology in 
> DatanodeManager.java#registerDatanode.
> It first calls networktopology.add(nodeDescr), and then calls 
> addDatanode(nodeDescr), which adds nodeDescr again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations

2019-02-26 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778781#comment-16778781
 ] 

Íñigo Goiri commented on HDFS-14316:


I added  [^HDFS-14316-HDFS-13891.001.patch] with some better unit tests and 
documentation.
This includes HDFS-13274 and HADOOP-13144; otherwise, the concurrent writes in 
the Router happen serially and take too long.
[~brahmareddy], we need to decide what to do with HDFS-13274; otherwise, the 
Router does not support high loads for a single user.

> RBF: Support unavailable subclusters for mount points with multiple 
> destinations
> 
>
> Key: HDFS-14316
> URL: https://issues.apache.org/jira/browse/HDFS-14316
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14316-HDFS-13891.000.patch, 
> HDFS-14316-HDFS-13891.001.patch
>
>
> Currently mount points with multiple destinations (e.g., HASH_ALL) fail 
> writes when the destination subcluster is down. We need an option to allow 
> writing in other subclusters when one is down.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13699) Add DFSClient sending handshake token to DataNode, and allow DataNode overwrite downstream QOP

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778771#comment-16778771
 ] 

Hadoop QA commented on HDFS-13699:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 31s{color} | {color:orange} root: The patch generated 51 new + 742 unchanged 
- 3 fixed = 793 total (was 745) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
58s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 5 new 
+ 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
18s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 48s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
52s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}131m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m  
3s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}258m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Unread field:DataTransferSaslUtil.java:[line 261] |
|  |  Unread field:DataTransferSaslUtil.java:[line 259] |
|  |  Unread field:DataTransferSaslUtil.java:[line 260] |
|  |  Found reliance on default encoding in 

[jira] [Commented] (HDDS-710) Make Block Commit Sequence (BCS) opaque to clients

2019-02-26 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778779#comment-16778779
 ] 

Anu Engineer commented on HDDS-710:
---

How about we do Container ID = 44 bits, Local ID = 20 bits, and a 32-bit BCS ID?
That would be 64 + 32 = 96 bits for a block ID, which should be pretty good for 
most use cases.
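
A minimal sketch of that packing (hypothetical helper names, just to make the 
arithmetic concrete):
{code:java}
// 44-bit container ID and 20-bit local ID packed into one 64-bit long;
// the 32-bit BCS ID would travel alongside as a separate int.
static long packBlockId(long containerId, long localId) {
  return (containerId << 20) | (localId & 0xFFFFFL);
}

static long containerIdOf(long blockId) {
  return blockId >>> 20;        // upper 44 bits
}

static long localIdOf(long blockId) {
  return blockId & 0xFFFFFL;    // lower 20 bits
}
{code}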



> Make Block Commit Sequence (BCS) opaque to clients
> --
>
> Key: HDDS-710
> URL: https://issues.apache.org/jira/browse/HDDS-710
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Critical
>
> An immutable block is identified by the following:
> - Container ID
> - Local Block ID
> - BCS (Block Commit Sequence ID)
> All of these values are currently exposed to the client. Instead we can have 
> a composite block ID that hides these details from the client. A first 
> thought is a naive implementation that generates a 192-bit (3x64-bit) block 
> ID.
> Proposed by [~anu] in HDDS-676.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14205) Backport HDFS-6440 to branch-2

2019-02-26 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778770#comment-16778770
 ] 

Chen Liang commented on HDFS-14205:
---

Thanks [~csun]! Will take a look. I think the branch-2 Jenkins is already fixed 
now.

> Backport HDFS-6440 to branch-2
> --
>
> Key: HDFS-14205
> URL: https://issues.apache.org/jira/browse/HDFS-14205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14205-branch-2.001.patch, 
> HDFS-14205-branch-2.002.patch, HDFS-14205-branch-2.003.patch
>
>
> Currently support for more than 2 NameNodes (HDFS-6440) is only in branch-3. 
> This JIRA aims to backport it to branch-2, as this is required by HDFS-12943 
> (consistent read from standby) backport to branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14321) Fix -Xcheck:jni issues in libhdfs, run ctest with -Xcheck:jni enabled

2019-02-26 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778763#comment-16778763
 ] 

Sahil Takiar commented on HDFS-14321:
-

When running the existing tests with {{-Xcheck:jni}} I only see one error: 
{{FATAL ERROR in native method: Invalid global JNI handle passed to 
DeleteGlobalRef}}, which seems to be caused by {{hadoopRzOptionsFree}} calling 
{{DeleteGlobalRef}} on {{opts->byteBufferPool}} which is not a global ref. It's 
not clear to me how big an issue this is, since {{opts->byteBufferPool}} should 
be a local ref that is automatically deleted when the native method exits.

There are a bunch of warnings of the form {{WARNING in native method: JNI call 
made without checking exceptions when required to from ...}} - after debugging 
these warnings, most of them seem to be caused by the JVM itself (e.g. internal 
JDK code). So they would have to be fixed within the JDK itself.

> Fix -Xcheck:jni issues in libhdfs, run ctest with -Xcheck:jni enabled
> -
>
> Key: HDFS-14321
> URL: https://issues.apache.org/jira/browse/HDFS-14321
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
>
> The JVM exposes an option called {{-Xcheck:jni}} which runs various checks 
> against JNI usage by applications. Further explanation of this JVM option can 
> be found in: 
> [https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/clopts002.html]
>  and 
> [https://www.ibm.com/support/knowledgecenter/en/SSYKE2_8.0.0/com.ibm.java.vm.80.doc/docs/jni_debug.html].
>  When run with this option, the JVM will print out any warnings or errors it 
> encounters with the JNI.
> We should run the libhdfs tests with {{-Xcheck:jni}} (can be added to 
> {{LIBHDFS_OPTS}}) and fix any warnings / errors. We should add this option to 
> our ctest runs as well to ensure no regressions are introduced to libhdfs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1178) Healthy pipeline Chill Mode Rule

2019-02-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1178?focusedWorklogId=204887=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-204887
 ]

ASF GitHub Bot logged work on HDDS-1178:


Author: ASF GitHub Bot
Created on: 27/Feb/19 01:22
Start Date: 27/Feb/19 01:22
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #518: HDDS-1178. 
Healthy pipeline Chill Mode Rule.
URL: https://github.com/apache/hadoop/pull/518#issuecomment-467684481
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 23 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 968 | trunk passed |
   | +1 | compile | 949 | trunk passed |
   | +1 | checkstyle | 233 | trunk passed |
   | +1 | mvnsite | 157 | trunk passed |
   | +1 | shadedclient | 1062 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 113 | trunk passed |
   | +1 | javadoc | 90 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 101 | the patch passed |
   | +1 | compile | 884 | the patch passed |
   | +1 | javac | 884 | the patch passed |
   | +1 | checkstyle | 188 | the patch passed |
   | +1 | mvnsite | 123 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 704 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 122 | the patch passed |
   | +1 | javadoc | 83 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 69 | common in the patch failed. |
   | +1 | unit | 97 | server-scm in the patch passed. |
   | -1 | unit | 537 | integration-test in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 6442 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.security.x509.certificate.client.TestDefaultCertificateClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/518 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux ae75be440e0a 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9192f71 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/4/artifact/out/patch-unit-hadoop-hdds_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/4/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/4/testReport/ |
   | Max. process+thread count | 3556 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-518/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 204887)
Time Spent: 2h  (was: 1h 50m)

> Healthy pipeline Chill Mode Rule
> 
>
> Key: HDDS-1178
> URL: https://issues.apache.org/jira/browse/HDDS-1178
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>   

[jira] [Commented] (HDFS-13699) Add DFSClient sending handshake token to DataNode, and allow DataNode overwrite downstream QOP

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16778765#comment-16778765
 ] 

Hadoop QA commented on HDFS-13699:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 12s{color} | {color:orange} root: The patch generated 29 new + 743 unchanged 
- 3 fixed = 772 total (was 746) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 58s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
50s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 5 new 
+ 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
6s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
8s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
49s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 13s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}189m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Unread field:DataTransferSaslUtil.java:[line 261] |
|  |  Unread field:DataTransferSaslUtil.java:[line 259] |
|  |  Unread field:DataTransferSaslUtil.java:[line 260] |
|  |  Found reliance on 

[jira] [Updated] (HDFS-14320) SkipTrash for WebHDFS

2019-02-26 Thread Karthik Palanisamy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HDFS-14320:
--
Attachment: (was: HDFS-14320-004.patch)

> SkipTrash for WebHDFS 
> --
>
> Key: HDFS-14320
> URL: https://issues.apache.org/jira/browse/HDFS-14320
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode, webhdfs
>Affects Versions: 3.2.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
> Attachments: HDFS-14320-001.patch, HDFS-14320-002.patch, 
> HDFS-14320-003.patch
>
>
> Files/directories deleted via the WebHDFS REST call do not use the skip-trash 
> feature; they are deleted permanently. This feature is very important to us 
> because one of our users accidentally deleted a large directory.
> By default, the skiptrash option is set to true (skiptrash=true), so files 
> deleted using curl will be permanently deleted.
> Example:
> curl -iv -X DELETE 
> "http://:50070/webhdfs/v1/tmp/sampledata?op=DELETE=hdfs=true;
>  
> Use skiptrash=false to move files to the trash instead.
> Example:
> curl -iv -X DELETE 
> "http://:50070/webhdfs/v1/tmp/sampledata?op=DELETE=hdfs=true=false;
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14321) Fix -Xcheck:jni issues in libhdfs, run ctest with -Xcheck:jni enabled

2019-02-26 Thread Sahil Takiar (JIRA)
Sahil Takiar created HDFS-14321:
---

 Summary: Fix -Xcheck:jni issues in libhdfs, run ctest with 
-Xcheck:jni enabled
 Key: HDFS-14321
 URL: https://issues.apache.org/jira/browse/HDFS-14321
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client, libhdfs, native
Reporter: Sahil Takiar
Assignee: Sahil Takiar


The JVM exposes an option called {{-Xcheck:jni}} which runs various checks 
against JNI usage by applications. Further explanation of this JVM option can 
be found in: 
[https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/clopts002.html]
 and 
[https://www.ibm.com/support/knowledgecenter/en/SSYKE2_8.0.0/com.ibm.java.vm.80.doc/docs/jni_debug.html].
 When run with this option, the JVM will print any warnings or errors it 
encounters with JNI usage.

We should run the libhdfs tests with {{-Xcheck:jni}} (can be added to 
{{LIBHDFS_OPTS}}) and fix any warnings / errors. We should add this option to 
our ctest runs as well to ensure no regressions are introduced to libhdfs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14320) SkipTrash for WebHDFS

2019-02-26 Thread Karthik Palanisamy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HDFS-14320:
--
Attachment: HDFS-14320-004.patch

> SkipTrash for WebHDFS 
> --
>
> Key: HDFS-14320
> URL: https://issues.apache.org/jira/browse/HDFS-14320
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode, webhdfs
>Affects Versions: 3.2.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
> Attachments: HDFS-14320-001.patch, HDFS-14320-002.patch, 
> HDFS-14320-003.patch, HDFS-14320-004.patch
>
>
> Files/directories deleted via the WebHDFS REST call do not use the skip-trash 
> feature; they are deleted permanently. This feature is very important to us 
> because one of our users accidentally deleted a large directory.
> By default, the skiptrash option is set to true (skiptrash=true), so files 
> deleted using curl will be permanently deleted.
> Example:
> curl -iv -X DELETE 
> "http://:50070/webhdfs/v1/tmp/sampledata?op=DELETE=hdfs=true;
>  
> Use skiptrash=false to move files to the trash instead.
> Example:
> curl -iv -X DELETE 
> "http://:50070/webhdfs/v1/tmp/sampledata?op=DELETE=hdfs=true=false;
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13699) Add DFSClient sending handshake token to DataNode, and allow DataNode overwrite downstream QOP

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16778760#comment-16778760
 ] 

Hadoop QA commented on HDFS-13699:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
19s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 11s{color} | {color:orange} root: The patch generated 29 new + 744 unchanged 
- 3 fixed = 773 total (was 747) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
0s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 5 new 
+ 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
22s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
19s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
50s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
51s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}198m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Unread field:DataTransferSaslUtil.java:[line 261] |
|  |  Unread field:DataTransferSaslUtil.java:[line 259] |
|  |  Unread field:DataTransferSaslUtil.java:[line 260] |
|  |  Found reliance on 

[jira] [Commented] (HDFS-13699) Add DFSClient sending handshake token to DataNode, and allow DataNode overwrite downstream QOP

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16778749#comment-16778749
 ] 

Hadoop QA commented on HDFS-13699:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 14m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 50s{color} | {color:orange} root: The patch generated 51 new + 743 unchanged 
- 3 fixed = 794 total (was 746) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 58s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
49s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 5 new 
+ 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
14s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
16s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
49s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
55s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}194m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Unread field:DataTransferSaslUtil.java:[line 261] |
|  |  Unread field:DataTransferSaslUtil.java:[line 259] |
|  |  Unread field:DataTransferSaslUtil.java:[line 260] |
|  |  Found reliance on default encoding in 

[jira] [Commented] (HDFS-14320) SkipTrash for WebHDFS

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16778735#comment-16778735
 ] 

Hadoop QA commented on HDFS-14320:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  6s{color} | {color:orange} hadoop-hdfs-project: The patch generated 15 new 
+ 150 unchanged - 1 fixed = 165 total (was 151) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
45s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.web.resources.TestParam |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14320 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960236/HDFS-14320-003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d9aebe5b48cc 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 

[jira] [Commented] (HDFS-14205) Backport HDFS-6440 to branch-2

2019-02-26 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16778723#comment-16778723
 ] 

Chao Sun commented on HDFS-14205:
-

Attached patch v3. The main change is that the previous patch forgot to use 
{{MultipleNameNodeProxy}}; this one fixes it.

Also [~vagarychen]: do you know whether the branch-2 Jenkins will be fixed 
soon? We'll need it for the 2.10 release, right?

> Backport HDFS-6440 to branch-2
> --
>
> Key: HDFS-14205
> URL: https://issues.apache.org/jira/browse/HDFS-14205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14205-branch-2.001.patch, 
> HDFS-14205-branch-2.002.patch, HDFS-14205-branch-2.003.patch
>
>
> Currently, support for more than 2 NameNodes (HDFS-6440) is only in branch-3. 
> This JIRA aims to backport it to branch-2, as it is required by the HDFS-12943 
> (consistent read from standby) backport to branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1182) Pipeline Rule where at least one datanode is reported in the pipeline

2019-02-26 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1182:


 Summary: Pipeline Rule where at least one datanode is reported in 
the pipeline
 Key: HDDS-1182
 URL: https://issues.apache.org/jira/browse/HDDS-1182
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


h2. Pipeline Rule with configurable percentage of pipelines with at least one 
datanode reported:

In this rule, we consider the rule satisfied when at least 90% of pipelines 
have at least one datanode reported.

This rule ensures that, when we exit chill mode, Ozone clients will have at 
least one open replica available for reads to succeed. (We can raise the 
default threshold above 90% if we want to see fewer read failures after 
exiting chill mode.)
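
As a rough illustration of the arithmetic this rule describes (a sketch with 
assumed names, not the actual SCM implementation), the exit check reduces to a 
fraction test against the configured threshold:

{code}
#include <stdbool.h>

/* Hypothetical sketch: chill mode may exit once the fraction of pipelines
 * with at least one reported datanode reaches the threshold (0.90 by
 * default, per the description above). */
static bool atLeastOneDatanodeRuleSatisfied(int pipelinesWithReport,
                                            int totalPipelines,
                                            double threshold) {
    if (totalPipelines == 0) {
        return true;  /* nothing to wait for */
    }
    return (double)pipelinesWithReport / (double)totalPipelines >= threshold;
}
{code}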



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1177) Add validation to AuthorizationHeaderV4

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16778724#comment-16778724
 ] 

Hadoop QA commented on HDDS-1177:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 42m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} s3gateway in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1177 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960250/HDDS-1177.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bb0c400bc8f3 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a5a751b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2385/testReport/ |
| Max. process+thread count | 336 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/s3gateway U: hadoop-ozone/s3gateway |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2385/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add validation to AuthorizationHeaderV4
> ---
>
> Key: HDDS-1177
> URL: https://issues.apache.org/jira/browse/HDDS-1177
> 

[jira] [Commented] (HDDS-1084) Ozone Recon

2019-02-26 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16778715#comment-16778715
 ] 

Siddharth Wagle commented on HDDS-1084:
---

Attaching a preliminary design doc for the Recon Service; I will iterate on it 
over the next few weeks while creating Jiras to implement the features outlined 
in the design doc.

> Ozone Recon
> ---
>
> Key: HDDS-1084
> URL: https://issues.apache.org/jira/browse/HDDS-1084
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: fsck
>Affects Versions: 0.4.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Attachments: Ozone_Recon_Design_V1_Draft.pdf
>
>
> Recon Server at a high level will maintain a global view of Ozone that is not 
> available from SCM or OM: things like how many volumes exist, how many buckets 
> exist per volume, which volume has the most buckets, which buckets have not 
> been accessed for a year, which blocks are corrupt, and which blocks on 
> datanodes are unused. It will answer these and similar queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1179) Ozone dist build failed on Jenkins

2019-02-26 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16778722#comment-16778722
 ] 

Anu Engineer commented on HDDS-1179:


[~elek], [~xyao], does this mean that we don't need these changes? I am going 
to put this patch into the open state; please resolve it as not needed if that 
is so. Thanks in advance.


> Ozone dist build failed on Jenkins
> --
>
> Key: HDDS-1179
> URL: https://issues.apache.org/jira/browse/HDDS-1179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1179.001.patch, HDDS-1179.002.patch
>
>
> This is part of the Jenkins execution and was reported in several recent HDDS 
> Jenkins runs.
> I spent some time today and found simplified repro steps:
> {code}
> cd hadoop-ozone/dist
> mvn -Phdds -DskipTests -fae clean install -DskipTests=true 
> -Dmaven.javadoc.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true 
> {code}
>  
> The root cause is that 
> hadoop-ozone/dist/dev-support/bin/dist-layout-stitching needs the 
> objectstore-service jar to be built earlier, but the dependency was not 
> explicitly declared in the pom. I will attach a fix shortly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14205) Backport HDFS-6440 to branch-2

2019-02-26 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16778723#comment-16778723
 ] 

Chao Sun edited comment on HDFS-14205 at 2/27/19 12:02 AM:
---

Attached patch v3. The main change is that the previous patch forgot to use 
{{MultipleNameNodeProxy}}; this one fixes it.

Also [~vagarychen]: do you know whether the branch-2 Jenkins will be fixed 
soon? We'll need it for the 2.10 release, right?


was (Author: csun):
Attached patch v3. The main change is that the previous patch forgot to use 
{{MultipleNameNodeProxy}}; this one fixes it.

 

Also [~vagarychen]: do you know whether the branch-2 Jenkins will be fixed 
soon? We'll need it for the 2.10 release, right?

> Backport HDFS-6440 to branch-2
> --
>
> Key: HDFS-14205
> URL: https://issues.apache.org/jira/browse/HDFS-14205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14205-branch-2.001.patch, 
> HDFS-14205-branch-2.002.patch, HDFS-14205-branch-2.003.patch
>
>
> Currently, support for more than 2 NameNodes (HDFS-6440) is only in branch-3. 
> This JIRA aims to backport it to branch-2, as it is required by the HDFS-12943 
> (consistent read from standby) backport to branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1179) Ozone dist build failed on Jenkins

2019-02-26 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1179:
---
Status: Open  (was: Patch Available)

> Ozone dist build failed on Jenkins
> --
>
> Key: HDDS-1179
> URL: https://issues.apache.org/jira/browse/HDDS-1179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-1179.001.patch, HDDS-1179.002.patch
>
>
> This is part of the Jenkins execution and was reported in several recent HDDS 
> Jenkins runs.
> I spent some time today and found simplified repro steps:
> {code}
> cd hadoop-ozone/dist
> mvn -Phdds -DskipTests -fae clean install -DskipTests=true 
> -Dmaven.javadoc.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true 
> {code}
>  
> The root cause is that 
> hadoop-ozone/dist/dev-support/bin/dist-layout-stitching needs the 
> objectstore-service jar to be built earlier, but the dependency was not 
> explicitly declared in the pom. I will attach a fix shortly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14205) Backport HDFS-6440 to branch-2

2019-02-26 Thread Chao Sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HDFS-14205:

Attachment: HDFS-14205-branch-2.003.patch

> Backport HDFS-6440 to branch-2
> --
>
> Key: HDFS-14205
> URL: https://issues.apache.org/jira/browse/HDFS-14205
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-14205-branch-2.001.patch, 
> HDFS-14205-branch-2.002.patch, HDFS-14205-branch-2.003.patch
>
>
> Currently, support for more than 2 NameNodes (HDFS-6440) is only in branch-3. 
> This JIRA aims to backport it to branch-2, as it is required by the HDFS-12943 
> (consistent read from standby) backport to branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1178) Healthy pipeline Chill Mode Rule

2019-02-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1178?focusedWorklogId=204867&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-204867
 ]

ASF GitHub Bot logged work on HDDS-1178:


Author: ASF GitHub Bot
Created on: 26/Feb/19 23:59
Start Date: 26/Feb/19 23:59
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #518: HDDS-1178. Healthy 
pipeline Chill Mode Rule.
URL: https://github.com/apache/hadoop/pull/518#issuecomment-467665680
 
 
   :+1:, feel free to commit once we get a Jenkins run. Thanks for taking care 
of this.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 204867)
Time Spent: 1h 50m  (was: 1h 40m)

> Healthy pipeline Chill Mode Rule
> 
>
> Key: HDDS-1178
> URL: https://issues.apache.org/jira/browse/HDDS-1178
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> This Jira is to implement one of the chill mode rules.
> h2. Pipeline Rule with configurable percentage of open pipelines with all 
> datanodes reported:
> In this rule, we consider the rule satisfied when at least 10% of the 
> pipelines have all 3 datanodes reported (if it is a 3-node Ratis ring), or one 
> node reported if it is a one-node Ratis ring. (The percentage is configurable 
> via an Ozone configuration parameter.)
>  
> This rule ensures that, once the SCM is restarted and after exiting chill 
> mode, we have enough pipelines to give to clients for writes to succeed.
>  
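
As a rough sketch (assumed names and types, not the actual SCM code), a 
pipeline counts here only once every node in its Ratis ring has reported, and 
the rule passes when the fraction of such pipelines reaches the configured 
percentage:

{code}
#include <stdbool.h>

/* Hypothetical sketch: a pipeline is "fully reported" when all nodes in its
 * ring (3 for a three-node Ratis ring, 1 for a one-node ring) have reported;
 * the rule passes at the configured fraction, 0.10 by default. */
static bool healthyPipelineRuleSatisfied(const int *reportedNodes,
                                         const int *ringSizes,
                                         int totalPipelines,
                                         double threshold) {
    if (totalPipelines == 0) {
        return true;
    }
    int fullyReported = 0;
    for (int i = 0; i < totalPipelines; i++) {
        if (reportedNodes[i] >= ringSizes[i]) {
            fullyReported++;
        }
    }
    return (double)fullyReported / (double)totalPipelines >= threshold;
}
{code}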



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1084) Ozone Recon Service

2019-02-26 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1084:
--
Summary: Ozone Recon Service  (was: Ozone Recon)

> Ozone Recon Service
> ---
>
> Key: HDDS-1084
> URL: https://issues.apache.org/jira/browse/HDDS-1084
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: fsck
>Affects Versions: 0.4.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Attachments: Ozone_Recon_Design_V1_Draft.pdf
>
>
> Recon Server at a high level will maintain a global view of Ozone that is not 
> available from SCM or OM: things like how many volumes exist, how many buckets 
> exist per volume, which volume has the most buckets, which buckets have not 
> been accessed for a year, which blocks are corrupt, and which blocks on 
> datanodes are unused. It will answer these and similar queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14246) Remove duplicate code on Namenode when Datanode registers

2019-02-26 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16778704#comment-16778704
 ] 

Shweta commented on HDFS-14246:
---

Hi [~xihazhangyun], can you elaborate a little more on which code chunk is 
duplicated?

> Remove duplicate code on Namenode when Datanode registers
> 
>
> Key: HDFS-14246
> URL: https://issues.apache.org/jira/browse/HDFS-14246
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.3.0
>Reporter: zhangli
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1084) Ozone Recon

2019-02-26 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1084:
--
Summary: Ozone Recon  (was: Ozone Recon server)

> Ozone Recon
> ---
>
> Key: HDDS-1084
> URL: https://issues.apache.org/jira/browse/HDDS-1084
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: fsck
>Affects Versions: 0.4.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Attachments: Ozone_Recon_Design_V1_Draft.pdf
>
>
> Recon Server at a high level will maintain a global view of Ozone that is not 
> available from SCM or OM: things like how many volumes exist, how many buckets 
> exist per volume, which volume has the most buckets, which buckets have not 
> been accessed for a year, which blocks are corrupt, and which blocks on 
> datanodes are unused. It will answer these and similar queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1084) Ozone Recon server

2019-02-26 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1084:
--
Attachment: Ozone_Recon_Design_V1_Draft.pdf

> Ozone Recon server
> --
>
> Key: HDDS-1084
> URL: https://issues.apache.org/jira/browse/HDDS-1084
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: fsck
>Affects Versions: 0.4.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Attachments: Ozone_Recon_Design_V1_Draft.pdf
>
>
> Recon Server at a high level will maintain a global view of Ozone that is not 
> available from SCM or OM: things like how many volumes exist, how many buckets 
> exist per volume, which volume has the most buckets, which buckets have not 
> been accessed for a year, which blocks are corrupt, and which blocks on 
> datanodes are unused. It will answer these and similar queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1084) Ozone Recon server

2019-02-26 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle updated HDDS-1084:
--
Description: Recon Server at a high level will maintain a global view of 
Ozone that is not available from SCM or OM: things like how many volumes exist, 
how many buckets exist per volume, which volume has the most buckets, which 
buckets have not been accessed for a year, which blocks are corrupt, and which 
blocks on datanodes are unused. It will answer these and similar queries.  
(was: Recon Server at a high level will maintain a global view of Ozone that is 
not available from SCM or OM: things like how many volumes exist, how many 
buckets exist per volume, which volume has the most buckets, which buckets have 
not been accessed for a year, which blocks are corrupt, and which blocks on 
datanodes are unused. It will answer these and similar queries.

I will work on a design document and attach it in a few days.)

> Ozone Recon server
> --
>
> Key: HDDS-1084
> URL: https://issues.apache.org/jira/browse/HDDS-1084
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: fsck
>Affects Versions: 0.4.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
> Attachments: Ozone_Recon_Design_V1_Draft.pdf
>
>
> Recon Server at a high level will maintain a global view of Ozone that is not 
> available from SCM or OM: things like how many volumes exist, how many buckets 
> exist per volume, which volume has the most buckets, which buckets have not 
> been accessed for a year, which blocks are corrupt, and which blocks on 
> datanodes are unused. It will answer these and similar queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1182) Pipeline Rule where at least one datanode is reported in the pipeline

2019-02-26 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1182 started by Bharat Viswanadham.

> Pipeline Rule where at least one datanode is reported in the pipeline
> 
>
> Key: HDDS-1182
> URL: https://issues.apache.org/jira/browse/HDDS-1182
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> h2. Pipeline Rule with configurable percentage of pipelines with at least one 
> datanode reported:
> In this rule, we consider the rule satisfied when at least 90% of pipelines 
> have at least one datanode reported.
>  
> This rule ensures that, when we exit chill mode, Ozone clients will have at 
> least one open replica available for reads to succeed. (We can raise the 
> default threshold above 90% if we want to see fewer read failures after 
> exiting chill mode.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1182) Pipeline Rule where at least one datanode is reported in the pipeline

2019-02-26 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1182:
-
Component/s: SCM

> Pipeline Rule where at least one datanode is reported in the pipeline
> 
>
> Key: HDDS-1182
> URL: https://issues.apache.org/jira/browse/HDDS-1182
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> h2. Pipeline Rule with configurable percentage of pipelines with at least one 
> datanode reported:
> In this rule, we consider the rule satisfied when at least 90% of pipelines 
> have at least one datanode reported.
>  
> This rule ensures that, when we exit chill mode, Ozone clients will have at 
> least one open replica available for reads to succeed. (We can raise the 
> default threshold above 90% if we want to see fewer read failures after 
> exiting chill mode.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14320) SkipTrash for WebHDFS

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16778697#comment-16778697
 ] 

Hadoop QA commented on HDFS-14320:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  8s{color} | {color:orange} hadoop-hdfs-project: The patch generated 15 new 
+ 150 unchanged - 1 fixed = 165 total (was 151) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m  7s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}159m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM |
|   | hadoop.hdfs.web.resources.TestParam |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14320 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960233/HDFS-14320-002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  

[jira] [Work logged] (HDDS-1178) Healthy pipeline Chill Mode Rule

2019-02-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1178?focusedWorklogId=204859=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-204859
 ]

ASF GitHub Bot logged work on HDDS-1178:


Author: ASF GitHub Bot
Created on: 26/Feb/19 23:34
Start Date: 26/Feb/19 23:34
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #518: HDDS-1178. 
Healthy pipeline Chill Mode Rule.
URL: https://github.com/apache/hadoop/pull/518#issuecomment-467659727
 
 
   Thank you @anuengineer and @elek for the review.
   I have fixed the test failure.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 204859)
Time Spent: 1h 40m  (was: 1.5h)

> Healthy pipeline Chill Mode Rule
> 
>
> Key: HDDS-1178
> URL: https://issues.apache.org/jira/browse/HDDS-1178
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> This Jira is to implement one of the chill mode rules.
> h2. Pipeline Rule with a configurable percentage of open pipelines with all 
> datanodes reported:
> In this rule, we consider the criterion met when at least 10% of the 
> pipelines have all 3 datanodes reported (if it is a 3-node ratis ring), or 
> one node reported if it is a one-node ratis ring. (The percentage is 
> configurable via an ozone configuration parameter.)
>  
> Once this rule is satisfied after an SCM restart and we exit chill mode, we 
> have enough pipelines to hand out to clients for writes to succeed.
>  
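
For illustration, a minimal sketch of this rule's check; all types and names 
are hypothetical, not the actual HDDS-1178 patch:

{code:java}
import java.util.List;

// Illustrative only. A pipeline counts as "healthy" when all of its
// datanodes have reported: 3 of 3 for a three-node ratis ring, 1 of 1
// for a one-node ring.
public final class HealthyPipelineRuleSketch {

  public static final class Pipeline {
    final int ringSize;          // 3 for a three-node ratis ring, 1 otherwise
    final int datanodesReported; // members reported so far

    Pipeline(int ringSize, int datanodesReported) {
      this.ringSize = ringSize;
      this.datanodesReported = datanodesReported;
    }
  }

  /** @param pct configurable threshold, e.g. 0.10 for 10% */
  public static boolean isSatisfied(List<Pipeline> pipelines, double pct) {
    if (pipelines.isEmpty()) {
      return true;
    }
    long healthy = pipelines.stream()
        .filter(p -> p.datanodesReported >= p.ringSize)
        .count();
    return (double) healthy / pipelines.size() >= pct;
  }
}
{code}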



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14316) RBF: Support unavailable subclusters for mount points with multiple destinations

2019-02-26 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778688#comment-16778688
 ] 

Íñigo Goiri commented on HDFS-14316:


[~crh], in the unit tests I'm seeing:
{code}
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.hdfs.server.federation.router.security.RouterSecurityManager.newSecretManager(RouterSecurityManager.java:78)
at 
org.apache.hadoop.hdfs.server.federation.router.security.RouterSecurityManager.<init>(RouterSecurityManager.java:52)
at 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.<init>(RouterRpcServer.java:263)
at 
org.apache.hadoop.hdfs.server.federation.router.Router.createRpcServer(Router.java:375)
at 
org.apache.hadoop.hdfs.server.federation.router.Router.serviceInit(Router.java:184)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at 
org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster$RouterContext.<init>(MiniRouterDFSCluster.java:169)
at 
org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster.buildRouter(MiniRouterDFSCluster.java:734)
at 
org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster.startRouters(MiniRouterDFSCluster.java:812)
at 
org.apache.hadoop.hdfs.server.federation.router.TestRouterAllResolver.setup(TestRouterAllResolver.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Caused by: java.lang.NullPointerException: Zookeeper connection string cannot 
be null
at 
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
at 
org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.<init>(ZKDelegationTokenSecretManager.java:158)
at 
org.apache.hadoop.hdfs.server.federation.router.security.token.ZKDelegationTokenSecretManagerImpl.<init>(ZKDelegationTokenSecretManagerImpl.java:42)
... 40 more
{code}

This is a test with no security or anything.
I thought we had reduced the verbosity of the logs in HDFS-13358.
Any ideas?



> RBF: Support unavailable subclusters for mount points with multiple 
> destinations
> 
>
> Key: HDFS-14316
> URL: https://issues.apache.org/jira/browse/HDFS-14316
> Project: Hadoop 

[jira] [Commented] (HDDS-1174) Freon tests are failing with null pointer exception

2019-02-26 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778671#comment-16778671
 ] 

Hudson commented on HDDS-1174:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16073 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16073/])
HDDS-1174. Freon tests are failing with null pointer exception. (aengineer: rev 
a5a751b4187c69a6cda66466a3210d81ef54d929)
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/RandomKeyGenerator.java


> Freon tests are failing with null pointer exception
> ---
>
> Key: HDDS-1174
> URL: https://issues.apache.org/jira/browse/HDDS-1174
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Affects Versions: 0.4.0
> Environment: {code:java}
>  {code}
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1174.001.patch
>
>
> java.lang.NullPointerException at 
> org.apache.hadoop.ozone.freon.RandomKeyGenerator.init(RandomKeyGenerator.java:218)
>  at 
> org.apache.hadoop.ozone.freon.RandomKeyGenerator.call(RandomKeyGenerator.java:224)
>  at 
> org.apache.hadoop.ozone.freon.TestDataValidate.ratisTestLargeKey(TestDataValidate.java:75)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1061) DelegationToken: Add certificate serial id to Ozone Delegation Token Identifier

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778674#comment-16778674
 ] 

Hadoop QA commented on HDDS-1061:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
39s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
49s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
24s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red}  0m 27s{color} | 
{color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 27s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
26s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
19s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
19s{color} | {color:red} ozone-manager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 29s{color} 
| {color:red} common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 22s{color} 
| {color:red} ozone-manager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1061 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960192/HDDS-1061.05.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  

[jira] [Updated] (HDDS-1178) Healthy pipeline Chill Mode Rule

2019-02-26 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1178:
---
Target Version/s: 0.4.0

> Healthy pipeline Chill Mode Rule
> 
>
> Key: HDDS-1178
> URL: https://issues.apache.org/jira/browse/HDDS-1178
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> This Jira is to implement one of the chill mode rules.
> h2. Pipeline Rule with a configurable percentage of open pipelines with all 
> datanodes reported:
> In this rule, we consider the criterion met when at least 10% of the 
> pipelines have all 3 datanodes reported (if it is a 3-node ratis ring), or 
> one node reported if it is a one-node ratis ring. (The percentage is 
> configurable via an ozone configuration parameter.)
>  
> Once this rule is satisfied after an SCM restart and we exit chill mode, we 
> have enough pipelines to hand out to clients for writes to succeed.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-02-26 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778664#comment-16778664
 ] 

Chen Liang commented on HDFS-14305:
---

The patch looks good to me overall, thanks [~hexiaoqiao]! I was also thinking 
that using 10 bits seems high: 10 bits supports 1024 NNs, but I don't think we 
are going to hit even 100 NNs any time soon, and until then roughly 90% of the 
key space is effectively wasted. Still, 22 bits seems like enough key range to 
me, so I'm not too concerned about this.
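
For reference, a toy illustration of the 10/22-bit split being discussed; this 
is not the patch's code, and all names below are made up:

{code:java}
// Partition the 32-bit serial number space: high bits identify the NameNode,
// low 22 bits hold the rotating serial counter, so ranges never overlap.
public final class SerialNoBitsSketch {

  private static final int SERIAL_BITS = 22;
  private static final int SERIAL_MASK = (1 << SERIAL_BITS) - 1; // 0x3FFFFF

  /** Combine an NN index with a rotating counter into one serial number. */
  static int encode(int nnIndex, int counter) {
    return (nnIndex << SERIAL_BITS) | (counter & SERIAL_MASK);
  }

  public static void main(String[] args) {
    // 10 index bits (the sign bit included) cover 1024 NameNodes, each with
    // 2^22 (~4.2M) serial numbers that cannot collide with another NN's.
    System.out.println(encode(0, 1));    // 1
    System.out.println(encode(1, 1));    // 4194305 = 2^22 + 1
    System.out.println(encode(1023, 5)); // negative (sign bit set), but unique
  }
}
{code}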



> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Reporter: Chao Sun
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14305.001.patch, HDFS-14305.002.patch, 
> HDFS-14305.003.patch, HDFS-14305.004.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then uses this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> where {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNodes could have overlapping ranges 
> for serial number. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key which 
> will cause clients to fail because of {{InvalidToken}} error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1177) Add validation to AuthorizationHeaderV4

2019-02-26 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1177:
-
Attachment: HDDS-1177.02.patch

> Add validation to AuthorizationHeaderV4
> ---
>
> Key: HDDS-1177
> URL: https://issues.apache.org/jira/browse/HDDS-1177
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-1177.00.patch, HDDS-1177.01.patch, 
> HDDS-1177.02.patch
>
>
> Add validation to AuthorizationHeaderV4. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1177) Add validation to AuthorizationHeaderV4

2019-02-26 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778658#comment-16778658
 ] 

Anu Engineer edited comment on HDDS-1177 at 2/26/19 10:46 PM:
--

+1 for the v2 patch. Thanks for the update. It looks excellent; thank you so 
much for adding so many test cases. Appreciate it.


was (Author: anu):
+1, thanks for the update. It looks excellent, thank you so much for adding so 
many test cases. Appreciate it.

> Add validation to AuthorizationHeaderV4
> ---
>
> Key: HDDS-1177
> URL: https://issues.apache.org/jira/browse/HDDS-1177
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-1177.00.patch, HDDS-1177.01.patch, 
> HDDS-1177.02.patch
>
>
> Add validation to AuthorizationHeaderV4. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1177) Add validation to AuthorizationHeaderV4

2019-02-26 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778658#comment-16778658
 ] 

Anu Engineer commented on HDDS-1177:


+1, thanks for the update. It looks excellent; thank you so much for adding 
so many test cases. Appreciate it.

> Add validation to AuthorizationHeaderV4
> ---
>
> Key: HDDS-1177
> URL: https://issues.apache.org/jira/browse/HDDS-1177
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-1177.00.patch, HDDS-1177.01.patch, 
> HDDS-1177.02.patch
>
>
> Add validation to AuthorizationHeaderV4. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1177) Add validation to AuthorizationHeaderV4

2019-02-26 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1177:
---
Target Version/s: 0.4.0

> Add validation to AuthorizationHeaderV4
> ---
>
> Key: HDDS-1177
> URL: https://issues.apache.org/jira/browse/HDDS-1177
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-1177.00.patch, HDDS-1177.01.patch
>
>
> Add validation to AuthorizationHeaderV4. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1174) Freon tests are failing with null pointer exception

2019-02-26 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1174:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~shashikant] Thanks for the contribution. I have committed this patch to the 
trunk.

> Freon tests are failing with null pointer exception
> ---
>
> Key: HDDS-1174
> URL: https://issues.apache.org/jira/browse/HDDS-1174
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Affects Versions: 0.4.0
> Environment: {code:java}
>  {code}
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1174.001.patch
>
>
> java.lang.NullPointerException at 
> org.apache.hadoop.ozone.freon.RandomKeyGenerator.init(RandomKeyGenerator.java:218)
>  at 
> org.apache.hadoop.ozone.freon.RandomKeyGenerator.call(RandomKeyGenerator.java:224)
>  at 
> org.apache.hadoop.ozone.freon.TestDataValidate.ratisTestLargeKey(TestDataValidate.java:75)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1178) Healthy pipeline Chill Mode Rule

2019-02-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1178?focusedWorklogId=204813=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-204813
 ]

ASF GitHub Bot logged work on HDDS-1178:


Author: ASF GitHub Bot
Created on: 26/Feb/19 22:24
Start Date: 26/Feb/19 22:24
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #518: HDDS-1178. Healthy 
pipeline Chill Mode Rule.
URL: https://github.com/apache/hadoop/pull/518#issuecomment-467639233
 
 
   Sorry, is this failure related to this patch?
   hdds.scm.chillmode.healthy.pipelie.pct expected:<0> but was:<1>
   If so, can we please add this field to ozone-default.xml before we commit 
this? Thanks.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 204813)
Time Spent: 1.5h  (was: 1h 20m)

> Healthy pipeline Chill Mode Rule
> 
>
> Key: HDDS-1178
> URL: https://issues.apache.org/jira/browse/HDDS-1178
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> This Jira is to implement one of the chill mode rules.
> h2. Pipeline Rule with a configurable percentage of open pipelines with all 
> datanodes reported:
> In this rule, we consider the criterion met when at least 10% of the 
> pipelines have all 3 datanodes reported (if it is a 3-node ratis ring), or 
> one node reported if it is a one-node ratis ring. (The percentage is 
> configurable via an ozone configuration parameter.)
>  
> Once this rule is satisfied after an SCM restart and we exit chill mode, we 
> have enough pipelines to hand out to clients for writes to succeed.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14292) Introduce Java ExecutorService to DataXceiverServer

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778642#comment-16778642
 ] 

Hadoop QA commented on HDFS-14292:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
3s{color} | {color:green} hadoop-hdfs-project generated 0 new + 537 unchanged - 
3 fixed = 537 total (was 540) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  9s{color} | {color:orange} hadoop-hdfs-project: The patch generated 4 new + 
644 unchanged - 8 fixed = 648 total (was 652) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
47s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 28s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}157m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestReconstructStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 

[jira] [Work logged] (HDDS-1178) Healthy pipeline Chill Mode Rule

2019-02-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1178?focusedWorklogId=204812=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-204812
 ]

ASF GitHub Bot logged work on HDDS-1178:


Author: ASF GitHub Bot
Created on: 26/Feb/19 22:23
Start Date: 26/Feb/19 22:23
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #518: HDDS-1178. Healthy 
pipeline Chill Mode Rule.
URL: https://github.com/apache/hadoop/pull/518#issuecomment-467638802
 
 
   +1, Thanks for the update. Looks good to me.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 204812)
Time Spent: 1h 20m  (was: 1h 10m)

> Healthy pipeline Chill Mode Rule
> 
>
> Key: HDDS-1178
> URL: https://issues.apache.org/jira/browse/HDDS-1178
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> This Jira is to implement one of the chill mode rules.
> h2. Pipeline Rule with a configurable percentage of open pipelines with all 
> datanodes reported:
> In this rule, we consider the criterion met when at least 10% of the 
> pipelines have all 3 datanodes reported (if it is a 3-node ratis ring), or 
> one node reported if it is a one-node ratis ring. (The percentage is 
> configurable via an ozone configuration parameter.)
>  
> Once this rule is satisfied after an SCM restart and we exit chill mode, we 
> have enough pipelines to hand out to clients for writes to succeed.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-919) Enable prometheus endpoints for Ozone datanodes

2019-02-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-919?focusedWorklogId=204811=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-204811
 ]

ASF GitHub Bot logged work on HDDS-919:
---

Author: ASF GitHub Bot
Created on: 26/Feb/19 22:20
Start Date: 26/Feb/19 22:20
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #502: HDDS-919. 
Enable prometheus endpoints for Ozone datanodes
URL: https://github.com/apache/hadoop/pull/502#issuecomment-467638054
 
 
   Thank you @elek for addressing the comments.
   There are many additional changes in ozone-default.xml that are not related 
to this; can we do them as part of a separate Jira, since those changes do not 
belong to this one?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 204811)
Time Spent: 2h 10m  (was: 2h)

> Enable prometheus endpoints for Ozone datanodes
> ---
>
> Key: HDDS-919
> URL: https://issues.apache.org/jira/browse/HDDS-919
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> HDDS-846 provides a new metric endpoint which publishes the available Hadoop 
> metrics in prometheus friendly format with a new servlet.
> Unfortunately it's enabled only on the scm/om side. It would be great to 
> enable it in the Ozone/HDDS datanodes on the web server of the HDDS Rest 
> endpoint. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14295) Add Threadpool for DataTransfers

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778631#comment-16778631
 ] 

Hadoop QA commented on HDFS-14295:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 45s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.TestFileCorruption |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
|   | hadoop.hdfs.TestPersistBlocks |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14295 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960223/HDFS-14295.8.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1f0df5304b02 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a106d2d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Commented] (HDFS-14292) Introduce Java ExecutorService to DataXceiverServer

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778629#comment-16778629
 ] 

Hadoop QA commented on HDFS-14292:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 25s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14292 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12960225/HDFS-14295.8.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0484a73c449b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a106d2d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26334/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test 

[jira] [Commented] (HDFS-14295) Add Threadpool for DataTransfers

2019-02-26 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778627#comment-16778627
 ] 

Hadoop QA commented on HDFS-14295:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 31s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestFileCreation |
|   | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-497/1/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/497 |
| JIRA Issue | HDFS-14295 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ca58174fc52b 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / a106d2d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-497/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Updated] (HDFS-14308) DFSStripedInputStream curStripeBuf is not freed by unbuffer()

2019-02-26 Thread Joe McDonnell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe McDonnell updated HDFS-14308:
-
Description: 
Some users of HDFS cache opened HDFS file handles to avoid repeated roundtrips 
to the NameNode. For example, Impala caches up to 20,000 HDFS file handles by 
default. Recent tests on erasure coded files show that the open file handles 
can consume a large amount of memory when not in use.

For example, here is output from Impala's JMX endpoint when 608 file handles 
are cached
{noformat}
{
"name": "java.nio:type=BufferPool,name=direct",
"modelerType": "sun.management.ManagementFactoryHelper$1",
"Name": "direct",
"TotalCapacity": 1921048960,
"MemoryUsed": 1921048961,
"Count": 633,
"ObjectName": "java.nio:type=BufferPool,name=direct"
},{noformat}
This shows direct buffer memory usage of 3MB per DFSStripedInputStream. 
Attached is output from Eclipse MAT showing that the direct buffers come from 
DFSStripedInputStream objects. Both Impala and HBase call unbuffer() when a 
file handle is being cached and potentially unused for significant chunks of 
time, yet this shows that the memory remains in use.

To support caching file handles on erasure coded files, DFSStripedInputStream 
should avoid holding buffers after the unbuffer() call. See HDFS-7694. 
"unbuffer()" is intended to move an input stream to a lower memory state to 
support these caching use cases. In particular, the curStripeBuf seems to be 
allocated from the BUFFER_POOL on a resetCurStripeBuffer(true) call. It is not 
freed until close().

  was:
Some users of HDFS cache opened HDFS file handles to avoid repeated roundtrips 
to the NameNode. For example, Impala caches up to 20,000 HDFS file handles by 
default. Recent tests on erasure coded files show that the open file handles 
can consume a large amount of memory when not in use.

For example, here is output from Impala's JMX endpoint when 608 file handles 
are cached
{noformat}
{
"name": "java.nio:type=BufferPool,name=direct",
"modelerType": "sun.management.ManagementFactoryHelper$1",
"Name": "direct",
"TotalCapacity": 1921048960,
"MemoryUsed": 1921048961,
"Count": 633,
"ObjectName": "java.nio:type=BufferPool,name=direct"
},{noformat}
This shows direct buffer memory usage of 3MB per DFSStripedInputStream. 
Attached is output from Eclipse MAT showing that the direct buffers come from 
DFSStripedInputStream objects.

To support caching file handles on erasure coded files, DFSStripedInputStream 
should implement the unbuffer() call. See HDFS-7694. "unbuffer()" is intended 
to move an input stream to a lower memory state to support these caching use 
cases. Both Impala and HBase call unbuffer() when a file handle is being cached 
and potentially unused for significant chunks of time.
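
For illustration, a minimal sketch of this caching pattern using only the 
public {{FileSystem}} API; the cache class itself is hypothetical and elides 
eviction and concurrency:

{code:java}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class FileHandleCacheSketch {
  private final Map<Path, FSDataInputStream> cache = new HashMap<>();
  private final FileSystem fs;

  FileHandleCacheSketch(Configuration conf) throws IOException {
    this.fs = FileSystem.get(conf);
  }

  /** Reuse a cached handle when possible, avoiding a NameNode round trip. */
  FSDataInputStream open(Path path) throws IOException {
    FSDataInputStream in = cache.remove(path);
    return in != null ? in : fs.open(path);
  }

  /** Instead of close(), park the handle in the cache for later reuse. */
  void release(Path path, FSDataInputStream in) {
    in.unbuffer(); // HDFS-7694: drop read buffers while the handle sits idle
    cache.put(path, in);
  }
}
{code}

For erasure coded files, the point of this issue is that unbuffer() above 
should also release curStripeBuf back to the pool instead of holding roughly 
3MB per idle handle.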


> DFSStripedInputStream curStripeBuf is not freed by unbuffer()
> -
>
> Key: HDFS-14308
> URL: https://issues.apache.org/jira/browse/HDFS-14308
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Joe McDonnell
>Priority: Major
> Attachments: ec_heap_dump.png
>
>
> Some users of HDFS cache opened HDFS file handles to avoid repeated 
> roundtrips to the NameNode. For example, Impala caches up to 20,000 HDFS file 
> handles by default. Recent tests on erasure coded files show that the open 
> file handles can consume a large amount of memory when not in use.
> For example, here is output from Impala's JMX endpoint when 608 file handles 
> are cached
> {noformat}
> {
> "name": "java.nio:type=BufferPool,name=direct",
> "modelerType": "sun.management.ManagementFactoryHelper$1",
> "Name": "direct",
> "TotalCapacity": 1921048960,
> "MemoryUsed": 1921048961,
> "Count": 633,
> "ObjectName": "java.nio:type=BufferPool,name=direct"
> },{noformat}
> This shows direct buffer memory usage of 3MB per DFSStripedInputStream. 
> Attached is output from Eclipse MAT showing that the direct buffers come from 
> DFSStripedInputStream objects. Both Impala and HBase call unbuffer() when a 
> file handle is being cached and potentially unused for significant chunks of 
> time, yet this shows that the memory remains in use.
> To support caching file handles on erasure coded files, DFSStripedInputStream 
> should avoid holding buffers after the unbuffer() call. See HDFS-7694. 
> "unbuffer()" is intended to move an input stream to a lower memory state to 
> support these caching use cases. In particular, the curStripeBuf seems to be 
> allocated from the BUFFER_POOL on a resetCurStripeBuffer(true) call. It is 
> not freed until close().



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[jira] [Commented] (HDFS-14308) DFSStripedInputStream curStripeBuf is not freed by unbuffer()

2019-02-26 Thread Joe McDonnell (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16778626#comment-16778626
 ] 

Joe McDonnell commented on HDFS-14308:
--

Updated the description / title.

> DFSStripedInputStream curStripeBuf is not freed by unbuffer()
> -
>
> Key: HDFS-14308
> URL: https://issues.apache.org/jira/browse/HDFS-14308
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Joe McDonnell
>Priority: Major
> Attachments: ec_heap_dump.png
>
>
> Some users of HDFS cache opened HDFS file handles to avoid repeated 
> roundtrips to the NameNode. For example, Impala caches up to 20,000 HDFS file 
> handles by default. Recent tests on erasure coded files show that the open 
> file handles can consume a large amount of memory when not in use.
> For example, here is output from Impala's JMX endpoint when 608 file handles 
> are cached
> {noformat}
> {
> "name": "java.nio:type=BufferPool,name=direct",
> "modelerType": "sun.management.ManagementFactoryHelper$1",
> "Name": "direct",
> "TotalCapacity": 1921048960,
> "MemoryUsed": 1921048961,
> "Count": 633,
> "ObjectName": "java.nio:type=BufferPool,name=direct"
> },{noformat}
> This shows direct buffer memory usage of 3MB per DFSStripedInputStream. 
> Attached is output from Eclipse MAT showing that the direct buffers come from 
> DFSStripedInputStream objects. Both Impala and HBase call unbuffer() when a 
> file handle is being cached and potentially unused for significant chunks of 
> time, yet this shows that the memory remains in use.
> To support caching file handles on erasure coded files, DFSStripedInputStream 
> should avoid holding buffers after the unbuffer() call. See HDFS-7694. 
> "unbuffer()" is intended to move an input stream to a lower memory state to 
> support these caching use cases. In particular, the curStripeBuf seems to be 
> allocated from the BUFFER_POOL on a resetCurStripeBuffer(true) call. It is 
> not freed until close().



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


