[jira] [Updated] (HDFS-14118) Use DNS to resolve Namenodes and Routers

2019-01-31 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14118:
---
Attachment: HDFS-14118.012.patch

> Use DNS to resolve Namenodes and Routers
> 
>
> Key: HDFS-14118
> URL: https://issues.apache.org/jira/browse/HDFS-14118
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: DNS testing log, HDFS-14118.001.patch, 
> HDFS-14118.002.patch, HDFS-14118.003.patch, HDFS-14118.004.patch, 
> HDFS-14118.005.patch, HDFS-14118.006.patch, HDFS-14118.007.patch, 
> HDFS-14118.008.patch, HDFS-14118.009.patch, HDFS-14118.010.patch, 
> HDFS-14118.011.patch, HDFS-14118.012.patch, HDFS-14118.patch
>
>
> Clients need to know about the routers to talk to the HDFS cluster 
> (obviously), and whenever routers are added or removed, every client 
> configuration has to change, which is a painful process.
> DNS can be used here to resolve a single domain name known to the clients 
> into the list of routers in the current configuration. However, DNS alone 
> cannot restrict resolution to only the healthy routers based on health 
> thresholds.
> There are a few ways to address this. One way is to have a separate script 
> regularly check the status of each router and update the DNS records when a 
> router fails the health thresholds; security needs to be considered 
> carefully for this approach. Another way is to have the client perform the 
> usual connection/failover logic after it gets the list of routers, which 
> requires changing the current failover proxy provider.
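
For illustration, here is a minimal sketch of the client-side DNS step under
discussion: resolving one well-known domain name into the current set of
router addresses with a plain JDK lookup. The domain name and port below are
made-up examples, and this is a sketch of the idea only, not the failover
proxy provider change in the attached patches.

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.List;

public class RouterDnsResolverSketch {

  /** Resolve the configured domain name into "host:port" router endpoints. */
  public static List<String> resolveRouters(String domain, int rpcPort)
      throws UnknownHostException {
    List<String> routers = new ArrayList<>();
    for (InetAddress addr : InetAddress.getAllByName(domain)) {
      routers.add(addr.getHostAddress() + ":" + rpcPort);
    }
    return routers;
  }

  public static void main(String[] args) throws Exception {
    // "routers.example.com" is hypothetical; DNS returns however many A
    // records are currently published, so clients always see the live set.
    System.out.println(resolveRouters("routers.example.com", 8888));
  }
}
{code}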






[jira] [Commented] (HDFS-14118) Use DNS to resolve Namenodes and Routers

2019-01-31 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758026#comment-16758026
 ] 

Íñigo Goiri commented on HDFS-14118:


Sorry for the delay, I got stuck with HDFS-14249.
I tried to avoid the RuntimeException as much as possible, but I had to keep 
one at the end.
We may want to add negative cases to the tests too.

> Use DNS to resolve Namenodes and Routers
> 
>
> Key: HDFS-14118
> URL: https://issues.apache.org/jira/browse/HDFS-14118
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: DNS testing log, HDFS-14118.001.patch, 
> HDFS-14118.002.patch, HDFS-14118.003.patch, HDFS-14118.004.patch, 
> HDFS-14118.005.patch, HDFS-14118.006.patch, HDFS-14118.007.patch, 
> HDFS-14118.008.patch, HDFS-14118.009.patch, HDFS-14118.010.patch, 
> HDFS-14118.011.patch, HDFS-14118.012.patch, HDFS-14118.patch
>
>
> Clients need to know about the routers to talk to the HDFS cluster 
> (obviously), and whenever routers are added or removed, every client 
> configuration has to change, which is a painful process.
> DNS can be used here to resolve a single domain name known to the clients 
> into the list of routers in the current configuration. However, DNS alone 
> cannot restrict resolution to only the healthy routers based on health 
> thresholds.
> There are a few ways to address this. One way is to have a separate script 
> regularly check the status of each router and update the DNS records when a 
> router fails the health thresholds; security needs to be considered 
> carefully for this approach. Another way is to have the client perform the 
> usual connection/failover logic after it gets the list of routers, which 
> requires changing the current failover proxy provider.






[jira] [Updated] (HDFS-14118) Use DNS to resolve Namenodes and Routers

2019-01-31 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14118:
---
Attachment: (was: HDFS-14118.012.patch)

> Use DNS to resolve Namenodes and Routers
> 
>
> Key: HDFS-14118
> URL: https://issues.apache.org/jira/browse/HDFS-14118
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: DNS testing log, HDFS-14118.001.patch, 
> HDFS-14118.002.patch, HDFS-14118.003.patch, HDFS-14118.004.patch, 
> HDFS-14118.005.patch, HDFS-14118.006.patch, HDFS-14118.007.patch, 
> HDFS-14118.008.patch, HDFS-14118.009.patch, HDFS-14118.010.patch, 
> HDFS-14118.011.patch, HDFS-14118.patch
>
>
> Clients need to know about the routers to talk to the HDFS cluster 
> (obviously), and whenever routers are added or removed, every client 
> configuration has to change, which is a painful process.
> DNS can be used here to resolve a single domain name known to the clients 
> into the list of routers in the current configuration. However, DNS alone 
> cannot restrict resolution to only the healthy routers based on health 
> thresholds.
> There are a few ways to address this. One way is to have a separate script 
> regularly check the status of each router and update the DNS records when a 
> router fails the health thresholds; security needs to be considered 
> carefully for this approach. Another way is to have the client perform the 
> usual connection/failover logic after it gets the list of routers, which 
> requires changing the current failover proxy provider.






[jira] [Updated] (HDFS-14118) Use DNS to resolve Namenodes and Routers

2019-01-31 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14118:
---
Attachment: HDFS-14118.012.patch

> Use DNS to resolve Namenodes and Routers
> 
>
> Key: HDFS-14118
> URL: https://issues.apache.org/jira/browse/HDFS-14118
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: DNS testing log, HDFS-14118.001.patch, 
> HDFS-14118.002.patch, HDFS-14118.003.patch, HDFS-14118.004.patch, 
> HDFS-14118.005.patch, HDFS-14118.006.patch, HDFS-14118.007.patch, 
> HDFS-14118.008.patch, HDFS-14118.009.patch, HDFS-14118.010.patch, 
> HDFS-14118.011.patch, HDFS-14118.patch
>
>
> Clients need to know about the routers to talk to the HDFS cluster 
> (obviously), and whenever routers are added or removed, every client 
> configuration has to change, which is a painful process.
> DNS can be used here to resolve a single domain name known to the clients 
> into the list of routers in the current configuration. However, DNS alone 
> cannot restrict resolution to only the healthy routers based on health 
> thresholds.
> There are a few ways to address this. One way is to have a separate script 
> regularly check the status of each router and update the DNS records when a 
> router fails the health thresholds; security needs to be considered 
> carefully for this approach. Another way is to have the client perform the 
> usual connection/failover logic after it gets the list of routers, which 
> requires changing the current failover proxy provider.






[jira] [Commented] (HDDS-1029) Allow option for force in DeleteContainerCommand

2019-01-31 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758018#comment-16758018
 ] 

Yiqun Lin commented on HDDS-1029:
-

Before the check, can we verify that the container is replicated across all 
three nodes? I mean that sometimes the container may not have been replicated 
successfully to hddsDatanodeService1 yet, which then causes the failure.

Alternatively, can we refactor this to use the {{STAND_ALONE}} type with a 
{{ONE}} replication factor: one deletion behaviour for containers in the open 
state and another for non-open containers.


> Allow option for force in DeleteContainerCommand
> 
>
> Key: HDDS-1029
> URL: https://issues.apache.org/jira/browse/HDDS-1029
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1029.00.patch, HDDS-1029.01.patch, 
> HDDS-1029.02.patch, HDDS-1029.03.patch, HDDS-1029.04.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Right now, we check that the container state is not open, and only then do we 
> delete the container.
> We need a way to delete containers that are open, so adding a force flag 
> will allow deleting a container without any state checks. (This is required 
> for deleting replicas when SCM detects over-replication, and the container to 
> delete can be in the open state.)
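
For illustration, a small self-contained sketch of the force semantics only;
the class below is a placeholder, not the real DeleteContainerCommand handling
in the attached patches.

{code:java}
public class ForceDeleteSketch {

  enum ContainerState { OPEN, CLOSING, CLOSED }

  /**
   * Without force, keep the existing safety check: only non-open containers
   * may be deleted. With force, skip the state check entirely, which is what
   * SCM needs when dropping an over-replicated replica that is still open.
   */
  static boolean mayDelete(ContainerState state, boolean force) {
    return force || state != ContainerState.OPEN;
  }

  public static void main(String[] args) {
    System.out.println(mayDelete(ContainerState.OPEN, false)); // false
    System.out.println(mayDelete(ContainerState.OPEN, true));  // true
  }
}
{code}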






[jira] [Commented] (HDDS-997) Add blockade Tests for scm isolation and mixed node isolation

2019-01-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758016#comment-16758016
 ] 

Hadoop QA commented on HDDS-997:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 31s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
44s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
|   | hadoop.ozone.container.TestContainerReplication |
|   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-997 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957204/HDDS-997.003.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  shellcheck  |
| uname | Linux 9d18807cafa6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / 16195ea |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| shellcheck | v0.4.6 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2162/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2162/testReport/ |
| Max. process+thread count | 1137 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2162/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add blockade Tests for scm isolation and mixed node isolation
> -
>
> Key: HDDS-997
> URL: https://issues.apache.org/jira/browse/HDDS-997
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-997.001.patch, HDDS-997.002.patch, 
> HDDS-997.003.patch
>
>







[jira] [Commented] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy

2019-01-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758006#comment-16758006
 ] 

Hadoop QA commented on HDFS-13209:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 10s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
1030 unchanged - 1 fixed = 1032 total (was 1031) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
44s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 34s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
53s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}166m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-13209 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957194/HDFS-13209-03.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 

[jira] [Commented] (HDFS-14202) "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as per set value.

2019-01-31 Thread Ranith Sardar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757989#comment-16757989
 ] 

Ranith Sardar commented on HDFS-14202:
--

{quote}For the assert, can also check for a particular number instead of <= 8000
{quote}
As we are mocking computeDelay and using a fixed amount of data to move with 
the default bandwidth of 10 MB/s, it returns a fixed time by default. Here, it 
is 8000 ms.
{quote}Can we also clarify 21936966
{quote}
The UT already mentions that we are trying to move 20 MB (20*1024*1024), which 
is approximately 21936966 bytes.

> "dfs.disk.balancer.max.disk.throughputInMBperSec" property is not working as 
> per set value.
> ---
>
> Key: HDFS-14202
> URL: https://issues.apache.org/jira/browse/HDFS-14202
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: diskbalancer
>Affects Versions: 3.0.1
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-14202.001.patch, HDFS-14202.002.patch, 
> HDFS-14202.003.patch, HDFS-14202.004.patch
>
>







[jira] [Updated] (HDDS-997) Add blockade Tests for scm isolation and mixed node isolation

2019-01-31 Thread Nilotpal Nandi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilotpal Nandi updated HDDS-997:

Attachment: HDDS-997.003.patch

> Add blockade Tests for scm isolation and mixed node isolation
> -
>
> Key: HDDS-997
> URL: https://issues.apache.org/jira/browse/HDDS-997
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-997.001.patch, HDDS-997.002.patch, 
> HDDS-997.003.patch
>
>







[jira] [Updated] (HDDS-997) Add blockade Tests for scm isolation and mixed node isolation

2019-01-31 Thread Nilotpal Nandi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nilotpal Nandi updated HDDS-997:

Attachment: (was: HDDS-997.003.patch)

> Add blockade Tests for scm isolation and mixed node isolation
> -
>
> Key: HDDS-997
> URL: https://issues.apache.org/jira/browse/HDDS-997
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-997.001.patch, HDDS-997.002.patch, 
> HDDS-997.003.patch
>
>







[jira] [Commented] (HDFS-13794) [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.

2019-01-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757978#comment-16757978
 ] 

Hadoop QA commented on HDFS-13794:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 11m 
25s{color} | {color:red} Docker failed to build yetus/hadoop:ba1ab08. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13794 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957203/HDFS-13794-HDFS-12090.005.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26109/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.
> --
>
> Key: HDFS-13794
> URL: https://issues.apache.org/jira/browse/HDFS-13794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13794-HDFS-12090.001.patch, 
> HDFS-13794-HDFS-12090.002.patch, HDFS-13794-HDFS-12090.003.patch, 
> HDFS-13794-HDFS-12090.004.patch, HDFS-13794-HDFS-12090.005.patch
>
>
> When updating the BlockAliasMap we may need to deal with deleted blocks. 
> Otherwise the BlockAliasMap will grow indefinitely(!).
> Therefore, the BlockAliasMap.Writer needs a method for removing blocks.
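
For context, a minimal sketch of what a Writer with a remove method could look
like; the exact signature (and the block-id parameter) is an assumption for
illustration, not the API in the attached patches.

{code:java}
import java.io.Closeable;
import java.io.IOException;

public abstract class AliasMapWriterSketch<T> implements Closeable {

  /** Existing behaviour: store (or overwrite) the alias for a block. */
  public abstract void store(T alias) throws IOException;

  /**
   * Proposed addition: drop the alias of a block that has been deleted so
   * the alias map does not grow indefinitely.
   */
  public abstract void remove(long blockId) throws IOException;
}
{code}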






[jira] [Updated] (HDFS-13794) [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.

2019-01-31 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13794:
--
Status: Patch Available  (was: Open)

> [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.
> --
>
> Key: HDFS-13794
> URL: https://issues.apache.org/jira/browse/HDFS-13794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13794-HDFS-12090.001.patch, 
> HDFS-13794-HDFS-12090.002.patch, HDFS-13794-HDFS-12090.003.patch, 
> HDFS-13794-HDFS-12090.004.patch, HDFS-13794-HDFS-12090.005.patch
>
>
> When updating the BlockAliasMap we may need to deal with deleted blocks. 
> Otherwise the BlockAliasMap will grow indefinitely(!).
> Therefore, the BlockAliasMap.Writer needs a method for removing blocks.






[jira] [Updated] (HDFS-13794) [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.

2019-01-31 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13794:
--
Attachment: (was: HDFS-13794-HDFS-12090.005.patch)

> [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.
> --
>
> Key: HDFS-13794
> URL: https://issues.apache.org/jira/browse/HDFS-13794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13794-HDFS-12090.001.patch, 
> HDFS-13794-HDFS-12090.002.patch, HDFS-13794-HDFS-12090.003.patch, 
> HDFS-13794-HDFS-12090.004.patch, HDFS-13794-HDFS-12090.005.patch
>
>
> When updating the BlockAliasMap we may need to deal with deleted blocks. 
> Otherwise the BlockAliasMap will grow indefinitely(!).
> Therefore, the BlockAliasMap.Writer needs a method for removing blocks.






[jira] [Updated] (HDFS-13794) [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.

2019-01-31 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13794:
--
Attachment: HDFS-13794-HDFS-12090.005.patch

> [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.
> --
>
> Key: HDFS-13794
> URL: https://issues.apache.org/jira/browse/HDFS-13794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13794-HDFS-12090.001.patch, 
> HDFS-13794-HDFS-12090.002.patch, HDFS-13794-HDFS-12090.003.patch, 
> HDFS-13794-HDFS-12090.004.patch, HDFS-13794-HDFS-12090.005.patch
>
>
> When updating the BlockAliasMap we may need to deal with deleted blocks. 
> Otherwise the BlockAliasMap will grow indefinitely(!).
> Therefore, the BlockAliasMap.Writer needs a method for removing blocks.






[jira] [Updated] (HDFS-13794) [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.

2019-01-31 Thread Virajith Jalaparti (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-13794:
--
Status: Open  (was: Patch Available)

> [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.
> --
>
> Key: HDFS-13794
> URL: https://issues.apache.org/jira/browse/HDFS-13794
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ewan Higgs
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HDFS-13794-HDFS-12090.001.patch, 
> HDFS-13794-HDFS-12090.002.patch, HDFS-13794-HDFS-12090.003.patch, 
> HDFS-13794-HDFS-12090.004.patch, HDFS-13794-HDFS-12090.005.patch
>
>
> When updating the BlockAliasMap we may need to deal with deleted blocks. 
> Otherwise the BlockAliasMap will grow indefinitely(!).
> Therefore, the BlockAliasMap.Writer needs a method for removing blocks.






[jira] [Commented] (HDFS-14230) RBF: Throw RetriableException instead of IOException when no namenodes available

2019-01-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757950#comment-16757950
 ] 

Hadoop QA commented on HDFS-14230:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
13s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 25m  
7s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14230 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957187/HDFS-14230-HDFS-13891.005.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 136d322b8659 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / d37590b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26107/testReport/ |
| Max. process+thread count | 1077 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26107/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: Throw RetriableException instead of IOException when no namenodes 
> available
> 

[jira] [Commented] (HDDS-1025) Handle replication of closed containers in DeadNodeHanlder

2019-01-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757939#comment-16757939
 ] 

Hudson commented on HDDS-1025:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15864 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15864/])
HDDS-1025. Handle replication of closed containers in DeadNodeHanlder. (yqlin: 
rev 16195eaee1b4a7620bc018f48e9c24fc5fc7cc02)
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/node/DeadNodeHandler.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/TestUtils.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/node/TestDeadNodeHandler.java


> Handle replication of closed containers in DeadNodeHanlder
> --
>
> Key: HDDS-1025
> URL: https://issues.apache.org/jira/browse/HDDS-1025
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
> Attachments: HDDS-1025.00.patch, HDDS-1025.01.patch, 
> HDDS-1025.02.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira addresses one of the TODOs mentioned in the DeadNodeHandler:
> // TODO: Check replica count and call replication manager.
>  
> Right now, when a node is dead, replication for its closed containers is 
> not triggered.
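
For illustration, a self-contained toy sketch of the intended behaviour; all
type and method names below are placeholders, not the real SCM classes or the
committed patch. When a datanode is declared dead, walk its containers and ask
the replication manager to re-replicate the closed ones.

{code:java}
import java.util.Collections;
import java.util.List;
import java.util.Map;

class DeadNodeReplicationSketch {

  enum ContainerState { OPEN, CLOSED }

  interface ReplicationManager {
    void requestReplication(long containerId);
  }

  private final Map<String, List<Long>> containersByNode;   // datanode -> container ids
  private final Map<Long, ContainerState> containerStates;  // container id -> state
  private final ReplicationManager replicationManager;

  DeadNodeReplicationSketch(Map<String, List<Long>> containersByNode,
                            Map<Long, ContainerState> containerStates,
                            ReplicationManager replicationManager) {
    this.containersByNode = containersByNode;
    this.containerStates = containerStates;
    this.replicationManager = replicationManager;
  }

  /** Re-replicate every CLOSED container that was hosted on the dead node. */
  void onDeadNode(String deadNode) {
    for (long id : containersByNode.getOrDefault(deadNode,
        Collections.emptyList())) {
      if (containerStates.get(id) == ContainerState.CLOSED) {
        replicationManager.requestReplication(id);
      }
    }
  }
}
{code}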






[jira] [Updated] (HDDS-1025) Handle replication of closed containers in DeadNodeHanlder

2019-01-31 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDDS-1025:

   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

Committed this. Thanks [~bharatviswa] for the contribution.

> Handle replication of closed containers in DeadNodeHanlder
> --
>
> Key: HDDS-1025
> URL: https://issues.apache.org/jira/browse/HDDS-1025
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
> Attachments: HDDS-1025.00.patch, HDDS-1025.01.patch, 
> HDDS-1025.02.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira addresses one of the TODOs mentioned in the DeadNodeHandler:
> // TODO: Check replica count and call replication manager.
>  
> Right now, when a node is dead, replication for its closed containers is 
> not triggered.






[jira] [Comment Edited] (HDDS-1025) Handle replication of closed containers in DeadNodeHanlder

2019-01-31 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757927#comment-16757927
 ] 

Yiqun Lin edited comment on HDDS-1025 at 2/1/19 3:31 AM:
-

LGTM, +1. Thanks for addressing the comment, [~bharatviswa]! I will commit 
the latest patch now.


was (Author: linyiqun):
LGTM, +1. Thanks for addressing the comment, [~bharatviswa]!

> Handle replication of closed containers in DeadNodeHanlder
> --
>
> Key: HDDS-1025
> URL: https://issues.apache.org/jira/browse/HDDS-1025
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1025.00.patch, HDDS-1025.01.patch, 
> HDDS-1025.02.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira addresses one of the TODOs mentioned in the DeadNodeHandler:
> // TODO: Check replica count and call replication manager.
>  
> Right now, when a node is dead, replication for its closed containers is 
> not triggered.






[jira] [Commented] (HDDS-1025) Handle replication of closed containers in DeadNodeHanlder

2019-01-31 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757927#comment-16757927
 ] 

Yiqun Lin commented on HDDS-1025:
-

LGTM, +1. Thanks for addressing the comment, [~bharatviswa]!

> Handle replication of closed containers in DeadNodeHanlder
> --
>
> Key: HDDS-1025
> URL: https://issues.apache.org/jira/browse/HDDS-1025
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1025.00.patch, HDDS-1025.01.patch, 
> HDDS-1025.02.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira addresses one of the TODOs mentioned in the DeadNodeHandler:
> // TODO: Check replica count and call replication manager.
>  
> Right now, when a node is dead, replication for its closed containers is 
> not triggered.






[jira] [Updated] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy

2019-01-31 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13209:

Attachment: HDFS-13209-03.patch

> DistributedFileSystem.create should allow an option to provide StoragePolicy
> 
>
> Key: HDFS-13209
> URL: https://issues.apache.org/jira/browse/HDFS-13209
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: Jean-Marc Spaggiari
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13209-01.patch, HDFS-13209-02.patch, 
> HDFS-13209-03.patch
>
>
> DistributedFileSystem.create returns an FSDataOutputStream. The stored 
> file and its blocks use the directory-based StoragePolicy.
>  
> However, sometimes we need to keep all files in the same directory 
> (a consistency constraint) but want some of them (small ones, in my 
> case) on SSD until they are processed and merged/removed. After that they 
> fall back to the default policy.
>  
> When creating a file, it would be useful to have an option to specify a 
> different StoragePolicy...
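
For illustration, a minimal sketch of the workaround available today: create
the file and then override the inherited directory policy with
FileSystem#setStoragePolicy. The path and policy name are examples; the extra
round trip after create is exactly what a create-time option would avoid.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateWithPolicyWorkaround {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/user/systest/testdir/small-file");

    try (FSDataOutputStream out = fs.create(file)) {
      // Extra RPC: the first blocks may still be placed according to the
      // directory's policy before this call takes effect.
      fs.setStoragePolicy(file, "ONE_SSD");
      out.writeBytes("payload");
    }
  }
}
{code}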






[jira] [Created] (HDDS-1040) Add blockade Tests for client failures

2019-01-31 Thread Nilotpal Nandi (JIRA)
Nilotpal Nandi created HDDS-1040:


 Summary: Add blockade Tests for client failures
 Key: HDDS-1040
 URL: https://issues.apache.org/jira/browse/HDDS-1040
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Nilotpal Nandi









[jira] [Commented] (HDFS-14230) RBF: Throw RetriableException instead of IOException when no namenodes available

2019-01-31 Thread Fei Hui (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757903#comment-16757903
 ] 

Fei Hui commented on HDFS-14230:


Uploaded the v005 patch:
* Fix checkstyle
* Fix TestRouterClientRejectOverload#testNoNamenodesAvailable: update the 
namenode heartbeat
* Fix TestRouterRPCClientRetries#testRetryWhenAllNameServiceDown: change the 
expected string

> RBF: Throw RetriableException instead of IOException when no namenodes 
> available
> 
>
> Key: HDFS-14230
> URL: https://issues.apache.org/jira/browse/HDFS-14230
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14230-HDFS-13891.001.patch, 
> HDFS-14230-HDFS-13891.002.patch, HDFS-14230-HDFS-13891.003.patch, 
> HDFS-14230-HDFS-13891.004.patch, HDFS-14230-HDFS-13891.005.patch
>
>
> Failover usually happens when upgrading namenodes, and there are no active 
> namenodes for a few seconds; accessing HDFS through the router fails during 
> that window. This can make jobs fail or hang. Some Hive job logs are as 
> follows:
> {code:java}
> 2019-01-03 16:12:08,337 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
> 133.33 sec
> MapReduce Total cumulative CPU time: 2 minutes 13 seconds 330 msec
> Ended Job = job_1542178952162_24411913
> Launching Job 4 out of 6
> Exception in thread "Thread-86" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): No namenode 
> available under nameservice Cluster3
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.shouldRetry(RouterRpcClient.java:328)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:488)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:495)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:385)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:760)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category READ is not supported in state standby
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1804)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1338)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3925)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1014)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
> 

[jira] [Updated] (HDFS-14230) RBF: Throw RetriableException instead of IOException when no namenodes available

2019-01-31 Thread Fei Hui (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-14230:
---
Attachment: HDFS-14230-HDFS-13891.005.patch

> RBF: Throw RetriableException instead of IOException when no namenodes 
> available
> 
>
> Key: HDFS-14230
> URL: https://issues.apache.org/jira/browse/HDFS-14230
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.2.0, 3.1.1, 2.9.2, 3.0.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14230-HDFS-13891.001.patch, 
> HDFS-14230-HDFS-13891.002.patch, HDFS-14230-HDFS-13891.003.patch, 
> HDFS-14230-HDFS-13891.004.patch, HDFS-14230-HDFS-13891.005.patch
>
>
> Failover usually happens when upgrading namenodes, and there are no active 
> namenodes for a few seconds; accessing HDFS through the router fails during 
> that window. This can make jobs fail or hang. Some Hive job logs are as 
> follows:
> {code:java}
> 2019-01-03 16:12:08,337 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
> 133.33 sec
> MapReduce Total cumulative CPU time: 2 minutes 13 seconds 330 msec
> Ended Job = job_1542178952162_24411913
> Launching Job 4 out of 6
> Exception in thread "Thread-86" java.lang.RuntimeException: 
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): No namenode 
> available under nameservice Cluster3
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.shouldRetry(RouterRpcClient.java:328)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:488)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invoke(RouterRpcClient.java:495)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeMethod(RouterRpcClient.java:385)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.invokeSequential(RouterRpcClient.java:760)
> at 
> org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:1152)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException):
>  Operation category READ is not supported in state standby
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1804)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1338)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3925)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1014)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:849)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2134)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2130)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1867)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2130)
> {code}
> Digging into the code, maybe we can throw a StandbyException when no 
> namenodes are available; the client would then retry and only fail after 
> some retries.
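
For illustration, a minimal sketch of the proposed behaviour; the helper below
is illustrative only, not the actual RouterRpcClient code in the attached
patches. The point is to surface a retriable error instead of a plain
IOException so the client retry policy keeps retrying while the namenodes fail
over.

{code:java}
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.ipc.RetriableException;

class NoNamenodeCheckSketch {

  /** Throw a retriable error when no namenode is currently available. */
  static void checkNamenodes(String nsId, List<String> activeNamenodes)
      throws IOException {
    if (activeNamenodes == null || activeNamenodes.isEmpty()) {
      throw new RetriableException(
          "No namenode available under nameservice " + nsId);
    }
  }
}
{code}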





[jira] [Commented] (HDFS-14249) RBF: Tooling to identify the subcluster location of a file

2019-01-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757860#comment-16757860
 ] 

Hadoop QA commented on HDFS-14249:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
14s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-rbf generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m 10s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterAdminCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14249 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957177/HDFS-14249-HDFS-13891.000.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux a65bca71379f 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / d37590b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (HDFS-14187) Make warning message more clear when there are not enough data nodes for EC write

2019-01-31 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757826#comment-16757826
 ] 

Hudson commented on HDFS-14187:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15863 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15863/])
HDFS-14187. Make warning message more clear when there are not enough (weichiu: 
rev 0ab7fc92009fec2f0ab341f3d878e1b8864b8ea9)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java


> Make warning message more clear when there are not enough data nodes for EC 
> write
> -
>
> Key: HDFS-14187
> URL: https://issues.apache.org/jira/browse/HDFS-14187
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.1.1
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14187.001.patch
>
>
> When setting an erasure coding policy for which there are not enough racks or 
> data nodes, write will fail with the following message:
> {code:java}
> [root@oks-upgrade6727-1 ~]# sudo -u systest hdfs dfs -mkdir 
> /user/systest/testdir
> [root@oks-upgrade6727-1 ~]# sudo -u hdfs hdfs ec -setPolicy -path 
> /user/systest/testdir
> Set default erasure coding policy on /user/systest/testdir
> [root@oks-upgrade6727-1 ~]# sudo -u systest hdfs dfs -put /tmp/file1 
> /user/systest/testdir
> 18/11/12 05:41:26 WARN hdfs.DFSOutputStream: Cannot allocate parity 
> block(index=3, policy=RS-3-2-1024k). Not enough datanodes? Exclude nodes=[]
> 18/11/12 05:41:26 WARN hdfs.DFSOutputStream: Cannot allocate parity 
> block(index=4, policy=RS-3-2-1024k). Not enough datanodes? Exclude nodes=[]
> 18/11/12 05:41:26 WARN hdfs.DFSOutputStream: Block group <1> failed to write 
> 2 blocks. It's at high risk of losing data.
> {code}
> I suggest logging a more descriptive message that points users to the hdfs ec 
> -verifyCluster command to verify the cluster setup against the EC policies.
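
For illustration only, here is a minimal Java sketch of the kind of message the description asks for. The class name, method, and exact wording are hypothetical and are not the change committed for HDFS-14187; the -verifyCluster command name is simply taken from the description above.

{code:java}
// Hypothetical sketch only; not the DFSStripedOutputStream change committed
// for HDFS-14187. It merely shows a warning that points at the verify command.
public class EcWarningSketch {

  static String buildWarning(int blockIndex, String policy,
      long blockGroupId, int failedBlocks) {
    return String.format(
        "Cannot allocate parity block (index=%d, policy=%s). Exclude nodes=[]. "
            + "There may not be enough datanodes or racks for this EC policy; "
            + "run 'hdfs ec -verifyCluster' to verify the cluster setup "
            + "against the enabled EC policies. "
            + "Block group <%d> failed to write %d blocks.",
        blockIndex, policy, blockGroupId, failedBlocks);
  }

  public static void main(String[] args) {
    System.out.println(buildWarning(3, "RS-3-2-1024k", 1L, 2));
  }
}
{code}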



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14187) Make warning message more clear when there are not enough data nodes for EC write

2019-01-31 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757827#comment-16757827
 ] 

Kitti Nanasi commented on HDFS-14187:
-

Thanks for reviewing and committing it [~jojochuang]!

> Make warning message more clear when there are not enough data nodes for EC 
> write
> -
>
> Key: HDFS-14187
> URL: https://issues.apache.org/jira/browse/HDFS-14187
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.1.1
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14187.001.patch
>
>
> When setting an erasure coding policy for which there are not enough racks or 
> data nodes, write will fail with the following message:
> {code:java}
> [root@oks-upgrade6727-1 ~]# sudo -u systest hdfs dfs -mkdir 
> /user/systest/testdir
> [root@oks-upgrade6727-1 ~]# sudo -u hdfs hdfs ec -setPolicy -path 
> /user/systest/testdir
> Set default erasure coding policy on /user/systest/testdir
> [root@oks-upgrade6727-1 ~]# sudo -u systest hdfs dfs -put /tmp/file1 
> /user/systest/testdir
> 18/11/12 05:41:26 WARN hdfs.DFSOutputStream: Cannot allocate parity 
> block(index=3, policy=RS-3-2-1024k). Not enough datanodes? Exclude nodes=[]
> 18/11/12 05:41:26 WARN hdfs.DFSOutputStream: Cannot allocate parity 
> block(index=4, policy=RS-3-2-1024k). Not enough datanodes? Exclude nodes=[]
> 18/11/12 05:41:26 WARN hdfs.DFSOutputStream: Block group <1> failed to write 
> 2 blocks. It's at high risk of losing data.
> {code}
> I suggest logging a more descriptive message that points users to the hdfs ec 
> -verifyCluster command to verify the cluster setup against the EC policies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14249) RBF: Tooling to identify the subcluster location of a file

2019-01-31 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14249:
---
Attachment: HDFS-14249-HDFS-13891.000.patch

> RBF: Tooling to identify the subcluster location of a file
> --
>
> Key: HDFS-14249
> URL: https://issues.apache.org/jira/browse/HDFS-14249
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14249-HDFS-13891.000.patch
>
>
> Mount points can spread files across multiple subclusters depending on a 
> policy (e.g., HASH, HASH_ALL). Administrators would need a way to identify 
> the location.
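
As a toy illustration of why such tooling helps: with a HASH-style policy the destination subcluster is a function of the file name, which an operator cannot easily work out by hand. The modulo-of-hashCode scheme below is an assumption for demonstration and not the Router's actual placement algorithm.

{code:java}
import java.util.List;

// Toy illustration only: the subcluster chosen for a file depends on the
// file name, hence the need for admin tooling to look it up. This hashing
// is NOT the Router's real algorithm.
public class SubclusterLookupSketch {

  static String pickSubcluster(String fileName, List<String> subclusters) {
    int bucket = Math.floorMod(fileName.hashCode(), subclusters.size());
    return subclusters.get(bucket);
  }

  public static void main(String[] args) {
    List<String> subclusters = List.of("ns0", "ns1", "ns2");
    for (String file : List.of("file1", "file2", "file3")) {
      System.out.println("/mnt/data/" + file + " -> "
          + pickSubcluster(file, subclusters));
    }
  }
}
{code}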



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14187) Make warning message more clear when there are not enough data nodes for EC write

2019-01-31 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14187:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Pushed to trunk. Thanks [~knanasi] for the contribution!

> Make warning message more clear when there are not enough data nodes for EC 
> write
> -
>
> Key: HDFS-14187
> URL: https://issues.apache.org/jira/browse/HDFS-14187
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.1.1
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14187.001.patch
>
>
> When setting an erasure coding policy for which there are not enough racks or 
> data nodes, write will fail with the following message:
> {code:java}
> [root@oks-upgrade6727-1 ~]# sudo -u systest hdfs dfs -mkdir 
> /user/systest/testdir
> [root@oks-upgrade6727-1 ~]# sudo -u hdfs hdfs ec -setPolicy -path 
> /user/systest/testdir
> Set default erasure coding policy on /user/systest/testdir
> [root@oks-upgrade6727-1 ~]# sudo -u systest hdfs dfs -put /tmp/file1 
> /user/systest/testdir
> 18/11/12 05:41:26 WARN hdfs.DFSOutputStream: Cannot allocate parity 
> block(index=3, policy=RS-3-2-1024k). Not enough datanodes? Exclude nodes=[]
> 18/11/12 05:41:26 WARN hdfs.DFSOutputStream: Cannot allocate parity 
> block(index=4, policy=RS-3-2-1024k). Not enough datanodes? Exclude nodes=[]
> 18/11/12 05:41:26 WARN hdfs.DFSOutputStream: Block group <1> failed to write 
> 2 blocks. It's at high risk of losing data.
> {code}
> I suggest logging a more descriptive message that points users to the hdfs ec 
> -verifyCluster command to verify the cluster setup against the EC policies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14249) RBF: Tooling to identify the subcluster location of a file

2019-01-31 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14249:
---
Status: Patch Available  (was: Open)

> RBF: Tooling to identify the subcluster location of a file
> --
>
> Key: HDFS-14249
> URL: https://issues.apache.org/jira/browse/HDFS-14249
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14249-HDFS-13891.000.patch
>
>
> Mount points can spread files across multiple subclusters depending on a 
> policy (e.g., HASH, HASH_ALL). Administrators would need a way to identify 
> the location.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14118) Use DNS to resolve Namenodes and Routers

2019-01-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757790#comment-16757790
 ] 

Hadoop QA commented on HDFS-14118:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
35s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m  
3s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 50s{color} | {color:orange} root: The patch generated 1 new + 108 unchanged 
- 0 fixed = 109 total (was 108) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
25s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
14s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}109m 27s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
50s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}251m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14118 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957147/HDFS-14118.011.patch |
| Optional Tests |  dupname  asflicense  compile  

[jira] [Commented] (HDFS-14187) Make warning message more clear when there are not enough data nodes for EC write

2019-01-31 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757780#comment-16757780
 ] 

Wei-Chiu Chuang commented on HDFS-14187:


+1

> Make warning message more clear when there are not enough data nodes for EC 
> write
> -
>
> Key: HDFS-14187
> URL: https://issues.apache.org/jira/browse/HDFS-14187
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.1.1
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-14187.001.patch
>
>
> When setting an erasure coding policy for which there are not enough racks or 
> data nodes, write will fail with the following message:
> {code:java}
> [root@oks-upgrade6727-1 ~]# sudo -u systest hdfs dfs -mkdir 
> /user/systest/testdir
> [root@oks-upgrade6727-1 ~]# sudo -u hdfs hdfs ec -setPolicy -path 
> /user/systest/testdir
> Set default erasure coding policy on /user/systest/testdir
> [root@oks-upgrade6727-1 ~]# sudo -u systest hdfs dfs -put /tmp/file1 
> /user/systest/testdir
> 18/11/12 05:41:26 WARN hdfs.DFSOutputStream: Cannot allocate parity 
> block(index=3, policy=RS-3-2-1024k). Not enough datanodes? Exclude nodes=[]
> 18/11/12 05:41:26 WARN hdfs.DFSOutputStream: Cannot allocate parity 
> block(index=4, policy=RS-3-2-1024k). Not enough datanodes? Exclude nodes=[]
> 18/11/12 05:41:26 WARN hdfs.DFSOutputStream: Block group <1> failed to write 
> 2 blocks. It's at high risk of losing data.
> {code}
> I suggest logging a more descriptive message that points users to the hdfs ec 
> -verifyCluster command to verify the cluster setup against the EC policies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1039) OzoneManager fails to connect with secure SCM

2019-01-31 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1039:
-
Labels: Security  (was: )

> OzoneManager fails to connect with secure SCM
> -
>
> Key: HDDS-1039
> URL: https://issues.apache.org/jira/browse/HDDS-1039
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1039.00.patch
>
>
> In a secure Ozone cluster, OzoneManager fails to connect to SCM on 
> {{SCMBlockLocationProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1038) Datanode fails to connect with secure SCM

2019-01-31 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1038:
-
Labels: Security  (was: )

> Datanode fails to connect with secure SCM
> -
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: Security
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch
>
>
> In a secure Ozone cluster, Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy

2019-01-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757771#comment-16757771
 ] 

Hadoop QA commented on HDFS-13209:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 13 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 14s{color} | {color:orange} hadoop-hdfs-project: The patch generated 8 new + 
1030 unchanged - 1 fixed = 1038 total (was 1031) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
10s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}110m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m  9s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}215m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestProcessCorruptBlocks |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.fs.contract.router.web.TestRouterWebHDFSContractSeek |
|   | hadoop.fs.contract.router.TestRouterHDFSContractRename |
|   | hadoop.fs.contract.router.TestRouterHDFSContractConcat |
|   | hadoop.fs.contract.router.TestRouterHDFSContractSeek |
|   | hadoop.fs.contract.router.TestRouterHDFSContractRootDirectory |
|   | hadoop.fs.contract.router.web.TestRouterWebHDFSContractOpen |
|   

[jira] [Commented] (HDFS-14250) [Standby Reads] msync should sync with active NameNode to fetch the latest stateID

2019-01-31 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757742#comment-16757742
 ] 

Chao Sun commented on HDFS-14250:
-

Yes, I think we can just use {{activeOnly}} for this type of purpose. 

> [Standby Reads] msync should sync with active NameNode to fetch the latest 
> stateID
> --
>
> Key: HDFS-14250
> URL: https://issues.apache.org/jira/browse/HDFS-14250
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
>
> Currently the {{msync}} call is a dummy operation to observer without really 
> syncing. Instead, it should:
>  # Get the latest stateID from active NN.
>  # Use the stateID to talk to observer NN and make sure it is synced.
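
For illustration, a hedged sketch of the two steps above: fetch the latest state ID from the active NameNode, then wait for the observer to catch up, with {{msync}} marked so it is always routed to the active. The annotation and interfaces are hypothetical stand-ins, not Hadoop's ClientProtocol or proxy provider code.

{code:java}
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical sketch of the idea above; the annotation and interfaces are
// illustrative stand-ins, not Hadoop's real ClientProtocol or proxy provider.
public class MsyncSketch {

  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.METHOD)
  @interface ActiveOnly { }          // marker: always route this call to the active NN

  interface NameNodeClient {
    long getLastSeenStateId();       // latest state ID known to this NN
    void waitUntilCaughtUp(long id); // block until this NN has applied edits up to id
  }

  private final NameNodeClient active;
  private final NameNodeClient observer;

  MsyncSketch(NameNodeClient active, NameNodeClient observer) {
    this.active = active;
    this.observer = observer;
  }

  @ActiveOnly
  void msync() {
    long stateId = active.getLastSeenStateId(); // 1. get the latest state ID from the active NN
    observer.waitUntilCaughtUp(stateId);        // 2. make sure the observer is synced to it
  }

  public static void main(String[] args) {
    NameNodeClient active = new NameNodeClient() {
      public long getLastSeenStateId() { return 42L; }
      public void waitUntilCaughtUp(long id) { }
    };
    NameNodeClient observer = new NameNodeClient() {
      public long getLastSeenStateId() { return 41L; }
      public void waitUntilCaughtUp(long id) {
        System.out.println("observer caught up to state id " + id);
      }
    };
    new MsyncSketch(active, observer).msync();
  }
}
{code}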



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1039) OzoneManager fails to connect with secure SCM

2019-01-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757743#comment-16757743
 ] 

Hadoop QA commented on HDDS-1039:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 36m  
5s{color} | {color:green} hadoop-ozone in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
33s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1039 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957170/HDDS-1039.00.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux a286ea43747d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / f738b39 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2160/testReport/ |
| Max. process+thread count | 1134 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common U: hadoop-hdds/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2160/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> OzoneManager fails to connect with secure SCM
> -
>
> Key: HDDS-1039
> URL: https://issues.apache.org/jira/browse/HDDS-1039
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1039.00.patch
>
>
> In a secure Ozone cluster, OzoneManager fails to connect to SCM on 
> {{SCMBlockLocationProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, 

[jira] [Commented] (HDDS-1029) Allow option for force in DeleteContainerCommand

2019-01-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757733#comment-16757733
 ] 

Hadoop QA commented on HDDS-1029:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 32m 46s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
40s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestDeleteContainerHandler
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1029 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957150/HDDS-1029.04.patch |
| Optional Tests |  asflicense  unit  javac  javadoc  findbugs  checkstyle  |
| uname | Linux 00e7539ac2be 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / f738b39 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2159/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2159/testReport/ |
| Max. process+thread count | 1094 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-hdds/server-scm hadoop-ozone/integration-test U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2159/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Allow option for force in DeleteContainerCommand
> 
>
> Key: HDDS-1029
> URL: https://issues.apache.org/jira/browse/HDDS-1029
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1029.00.patch, HDDS-1029.01.patch, 
> 

[jira] [Commented] (HDDS-1038) Datanode fails to connect with secure SCM

2019-01-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757736#comment-16757736
 ] 

Hadoop QA commented on HDDS-1038:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 7s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 25m 42s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
24s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
|   | hadoop.ozone.client.rpc.TestBCSID |
|   | hadoop.ozone.client.rpc.TestContainerStateMachine |
|   | hadoop.ozone.container.TestContainerReplication |
|   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
|   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
|   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1038 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957169/HDDS-1038.00.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux db30dcaedb92 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build@2/ozone.sh |
| git revision | trunk / f738b39 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2161/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2161/testReport/ |
| Max. process+thread count | 1134 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2161/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Datanode fails to connect with secure SCM
> -
>
> Key: HDDS-1038

[jira] [Commented] (HDDS-1025) Handle replication of closed containers in DeadNodeHanlder

2019-01-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757723#comment-16757723
 ] 

Hadoop QA commented on HDDS-1025:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 37m 
35s{color} | {color:green} hadoop-ozone in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
30s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1025 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957166/HDDS-1025.02.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 35e32bf88e49 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / f738b39 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2158/testReport/ |
| Max. process+thread count | 1129 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2158/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Handle replication of closed containers in DeadNodeHanlder
> --
>
> Key: HDDS-1025
> URL: https://issues.apache.org/jira/browse/HDDS-1025
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1025.00.patch, HDDS-1025.01.patch, 
> HDDS-1025.02.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira is to address one of the TODOs mentioned in the DeadNodeHandler:
> // TODO: Check replica count and call replication manager.
>  
> Right now, when a node is dead, replication for the closed containers is not 
> triggered.

[jira] [Commented] (HDFS-14250) [Standby Reads] msync should sync with active NameNode to fetch the latest stateID

2019-01-31 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757722#comment-16757722
 ] 

Erik Krogen commented on HDFS-14250:


I don't think we want to make it write anything per se, but IIRC we have an 
annotation that we can add which makes it always go to active? I think that 
would be sufficient?

> [Standby Reads] msync should sync with active NameNode to fetch the latest 
> stateID
> --
>
> Key: HDFS-14250
> URL: https://issues.apache.org/jira/browse/HDFS-14250
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
>
> Currently the {{msync}} call is a dummy operation to observer without really 
> syncing. Instead, it should:
>  # Get the latest stateID from active NN.
>  # Use the stateID to talk to observer NN and make sure it is synced.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14250) [Standby Reads] msync should sync with active NameNode to fetch the latest stateID

2019-01-31 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757712#comment-16757712
 ] 

Chao Sun commented on HDFS-14250:
-

[~xkrogen] you are right. Maybe {{msync}} can simply be a write call that just 
fetches the stateID from the active NN?

> [Standby Reads] msync should sync with active NameNode to fetch the latest 
> stateID
> --
>
> Key: HDFS-14250
> URL: https://issues.apache.org/jira/browse/HDFS-14250
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
>
> Currently the {{msync}} call is a dummy operation to observer without really 
> syncing. Instead, it should:
>  # Get the latest stateID from active NN.
>  # Use the stateID to talk to observer NN and make sure it is synced.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1038) Datanode fails to connect with secure SCM

2019-01-31 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757711#comment-16757711
 ] 

Ajay Kumar commented on HDDS-1038:
--

cc: [~xyao], [~anu]. Attaching an initial patch. I will see if a unit test is 
feasible; otherwise our robot test should cover this.

> Datanode fails to connect with secure SCM
> -
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1038.00.patch
>
>
> In a secure Ozone cluster, Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1039) OzoneManager fails to connect with secure SCM

2019-01-31 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1039:
-
Issue Type: Sub-task  (was: Bug)
Parent: HDDS-4

> OzoneManager fails to connect with secure SCM
> -
>
> Key: HDDS-1039
> URL: https://issues.apache.org/jira/browse/HDDS-1039
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1039.00.patch
>
>
> In a secure Ozone cluster, OzoneManager fails to connect to SCM on 
> {{SCMBlockLocationProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1029) Allow option for force in DeleteContainerCommand

2019-01-31 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757709#comment-16757709
 ] 

Bharat Viswanadham commented on HDDS-1029:
--

Not sure why the test failed on the trunk, but it is passing locally.

> Allow option for force in DeleteContainerCommand
> 
>
> Key: HDDS-1029
> URL: https://issues.apache.org/jira/browse/HDDS-1029
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1029.00.patch, HDDS-1029.01.patch, 
> HDDS-1029.02.patch, HDDS-1029.03.patch, HDDS-1029.04.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Right now, we check that the container state is not open, and only then delete 
> the container.
> We need a way to delete containers which are open, so adding a force flag 
> will allow deleting a container without any state checks. (This is required 
> for deleting replicas when SCM detects over-replication, and the container to 
> delete can be in the open state.)
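
A minimal sketch of the force-flag semantics described above, using hypothetical names rather than the actual DeleteContainerCommand code: without force, deleting an open container is rejected; with force, the state check is skipped.

{code:java}
// Illustrative sketch only; the enum and method are hypothetical stand-ins
// for the real DeleteContainerCommand handling.
public class ForceDeleteSketch {

  enum ContainerState { OPEN, CLOSING, CLOSED }

  static void deleteContainer(ContainerState state, boolean force) {
    if (!force && state == ContainerState.OPEN) {
      throw new IllegalStateException(
          "Container is OPEN; refusing to delete without force=true");
    }
    // with force=true the state check is skipped, so an over-replicated
    // open replica can still be removed
    System.out.println("Deleting container in state " + state
        + " (force=" + force + ")");
  }

  public static void main(String[] args) {
    deleteContainer(ContainerState.CLOSED, false); // normal path
    deleteContainer(ContainerState.OPEN, true);    // forced delete
  }
}
{code}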



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1025) Handle replication of closed containers in DeadNodeHanlder

2019-01-31 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757684#comment-16757684
 ] 

Bharat Viswanadham edited comment on HDDS-1025 at 1/31/19 8:48 PM:
---

Thank you [~linyiqun] for review.

Addressed the review comments in patch v02

I did not add a LOG for the container-open case, as we plan to file a Jira and fix 
the TODO; instead, the test for open containers verifies that no replication 
request for them shows up in the log.

 


was (Author: bharatviswa):
Thank you [~linyiqun] for review.

Addressed the review comments in patch v02

Not added LOG for container open, as we plan to open the Jira and fix the TODO, 
but checked for open containers we don't have the request for open container in 
the log.

 

> Handle replication of closed containers in DeadNodeHanlder
> --
>
> Key: HDDS-1025
> URL: https://issues.apache.org/jira/browse/HDDS-1025
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1025.00.patch, HDDS-1025.01.patch, 
> HDDS-1025.02.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira is to address one of the TODOs mentioned in the DeadNodeHandler:
> // TODO: Check replica count and call replication manager.
>  
> Right now, when a node is dead, replication for the closed containers is not 
> triggered.
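
A minimal sketch of the TODO described above, assuming hypothetical DeadNodeHandler and ReplicationManager style names: on a dead-node event, compare each closed container's live replica count against the replication factor and request re-replication for any shortfall.

{code:java}
import java.util.List;
import java.util.Map;

// Hypothetical sketch only; names are illustrative, not the actual
// DeadNodeHandler or ReplicationManager code.
public class DeadNodeReplicationSketch {

  interface ReplicationManager {
    void requestReplication(long containerId, int missingReplicas);
  }

  static void onNodeDead(Map<Long, List<String>> closedContainerReplicas,
      int replicationFactor, ReplicationManager replicationManager) {
    // for each CLOSED container that had a replica on the dead node,
    // ask for re-replication if it is now under-replicated
    closedContainerReplicas.forEach((containerId, liveReplicas) -> {
      int missing = replicationFactor - liveReplicas.size();
      if (missing > 0) {
        replicationManager.requestReplication(containerId, missing);
      }
    });
  }

  public static void main(String[] args) {
    onNodeDead(
        Map.of(1L, List.of("dn1", "dn2"), 2L, List.of("dn1", "dn2", "dn3")),
        3,
        (id, missing) -> System.out.println(
            "container " + id + " needs " + missing + " more replica(s)"));
  }
}
{code}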



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1038) Datanode fails to connect with secure SCM

2019-01-31 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1038:
-
Attachment: HDDS-1038.00.patch

> Datanode fails to connect with secure SCM
> -
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1038.00.patch
>
>
> In a secure Ozone cluster, Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1025) Handle replication of closed containers in DeadNodeHanlder

2019-01-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757714#comment-16757714
 ] 

Hadoop QA commented on HDDS-1025:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 30m 52s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
33s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1025 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957165/HDDS-1025.01.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 49ae10342f9e 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / f738b39 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2157/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2157/testReport/ |
| Max. process+thread count | 1136 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2157/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Handle replication of closed containers in DeadNodeHanlder
> --
>
> Key: HDDS-1025
> URL: https://issues.apache.org/jira/browse/HDDS-1025
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1025.00.patch, HDDS-1025.01.patch, 
> HDDS-1025.02.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira is to 

[jira] [Updated] (HDDS-1038) Datanode fails to connect with secure SCM

2019-01-31 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1038:
-
Fix Version/s: 0.4.0

> Datanode fails to connect with secure SCM
> -
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1038.00.patch
>
>
> In a secure Ozone cluster, Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1039) OzoneManager fails to connect with secure SCM

2019-01-31 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1039:
-
Fix Version/s: 0.4.0

> OzoneManager fails to connect with secure SCM
> -
>
> Key: HDDS-1039
> URL: https://issues.apache.org/jira/browse/HDDS-1039
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1039.00.patch
>
>
> In a secure Ozone cluster, OzoneManager fails to connect to SCM on 
> {{SCMBlockLocationProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1039) OzoneManager fails to connect with secure SCM

2019-01-31 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1039:
-
Status: Patch Available  (was: Open)

> OzoneManager fails to connect with secure SCM
> -
>
> Key: HDDS-1039
> URL: https://issues.apache.org/jira/browse/HDDS-1039
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1039.00.patch
>
>
> In a secure Ozone cluster, OzoneManager fails to connect to SCM on 
> {{SCMBlockLocationProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1039) OzoneManager fails to connect with secure SCM

2019-01-31 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757703#comment-16757703
 ] 

Ajay Kumar commented on HDDS-1039:
--

OM startup logs

{code}org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
 Unknown protocol: org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol
at 
org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1198)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1063)
19/01/31 20:17:04 DEBUG ipc.Client: IPC Client (1231006815) connection to 
ctr-e139-1542663976389-57536-01-02.hwx.site/172.27.11.75:9863 from 
om/ctr-e139-1542663976389-57536-01-03.hwx.s...@example.com: closed
19/01/31 20:17:04 DEBUG ipc.Client: IPC Client (1231006815) connection to 
ctr-e139-1542663976389-57536-01-02.hwx.site/172.27.11.75:9863 from 
om/ctr-e139-1542663976389-57536-01-03.hwx.s...@example.com: stopped, 
remaining connections 0
19/01/31 20:17:04 ERROR om.OzoneManager: Failed to start the OzoneManager.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
 Unknown protocol: org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
at org.apache.hadoop.ipc.Client.call(Client.java:1443)
at org.apache.hadoop.ipc.Client.call(Client.java:1353)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy14.getScmInfo(Unknown Source)
at 
org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolClientSideTranslatorPB.getScmInfo(ScmBlockLocationProtocolClientSideTranslatorPB.java:154)
at org.apache.hadoop.ozone.om.OzoneManager.(OzoneManager.java:242)
at 
org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:679)
at 
org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:592){code}

> OzoneManager fails to connect with secure SCM
> -
>
> Key: HDDS-1039
> URL: https://issues.apache.org/jira/browse/HDDS-1039
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> In a secure Ozone cluster, OzoneManager fails to connect to SCM on 
> {{SCMBlockLocationProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1038) Datanode fails to connect with secure SCM

2019-01-31 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1038:
-
Issue Type: Sub-task  (was: Bug)
Parent: HDDS-4

> Datanode fails to connect with secure SCM
> -
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1038.00.patch
>
>
> In a secure Ozone cluster, Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-987) MultipartUpload: S3API for list parts of an object

2019-01-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757708#comment-16757708
 ] 

Hadoop QA commented on HDDS-987:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 18 line(s) with tabs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 30m 16s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
31s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
|   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-987 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957162/HDDS-987.02.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  checkstyle  |
| uname | Linux 57f524fd4141 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build@2/ozone.sh |
| git revision | trunk / f738b39 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2156/artifact/out/whitespace-tabs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2156/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2156/testReport/ |
| Max. process+thread count | 1139 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/client hadoop-ozone/dist hadoop-ozone/s3gateway U: 
hadoop-ozone |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2156/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> MultipartUpload: S3API for list parts of an object
> -
>
> Key: HDDS-987
> URL: https://issues.apache.org/jira/browse/HDDS-987
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: 

[jira] [Commented] (HDFS-14250) [Standby Reads] msync should sync with active NameNode to fetch the latest stateID

2019-01-31 Thread Erik Krogen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757704#comment-16757704
 ] 

Erik Krogen commented on HDFS-14250:


I think #2 is not necessary -- this will be handled automatically by any 
subsequent call that goes to the observer, right?

> [Standby Reads] msync should sync with active NameNode to fetch the latest 
> stateID
> --
>
> Key: HDFS-14250
> URL: https://issues.apache.org/jira/browse/HDFS-14250
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
>
> Currently the {{msync}} call is a dummy operation to the observer without really 
> syncing. Instead, it should:
>  # Get the latest stateID from active NN.
>  # Use the stateID to talk to observer NN and make sure it is synced.
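
To make the two steps above concrete, here is a minimal sketch of the flow. It only illustrates the description; it is not the actual ClientProtocol/ObserverReadProxyProvider code, and the NameNodeProxy interface and its methods below are made-up stand-ins.

{code}
import java.io.IOException;

/**
 * Illustrative sketch of the msync flow described above. All types here are
 * simplified stand-ins, not the real HDFS client classes.
 */
class MsyncSketch {
  /** Stand-in for an RPC proxy to a NameNode. */
  interface NameNodeProxy {
    /** Assumed to return the latest state id known to the active NN. */
    long msync() throws IOException;
    /** Assumed to block until the observer has applied edits up to stateId. */
    void waitUntilCaughtUp(long stateId) throws IOException;
  }

  /** Step 1: get the latest stateID from the active NN. */
  static long fetchLatestStateId(NameNodeProxy active) throws IOException {
    return active.msync();
  }

  /** Step 2: make sure the observer NN has caught up to that stateID. */
  static void syncObserver(NameNodeProxy observer, long stateId) throws IOException {
    observer.waitUntilCaughtUp(stateId);
  }

  static void msync(NameNodeProxy active, NameNodeProxy observer) throws IOException {
    syncObserver(observer, fetchLatestStateId(active));
  }
}
{code}

In the actual client the state id is carried in the RPC headers rather than returned explicitly, which is presumably why step 2 can also be satisfied implicitly by the next read that reaches the observer, as the comment above notes.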



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1038) Datanode fails to connect with secure SCM

2019-01-31 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1038:
-
Status: Patch Available  (was: Open)

> Datanode fails to connect with secure SCM
> -
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1038.00.patch
>
>
> In a secure Ozone cluster, Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1039) OzoneManager fails to connect with secure SCM

2019-01-31 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1039:
-
Attachment: HDDS-1039.00.patch

> OzoneManager fails to connect with secure SCM
> -
>
> Key: HDDS-1039
> URL: https://issues.apache.org/jira/browse/HDDS-1039
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-1039.00.patch
>
>
> In a secure Ozone cluster, OzoneManager fails to connect to SCM on 
> {{SCMBlockLocationProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14247) Repeat adding node description into network topology

2019-01-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757707#comment-16757707
 ] 

Hadoop QA commented on HDFS-14247:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 29s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 22s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14247 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957002/HDFS-14247.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a151e0ba 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bcc3a79 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26103/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26103/testReport/ |
| Max. process+thread count | 4233 (vs. ulimit of 1) |

[jira] [Updated] (HDDS-1039) OzoneManager fails to connect with secure SCM

2019-01-31 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1039:
-
Description: 
In a secure Ozone cluster, OzoneManager fails to connect to SCM on 
{{SCMBlockLocationProtocol}}. 


  was:In a secure Ozone cluster. Datanodes fail to connect to SCM on 
{{StorageContainerDatanodeProtocol}}. 


> OzoneManager fails to connect with secure SCM
> -
>
> Key: HDDS-1039
> URL: https://issues.apache.org/jira/browse/HDDS-1039
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> In a secure Ozone cluster, OzoneManager fails to connect to SCM on 
> {{SCMBlockLocationProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1039) OzoneManager fails to connect with secure SCM

2019-01-31 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757695#comment-16757695
 ] 

Ajay Kumar commented on HDDS-1039:
--

{code}2019-01-31 20:17:04,246 DEBUG ipc.Server (Server.java:saslProcess(1841)) 
- SASL server context established. Negotiated QoP is auth
2019-01-31 20:17:04,247 DEBUG ipc.Server (Server.java:saslProcess(1846)) - SASL 
server successfully authenticated client: 
om/ctr-e139-1542663976389-57536-01-03.hwx.s...@example.com (auth:KERBEROS)
2019-01-31 20:17:04,247 DEBUG ipc.Server (Server.java:processResponse(1468)) - 
Socket Reader #1 for port 9863: responding to Call#-33 Retry#-1 null from 
172.27.27.138:39305
2019-01-31 20:17:04,247 DEBUG ipc.Server (Server.java:processResponse(1487)) - 
Socket Reader #1 for port 9863: responding to Call#-33 Retry#-1 null from 
172.27.27.138:39305 Wrote 22 bytes.
2019-01-31 20:17:04,255 DEBUG ipc.Server (Server.java:processOneRpc(2355)) -  
got #-3
2019-01-31 20:17:04,256 INFO  ipc.Server 
(Server.java:authorizeConnection(2562)) - Connection from 172.27.27.138:39305 
for protocol org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol is 
unauthorized for user 
om/ctr-e139-1542663976389-57536-01-03.hwx.s...@example.com (auth:KERBEROS)
2019-01-31 20:17:04,256 DEBUG ipc.Server (Server.java:processOneRpc(2372)) - 
Socket Reader #1 for port 9863: processOneRpc from client 172.27.27.138:39305 
threw exception [org.apache.hadoop.security.authorize.AuthorizationException: 
Unknown protocol: org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol]
2019-01-31 20:17:04,256 DEBUG ipc.Server (Server.java:processResponse(1468)) - 
Socket Reader #1 for port 9863: responding to Call#-3 Retry#-1 null from 
172.27.27.138:39305
2019-01-31 20:17:04,256 DEBUG ipc.Server (Server.java:processResponse(1487)) - 
Socket Reader #1 for port 9863: responding to Call#-3 Retry#-1 null from 
172.27.27.138:39305 Wrote 160 bytes.
2019-01-31 20:17:04,256 DEBUG ipc.Server (Server.java:close(3438)) - Socket 
Reader #1 for port 9863: disconnecting client 172.27.27.138:39305. Number of 
active connections: 0
2019-01-31 20:17:04,821 DEBUG ipc.Server (Server.java:processOneRpc(2355)) -  
got #35
2019-01-31 20:17:04,821 DEBUG ipc.Server (Server.java:run(2670)) - IPC Server 
handler 3 on 9861: Call#35 Retry#0 
org.apache.hadoop.ozone.protocol.StorageContainerDatanodeProtocol.sendHeartbeat 
from 172.27.27.138:41805 for RpcKind RPC_PROTOCOL_BUFFER{code}
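
For context on the "Unknown protocol" line: in Hadoop IPC the client announces the protocol *name* (taken from the {{@ProtocolInfo}} annotation on the interface, or the interface's class name if the annotation is absent), and with service-level authorization enabled the server typically has to resolve that exact name to a class it knows before it can authorize the connection; if it cannot, the connection is rejected with "Unknown protocol: <name>". Whatever the exact fix in this JIRA ends up being, the sketch below only shows where that name comes from; FooProtocolPB is a placeholder, not the actual HDDS interface.

{code}
import org.apache.hadoop.ipc.ProtocolInfo;

/**
 * Sketch only: FooProtocolPB is a placeholder. The string given to
 * protocolName is what the client sends on the wire and what the server must
 * be able to resolve and authorize; a mismatch between this name and what the
 * server knows shows up as "Unknown protocol: <name>".
 */
@ProtocolInfo(
    protocolName = "org.apache.hadoop.ozone.protocol.ScmBlockLocationProtocol")
interface FooProtocolPB {
  // service methods omitted
}
{code}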

> OzoneManager fails to connect with secure SCM
> -
>
> Key: HDDS-1039
> URL: https://issues.apache.org/jira/browse/HDDS-1039
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> In a secure Ozone cluster, OzoneManager fails to connect to SCM on 
> {{SCMBlockLocationProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1039) OzoneManager fails to connect with secure SCM

2019-01-31 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-1039:


 Summary: OzoneManager fails to connect with secure SCM
 Key: HDDS-1039
 URL: https://issues.apache.org/jira/browse/HDDS-1039
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Ajay Kumar
Assignee: Ajay Kumar


In a secure Ozone cluster, Datanodes fail to connect to SCM on 
{{StorageContainerDatanodeProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1038) Datanode fails to connect with secure SCM

2019-01-31 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar reassigned HDDS-1038:


Assignee: Ajay Kumar

> Datanode fails to connect with secure SCM
> -
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> In a secure Ozone cluster, Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1038) Datanode fails to connect with secure SCM

2019-01-31 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757686#comment-16757686
 ] 

Ajay Kumar commented on HDDS-1038:
--

{code}2019-01-31 00:08:32,497 ERROR statemachine.EndpointStateMachine 
(EndpointStateMachine.java:logIfNeeded(207)) - Unable to communicate to SCM 
server at ctr-e139-1542663976389-57536-01-02.hwx.site:9861 for past 43200 
seconds.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
 Protocol interface 
org.apache.hadoop.ozone.protocol.StorageContainerDatanodeProtocol is not known.
   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
   at org.apache.hadoop.ipc.Client.call(Client.java:1443)
   at org.apache.hadoop.ipc.Client.call(Client.java:1353)
   at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
   at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
   at com.sun.proxy.$Proxy83.getVersion(Unknown Source)
   at 
org.apache.hadoop.ozone.protocolPB.StorageContainerDatanodeProtocolClientSideTranslatorPB.getVersion(StorageContainerDatanodeProtocolClientSideTranslatorPB.java:112)
   at 
org.apache.hadoop.ozone.container.common.states.endpoint.VersionEndpointTask.call(VersionEndpointTask.java:70)
   at 
org.apache.hadoop.ozone.container.common.states.endpoint.VersionEndpointTask.call(VersionEndpointTask.java:42)
   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   at java.lang.Thread.run(Thread.java:745){code}
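
"Protocol interface ... is not known." is the message Hadoop's ServiceAuthorizationManager produces when hadoop.security.authorization is enabled and the protocol has no entry in the PolicyProvider handed to the RPC server. If that is the cause here, the shape of a fix would look roughly like the sketch below; FooProtocol and the ACL key name are assumptions for illustration only, not the actual HDDS classes or keys.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.Server;
import org.apache.hadoop.security.authorize.PolicyProvider;
import org.apache.hadoop.security.authorize.Service;

/**
 * Sketch only: FooProtocol and the ACL key below are made-up placeholders.
 * With hadoop.security.authorization=true, every protocol an RPC server
 * serves must appear in the PolicyProvider passed to refreshServiceAcl();
 * otherwise ServiceAuthorizationManager rejects the connection with
 * "Protocol interface ... is not known."
 */
public class ServiceAclSketch {
  /** Placeholder for e.g. StorageContainerDatanodeProtocol. */
  interface FooProtocol { }

  static final PolicyProvider POLICY = new PolicyProvider() {
    @Override
    public Service[] getServices() {
      return new Service[] {
          new Service("security.foo.protocol.acl", FooProtocol.class)
      };
    }
  };

  static void applyAcls(Server rpcServer, Configuration conf) {
    // Re-reads the ACLs (e.g. from hadoop-policy.xml) for the listed protocols.
    rpcServer.refreshServiceAcl(conf, POLICY);
  }
}
{code}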

> Datanode fails to connect with secure SCM
> -
>
> Key: HDDS-1038
> URL: https://issues.apache.org/jira/browse/HDDS-1038
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>
> In a secure Ozone cluster, Datanodes fail to connect to SCM on 
> {{StorageContainerDatanodeProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1025) Handle replication of closed containers in DeadNodeHandler

2019-01-31 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1025:
-
Attachment: HDDS-1025.02.patch

> Handle replication of closed containers in DeadNodeHandler
> --
>
> Key: HDDS-1025
> URL: https://issues.apache.org/jira/browse/HDDS-1025
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1025.00.patch, HDDS-1025.01.patch, 
> HDDS-1025.02.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira is to do one of the TODOs mentioned in the DeadNodeHandler:
> // TODO: Check replica count and call replication manager.
>  
> Right now, when a node is dead, replication for the closed containers is 
> not triggered.
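
As a rough illustration of that TODO, the intended handling is: on a dead-node event, look at each closed container that had a replica on the dead node, compare the remaining healthy replica count with the replication factor, and ask the replication manager to make up the difference. Every type and method name in the sketch below is a placeholder, not the actual SCM DeadNodeHandler/ReplicationManager API.

{code}
import java.util.Set;

/**
 * Illustrative sketch of the TODO above; all interfaces here are placeholders,
 * not the real HDDS classes.
 */
class DeadNodeReplicationSketch {
  interface ContainerInfo {
    boolean isClosed();
    int replicationFactor();
  }

  interface ContainerStore {
    Set<ContainerInfo> containersOn(String deadNodeId);
    int healthyReplicaCount(ContainerInfo container);
  }

  interface ReplicationManager {
    void requestReplication(ContainerInfo container, int missingReplicas);
  }

  static void onDeadNode(String deadNodeId, ContainerStore store,
                         ReplicationManager replicationManager) {
    for (ContainerInfo container : store.containersOn(deadNodeId)) {
      if (!container.isClosed()) {
        continue;  // open containers are out of scope here, per the comment above
      }
      int missing = container.replicationFactor()
          - store.healthyReplicaCount(container);
      if (missing > 0) {
        // "Check replica count and call replication manager."
        replicationManager.requestReplication(container, missing);
      }
    }
  }
}
{code}

The real change presumably hooks into SCM's dead-node event handling; the sketch only shows the replica-count comparison the TODO asks for.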



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1029) Allow option for force in DeleteContainerCommand

2019-01-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757677#comment-16757677
 ] 

Hadoop QA commented on HDDS-1029:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 30m 39s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
12s{color} | {color:green} hadoop-hdds in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestDeleteContainerHandler
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDDS-1029 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957150/HDDS-1029.04.patch |
| Optional Tests |  asflicense  unit  javac  javadoc  findbugs  checkstyle  |
| uname | Linux 3c03810c36e6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HDDS-Build/ozone.sh |
| git revision | trunk / bcc3a79 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| unit | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2155/artifact/out/patch-unit-hadoop-ozone.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2155/testReport/ |
| Max. process+thread count | 1145 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-hdds/server-scm hadoop-ozone/integration-test U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/2155/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Allow option for force in DeleteContainerCommand
> 
>
> Key: HDDS-1029
> URL: https://issues.apache.org/jira/browse/HDDS-1029
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1029.00.patch, HDDS-1029.01.patch, 
> HDDS-1029.02.patch, 

[jira] [Comment Edited] (HDDS-1038) Datanode fails to connect with secure SCM

2019-01-31 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757686#comment-16757686
 ] 

Ajay Kumar edited comment on HDDS-1038 at 1/31/19 8:06 PM:
---

Logs on datanode side:
{code}2019-01-31 00:08:32,497 ERROR statemachine.EndpointStateMachine 
(EndpointStateMachine.java:logIfNeeded(207)) - Unable to communicate to SCM 
server at ctr-e139-1542663976389-57536-01-02.hwx.site:9861 for past 43200 
seconds.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
 Protocol interface 
org.apache.hadoop.ozone.protocol.StorageContainerDatanodeProtocol is not known.
   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
   at org.apache.hadoop.ipc.Client.call(Client.java:1443)
   at org.apache.hadoop.ipc.Client.call(Client.java:1353)
   at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
   at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
   at com.sun.proxy.$Proxy83.getVersion(Unknown Source)
   at 
org.apache.hadoop.ozone.protocolPB.StorageContainerDatanodeProtocolClientSideTranslatorPB.getVersion(StorageContainerDatanodeProtocolClientSideTranslatorPB.java:112)
   at 
org.apache.hadoop.ozone.container.common.states.endpoint.VersionEndpointTask.call(VersionEndpointTask.java:70)
   at 
org.apache.hadoop.ozone.container.common.states.endpoint.VersionEndpointTask.call(VersionEndpointTask.java:42)
   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   at java.lang.Thread.run(Thread.java:745){code}

On SCM
{code}2019-01-31 00:35:00,591 DEBUG ipc.Server 
(Server.java:processSaslToken(1952)) - Have read input token of size 32 for 
processing by saslServer.evaluateResponse()
2019-01-31 00:35:00,591 DEBUG ipc.Server (Server.java:buildSaslResponse(1969)) 
- Will send SUCCESS token of size null from saslServer.
2019-01-31 00:35:00,592 DEBUG ipc.Server (Server.java:saslProcess(1841)) - SASL 
server context established. Negotiated QoP is auth
2019-01-31 00:35:00,592 DEBUG ipc.Server (Server.java:saslProcess(1846)) - SASL 
server successfully authenticated client: 
dn/ctr-e139-1542663976389-57536-01-03.hwx.s...@example.com (auth:KERBEROS)
2019-01-31 00:35:00,592 DEBUG ipc.Server (Server.java:processResponse(1468)) - 
Socket Reader #1 for port 9861: responding to Call#-33 Retry#-1 null from 
172.27.27.138:32785
2019-01-31 00:35:00,592 DEBUG ipc.Server (Server.java:processResponse(1487)) - 
Socket Reader #1 for port 9861: responding to Call#-33 Retry#-1 null from 
172.27.27.138:32785 Wrote 22 bytes.
2019-01-31 00:35:00,594 DEBUG ipc.Server (Server.java:processOneRpc(2355)) -  
got #-3
2019-01-31 00:35:00,594 INFO  ipc.Server 
(Server.java:authorizeConnection(2562)) - Connection from 172.27.27.138:32785 
for protocol org.apache.hadoop.ozone.protocol.StorageContainerDatanodeProtocol 
is unauthorized for user 
dn/ctr-e139-1542663976389-57536-01-03.hwx.s...@example.com (auth:KERBEROS)
2019-01-31 00:35:00,594 DEBUG ipc.Server (Server.java:processOneRpc(2372)) - 
Socket Reader #1 for port 9861: processOneRpc from client 172.27.27.138:32785 
threw exception [org.apache.hadoop.security.authorize.AuthorizationException: 
Protocol interface 
org.apache.hadoop.ozone.protocol.StorageContainerDatanodeProtocol is not known.]
2019-01-31 00:35:00,594 DEBUG ipc.Server (Server.java:processResponse(1468)) - 
Socket Reader #1 for port 9861: responding to Call#-3 Retry#-1 null from 
172.27.27.138:32785
2019-01-31 00:35:00,594 DEBUG ipc.Server (Server.java:processResponse(1487)) - 
Socket Reader #1 for port 9861: responding to Call#-3 Retry#-1 null from 
172.27.27.138:32785 Wrote 183 bytes.
2019-01-31 00:35:00,594 DEBUG ipc.Server (Server.java:close(3438)) - Socket 
Reader #1 for port 9861: disconnecting client 172.27.27.138:32785. Number of 
active connections: {code}


was (Author: ajayydv):
Logs on datanode side:
{code}2019-01-31 00:08:32,497 ERROR statemachine.EndpointStateMachine 
(EndpointStateMachine.java:logIfNeeded(207)) - Unable to communicate to SCM 
server at ctr-e139-1542663976389-57536-01-02.hwx.site:9861 for past 43200 
seconds.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
 Protocol interface 
org.apache.hadoop.ozone.protocol.StorageContainerDatanodeProtocol is not known.
   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
   at org.apache.hadoop.ipc.Client.call(Client.java:1443)
   at 

[jira] [Comment Edited] (HDDS-1038) Datanode fails to connect with secure SCM

2019-01-31 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757686#comment-16757686
 ] 

Ajay Kumar edited comment on HDDS-1038 at 1/31/19 8:05 PM:
---

Logs on datanode side:
{code}2019-01-31 00:08:32,497 ERROR statemachine.EndpointStateMachine 
(EndpointStateMachine.java:logIfNeeded(207)) - Unable to communicate to SCM 
server at ctr-e139-1542663976389-57536-01-02.hwx.site:9861 for past 43200 
seconds.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
 Protocol interface 
org.apache.hadoop.ozone.protocol.StorageContainerDatanodeProtocol is not known.
   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
   at org.apache.hadoop.ipc.Client.call(Client.java:1443)
   at org.apache.hadoop.ipc.Client.call(Client.java:1353)
   at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
   at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
   at com.sun.proxy.$Proxy83.getVersion(Unknown Source)
   at 
org.apache.hadoop.ozone.protocolPB.StorageContainerDatanodeProtocolClientSideTranslatorPB.getVersion(StorageContainerDatanodeProtocolClientSideTranslatorPB.java:112)
   at 
org.apache.hadoop.ozone.container.common.states.endpoint.VersionEndpointTask.call(VersionEndpointTask.java:70)
   at 
org.apache.hadoop.ozone.container.common.states.endpoint.VersionEndpointTask.call(VersionEndpointTask.java:42)
   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   at java.lang.Thread.run(Thread.java:745){code}

On SCM
{code}2019-01-31 00:35:00,591 DEBUG ipc.Server 
(Server.java:processSaslToken(1952)) - Have read input token of size 32 for 
processing by saslServer.evaluateResponse()
2019-01-31 00:35:00,591 DEBUG ipc.Server (Server.java:buildSaslResponse(1969)) 
- Will send SUCCESS token of size null from saslServer.
2019-01-31 00:35:00,592 DEBUG ipc.Server (Server.java:saslProcess(1841)) - SASL 
server context established. Negotiated QoP is auth
2019-01-31 00:35:00,592 DEBUG ipc.Server (Server.java:saslProcess(1846)) - SASL 
server successfully authenticated client: 
dn/ctr-e139-1542663976389-57536-01-03.hwx.s...@example.com (auth:KERBEROS)
2019-01-31 00:35:00,592 DEBUG ipc.Server (Server.java:processResponse(1468)) - 
Socket Reader #1 for port 9861: responding to Call#-33 Retry#-1 null from 
172.27.27.138:32785
2019-01-31 00:35:00,592 DEBUG ipc.Server (Server.java:processResponse(1487)) - 
Socket Reader #1 for port 9861: responding to Call#-33 Retry#-1 null from 
172.27.27.138:32785 Wrote 22 bytes.
2019-01-31 00:35:00,594 DEBUG ipc.Server (Server.java:processOneRpc(2355)) -  
got #-3
2019-01-31 00:35:00,594 INFO  ipc.Server 
(Server.java:authorizeConnection(2562)) - Connection from 172.27.27.138:32785 
for protocol org.apache.hadoop.ozone.protocol.StorageContainerDatanodeProtocol 
is unauthorized for user 
dn/ctr-e139-1542663976389-57536-01-03.hwx.s...@example.com (auth:KERBEROS)
2019-01-31 00:35:00,594 DEBUG ipc.Server (Server.java:processOneRpc(2372)) - 
Socket Reader #1 for port 9861: processOneRpc from client 172.27.27.138:32785 
threw exception [org.apache.hadoop.security.authorize.AuthorizationException: 
Protocol interface 
org.apache.hadoop.ozone.protocol.StorageContainerDatanodeProtocol is not known.]
2019-01-31 00:35:00,594 DEBUG ipc.Server (Server.java:processResponse(1468)) - 
Socket Reader #1 for port 9861: responding to Call#-3 Retry#-1 null from 
172.27.27.138:32785
2019-01-31 00:35:00,594 DEBUG ipc.Server (Server.java:processResponse(1487)) - 
Socket Reader #1 for port 9861: responding to Call#-3 Retry#-1 null from 
172.27.27.138:32785 Wrote 183 bytes.
2019-01-31 00:35:00,594 DEBUG ipc.Server (Server.java:close(3438)) - Socket 
Reader #1 for port 9861: disconnecting client 172.27.27.138:32785. Number of 
active connections: {code}


was (Author: ajayydv):
{code}2019-01-31 00:08:32,497 ERROR statemachine.EndpointStateMachine 
(EndpointStateMachine.java:logIfNeeded(207)) - Unable to communicate to SCM 
server at ctr-e139-1542663976389-57536-01-02.hwx.site:9861 for past 43200 
seconds.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
 Protocol interface 
org.apache.hadoop.ozone.protocol.StorageContainerDatanodeProtocol is not known.
   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)
   at org.apache.hadoop.ipc.Client.call(Client.java:1443)
   at 

[jira] [Created] (HDDS-1038) Datanode fails to connect with secure SCM

2019-01-31 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDDS-1038:


 Summary: Datanode fails to connect with secure SCM
 Key: HDDS-1038
 URL: https://issues.apache.org/jira/browse/HDDS-1038
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Ajay Kumar


In a secure Ozone cluster, Datanodes fail to connect to SCM on 
{{StorageContainerDatanodeProtocol}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1025) Handle replication of closed containers in DeadNodeHandler

2019-01-31 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757684#comment-16757684
 ] 

Bharat Viswanadham commented on HDDS-1025:
--

Thank you [~linyiqun] for the review.

Addressed the review comments in patch v02.

I have not added a LOG for the open-container case, as we plan to open a Jira and 
fix that TODO; I did check, and for open containers we do not have a request for 
the open container in the log.

 

> Handle replication of closed containers in DeadNodeHandler
> --
>
> Key: HDDS-1025
> URL: https://issues.apache.org/jira/browse/HDDS-1025
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1025.00.patch, HDDS-1025.01.patch, 
> HDDS-1025.02.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira is to do one of the TODOs mentioned in the DeadNodeHandler:
> // TODO: Check replica count and call replication manager.
>  
> Right now, when a node is dead, replication for the closed containers is 
> not triggered.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1025) Handle replication of closed containers in DeadNodeHandler

2019-01-31 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1025:
-
Attachment: HDDS-1025.01.patch

> Handle replication of closed containers in DeadNodeHandler
> --
>
> Key: HDDS-1025
> URL: https://issues.apache.org/jira/browse/HDDS-1025
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1025.00.patch, HDDS-1025.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira is to do one of the TODOs mentioned in the DeadNodeHandler:
> // TODO: Check replica count and call replication manager.
>  
> Right now, when a node is dead, replication for the closed containers is 
> not triggered.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14250) [Standby Reads] msync should sync with active NameNode to fetch the latest stateID

2019-01-31 Thread Chao Sun (JIRA)
Chao Sun created HDFS-14250:
---

 Summary: [Standby Reads] msync should sync with active NameNode to 
fetch the latest stateID
 Key: HDFS-14250
 URL: https://issues.apache.org/jira/browse/HDFS-14250
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Chao Sun
Assignee: Chao Sun


Currently the {{msync}} call is a dummy operation to the observer without really 
syncing. Instead, it should:
 # Get the latest stateID from active NN.
 # Use the stateID to talk to observer NN and make sure it is synced.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-997) Add blockade Tests for scm isolation and mixed node isolation

2019-01-31 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-997:
---
Status: Open  (was: Patch Available)

> Add blockade Tests for scm isolation and mixed node isolation
> -
>
> Key: HDDS-997
> URL: https://issues.apache.org/jira/browse/HDDS-997
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-997.001.patch, HDDS-997.002.patch, 
> HDDS-997.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-987) MultipartUpload: S3API for list parts of an object

2019-01-31 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-987:

Attachment: HDDS-987.02.patch

> MultipartUpload: S3API for list parts of an object
> -
>
> Key: HDDS-987
> URL: https://issues.apache.org/jira/browse/HDDS-987
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-987.01.patch, HDDS-987.02.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListParts.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-997) Add blockade Tests for scm isolation and mixed node isolation

2019-01-31 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-997:
---
Status: Patch Available  (was: Open)

> Add blockade Tests for scm isolation and mixed node isolation
> -
>
> Key: HDDS-997
> URL: https://issues.apache.org/jira/browse/HDDS-997
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-997.001.patch, HDDS-997.002.patch, 
> HDDS-997.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-987) MultipartUpload: S3API for list parts of an object

2019-01-31 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-987?focusedWorklogId=192997=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-192997
 ]

ASF GitHub Bot logged work on HDDS-987:
---

Author: ASF GitHub Bot
Created on: 31/Jan/19 19:25
Start Date: 31/Jan/19 19:25
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #478: 
HDDS-987. MultipartUpload: S3API for list parts of an object.
URL: https://github.com/apache/hadoop/pull/478
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 192997)
Time Spent: 10m
Remaining Estimate: 0h

> MultipartUpload: S3API for list parts of an object
> -
>
> Key: HDDS-987
> URL: https://issues.apache.org/jira/browse/HDDS-987
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-987.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListParts.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14234) Limit WebHDFS to specific user, host, directory triples

2019-01-31 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757647#comment-16757647
 ] 

Anu Engineer edited comment on HDFS-14234 at 1/31/19 7:22 PM:
--

Hi [~clayb],

Thanks for the patch. It looks pretty good. I think this is a good approach; I 
especially like the extension where more filters can be added easily. I have 
code reviewed this as if it were a real patch. I have some minor comments.

*DatanodeHttpServer.java:384:*
 # nit: Developer Comment? Remove? XXX Clay how do we want to handle this for 
generic users?

*HostingRestrictingAuthorization.java*
 # Remove unused imports.
 # Line 152: Unused variable, overrideConfigs
 # Line 160: Clean up some args in the new
 # Line 165: Should we make this a WARN instead of debug. If the user wrote a 
rule wrong, at least we will see a warn. In the current patch, we ignore it 
silently.
 # Don't understand the use case, but I am going to assume that
 _// Map is {"user": [subnet, path]}_
 user is some user in Kerberos/Directory and the user can also be a group.

*HostRestrictingAuthorizationFilterHandler.java*
 # Unused imports.
 # Nit: Remove XXX in line 194
 # Nit: Line:103 Java Doc is wrong.

*TestHostRestrictingAuthorizationFilter.java*
 # Remove Unused Imports.

*TestHostRestrictingAuthorizationFilterHandler.java*
 # Remove unused imports.
 # This file looks incomplete, care to fix or remove from this patch.

Ps. I have also added you to the contributors group, so you can assign JIRAs to 
yourself.


was (Author: anu):
Hi [~clayb],

Thanks for the patch. It looks pretty good. I think this is a good approach, 
especially I like the extension where more filters can be added more easily. I 
have code reviewed this as if this a real patch. I have some minor comments.

*DatanodeHttpServer.java:384:*
# nit: Developer Comment? Remove? XXX Clay how do we want to handle this for 
generic users?

*HostingRestrictingAuthorization.java*
# Remove unused imports.
# Line 152: Unused variable, overrideConfigs
# Line 160: Clean up some args in the new
# Line 165: Should we make this a WARN instead of debug. If the user wrote a 
rule wrong, at least we will see a warn. In the current patch, we ignore it 
silently.
# Don't understand the use case, but I am going to assume that 
 _// Map is \{"user": [subnet, path]}_
user is some user in Kerberos/Directory and user can also be group.

*HostRestrictingAuthorizationFilterHandler.java*
# Unused imports.
# Nit: Remove XXX in line 194
#  Nit: Line:103 Java Doc is wrong.

*TestHostRestrictingAuthorizationFilter.java*
 # Remove Unused Imports.

*TestHostRestrictingAuthorizationFilterHandler.java*
# Remove unused imports.
# This file looks incomplete, care to fix or remove from this patch.

> Limit WebHDFS to specific user, host, directory triples
> --
>
> Key: HDFS-14234
> URL: https://issues.apache.org/jira/browse/HDFS-14234
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Clay B.
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: 
> 0001-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch
>
>
> For those who have multiple network zones, it is useful to prevent certain 
> zones from downloading data from WebHDFS while still allowing uploads. This 
> can enable functionality of HDFS as a dropbox for data - data goes in but can 
> not be pulled back out. (Motivation further presented in [StrangeLoop 2018 Of 
> Data Dropboxes and Data 
> Gloveboxes|https://www.thestrangeloop.com/2018/of-data-dropboxes-and-data-gloveboxes.html]).
> Ideally, one could limit the datanodes from returning data via an 
> [{{OPEN}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Open_and_Read_a_File]
>  but still allow things such as 
> [{{GETFILECHECKSUM}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_File_Checksum]
>  and 
> {{[{{CREATE}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Create_and_Write_to_a_File]}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14234) Limit WebHDFS to specific user, host, directory triples

2019-01-31 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757647#comment-16757647
 ] 

Anu Engineer commented on HDFS-14234:
-

Hi [~clayb],

Thanks for the patch. It looks pretty good. I think this is a good approach; I 
especially like the extension where more filters can be added easily. I have 
code reviewed this as if it were a real patch. I have some minor comments.

*DatanodeHttpServer.java:384:*
 1. nit: Developer Comment? Remove? XXX Clay how do we want to handle this for 
generic users?

*HostingRestrictingAuthorization.java*
 2. Remove unused imports.
 3. Line 152: Unused variable, overrideConfigs
 4. Line 160: Clean up some args in the new
 5. Line 165: Should we make this a WARN instead of debug. If the user wrote a 
rule wrong, at least we will see a warn. In the current patch, we ignore it 
silently.

6. Don't understand the use case, but I am going to assume that 
 "// Map is {"user": [subnet, path]}" means that user is some user in 
Kerberos/Directory and user can also be a group.

*HostRestrictingAuthorizationFilterHandler.java*

1. Unused imports.
 2. Nit: Remove XXX in line 194
 3. Line:103 Java Doc is wrong.

*TestHostRestrictingAuthorizationFilter.java*

1. Remove Unused Imports.

*TestHostRestrictingAuthorizationFilterHandler.java*

1. Remove unused imports.
 2. This file looks incomplete, care to fix or remove from this patch.

> Limit WebHDFS to specific user, host, directory triples
> --
>
> Key: HDFS-14234
> URL: https://issues.apache.org/jira/browse/HDFS-14234
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Clay B.
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: 
> 0001-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch
>
>
> For those who have multiple network zones, it is useful to prevent certain 
> zones from downloading data from WebHDFS while still allowing uploads. This 
> can enable functionality of HDFS as a dropbox for data - data goes in but can 
> not be pulled back out. (Motivation further presented in [StrangeLoop 2018 Of 
> Data Dropboxes and Data 
> Gloveboxes|https://www.thestrangeloop.com/2018/of-data-dropboxes-and-data-gloveboxes.html]).
> Ideally, one could limit the datanodes from returning data via an 
> [{{OPEN}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Open_and_Read_a_File]
>  but still allow things such as 
> [{{GETFILECHECKSUM}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_File_Checksum]
>  and 
> {{[{{CREATE}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Create_and_Write_to_a_File]}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-987) MultipartUpload: S3API for list parts of an object

2019-01-31 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-987:

Labels: pull-request-available  (was: )

> MultipartUpload: S3API for list parts of an object
> -
>
> Key: HDDS-987
> URL: https://issues.apache.org/jira/browse/HDDS-987
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-987.01.patch
>
>
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListParts.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14234) Limit WebHDFS to specific user, host, directory triples

2019-01-31 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDFS-14234:
---

Assignee: Clay B.  (was: Anu Engineer)

> Limit WebHDFS to specific user, host, directory triples
> --
>
> Key: HDFS-14234
> URL: https://issues.apache.org/jira/browse/HDFS-14234
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Clay B.
>Assignee: Clay B.
>Priority: Trivial
> Attachments: 
> 0001-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch
>
>
> For those who have multiple network zones, it is useful to prevent certain 
> zones from downloading data from WebHDFS while still allowing uploads. This 
> can enable functionality of HDFS as a dropbox for data - data goes in but can 
> not be pulled back out. (Motivation further presented in [StrangeLoop 2018 Of 
> Data Dropboxes and Data 
> Gloveboxes|https://www.thestrangeloop.com/2018/of-data-dropboxes-and-data-gloveboxes.html]).
> Ideally, one could limit the datanodes from returning data via an 
> [{{OPEN}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Open_and_Read_a_File]
>  but still allow things such as 
> [{{GETFILECHECKSUM}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_File_Checksum]
>  and 
> {{[{{CREATE}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Create_and_Write_to_a_File]}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14234) Limit WebHDFS to specific user, host, directory triples

2019-01-31 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757647#comment-16757647
 ] 

Anu Engineer edited comment on HDFS-14234 at 1/31/19 7:20 PM:
--

Hi [~clayb],

Thanks for the patch. It looks pretty good. I think this is a good approach; 
I especially like the extensible design, where more filters can be added 
easily. I have code reviewed this as if it were a real patch. I have some 
minor comments.

*DatanodeHttpServer.java:384:*
# nit: Developer Comment? Remove? XXX Clay how do we want to handle this for 
generic users?

*HostingRestrictingAuthorization.java*
# Remove unused imports.
# Line 152: Unused variable, overrideConfigs
# Line 160: Clean up some args in the new
# Line 165: Should we make this a WARN instead of debug. If the user wrote a 
rule wrong, at least we will see a warn. In the current patch, we ignore it 
silently.
# Don't understand the use case, but I am going to assume that 
 _// Map is \{"user": [subnet, path]}_
user is some user in Kerberos/Directory and user can also be group.

*HostRestrictingAuthorizationFilterHandler.java*
# Unused imports.
# Nit: Remove XXX in line 194
#  Nit: Line:103 Java Doc is wrong.

*TestHostRestrictingAuthorizationFilter.java*
 # Remove Unused Imports.

*TestHostRestrictingAuthorizationFilterHandler.java*
# Remove unused imports.
# This file looks incomplete, care to fix or remove from this patch.
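
For the _// Map is \{"user": [subnet, path]}_ format assumed in the review 
above, a hypothetical reading of the rule structure could look like the 
following; these are not the patch's actual classes, and subnet matching is 
left as a plain string comparison to keep the sketch short:

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical illustration of the assumed rule map: each user (or group)
// name maps to one or more [subnet, path] pairs. Real code would also need
// proper CIDR matching of the caller's address instead of string equality.
public class HostRestrictionRules {

  // One rule: callers from this subnet may access this path prefix.
  static final class Rule {
    final String subnet;      // e.g. "10.1.0.0/16"
    final String pathPrefix;  // e.g. "/dropbox/incoming"
    Rule(String subnet, String pathPrefix) {
      this.subnet = subnet;
      this.pathPrefix = pathPrefix;
    }
  }

  // Key is a Kerberos/Directory user name, or a group name.
  private final Map<String, List<Rule>> rules = new HashMap<>();

  void addRule(String userOrGroup, String subnet, String pathPrefix) {
    rules.computeIfAbsent(userOrGroup, k -> new ArrayList<>())
         .add(new Rule(subnet, pathPrefix));
  }

  // True if any rule for the user/group covers the caller's subnet and path.
  boolean isAllowed(String userOrGroup, String callerSubnet, String path) {
    for (Rule rule : rules.getOrDefault(userOrGroup, Collections.emptyList())) {
      if (rule.subnet.equals(callerSubnet) && path.startsWith(rule.pathPrefix)) {
        return true;
      }
    }
    return false;
  }
}
{code}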


was (Author: anu):
Hi [~clayb],

Thanks for the patch. It looks pretty good. I think this is a good approach; 
I especially like the extensible design, where more filters can be added 
easily. I have code reviewed this as if it were a real patch. I have some 
minor comments.

*DatanodeHttpServer.java:384:*
1. nit: Developer Comment? Remove? XXX Clay how do we want to handle this for 
generic users?

*HostingRestrictingAuthorization.java*
 2. Remove unused imports.
 3. Line 152: Unused variable, overrideConfigs
 4. Line 160: Clean up some args in the new
 5. Line 165: Should we make this a WARN instead of debug. If the user wrote a 
rule wrong, at least we will see a warn. In the current patch, we ignore it 
silently.

6. Don't understand the use case, but I am going to assume that 
 "// Map is {"user": [subnet, path]}" means that user is some user in 
Kerberos/Directory and user can also be a group.

*HostRestrictingAuthorizationFilterHandler.java*

1. Unused imports.
 2. Nit: Remove XXX in line 194
 3. Line:103 Java Doc is wrong.

*TestHostRestrictingAuthorizationFilter.java*

1. Remove Unused Imports.

*TestHostRestrictingAuthorizationFilterHandler.java*

1. Remove unused imports.
 2. This file looks incomplete, care to fix or remove from this patch.

> Limit WebHDFS to specific user, host, directory triples
> --
>
> Key: HDFS-14234
> URL: https://issues.apache.org/jira/browse/HDFS-14234
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Clay B.
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: 
> 0001-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch
>
>
> For those who have multiple network zones, it is useful to prevent certain 
> zones from downloading data from WebHDFS while still allowing uploads. This 
> can enable functionality of HDFS as a dropbox for data - data goes in but can 
> not be pulled back out. (Motivation further presented in [StrangeLoop 2018 Of 
> Data Dropboxes and Data 
> Gloveboxes|https://www.thestrangeloop.com/2018/of-data-dropboxes-and-data-gloveboxes.html]).
> Ideally, one could limit the datanodes from returning data via an 
> [{{OPEN}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Open_and_Read_a_File]
>  but still allow things such as 
> [{{GETFILECHECKSUM}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_File_Checksum]
>  and 
> {{[{{CREATE}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Create_and_Write_to_a_File]}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-987) MultipartUpload: S3API for list parts of an object

2019-01-31 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757646#comment-16757646
 ] 

Bharat Viswanadham commented on HDDS-987:
-

Will upload a new PR now that HDDS-956 has been checked in.

> MultipartUpload: S3API for list parts of an object
> -
>
> Key: HDDS-987
> URL: https://issues.apache.org/jira/browse/HDDS-987
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: HDDS-987.01.patch
>
>
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListParts.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14234) Limit WebHDFS to specific user, host, directory triples

2019-01-31 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDFS-14234:
---

Assignee: Anu Engineer  (was: Clay B.)

> Limit WebHDFS to specific user, host, directory triples
> --
>
> Key: HDFS-14234
> URL: https://issues.apache.org/jira/browse/HDFS-14234
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Clay B.
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: 
> 0001-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch
>
>
> For those who have multiple network zones, it is useful to prevent certain 
> zones from downloading data from WebHDFS while still allowing uploads. This 
> can enable functionality of HDFS as a dropbox for data - data goes in but can 
> not be pulled back out. (Motivation further presented in [StrangeLoop 2018 Of 
> Data Dropboxes and Data 
> Gloveboxes|https://www.thestrangeloop.com/2018/of-data-dropboxes-and-data-gloveboxes.html]).
> Ideally, one could limit the datanodes from returning data via an 
> [{{OPEN}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Open_and_Read_a_File]
>  but still allow things such as 
> [{{GETFILECHECKSUM}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_File_Checksum]
>  and 
> {{[{{CREATE}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Create_and_Write_to_a_File]}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14234) Limit WebHDFS to specific user, host, directory triples

2019-01-31 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDFS-14234:
---

Assignee: Clay B.

> Limit WebHDFS to specific user, host, directory triples
> --
>
> Key: HDFS-14234
> URL: https://issues.apache.org/jira/browse/HDFS-14234
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Clay B.
>Assignee: Clay B.
>Priority: Trivial
> Attachments: 
> 0001-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch
>
>
> For those who have multiple network zones, it is useful to prevent certain 
> zones from downloading data from WebHDFS while still allowing uploads. This 
> can enable functionality of HDFS as a dropbox for data - data goes in but can 
> not be pulled back out. (Motivation further presented in [StrangeLoop 2018 Of 
> Data Dropboxes and Data 
> Gloveboxes|https://www.thestrangeloop.com/2018/of-data-dropboxes-and-data-gloveboxes.html]).
> Ideally, one could limit the datanodes from returning data via an 
> [{{OPEN}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Open_and_Read_a_File]
>  but still allow things such as 
> [{{GETFILECHECKSUM}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_File_Checksum]
>  and 
> {{[{{CREATE}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Create_and_Write_to_a_File]}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1029) Allow option for force in DeleteContainerCommand

2019-01-31 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757629#comment-16757629
 ] 

Bharat Viswanadham commented on HDDS-1029:
--

Fixed checkstyle issue.

Thank You, [~linyiqun] for the review.

I will commit this after a clean Jenkins run.

> Allow option for force in DeleteContainerCommand
> 
>
> Key: HDDS-1029
> URL: https://issues.apache.org/jira/browse/HDDS-1029
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1029.00.patch, HDDS-1029.01.patch, 
> HDDS-1029.02.patch, HDDS-1029.03.patch, HDDS-1029.04.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Right now, we check that the container state is not open, and only then do we 
> delete the container.
> We need a way to delete containers which are open, so adding a force flag 
> will allow deleting a container without any state checks. (This is required 
> for deleting replicas when SCM detects over-replication, and the container to 
> delete can be in the open state.)
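
A minimal sketch of the intended force semantics follows; the class, enum and 
method names are made up for illustration and are not the actual HDDS datanode 
API:

{code:java}
// Illustrative only: a force flag bypasses the open-state check when a
// container is deleted, e.g. when SCM removes an over-replicated replica.
public class ContainerDeleter {

  enum State { OPEN, CLOSED }

  static final class Container {
    final long id;
    final State state;
    Container(long id, State state) {
      this.id = id;
      this.state = state;
    }
  }

  void delete(Container container, boolean force) {
    if (!force && container.state == State.OPEN) {
      // Without force, deleting an open container is rejected as today.
      throw new IllegalStateException(
          "Container " + container.id + " is OPEN; pass force to delete it");
    }
    // With force, the state check is skipped and the data is removed.
    removeContainerData(container);
  }

  private void removeContainerData(Container container) {
    // Placeholder for the actual on-disk cleanup.
  }
}
{code}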



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1029) Allow option for force in DeleteContainerCommand

2019-01-31 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1029:
-
Attachment: HDDS-1029.04.patch

> Allow option for force in DeleteContainerCommand
> 
>
> Key: HDDS-1029
> URL: https://issues.apache.org/jira/browse/HDDS-1029
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1029.00.patch, HDDS-1029.01.patch, 
> HDDS-1029.02.patch, HDDS-1029.03.patch, HDDS-1029.04.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Right now, we check that the container state is not open, and only then do we 
> delete the container.
> We need a way to delete containers which are open, so adding a force flag 
> will allow deleting a container without any state checks. (This is required 
> for deleting replicas when SCM detects over-replication, and the container to 
> delete can be in the open state.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14118) Use DNS to resolve Namenodes and Routers

2019-01-31 Thread Fengnan Li (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757623#comment-16757623
 ] 

Fengnan Li commented on HDFS-14118:
---

[~elgoiri] I uploaded another patch mainly to fix the checkstyle issues, but 
feel free to make changes on top of it.

> Use DNS to resolve Namenodes and Routers
> 
>
> Key: HDFS-14118
> URL: https://issues.apache.org/jira/browse/HDFS-14118
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: DNS testing log, HDFS-14118.001.patch, 
> HDFS-14118.002.patch, HDFS-14118.003.patch, HDFS-14118.004.patch, 
> HDFS-14118.005.patch, HDFS-14118.006.patch, HDFS-14118.007.patch, 
> HDFS-14118.008.patch, HDFS-14118.009.patch, HDFS-14118.010.patch, 
> HDFS-14118.011.patch, HDFS-14118.patch
>
>
> Clients need to know about the routers to talk to the HDFS cluster 
> (obviously), and any change to the set of routers (adding/removing) forces 
> every client configuration to change, which is a painful process.
> DNS can be used here to resolve a single domain name that clients know into 
> the list of routers in the current configuration. However, DNS cannot 
> restrict resolution to only the healthy routers based on certain health 
> thresholds.
> There are a few ways this can be solved. One is a separate script that 
> regularly checks the status of each router and updates the DNS records when 
> a router fails the health thresholds; security has to be considered 
> carefully for this approach. Another is to have clients do the normal 
> connect/failover once they get the list of routers, which requires changing 
> the current failover proxy provider.
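
The resolution step itself only needs a standard Java DNS lookup; a standalone 
sketch follows, not the code in the attached patches, with 
{{routers.example.com}} and the port as placeholders and assuming one DNS 
record per router:

{code:java}
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.List;

// Sketch of the discovery step: resolve a single DNS name, assumed to carry
// one record per router, into a list of RPC endpoints. Health-based
// filtering, as noted above, still has to happen outside plain resolution.
public final class RouterDnsResolver {

  static List<InetSocketAddress> resolveRouters(String domainName, int rpcPort)
      throws UnknownHostException {
    List<InetSocketAddress> routers = new ArrayList<>();
    for (InetAddress address : InetAddress.getAllByName(domainName)) {
      routers.add(new InetSocketAddress(address, rpcPort));
    }
    return routers;
  }

  public static void main(String[] args) throws UnknownHostException {
    for (InetSocketAddress router : resolveRouters("routers.example.com", 8888)) {
      System.out.println(router);
    }
  }
}
{code}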



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy

2019-01-31 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13209:

Attachment: HDFS-13209-02.patch

> DistributedFileSystem.create should allow an option to provide StoragePolicy
> 
>
> Key: HDFS-13209
> URL: https://issues.apache.org/jira/browse/HDFS-13209
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: Jean-Marc Spaggiari
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13209-01.patch, HDFS-13209-02.patch
>
>
> DistributedFileSystem.create allows getting a FSDataOutputStream. The stored 
> file and its related blocks will use the directory-based StoragePolicy.
>  
> However, sometimes we might need to keep all files in the same directory 
> (a consistency constraint) but want some of them on SSD (small files, in my 
> case) until they are processed and merged/removed. After that they would go 
> to the default policy.
>  
> When creating a file, it would be useful to have an option to specify a 
> different StoragePolicy...
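
Until such an option exists, the usual workaround is to create the file first 
and tag it afterwards; a sketch, where the path and the {{ALL_SSD}} policy name 
are just examples:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Two-step workaround: create the file, then set the storage policy on it.
// Blocks written before setStoragePolicy() still follow the directory's
// policy until the mover (or storage policy satisfier) relocates them, which
// is exactly why a create-time option would help.
public class CreateThenSetPolicy {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path file = new Path("/data/staging/part-0000.tmp");
    try (FSDataOutputStream out = fs.create(file)) {
      out.writeBytes("payload\n");
    }
    fs.setStoragePolicy(file, "ALL_SSD");
  }
}
{code}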



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14118) Use DNS to resolve Namenodes and Routers

2019-01-31 Thread Fengnan Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fengnan Li updated HDFS-14118:
--
Attachment: HDFS-14118.011.patch

> Use DNS to resolve Namenodes and Routers
> 
>
> Key: HDFS-14118
> URL: https://issues.apache.org/jira/browse/HDFS-14118
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: DNS testing log, HDFS-14118.001.patch, 
> HDFS-14118.002.patch, HDFS-14118.003.patch, HDFS-14118.004.patch, 
> HDFS-14118.005.patch, HDFS-14118.006.patch, HDFS-14118.007.patch, 
> HDFS-14118.008.patch, HDFS-14118.009.patch, HDFS-14118.010.patch, 
> HDFS-14118.011.patch, HDFS-14118.patch
>
>
> Clients need to know about the routers to talk to the HDFS cluster 
> (obviously), and any change to the set of routers (adding/removing) forces 
> every client configuration to change, which is a painful process.
> DNS can be used here to resolve a single domain name that clients know into 
> the list of routers in the current configuration. However, DNS cannot 
> restrict resolution to only the healthy routers based on certain health 
> thresholds.
> There are a few ways this can be solved. One is a separate script that 
> regularly checks the status of each router and updates the DNS records when 
> a router fails the health thresholds; security has to be considered 
> carefully for this approach. Another is to have clients do the normal 
> connect/failover once they get the list of routers, which requires changing 
> the current failover proxy provider.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14240) blockReport test in NNThroughputBenchmark throws ArrayIndexOutOfBoundsException

2019-01-31 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757595#comment-16757595
 ] 

Masatake Iwasaki commented on HDFS-14240:
-

{quote}problem is here: the datanodes array's length is determined by the 
"-datanodes" or "-threads" arguments, but dnIdx = dnInfo.getXferPort() is an 
arbitrary port.
{quote}
NNThroughputBenchmark registers fake datanodes whose port numbers are based on 
the index when using an in-process NameNode. The code path that assumes this 
mode seems to be reached when a real NameNode is used.

[~shenyinjie] if you can share the command line used to reproduce the issue, 
it would help.

[~RANith] I assigned you to this ticket.

 

> blockReport test in NNThroughputBenchmark throws 
> ArrayIndexOutOfBoundsException
> ---
>
> Key: HDFS-14240
> URL: https://issues.apache.org/jira/browse/HDFS-14240
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shen Yinjie
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: screenshot-1.png
>
>
> When I run a blockReport test with NNThroughputBenchmark, 
> BlockReportStats.addBlocks() throws ArrayIndexOutOfBoundsException.
> digging the code:
> {code:java}
> for (DatanodeInfo dnInfo : loc.getLocations()) {
>   int dnIdx = dnInfo.getXferPort() - 1;
>   datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock());
> }
> {code}
>  
> The problem is here: the datanodes array's length is determined by the 
> "-datanodes" or "-threads" arguments, but dnIdx is derived from 
> dnInfo.getXferPort(), which is an arbitrary port.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy

2019-01-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757584#comment-16757584
 ] 

Hadoop QA commented on HDFS-13209:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HDFS-13209 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13209 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957142/HDFS-13209-01.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26102/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> DistributedFileSystem.create should allow an option to provide StoragePolicy
> 
>
> Key: HDFS-13209
> URL: https://issues.apache.org/jira/browse/HDFS-13209
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: Jean-Marc Spaggiari
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13209-01.patch
>
>
> DistributedFileSystem.create allows getting a FSDataOutputStream. The stored 
> file and its related blocks will use the directory-based StoragePolicy.
>  
> However, sometimes we might need to keep all files in the same directory 
> (a consistency constraint) but want some of them on SSD (small files, in my 
> case) until they are processed and merged/removed. After that they would go 
> to the default policy.
>  
> When creating a file, it would be useful to have an option to specify a 
> different StoragePolicy...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14240) blockReport test in NNThroughputBenchmark throws ArrayIndexOutOfBoundsException

2019-01-31 Thread Masatake Iwasaki (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki reassigned HDFS-14240:
---

Assignee: Ranith Sardar

> blockReport test in NNThroughputBenchmark throws 
> ArrayIndexOutOfBoundsException
> ---
>
> Key: HDFS-14240
> URL: https://issues.apache.org/jira/browse/HDFS-14240
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Shen Yinjie
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: screenshot-1.png
>
>
> When I run a blockReport test with NNThroughputBenchmark, 
> BlockReportStats.addBlocks() throws ArrayIndexOutOfBoundsException.
> digging the code:
> {code:java}
> for (DatanodeInfo dnInfo : loc.getLocations()) {
>   int dnIdx = dnInfo.getXferPort() - 1;
>   datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock());
> }
> {code}
>  
> The problem is here: the datanodes array's length is determined by the 
> "-datanodes" or "-threads" arguments, but dnIdx is derived from 
> dnInfo.getXferPort(), which is an arbitrary port.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy

2019-01-31 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13209:

Status: Patch Available  (was: Open)

> DistributedFileSystem.create should allow an option to provide StoragePolicy
> 
>
> Key: HDFS-13209
> URL: https://issues.apache.org/jira/browse/HDFS-13209
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: Jean-Marc Spaggiari
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13209-01.patch
>
>
> DistributedFileSystem.create allows getting a FSDataOutputStream. The stored 
> file and its related blocks will use the directory-based StoragePolicy.
>  
> However, sometimes we might need to keep all files in the same directory 
> (a consistency constraint) but want some of them on SSD (small files, in my 
> case) until they are processed and merged/removed. After that they would go 
> to the default policy.
>  
> When creating a file, it would be useful to have an option to specify a 
> different StoragePolicy...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14247) Repeat adding node description into network topology

2019-01-31 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14247:
---
Status: Patch Available  (was: Open)

> Repeat adding node description into network topology
> 
>
> Key: HDFS-14247
> URL: https://issues.apache.org/jira/browse/HDFS-14247
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Priority: Minor
> Attachments: HDFS-14247.001.patch
>
>
> I found duplicate code that adds nodeDescr to the network topology in 
> DatanodeManager.java#registerDatanode.
> It first calls networktopology.add(nodeDescr), and then calls 
> addDatanode(nodeDescr), which adds nodeDescr again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13209) DistributedFileSystem.create should allow an option to provide StoragePolicy

2019-01-31 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-13209:

Attachment: HDFS-13209-01.patch

> DistributedFileSystem.create should allow an option to provide StoragePolicy
> 
>
> Key: HDFS-13209
> URL: https://issues.apache.org/jira/browse/HDFS-13209
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: hdfs
>Affects Versions: 3.0.0
>Reporter: Jean-Marc Spaggiari
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-13209-01.patch
>
>
> DistributedFileSystem.create allows getting a FSDataOutputStream. The stored 
> file and its related blocks will use the directory-based StoragePolicy.
>  
> However, sometimes we might need to keep all files in the same directory 
> (a consistency constraint) but want some of them on SSD (small files, in my 
> case) until they are processed and merged/removed. After that they would go 
> to the default policy.
>  
> When creating a file, it would be useful to have an option to specify a 
> different StoragePolicy...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14249) RBF: Tooling to identify the subcluster location of a file

2019-01-31 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-14249:
--

Assignee: Íñigo Goiri

> RBF: Tooling to identify the subcluster location of a file
> --
>
> Key: HDFS-14249
> URL: https://issues.apache.org/jira/browse/HDFS-14249
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
>
> Mount points can spread files across multiple subclusters depending on a 
> policy (e.g., HASH, HASH_ALL). Administrators would need a way to identify 
> the location.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14249) RBF: Tooling to identify the subcluster location of a file

2019-01-31 Thread JIRA
Íñigo Goiri created HDFS-14249:
--

 Summary: RBF: Tooling to identify the subcluster location of a file
 Key: HDFS-14249
 URL: https://issues.apache.org/jira/browse/HDFS-14249
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Íñigo Goiri


Mount points can spread files across multiple subclusters depending on a 
policy (e.g., HASH, HASH_ALL). Administrators would need a way to identify the 
location.
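
As a purely generic illustration of why such tooling is needed (this is not 
the Router's actual hashing logic), a HASH-style policy derives the 
destination from the file name, so the subcluster is not obvious from the path 
alone:

{code:java}
import java.util.Arrays;
import java.util.List;

// Generic illustration only: a HASH-like policy picks the subcluster from the
// file name, which is why an administrator needs a tool to look it up.
public class HashPlacementSketch {

  static String pickSubcluster(String fileName, List<String> subclusters) {
    // floorMod keeps the index non-negative for any hashCode value.
    int index = Math.floorMod(fileName.hashCode(), subclusters.size());
    return subclusters.get(index);
  }

  public static void main(String[] args) {
    List<String> subclusters = Arrays.asList("ns0", "ns1", "ns2");
    for (String file : new String[] {"part-0000", "part-0001", "part-0002"}) {
      System.out.println(file + " -> " + pickSubcluster(file, subclusters));
    }
  }
}
{code}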



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14172) Avoid NPE when SectionName#fromString() returns null

2019-01-31 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757504#comment-16757504
 ] 

Hadoop QA commented on HDFS-14172:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 110 new + 149 unchanged - 17 fixed = 259 total (was 166) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.viewfs.TestViewFileSystemHdfs |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HDFS-14172 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957110/HADOOP-14172.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 40c5baf70904 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 71c49fa |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Updated] (HDDS-997) Add blockade Tests for scm isolation and mixed node isolation

2019-01-31 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-997:
---
Status: Patch Available  (was: Open)

Thanks for updating the patch [~nilotpalnandi].
+1, the v3 patch looks good to me.

Resubmitting the patch to re-trigger jenkins.

> Add blockade Tests for scm isolation and mixed node isolation
> -
>
> Key: HDDS-997
> URL: https://issues.apache.org/jira/browse/HDDS-997
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-997.001.patch, HDDS-997.002.patch, 
> HDDS-997.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-997) Add blockade Tests for scm isolation and mixed node isolation

2019-01-31 Thread Mukul Kumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HDDS-997:
---
Status: Open  (was: Patch Available)

> Add blockade Tests for scm isolation and mixed node isolation
> -
>
> Key: HDDS-997
> URL: https://issues.apache.org/jira/browse/HDDS-997
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.0
>Reporter: Nilotpal Nandi
>Assignee: Nilotpal Nandi
>Priority: Major
> Attachments: HDDS-997.001.patch, HDDS-997.002.patch, 
> HDDS-997.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-956) MultipartUpload: List Parts for a Multipart upload key

2019-01-31 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-956?focusedWorklogId=192913=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-192913
 ]

ASF GitHub Bot logged work on HDDS-956:
---

Author: ASF GitHub Bot
Created on: 31/Jan/19 16:17
Start Date: 31/Jan/19 16:17
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #475: 
HDDS-956: MultipartUpload: List Parts for a Multipart upload key.
URL: https://github.com/apache/hadoop/pull/475
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 192913)
Time Spent: 0.5h  (was: 20m)

> MultipartUpload: List Parts for a Multipart upload key
> --
>
> Key: HDDS-956
> URL: https://issues.apache.org/jira/browse/HDDS-956
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.0
>
> Attachments: HDDS-956.00.patch, HDDS-956.01.patch, HDDS-956.02.patch, 
> HDDS-956.03.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> This Jira is to implement the backend support for the S3 list-parts API for 
> an object.
> https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListParts.html
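
From the client side this backend is exercised through the standard S3 
ListParts call; a sketch with the AWS SDK for Java v1, where the endpoint URL, 
credentials, bucket, key and upload id are placeholders for an S3-compatible 
gateway:

{code:java}
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListPartsRequest;
import com.amazonaws.services.s3.model.PartListing;
import com.amazonaws.services.s3.model.PartSummary;

// Lists the parts uploaded so far for one multipart upload. The uploadId
// comes from the earlier InitiateMultipartUpload response.
public class ListPartsExample {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
            "http://s3g.example.com:9878", "us-east-1"))
        .withPathStyleAccessEnabled(true)
        .withCredentials(new AWSStaticCredentialsProvider(
            new BasicAWSCredentials("accessKey", "secretKey")))
        .build();

    PartListing listing = s3.listParts(
        new ListPartsRequest("bucket1", "key1", "uploadId"));
    for (PartSummary part : listing.getParts()) {
      System.out.printf("part %d etag=%s size=%d%n",
          part.getPartNumber(), part.getETag(), part.getSize());
    }
  }
}
{code}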



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


