[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-05-07 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466900#comment-16466900
 ] 

genericqa commented on HDFS-12284:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 14 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  2s{color} | {color:orange} hadoop-hdfs-project: The patch generated 14 new 
+ 0 unchanged - 0 fixed = 14 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-hdfs-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 50s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}130m  7s{color} 
| {color:red} hadoop-hdfs-project in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
32s{color} | {color:red} The patch generated 12 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}275m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRBFConfigFields 
|
|   | hadoop.hdfs.server.federation.router.TestRBFConfigFields |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.server.namenode.TestFsck |
|   

[jira] [Commented] (HDDS-19) Ozone: Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)

2018-05-07 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-19?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466842#comment-16466842
 ] 

Nanda kumar commented on HDDS-19:
-

[~szetszwo], the {{java.lang.NoSuchFieldError}} happens because the caller 
expects {{io.opencensus.trace.unsafe.ContextUtils.CONTEXT_SPAN_KEY}} to be of 
type {{org.apache.ratis.shaded.io.grpc.Context.Key}}, but 
{{io.opencensus.trace.unsafe.ContextUtils}} is resolved from 
opencensus-api-0.12.2.jar, where {{CONTEXT_SPAN_KEY}} is declared with type 
{{io.grpc.Context.Key}}.

The copy of {{io.opencensus.trace.unsafe.ContextUtils}} whose 
{{CONTEXT_SPAN_KEY}} has type {{org.apache.ratis.shaded.io.grpc.Context.Key}} 
is the one bundled in {{ratis-proto-shaded}}.

This is related to RATIS-237.
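
As a quick check, here is a small diagnostic sketch (not part of the patch; 
only the class name above is taken from this issue) that prints which jar a 
class is actually resolved from on the failing test's classpath:

{code:java}
public class WhichJar {
  public static void main(String[] args) throws Exception {
    String name = args.length > 0 ? args[0]
        : "io.opencensus.trace.unsafe.ContextUtils";
    Class<?> clazz = Class.forName(name);
    // getCodeSource() is null for classes from the bootstrap class loader.
    java.security.CodeSource src = clazz.getProtectionDomain().getCodeSource();
    System.out.println(name + " loaded from "
        + (src != null ? src.getLocation() : "<bootstrap class loader>"));
  }
}
{code}

Run against the test classpath, it should confirm that the class is coming 
from opencensus-api-0.12.2.jar rather than from {{ratis-proto-shaded}}.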

> Ozone: Update ozone to latest ratis snapshot build 
> (0.1.1-alpha-4309324-SNAPSHOT)
> -
>
> Key: HDDS-19
> URL: https://issues.apache.org/jira/browse/HDDS-19
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-19.001.patch, HDFS-13456-HDFS-7240.001.patch, 
> HDFS-13456-HDFS-7240.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-6) Enable SCM kerberos auth

2018-05-07 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466840#comment-16466840
 ] 

genericqa commented on HDDS-6:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HDDS-6 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDDS-6 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922386/HDDS-4-HDDS-6.01.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/52/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Enable SCM kerberos auth
> 
>
> Key: HDDS-6
> URL: https://issues.apache.org/jira/browse/HDDS-6
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-4-HDDS-6.00.patch, HDDS-4-HDDS-6.01.patch
>
>
> Enable SCM kerberos auth



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-6) Enable SCM kerberos auth

2018-05-07 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466824#comment-16466824
 ] 

Ajay Kumar commented on HDDS-6:
---

[~xyao], thanks for the review. I addressed your comments in patch v1. Instead 
of adding a new test in {{TestStorageContainerManager}}, I added a new class, 
{{TestSecureOzoneCluster}}, since the test needs some Kerberos 
pre-configuration to succeed.
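
For context, a minimal sketch of the kind of Kerberos pre-configuration such a 
test needs, using Hadoop's {{MiniKdc}}. The principal, keytab, and class names 
here are illustrative assumptions, not the ones in the patch:

{code:java}
import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.minikdc.MiniKdc;
import org.apache.hadoop.security.UserGroupInformation;

public class SecureClusterSetupSketch {
  public static void main(String[] args) throws Exception {
    // Start an in-process KDC under the build directory.
    File workDir = new File("target/kdc");
    MiniKdc kdc = new MiniKdc(MiniKdc.createConf(), workDir);
    kdc.start();

    // Create an illustrative SCM principal and keytab.
    File keytab = new File(workDir, "scm.keytab");
    kdc.createPrincipal(keytab, "scm/localhost");

    // Switch Hadoop security to Kerberos before any cluster component starts.
    Configuration conf = new Configuration();
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);
    UserGroupInformation.loginUserFromKeytab(
        "scm/localhost@" + kdc.getRealm(), keytab.getAbsolutePath());

    System.out.println("Logged in as "
        + UserGroupInformation.getLoginUser().getUserName());
    kdc.stop();
  }
}
{code}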

> Enable SCM kerberos auth
> 
>
> Key: HDDS-6
> URL: https://issues.apache.org/jira/browse/HDDS-6
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-4-HDDS-6.00.patch, HDDS-4-HDDS-6.01.patch
>
>
> Enable SCM kerberos auth



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-6) Enable SCM kerberos auth

2018-05-07 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-6:
--
Attachment: HDDS-4-HDDS-6.01.patch

> Enable SCM kerberos auth
> 
>
> Key: HDDS-6
> URL: https://issues.apache.org/jira/browse/HDDS-6
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-4-HDDS-6.00.patch, HDDS-4-HDDS-6.01.patch
>
>
> Enable SCM kerberos auth



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-6) Enable SCM kerberos auth

2018-05-07 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-6:
--
Attachment: (was: HDDS-4-HDDS-6.01.patch)

> Enable SCM kerberos auth
> 
>
> Key: HDDS-6
> URL: https://issues.apache.org/jira/browse/HDDS-6
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-4-HDDS-6.00.patch, HDDS-4-HDDS-6.01.patch
>
>
> Enable SCM kerberos auth



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-6) Enable SCM kerberos auth

2018-05-07 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-6:
--
Attachment: HDDS-4-HDDS-6.01.patch

> Enable SCM kerberos auth
> 
>
> Key: HDDS-6
> URL: https://issues.apache.org/jira/browse/HDDS-6
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-4-HDDS-6.00.patch, HDDS-4-HDDS-6.01.patch
>
>
> Enable SCM kerberos auth



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13532) RBF: Adding security

2018-05-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466791#comment-16466791
 ] 

Íñigo Goiri edited comment on HDFS-13532 at 5/8/18 3:00 AM:


[~zhengxg3] do you mind adding a design doc here for this as [~daryn] asked in 
HDFS-13358?


was (Author: elgoiri):
[~zhengxg3] do you mind adding a design doc here for this as [~daryn] asked in 
HDFS-12284?

> RBF: Adding security
> 
>
> Key: HDFS-13532
> URL: https://issues.apache.org/jira/browse/HDFS-13532
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Sherwood Zheng
>Priority: Major
>
> HDFS Router based federation should support security. This includes 
> authentication and delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13532) RBF: Adding security

2018-05-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466791#comment-16466791
 ] 

Íñigo Goiri commented on HDFS-13532:


[~zhengxg3] do you mind adding a design doc here for this as [~daryn] asked in 
HDFS-12284?

> RBF: Adding security
> 
>
> Key: HDFS-13532
> URL: https://issues.apache.org/jira/browse/HDFS-13532
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Sherwood Zheng
>Priority: Major
>
> HDFS Router based federation should support security. This includes 
> authentication and delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12510) RBF: Add security to UI

2018-05-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12510:
---
Issue Type: Task  (was: Sub-task)
Parent: (was: HDFS-12615)

> RBF: Add security to UI
> ---
>
> Key: HDFS-12510
> URL: https://issues.apache.org/jira/browse/HDFS-12510
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Íñigo Goiri
>Priority: Major
>  Labels: RBF
>
> HDFS-12273 implemented the UI for Router Based Federation without security.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12510) RBF: Add security to UI

2018-05-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12510:
---
Issue Type: Sub-task  (was: Task)
Parent: HDFS-13532

> RBF: Add security to UI
> ---
>
> Key: HDFS-12510
> URL: https://issues.apache.org/jira/browse/HDFS-12510
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Priority: Major
>  Labels: RBF
>
> HDFS-12273 implemented the UI for Router Based Federation without security.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13358) RBF: Support for Delegation Token

2018-05-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13358:
---
Issue Type: Bug  (was: Sub-task)
Parent: (was: HDFS-12615)

> RBF: Support for Delegation Token
> -
>
> Key: HDFS-13358
> URL: https://issues.apache.org/jira/browse/HDFS-13358
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Sherwood Zheng
>Assignee: Sherwood Zheng
>Priority: Major
>
> HDFS Router should support issuing / managing HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13358) RBF: Support for Delegation Token

2018-05-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-13358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-13358:
---
Issue Type: Sub-task  (was: Bug)
Parent: HDFS-13532

> RBF: Support for Delegation Token
> -
>
> Key: HDFS-13358
> URL: https://issues.apache.org/jira/browse/HDFS-13358
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Sherwood Zheng
>Assignee: Sherwood Zheng
>Priority: Major
>
> HDFS Router should support issuing / managing HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12284) RBF: Support for Kerberos authentication

2018-05-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12284:
---
Issue Type: Sub-task  (was: Task)
Parent: HDFS-13532

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Fix For: HDFS-10467
>
> Attachments: HDFS-12284.000.patch, HDFS-12284.001.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12284) RBF: Support for Kerberos authentication

2018-05-07 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-12284:
---
Issue Type: Task  (was: Sub-task)
Parent: (was: HDFS-12615)

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Fix For: HDFS-10467
>
> Attachments: HDFS-12284.000.patch, HDFS-12284.001.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13532) RBF: Adding security

2018-05-07 Thread JIRA
Íñigo Goiri created HDFS-13532:
--

 Summary: RBF: Adding security
 Key: HDFS-13532
 URL: https://issues.apache.org/jira/browse/HDFS-13532
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Íñigo Goiri
Assignee: Sherwood Zheng


HDFS Router based federation should support security. This includes 
authentication and delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12136) BlockSender performance regression due to volume scanner edge case

2018-05-07 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466776#comment-16466776
 ] 

Junping Du commented on HDFS-12136:
---

It looks like HDFS-11187 may not cover the full fix here. [~jojochuang], do 
you have further comments?
Moving this to 2.8.5, as we need more discussion and 2.8.4 is in the RC stage.

> BlockSender performance regression due to volume scanner edge case
> --
>
> Key: HDFS-12136
> URL: https://issues.apache.org/jira/browse/HDFS-12136
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-12136.branch-2.patch, HDFS-12136.trunk.patch
>
>
> HDFS-11160 attempted to fix a volume scan race for a file appended mid-scan 
> by reading the last checksum of finalized blocks within the {{BlockSender}} 
> ctor.  Unfortunately it holds the exclusive dataset lock to open and read 
> the metafile multiple times, so block sender instantiation becomes serialized.
> Performance completely collapses under heavy disk i/o utilization or high 
> xceiver activity.  Ex. lost node replication, balancing, or decommissioning.  
> The xceiver threads congest creating block senders and impair the heartbeat 
> processing that is contending for the same lock.  Combined with other lock 
> contention issues, pipelines break and nodes sporadically go dead.
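
A hedged illustration of the pattern described above (simplified, not the 
actual DataNode code): doing the metafile read while holding the coarse 
dataset lock serializes every {{BlockSender}} construction behind disk 
latency, while taking only in-memory state under the lock keeps constructions 
concurrent.

{code:java}
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

class BlockSenderLockSketch {
  private final Object datasetLock = new Object();

  // Shape of the regression: the disk read happens under the exclusive lock,
  // so every caller queues behind it.
  long lastChecksumUnderLock(File metaFile) throws IOException {
    synchronized (datasetLock) {
      return readLastChecksum(metaFile);
    }
  }

  // Alternative shape: copy cheap in-memory state under the lock, then do the
  // disk read outside it.
  long lastChecksumOutsideLock(File metaFile) throws IOException {
    synchronized (datasetLock) {
      // snapshot whatever per-block metadata is needed (placeholder)
    }
    return readLastChecksum(metaFile);
  }

  private long readLastChecksum(File metaFile) throws IOException {
    try (RandomAccessFile raf = new RandomAccessFile(metaFile, "r")) {
      raf.seek(Math.max(0, raf.length() - 4));
      return raf.readInt() & 0xFFFFFFFFL;  // illustrative: last 4 bytes
    }
  }
}
{code}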



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12136) BlockSender performance regression due to volume scanner edge case

2018-05-07 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-12136:
--
Target Version/s: 2.8.5  (was: 2.8.4)

> BlockSender performance regression due to volume scanner edge case
> --
>
> Key: HDFS-12136
> URL: https://issues.apache.org/jira/browse/HDFS-12136
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-12136.branch-2.patch, HDFS-12136.trunk.patch
>
>
> HDFS-11160 attempted to fix a volume scan race for a file appended mid-scan 
> by reading the last checksum of finalized blocks within the {{BlockSender}} 
> ctor.  Unfortunately it holds the exclusive dataset lock to open and read 
> the metafile multiple times, so block sender instantiation becomes serialized.
> Performance completely collapses under heavy disk i/o utilization or high 
> xceiver activity.  Ex. lost node replication, balancing, or decommissioning.  
> The xceiver threads congest creating block senders and impair the heartbeat 
> processing that is contending for the same lock.  Combined with other lock 
> contention issues, pipelines break and nodes sporadically go dead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-27) Fix TestStorageContainerManager#testBlockDeletionTransactions

2018-05-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-27?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466737#comment-16466737
 ] 

Hudson commented on HDDS-27:


SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14136 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14136/])
HDDS-27. Fix TestStorageContainerManager#testBlockDeletionTransactions. (xyao: 
rev 08ea90e1e4cd5d4860668a1368f0b0396fbe83e0)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManagerHelper.java


> Fix TestStorageContainerManager#testBlockDeletionTransactions
> -
>
> Key: HDDS-27
> URL: https://issues.apache.org/jira/browse/HDDS-27
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-27.001.patch
>
>
> TestStorageContainerManagerHelper#getAllBlocks needs to handle ID based 
> blocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-28) Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml

2018-05-07 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-28?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466733#comment-16466733
 ] 

genericqa commented on HDDS-28:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
69m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 54s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
53s{color} | {color:green} hadoop-ozone in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-dist in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-28 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922361/o28_20180507b.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux eb962a4a1cf0 4.4.0-121-generic #145-Ubuntu SMP Fri Apr 13 
13:47:23 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 696a4be |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/51/testReport/ |
| Max. process+thread count | 1096 (vs. ulimit of 1) |
| modules | C: hadoop-ozone/tools hadoop-tools/hadoop-ozone hadoop-dist U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/51/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   

[jira] [Created] (HDDS-32) Fix TestContainerDeletionChoosingPolicy#testTopNOrderedChoosingPolicy

2018-05-07 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-32:
--

 Summary: Fix 
TestContainerDeletionChoosingPolicy#testTopNOrderedChoosingPolicy
 Key: HDDS-32
 URL: https://issues.apache.org/jira/browse/HDDS-32
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-27) Fix TestStorageContainerManager#testBlockDeletionTransactions

2018-05-07 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-27?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-27:
---
   Resolution: Fixed
Fix Version/s: 0.2.1
   Status: Resolved  (was: Patch Available)

Thanks [~anu] for the review. I've committed the fix to trunk.

> Fix TestStorageContainerManager#testBlockDeletionTransactions
> -
>
> Key: HDDS-27
> URL: https://issues.apache.org/jira/browse/HDDS-27
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-27.001.patch
>
>
> TestStorageContainerManagerHelper#getAllBlocks needs to handle ID based 
> blocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-27) Fix TestStorageContainerManager#testBlockDeletionTransactions

2018-05-07 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-27?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466708#comment-16466708
 ] 

genericqa commented on HDDS-27:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 49m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m 16s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}111m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.TestStorageContainerManager |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.scm.TestSCMCli |
|   | hadoop.ozone.scm.TestContainerSQLCli |
|   | hadoop.ozone.TestOzoneConfigurationFields |
|   | hadoop.ozone.container.common.impl.TestContainerDeletionChoosingPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-27 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922357/HDDS-27.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7b6e87a25f7b 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 696a4be |
| maven | 

[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-05-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466696#comment-16466696
 ] 

Íñigo Goiri commented on HDFS-12284:


Thanks [~zhengxg3] for [^HDFS-12284.001.patch].
 Yetus will report a few missing licenses and a couple of other style issues 
there.
 A couple of comments:
 * I'm not sure there is a point in reimplementing all the methods to test 
this. I would just write a test that starts a secure cluster and runs a few 
typical operations; that should be enough.
 * The change in hadoop-hdfs-project/pom.xml shouldn't be there.
 * I'm not sure we should take the {{MiniRouterDFSCluster#getWebAddress()}} 
approach (in any case, I think the null check is wrong there). How does the 
regular MiniDFSCluster test this? I would take SecurityConfUtil and turn it 
into something like {{TestSecureRouterFederation}}.
 * For the temporary path in SecurityConfUtil, I would use the same approach as 
in {{TestSecureNNWithQJM}}; see the sketch below.
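
A minimal sketch of that temporary-path setup (assuming the usual Hadoop test 
utilities; the class and method names are illustrative, not lifted from 
{{TestSecureNNWithQJM}}):

{code:java}
import java.io.File;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.test.GenericTestUtils;

public class SecureTestBaseDirSketch {
  // Per-test directory under the build's test data dir, wiped before use.
  static File prepareBaseDir(String testName) {
    File baseDir = GenericTestUtils.getTestDir(testName);
    FileUtil.fullyDelete(baseDir);  // clean up leftovers from earlier runs
    if (!baseDir.mkdirs()) {
      throw new IllegalStateException("could not create " + baseDir);
    }
    return baseDir;
  }
}
{code}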

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Fix For: HDFS-10467
>
> Attachments: HDFS-12284.000.patch, HDFS-12284.001.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13322) fuse dfs - uid persists when switching between ticket caches

2018-05-07 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466693#comment-16466693
 ] 

genericqa commented on HDFS-13322:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
59m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 16m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 24m 
38s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13322 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922350/HDFS-13322.002.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 3019cd8416f0 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 696a4be |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24146/testReport/ |
| Max. process+thread count | 301 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24146/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> fuse dfs - uid persists when switching between ticket caches
> 
>
> Key: HDFS-13322
> URL: https://issues.apache.org/jira/browse/HDFS-13322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.6.0
> Environment: Linux xx.xx.xx.xxx 3.10.0-514.el7.x86_64 #1 SMP Wed 
> Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
>  
>Reporter: Alex Volskiy
>Assignee: Istvan Fajth
>Priority: Minor
> Attachments: HDFS-13322.001.patch, HDFS-13322.002.patch, 
> testHDFS-13322.sh, test_after_patch.out, test_before_patch.out
>
>
> The symptoms 

[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication

2018-05-07 Thread Sherwood Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466679#comment-16466679
 ] 

Sherwood Zheng commented on HDFS-12284:
---

Uploaded a patch with a unit test.

I have addressed most of [~daryn]'s comments except for the JMX change, where 
I have to wrap the call in a seemingly pointless doAs to make it work. I'm not 
sure why that is yet and am still investigating; please review the patch in 
the meantime.

I also fixed some style issues with lines longer than 80 characters.
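
For reference, a rough sketch of the doAs wrapper I mean (names are 
illustrative and the JMX call is a placeholder, not the code in the patch):

{code:java}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class JmxDoAsSketch {
  // Wrap the JMX fetch in the login user's security context.
  static String fetchRouterJmx() throws Exception {
    UserGroupInformation ugi = UserGroupInformation.getLoginUser();
    return ugi.doAs((PrivilegedExceptionAction<String>) () ->
        queryRouterJmx());  // placeholder for the actual JMX query in the test
  }

  // Hypothetical helper standing in for the real HTTP/JMX call.
  private static String queryRouterJmx() {
    return "{}";
  }
}
{code}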

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Fix For: HDFS-10467
>
> Attachments: HDFS-12284.000.patch, HDFS-12284.001.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-28) Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml

2018-05-07 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-28?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466676#comment-16466676
 ] 

genericqa commented on HDDS-28:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
40m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
56s{color} | {color:green} hadoop-ozone in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-28 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922358/o28_20180507.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux 49048eb52e17 3.13.0-141-generic #190-Ubuntu SMP Fri Jan 19 
12:52:38 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 696a4be |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/50/testReport/ |
| Max. process+thread count | 992 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-ozone U: hadoop-tools/hadoop-ozone |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/50/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml
> --
>
> Key: HDDS-28
> URL: https://issues.apache.org/jira/browse/HDDS-28
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: o28_20180507.patch, 

[jira] [Updated] (HDFS-12284) RBF: Support for Kerberos authentication

2018-05-07 Thread Sherwood Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sherwood Zheng updated HDFS-12284:
--
Attachment: HDFS-12284.001.patch

> RBF: Support for Kerberos authentication
> 
>
> Key: HDFS-12284
> URL: https://issues.apache.org/jira/browse/HDFS-12284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: security
>Reporter: Zhe Zhang
>Assignee: Sherwood Zheng
>Priority: Major
> Fix For: HDFS-10467
>
> Attachments: HDFS-12284.000.patch, HDFS-12284.001.patch
>
>
> HDFS Router should support Kerberos authentication and issuing / managing 
> HDFS delegation tokens.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-19) Ozone: Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)

2018-05-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-19?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466627#comment-16466627
 ] 

Tsz Wo Nicholas Sze commented on HDDS-19:
-

Just found some problems in the pom files; see HDDS-28. Not sure if fixing 
them would also fix the problem here.

> Ozone: Update ozone to latest ratis snapshot build 
> (0.1.1-alpha-4309324-SNAPSHOT)
> -
>
> Key: HDDS-19
> URL: https://issues.apache.org/jira/browse/HDDS-19
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-19.001.patch, HDFS-13456-HDFS-7240.001.patch, 
> HDFS-13456-HDFS-7240.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-28) Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml

2018-05-07 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-28?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDDS-28:

Attachment: o28_20180507b.patch

> Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml
> --
>
> Key: HDDS-28
> URL: https://issues.apache.org/jira/browse/HDDS-28
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: o28_20180507.patch, o28_20180507b.patch
>
>
> {code}
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-ozone-filesystem:jar:3.2.0-SNAPSHOT
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-server-framework:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 173, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-server-scm:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 178, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-client:jar -> duplicate declaration 
> of version (?) @ org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 183, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-container-service:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 188, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-ozone-ozone-manager:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 193, column 17
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-28) Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml

2018-05-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-28?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466625#comment-16466625
 ] 

Tsz Wo Nicholas Sze commented on HDDS-28:
-

o28_20180507b.patch: adds "provided" scope for the internal dependencies.

> Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml
> --
>
> Key: HDDS-28
> URL: https://issues.apache.org/jira/browse/HDDS-28
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: o28_20180507.patch, o28_20180507b.patch
>
>
> {code}
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-ozone-filesystem:jar:3.2.0-SNAPSHOT
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-server-framework:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 173, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-server-scm:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 178, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-client:jar -> duplicate declaration 
> of version (?) @ org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 183, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-container-service:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 188, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-ozone-ozone-manager:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 193, column 17
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-27) Fix TestStorageContainerManager#testBlockDeletionTransactions

2018-05-07 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-27?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466621#comment-16466621
 ] 

Anu Engineer commented on HDDS-27:
--

+1, pending jenkins

> Fix TestStorageContainerManager#testBlockDeletionTransactions
> -
>
> Key: HDDS-27
> URL: https://issues.apache.org/jira/browse/HDDS-27
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-27.001.patch
>
>
> TestStorageContainerManagerHelper#getAllBlocks needs to handle ID based 
> blocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-25) Simple async event processing for SCM

2018-05-07 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-25?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466619#comment-16466619
 ] 

genericqa commented on HDDS-25:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-hdds/framework generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-hdds_framework generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} framework in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
24s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdds/framework |
|  |  
org.apache.hadoop.hdds.server.org.apache.hdds.events.EventQueue.fireEvent(Event,
 Object) makes inefficient use of keySet iterator instead of entrySet iterator  
At EventQueue.java:of keySet iterator instead of entrySet iterator  At 
EventQueue.java:[line 82] |
\\
\\
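The FindBugs entry above flags a common Java map-iteration inefficiency: values are 
looked up through keySet() instead of walking entrySet(). A minimal, hypothetical 
sketch of the flagged pattern and the usual fix follows; the field and handler types 
are placeholders, not the actual EventQueue members.

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the keySet-vs-entrySet idiom flagged above;
// placeholder types only, not the real EventQueue code.
class HandlerTableSketch {
  private final Map<String, List<Runnable>> handlers = new HashMap<>();

  void fireAllFlaggedPattern() {
    // Flagged pattern: iterate the keySet and look each value up again.
    for (String eventType : handlers.keySet()) {
      for (Runnable handler : handlers.get(eventType)) {
        handler.run();
      }
    }
  }

  void fireAllSuggestedPattern() {
    // Usual fix: iterate the entrySet so each value is fetched only once.
    for (Map.Entry<String, List<Runnable>> entry : handlers.entrySet()) {
      for (Runnable handler : entry.getValue()) {
        handler.run();
      }
    }
  }
}
{code}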
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922329/HDDS-25.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 884a08d10628 4.4.0-121-generic #145-Ubuntu SMP Fri Apr 13 
13:47:23 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 696a4be |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDDS-Build/48/artifact/out/new-findbugs-hadoop-hdds_framework.html
 |
| javadoc | 

[jira] [Commented] (HDFS-13174) hdfs mover -p /path times out after 20 min

2018-05-07 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466611#comment-16466611
 ] 

Wei-Chiu Chuang commented on HDFS-13174:


Thanks for raising the issue, [~pifta]. The description makes sense to me. I'll 
review the patch.

> hdfs mover -p /path times out after 20 min
> --
>
> Key: HDFS-13174
> URL: https://issues.apache.org/jira/browse/HDFS-13174
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer  mover
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha2
>Reporter: Istvan Fajth
>Assignee: Istvan Fajth
>Priority: Major
> Attachments: HDFS-13174.001.patch
>
>
> HDFS-11015 introduced an iteration timeout in the Dispatcher.Source class that 
> is checked while dispatching the moves performed by the Balancer and the Mover. 
> This timeout is hardwired to 20 minutes.
> The Balancer works in iterations: even if one iteration times out, it keeps 
> running and starts another iteration, and it only fails after a few iterations 
> in which no moves happened.
> The Mover, on the other hand, has no iterations, so if moving a path runs for 
> more than 20 minutes while moves are decided and enqueued between two 
> DataNodes, the Mover stops after 20 minutes with the following exception 
> reported to the console (line numbers might differ, as this exception came 
> from a CDH 5.12.1 installation).
>  java.io.IOException: Block move timed out
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.receiveResponse(Dispatcher.java:382)
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.dispatch(Dispatcher.java:328)
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$PendingMove.access$2500(Dispatcher.java:186)
>  at 
> org.apache.hadoop.hdfs.server.balancer.Dispatcher$1.run(Dispatcher.java:956)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  
> Note that this issue does not occur if all blocks can be moved within their 
> current DataNodes, without having to move any block to another DataNode.
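To make the hardwired cutoff concrete, here is a minimal, hypothetical sketch of a 
per-move timeout check of the kind the description refers to; the class, constant, 
and method names are placeholders, not the actual Dispatcher or Mover code.

{code:java}
// Hypothetical sketch only -- not the real Dispatcher code from HDFS-11015.
class PendingMoveTimeoutSketch {

  // The hardwired cutoff described above: 20 minutes.
  private static final long MAX_ITERATION_TIME_MS = 20L * 60L * 1000L;

  private final long startTimeMs = System.currentTimeMillis();

  boolean hasTimedOut() {
    // Once 20 minutes elapse, the pending move is abandoned. The Balancer
    // simply starts a new iteration afterwards; the Mover has no further
    // iterations, so it surfaces "Block move timed out" and exits.
    return System.currentTimeMillis() - startTimeMs > MAX_ITERATION_TIME_MS;
  }
}
{code}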



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13486) Backport HDFS-11817 (A faulty node can cause a lease leak and NPE on accessing data) to branch-2.7

2018-05-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13486:
---
Summary: Backport HDFS-11817 (A faulty node can cause a lease leak and NPE 
on accessing data) to branch-2.7  (was: Backport HDFS-11817 to branch-2.7)

> Backport HDFS-11817 (A faulty node can cause a lease leak and NPE on 
> accessing data) to branch-2.7
> --
>
> Key: HDFS-13486
> URL: https://issues.apache.org/jira/browse/HDFS-13486
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 2.7.7
>
> Attachments: HDFS-11817.branch-2.7.001.patch, 
> HDFS-11817.branch-2.7.002.patch
>
>
> HDFS-11817 is a good fix to have in branch-2.7.
> I'm taking a stab at it now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13486) Backport HDFS-11817 (A faulty node can cause a lease leak and NPE on accessing data) to branch-2.7

2018-05-07 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-13486:
---
   Resolution: Fixed
Fix Version/s: 2.7.7
   Status: Resolved  (was: Patch Available)

Committed to branch-2.7

> Backport HDFS-11817 (A faulty node can cause a lease leak and NPE on 
> accessing data) to branch-2.7
> --
>
> Key: HDFS-13486
> URL: https://issues.apache.org/jira/browse/HDFS-13486
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 2.7.7
>
> Attachments: HDFS-11817.branch-2.7.001.patch, 
> HDFS-11817.branch-2.7.002.patch
>
>
> HDFS-11817 is a good fix to have in branch-2.7.
> I'm taking a stab at it now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-31) Fix TestSCMCli

2018-05-07 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-31?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-31:
---
Summary: Fix TestSCMCli  (was: Fix TestSCMCli.testHelp)

> Fix TestSCMCli
> --
>
> Key: HDDS-31
> URL: https://issues.apache.org/jira/browse/HDDS-31
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Priority: Major
>
> [ERROR]   TestSCMCli.testHelp:481 expected:<[usage: hdfs scm -container 
> -create
> ]> but was:<[]>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-31) Fix TestSCMCli

2018-05-07 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-31?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-31:
---
Description: 
[ERROR]   TestSCMCli.testHelp:481 expected:<[usage: hdfs scm -container -create
]> but was:<[]>
[ERROR]   TestSCMCli.testListContainerCommand:406
[ERROR] Errors:


  was:
[ERROR]   TestSCMCli.testHelp:481 expected:<[usage: hdfs scm -container -create
]> but was:<[]>



> Fix TestSCMCli
> --
>
> Key: HDDS-31
> URL: https://issues.apache.org/jira/browse/HDDS-31
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Priority: Major
>
> [ERROR]   TestSCMCli.testHelp:481 expected:<[usage: hdfs scm -container 
> -create
> ]> but was:<[]>
> [ERROR]   TestSCMCli.testListContainerCommand:406
> [ERROR] Errors:



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-30) Fix TestContainerSQLCli

2018-05-07 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-30:
--

 Summary: Fix TestContainerSQLCli
 Key: HDDS-30
 URL: https://issues.apache.org/jira/browse/HDDS-30
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-31) Fix TestSCMCli.testHelp

2018-05-07 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-31:
--

 Summary: Fix TestSCMCli.testHelp
 Key: HDDS-31
 URL: https://issues.apache.org/jira/browse/HDDS-31
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao


[ERROR]   TestSCMCli.testHelp:481 expected:<[usage: hdfs scm -container -create
]> but was:<[]>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-28) Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml

2018-05-07 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-28?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDDS-28:

Attachment: o28_20180507.patch

> Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml
> --
>
> Key: HDDS-28
> URL: https://issues.apache.org/jira/browse/HDDS-28
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: o28_20180507.patch
>
>
> {code}
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-ozone-filesystem:jar:3.2.0-SNAPSHOT
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-server-framework:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 173, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-server-scm:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 178, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-client:jar -> duplicate declaration 
> of version (?) @ org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 183, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-container-service:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 188, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-ozone-ozone-manager:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 193, column 17
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-28) Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml

2018-05-07 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-28?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDDS-28:

Status: Patch Available  (was: Open)

> Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml
> --
>
> Key: HDDS-28
> URL: https://issues.apache.org/jira/browse/HDDS-28
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Attachments: o28_20180507.patch
>
>
> {code}
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-ozone-filesystem:jar:3.2.0-SNAPSHOT
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-server-framework:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 173, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-server-scm:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 178, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-client:jar -> duplicate declaration 
> of version (?) @ org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 183, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-hdds-container-service:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 188, column 17
> [WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
> be unique: org.apache.hadoop:hadoop-ozone-ozone-manager:jar -> duplicate 
> declaration of version (?) @ 
> org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
> /Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
> 193, column 17
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-29) Fix TestStorageContainerManager#testRpcPermission

2018-05-07 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-29:
--

 Summary: Fix TestStorageContainerManager#testRpcPermission
 Key: HDDS-29
 URL: https://issues.apache.org/jira/browse/HDDS-29
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao


This is caused by a Mockito.spy limitation: an object instantiated inside the 
spied object (ClientProtocolServer, after HDDS-13) does not return the fakedUser.
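As an illustration of this kind of Mockito.spy limitation, here is a minimal, 
hypothetical sketch with placeholder classes (not the real SCM or 
ClientProtocolServer types): stubbing works on calls made against the spy itself, 
but not on objects the real instance creates internally.

{code:java}
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.spy;

// Hypothetical sketch only -- placeholder classes, not the real SCM types.
public class SpyLimitationSketch {

  static class Inner {
    public String user() { return "realUser"; }
  }

  static class Outer {
    public Inner newInner() { return new Inner(); }
    public String userViaInner() { return newInner().user(); }
  }

  public static void main(String[] args) {
    Outer spied = spy(new Outer());

    // Stubbing a method on the spy itself works as expected.
    doReturn("fakedUser").when(spied).userViaInner();
    System.out.println(spied.userViaInner()); // prints "fakedUser"

    // But an object created inside the real instance is untouched by the spy,
    // so callers holding it still see the real behaviour.
    Inner inner = spied.newInner();
    System.out.println(inner.user()); // prints "realUser"
  }
}
{code}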





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-28) Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml

2018-05-07 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDDS-28:
---

 Summary: Duplicate declaration in hadoop-tools/hadoop-ozone/pom.xml
 Key: HDDS-28
 URL: https://issues.apache.org/jira/browse/HDDS-28
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


{code}
[WARNING] Some problems were encountered while building the effective model for 
org.apache.hadoop:hadoop-ozone-filesystem:jar:3.2.0-SNAPSHOT
[WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
be unique: org.apache.hadoop:hadoop-hdds-server-framework:jar -> duplicate 
declaration of version (?) @ 
org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
/Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
173, column 17
[WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
be unique: org.apache.hadoop:hadoop-hdds-server-scm:jar -> duplicate 
declaration of version (?) @ 
org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
/Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
178, column 17
[WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
be unique: org.apache.hadoop:hadoop-hdds-client:jar -> duplicate declaration of 
version (?) @ org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
/Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
183, column 17
[WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
be unique: org.apache.hadoop:hadoop-hdds-container-service:jar -> duplicate 
declaration of version (?) @ 
org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
/Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
188, column 17
[WARNING] 'dependencies.dependency.(groupId:artifactId:type:classifier)' must 
be unique: org.apache.hadoop:hadoop-ozone-ozone-manager:jar -> duplicate 
declaration of version (?) @ 
org.apache.hadoop:hadoop-ozone-filesystem:[unknown-version], 
/Users/szetszwo/hadoop/apache-hadoop/hadoop-tools/hadoop-ozone/pom.xml, line 
193, column 17
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-27) Fix TestStorageContainerManager#testBlockDeletionTransactions

2018-05-07 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-27?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-27:
---
Status: Patch Available  (was: Open)

> Fix TestStorageContainerManager#testBlockDeletionTransactions
> -
>
> Key: HDDS-27
> URL: https://issues.apache.org/jira/browse/HDDS-27
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-27.001.patch
>
>
> TestStorageContainerManagerHelper#getAllBlocks needs to handle ID based 
> blocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-27) Fix TestStorageContainerManager#testBlockDeletionTransactions

2018-05-07 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-27?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-27:
---
Attachment: HDDS-27.001.patch

> Fix TestStorageContainerManager#testBlockDeletionTransactions
> -
>
> Key: HDDS-27
> URL: https://issues.apache.org/jira/browse/HDDS-27
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-27.001.patch
>
>
> TestStorageContainerManagerHelper#getAllBlocks needs to handle ID based 
> blocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-27) Fix TestStorageContainerManager#testBlockDeletionTransactions

2018-05-07 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-27:
--

 Summary: Fix 
TestStorageContainerManager#testBlockDeletionTransactions
 Key: HDDS-27
 URL: https://issues.apache.org/jira/browse/HDDS-27
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Xiaoyu Yao


TestStorageContainerManagerHelper#getAllBlocks needs to handle ID based blocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-27) Fix TestStorageContainerManager#testBlockDeletionTransactions

2018-05-07 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-27?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-27:
--

Assignee: Xiaoyu Yao

> Fix TestStorageContainerManager#testBlockDeletionTransactions
> -
>
> Key: HDDS-27
> URL: https://issues.apache.org/jira/browse/HDDS-27
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> TestStorageContainerManagerHelper#getAllBlocks needs to handle ID based 
> blocks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-26) Fix Ozone Unit Test Failures

2018-05-07 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-26?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reassigned HDDS-26:
--

Assignee: Xiaoyu Yao

> Fix Ozone Unit Test Failures
> 
>
> Key: HDDS-26
> URL: https://issues.apache.org/jira/browse/HDDS-26
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> This is an umbrella JIRA to fix unit test failures related or unrelated to 
> HDDS-1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-26) Fix Ozone Unit Test Failures

2018-05-07 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDDS-26:
--

 Summary: Fix Ozone Unit Test Failures
 Key: HDDS-26
 URL: https://issues.apache.org/jira/browse/HDDS-26
 Project: Hadoop Distributed Data Store
  Issue Type: Test
Reporter: Xiaoyu Yao


This is an umbrella JIRA to fix unit test failures related or unrelated to HDDS-1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12981) renameSnapshot a Non-Existent snapshot to itself should throw error

2018-05-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466586#comment-16466586
 ] 

Hudson commented on HDFS-12981:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14135 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14135/])
HDFS-12981. renameSnapshot a Non-Existent snapshot to itself should (xiao: rev 
696a4be0daac00dd3bb64801d9fbe659aef9e089)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotRename.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSnapshotCommands.java


> renameSnapshot a Non-Existent snapshot to itself should throw error
> ---
>
> Key: HDFS-12981
> URL: https://issues.apache.org/jira/browse/HDFS-12981
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Patel
>Assignee: Kitti Nanasi
>Priority: Minor
> Fix For: 2.10.0, 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-12981-branch-2.6.0.001.patch, 
> HDFS-12981-branch-2.6.0.002.patch, HDFS-12981.001.patch, 
> HDFS-12981.002.patch, HDFS-12981.003.patch, HDFS-12981.004.patch
>
>
> When trying to rename a non-existent HDFS snapshot to ITSELF, there are no 
> errors and the command exits with a success code.
> The steps to reproduce this issue are:
> hdfs dfs -mkdir /tmp/dir1
> hdfs dfsadmin -allowSnapshot /tmp/dir1
> hdfs dfs  -createSnapshot /tmp/dir1  snap1_dir
> Rename from non-existent to another_non-existent : errors and return code 1.  
> This is correct.
>   hdfs dfs -renameSnapshot /tmp/dir1 nonexist another_nonexist  : 
>   echo $?
>
>   renameSnapshot: The snapshot nonexist does not exist for directory /tmp/dir1
> Rename from non-existent to non-existent : no errors and return code 0  
> instead of Error and return code 1.
>   hdfs dfs -renameSnapshot /tmp/dir1 nonexist nonexist  ;  echo $?
> Current behavior:   No error and return code 0.
> Expected behavior:  An error returned and return code 1.
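For illustration, a minimal, hypothetical sketch of the expected behaviour follows; 
the types are placeholders and this is not the actual snapshot-rename code touched 
by the commit above.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch only -- placeholder types, not the real snapshot code.
class SnapshotTableSketch {
  private final Map<String, Long> snapshots = new HashMap<>();

  void renameSnapshot(String oldName, String newName) {
    // Check existence first, even when oldName.equals(newName); otherwise a
    // rename of a non-existent snapshot to itself would silently "succeed".
    if (!snapshots.containsKey(oldName)) {
      throw new IllegalArgumentException(
          "The snapshot " + oldName + " does not exist");
    }
    if (!oldName.equals(newName)) {
      snapshots.put(newName, snapshots.remove(oldName));
    }
  }
}
{code}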



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12981) renameSnapshot a Non-Existent snapshot to itself should throw error

2018-05-07 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12981:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.4
   3.1.1
   3.2.0
   2.10.0
   Status: Resolved  (was: Patch Available)

Committed to trunk through branch-2. Thanks again Sailesh for the report, and 
Kitti for the fix!

> renameSnapshot a Non-Existent snapshot to itself should throw error
> ---
>
> Key: HDFS-12981
> URL: https://issues.apache.org/jira/browse/HDFS-12981
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Patel
>Assignee: Kitti Nanasi
>Priority: Minor
> Fix For: 2.10.0, 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-12981-branch-2.6.0.001.patch, 
> HDFS-12981-branch-2.6.0.002.patch, HDFS-12981.001.patch, 
> HDFS-12981.002.patch, HDFS-12981.003.patch, HDFS-12981.004.patch
>
>
> When trying to rename a non-existent HDFS snapshot to ITSELF, there are no 
> errors and the command exits with a success code.
> The steps to reproduce this issue are:
> hdfs dfs -mkdir /tmp/dir1
> hdfs dfsadmin -allowSnapshot /tmp/dir1
> hdfs dfs  -createSnapshot /tmp/dir1  snap1_dir
> Rename from non-existent to another_non-existent : errors and return code 1.  
> This is correct.
>   hdfs dfs -renameSnapshot /tmp/dir1 nonexist another_nonexist  : 
>   echo $?
>
>   renameSnapshot: The snapshot nonexist does not exist for directory /tmp/dir1
> Rename from non-existent to non-existent : no errors and return code 0  
> instead of Error and return code 1.
>   hdfs dfs -renameSnapshot /tmp/dir1 nonexist nonexist  ;  echo $?
> Current behavior:   No error and return code 0.
> Expected behavior:  An error returned and return code 1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12981) renameSnapshot a Non-Existent snapshot to itself should throw error

2018-05-07 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466561#comment-16466561
 ] 

Xiao Chen commented on HDFS-12981:
--

+1 on patch 4, committing

> renameSnapshot a Non-Existent snapshot to itself should throw error
> ---
>
> Key: HDFS-12981
> URL: https://issues.apache.org/jira/browse/HDFS-12981
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Patel
>Assignee: Kitti Nanasi
>Priority: Minor
> Attachments: HDFS-12981-branch-2.6.0.001.patch, 
> HDFS-12981-branch-2.6.0.002.patch, HDFS-12981.001.patch, 
> HDFS-12981.002.patch, HDFS-12981.003.patch, HDFS-12981.004.patch
>
>
> When trying to rename a non-existent HDFS snapshot to ITSELF, there are no 
> errors and the command exits with a success code.
> The steps to reproduce this issue are:
> hdfs dfs -mkdir /tmp/dir1
> hdfs dfsadmin -allowSnapshot /tmp/dir1
> hdfs dfs  -createSnapshot /tmp/dir1  snap1_dir
> Rename from non-existent to another_non-existent : errors and return code 1.  
> This is correct.
>   hdfs dfs -renameSnapshot /tmp/dir1 nonexist another_nonexist  : 
>   echo $?
>
>   renameSnapshot: The snapshot nonexist does not exist for directory /tmp/dir1
> Rename from non-existent to non-existent : no errors and return code 0  
> instead of Error and return code 1.
>   hdfs dfs -renameSnapshot /tmp/dir1 nonexist nonexist  ;  echo $?
> Current behavior:   No error and return code 0.
> Expected behavior:  An error returned and return code 1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12981) renameSnapshot a Non-Existent snapshot to itself should throw error

2018-05-07 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-12981:
-
Summary: renameSnapshot a Non-Existent snapshot to itself should throw 
error  (was: HDFS  renameSnapshot to Itself for Non Existent snapshot should 
throw error)

> renameSnapshot a Non-Existent snapshot to itself should throw error
> ---
>
> Key: HDFS-12981
> URL: https://issues.apache.org/jira/browse/HDFS-12981
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Patel
>Assignee: Kitti Nanasi
>Priority: Minor
> Attachments: HDFS-12981-branch-2.6.0.001.patch, 
> HDFS-12981-branch-2.6.0.002.patch, HDFS-12981.001.patch, 
> HDFS-12981.002.patch, HDFS-12981.003.patch, HDFS-12981.004.patch
>
>
> When trying to rename a non-existent HDFS snapshot to ITSELF, there are no 
> errors and the command exits with a success code.
> The steps to reproduce this issue are:
> hdfs dfs -mkdir /tmp/dir1
> hdfs dfsadmin -allowSnapshot /tmp/dir1
> hdfs dfs  -createSnapshot /tmp/dir1  snap1_dir
> Rename from non-existent to another_non-existent : errors and return code 1.  
> This is correct.
>   hdfs dfs -renameSnapshot /tmp/dir1 nonexist another_nonexist  : 
>   echo $?
>
>   renameSnapshot: The snapshot nonexist does not exist for directory /tmp/dir1
> Rename from non-existent to non-existent : no errors and return code 0  
> instead of Error and return code 1.
>   hdfs dfs -renameSnapshot /tmp/dir1 nonexist nonexist  ;  echo $?
> Current behavior:   No error and return code 0.
> Expected behavior:  An error returned and return code 1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-19) Ozone: Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)

2018-05-07 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-19?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466559#comment-16466559
 ] 

Tsz Wo Nicholas Sze commented on HDDS-19:
-

[~msingh], how could I reproduce the java.lang.NoSuchFieldError? What 
commands/steps do I need to run, exactly?

> Ozone: Update ozone to latest ratis snapshot build 
> (0.1.1-alpha-4309324-SNAPSHOT)
> -
>
> Key: HDDS-19
> URL: https://issues.apache.org/jira/browse/HDDS-19
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-19.001.patch, HDFS-13456-HDFS-7240.001.patch, 
> HDFS-13456-HDFS-7240.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1) Remove SCM Block DB

2018-05-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466545#comment-16466545
 ] 

Hudson commented on HDDS-1:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14134 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14134/])
HDDS-1. Remove SCM Block DB. Contributed by Xiaoyu Yao. (aengineer: rev 
3a43ac2851f5dea4deb8a1dfebf9bf65fc57bd76)
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkRocksDbStore.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CloseContainerHandler.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSQLCli.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/TestBlockDeletingService.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/ksm/helpers/KsmKeyLocationInfoGroup.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipelines/ratis/RatisManagerImpl.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/TestContainerMapping.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/common/DeleteBlockGroupResult.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSmallFile.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestXceiverClientManager.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/ozoneimpl/TestOzoneContainer.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/DeleteBlockResult.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerPersistence.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerReport.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/ContainerManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMBlockProtocolServer.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/container/common/helpers/PipelineChannel.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/ChunkGroupOutputStream.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/interfaces/ContainerDeletionChoosingPolicy.java
* (edit) 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/container/CloseContainerHandler.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/web/client/TestKeys.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/metrics/TestContainerMetrics.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/protocol/StorageContainerLocationProtocol.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/background/BlockDeletingService.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClient.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMClientProtocolServer.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerStateManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/Mapping.java
* (edit) 
hadoop-hdds/container-service/src/main/proto/StorageContainerDatanodeProtocol.proto
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/ContainerUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/ksm/KeySpaceManager.java
* (edit) 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkDatanodeDispatcher.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/server/TestContainerServer.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/LevelDBStore.java
* (edit) 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/helpers/FileUtils.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/ksm/KSMMetadataManagerImpl.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientManager.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/container/common/impl/TestContainerDeletionChoosingPolicy.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/storage/ContainerProtocolCalls.java
* (edit) 

[jira] [Commented] (HDFS-13322) fuse dfs - uid persists when switching between ticket caches

2018-05-07 Thread Istvan Fajth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466540#comment-16466540
 ] 

Istvan Fajth commented on HDFS-13322:
-

Added patch v2 with the previously forgotten API doc changes.

> fuse dfs - uid persists when switching between ticket caches
> 
>
> Key: HDFS-13322
> URL: https://issues.apache.org/jira/browse/HDFS-13322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.6.0
> Environment: Linux xx.xx.xx.xxx 3.10.0-514.el7.x86_64 #1 SMP Wed 
> Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
>  
>Reporter: Alex Volskiy
>Assignee: Istvan Fajth
>Priority: Minor
> Attachments: HDFS-13322.001.patch, HDFS-13322.002.patch, 
> testHDFS-13322.sh, test_after_patch.out, test_before_patch.out
>
>
> The symptoms of this issue are the same as described in HDFS-3608 except the 
> workaround that was applied (detect changes in UID ticket cache) doesn't 
> resolve the issue when multiple ticket caches are in use by the same user.
> Our use case requires that a job scheduler running as a specific uid obtain 
> separate kerberos sessions per job and that each of these sessions use a 
> separate cache. When switching sessions this way, no change is made to the 
> original ticket cache so the cached filesystem instance doesn't get 
> regenerated.
>  
> {{$ export KRB5CCNAME=/tmp/krb5cc_session1}}
> {{$ kinit user_a@domain}}
> {{$ touch /fuse_mount/tmp/testfile1}}
> {{$ ls -l /fuse_mount/tmp/testfile1}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile1*}}
> {{$ export KRB5CCNAME=/tmp/krb5cc_session2}}
> {{$ kinit user_b@domain}}
> {{$ touch /fuse_mount/tmp/testfile2}}
> {{$ ls -l /fuse_mount/tmp/testfile2}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile2*}}
> {{   }}{color:#d04437}*{{** expected owner to be user_b **}}*{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13322) fuse dfs - uid persists when switching between ticket caches

2018-05-07 Thread Istvan Fajth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Fajth updated HDFS-13322:

Attachment: HDFS-13322.002.patch

> fuse dfs - uid persists when switching between ticket caches
> 
>
> Key: HDFS-13322
> URL: https://issues.apache.org/jira/browse/HDFS-13322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.6.0
> Environment: Linux xx.xx.xx.xxx 3.10.0-514.el7.x86_64 #1 SMP Wed 
> Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
>  
>Reporter: Alex Volskiy
>Assignee: Istvan Fajth
>Priority: Minor
> Attachments: HDFS-13322.001.patch, HDFS-13322.002.patch, 
> testHDFS-13322.sh, test_after_patch.out, test_before_patch.out
>
>
> The symptoms of this issue are the same as described in HDFS-3608 except the 
> workaround that was applied (detect changes in UID ticket cache) doesn't 
> resolve the issue when multiple ticket caches are in use by the same user.
> Our use case requires that a job scheduler running as a specific uid obtain 
> separate kerberos sessions per job and that each of these sessions use a 
> separate cache. When switching sessions this way, no change is made to the 
> original ticket cache so the cached filesystem instance doesn't get 
> regenerated.
>  
> {{$ export KRB5CCNAME=/tmp/krb5cc_session1}}
> {{$ kinit user_a@domain}}
> {{$ touch /fuse_mount/tmp/testfile1}}
> {{$ ls -l /fuse_mount/tmp/testfile1}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile1*}}
> {{$ export KRB5CCNAME=/tmp/krb5cc_session2}}
> {{$ kinit user_b@domain}}
> {{$ touch /fuse_mount/tmp/testfile2}}
> {{$ ls -l /fuse_mount/tmp/testfile2}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile2*}}
> {{   }}{color:#d04437}*{{** expected owner to be user_b **}}*{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-4) Implement security for Hadoop Distributed Storage Layer

2018-05-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-4?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDDS-4:
---

Assignee: Xiaoyu Yao  (was: Anu Engineer)

> Implement security for Hadoop Distributed Storage Layer 
> 
>
> Key: HDDS-4
> URL: https://issues.apache.org/jira/browse/HDDS-4
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: Security
>Reporter: Anu Engineer
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HadoopStorageLayerSecurity.pdf
>
>
> In HDFS-7240, we have created a scalable block layer that facilitates 
> separation of the namespace and the block layer. The Hadoop Distributed Storage 
> Layer (HDSL) allows us to scale HDFS (HDFS-10419) as well as create Ozone 
> (HDFS-13074).
> This JIRA is an umbrella JIRA that tracks the security-related work items for 
> Hadoop Distributed Storage Layer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1) Remove SCM Block DB

2018-05-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-1?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

[~xyao] Thanks for getting this Herculean task done. The code review was quite 
hard, and I really appreciate how much work has gone into this patch.

There are some Ozone tests which are broken, but we can file JIRAs to address 
those issues.

 

> Remove SCM Block DB
> ---
>
> Key: HDDS-1
> URL: https://issues.apache.org/jira/browse/HDDS-1
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-1.002.patch, HDDS-1.003.patch, HDDS-1.004.patch, 
> HDDS-1.005.patch, HDFS-13504.001.patch
>
>
> The block/key information is maintained by Ozone Master (a.k.a. KSM). This 
> ticket is opened to remove the redundant block db at SCM. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-6) Enable SCM kerberos auth

2018-05-07 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-6?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466528#comment-16466528
 ] 

Xiaoyu Yao commented on HDDS-6:
---

Thanks [~ajayydv] for working on this. The patch looks good to me overall. Here 
are a few minor comments.

 

ScmConfigKeys.java

Line 118: NIT: change to OZONE_SCM_KERBEROS_KEYTAB_FILE_KEY = 
"ozone.scm.kerberos.keytab.file" for consistency and easier config UI filtering?

 

ScmBlockLocationProtocolPB.java

Line 39: do we expect the principal for the block location protocol client to 
always be the DN Kerberos principal? Quadra, for example, may run as a non-HDFS 
principal.

 

OzoneConfigKeys.java

Line 235: Can we document the relationship between ozone.security.enabled and 
hadoop.security.authentication in ozone-default.xml? What if 
ozone.security.enabled is set but hadoop.security.authentication=simple?

 

Ozone-default.xml

Lines 1057-1067, 1090-1097: should we leave these for KSM Kerberos support in 
the next patch?

 

StorageContainerManager.java 

Line 170: should the default be false instead of true?

 

Line 204: the comment is not accurate. It should be something like "Login as 
the configured user for SCM." 

 

Line 208: NIT: suggest rename to loginAsSCMUser()

 

 

MiniOzoneClusterImpl.java

Line 282: can you add more context info related to the authentication error, 
e.g. failure to log in as the SCM user?

 

 

TestStorageContainerManager.java 

Can you add a case for a successful SCM login and for a failed SCM login due to 
a bad principal or keytab caused by misconfiguration?

> Enable SCM kerberos auth
> 
>
> Key: HDDS-6
> URL: https://issues.apache.org/jira/browse/HDDS-6
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM, Security
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-4-HDDS-6.00.patch
>
>
> Enable SCM kerberos auth



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-25) Simple async event processing for SCM

2018-05-07 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-25?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466524#comment-16466524
 ] 

Anu Engineer commented on HDDS-25:
--

[~elek] Thanks for posting the draft. It looks good to me. Some questions:

 
 # Nit: In the {{EventExecutor}} interface, some functions have their comments 
swapped. You might want to fix them later.
 # {{EventPublisher#fireEvent}}: does it have an error interface, or should we 
use exceptions?
 # Do we need an executor per event type, or just a shared thread pool?
 # Suppose I have to get a response for an Event that I have queued: does it 
make sense to support a future interface, or is that moot? Should we have an 
Event ID for the same purpose, if we want to indicate that this is in response 
to that command?

[~msingh], [~nandakumar131] Please share your thoughts too.

 

> Simple async event processing for SCM
> -
>
> Key: HDDS-25
> URL: https://issues.apache.org/jira/browse/HDDS-25
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-25.001.patch
>
>
> For implementing all the SCM status changes we need simple async event 
> processing.
> Our use case is very similar to an actor-based system: we would like to 
> communicate with fully async events/messages, process the different events on 
> different threads, and so on.
> But a full actor framework (such as Akka) would be overkill for this use 
> case. We don't need distributed actor systems, actor hierarchies, or complex 
> resiliency.
> As a first approach we can use a very simple system where a common EventQueue 
> entry point routes events to the async event handlers.
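A minimal, hypothetical sketch of such a single-entry-point async event queue 
follows; the class and method names here are illustrative placeholders and differ 
from the actual EventQueue/EventExecutor API in the attached patch.

{code:java}
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

// Hypothetical sketch only -- not the real HDDS-25 EventQueue implementation.
public class SimpleEventQueueSketch {

  private final Map<String, List<Consumer<Object>>> handlers =
      new ConcurrentHashMap<>();
  private final ExecutorService executor = Executors.newFixedThreadPool(4);

  // Register an async handler for a named event type.
  public void addHandler(String eventType, Consumer<Object> handler) {
    handlers.computeIfAbsent(eventType, k -> new CopyOnWriteArrayList<>())
        .add(handler);
  }

  // Single entry point: route the payload to all handlers of this event type,
  // each invoked asynchronously on the shared executor (fire-and-forget).
  public void fireEvent(String eventType, Object payload) {
    for (Consumer<Object> handler :
        handlers.getOrDefault(eventType, Collections.emptyList())) {
      executor.submit(() -> handler.accept(payload));
    }
  }

  public void shutdown() {
    executor.shutdown();
  }
}
{code}

Usage would be along the lines of registering a handler and then calling 
fireEvent with a payload; error reporting and back-pressure are deliberately 
left out of this sketch.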



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1) Remove SCM Block DB

2018-05-07 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466500#comment-16466500
 ] 

genericqa commented on HDDS-1:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 35 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
58s{color} | {color:red} hadoop-hdds/common in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-hdds/server-scm in trunk has 1 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
51s{color} | {color:red} hadoop-ozone/common in trunk has 2 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-ozone/ozone-manager in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
44s{color} | {color:red} hadoop-ozone/tools in trunk has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 29m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} hadoop-hdds/common generated 0 new + 0 unchanged - 1 
fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | 

[jira] [Updated] (HDFS-13430) Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445

2018-05-07 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13430:
-
Fix Version/s: (was: 2.9.2)
   (was: 2.8.4)
   (was: 2.10.0)

> Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445
> --
>
> Key: HDFS-13430
> URL: https://issues.apache.org/jira/browse/HDFS-13430
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Attachments: HDFS-13430.01.patch
>
>
> Unfortunately HADOOP-14445 had an HDFS test failure that's not caught in the 
> hadoop-common precommit runs.
> This is caught by our internal pre-commit using dist-test, and appears to be 
> the only failure.






[jira] [Resolved] (HDFS-13430) Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445

2018-05-07 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen resolved HDFS-13430.
--
   Resolution: Invalid
Fix Version/s: (was: 3.0.3)
   (was: 3.1.1)
   (was: 3.2.0)

> Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445
> --
>
> Key: HDFS-13430
> URL: https://issues.apache.org/jira/browse/HDFS-13430
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 2.10.0, 2.8.4, 2.9.2
>
> Attachments: HDFS-13430.01.patch
>
>
> Unfortunately HADOOP-14445 had an HDFS test failure that's not caught in the 
> hadoop-common precommit runs.
> This is caught by our internal pre-commit using dist-test, and appears to be 
> the only failure.






[jira] [Reopened] (HDFS-13430) Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445

2018-05-07 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen reopened HDFS-13430:
--

> Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445
> --
>
> Key: HDFS-13430
> URL: https://issues.apache.org/jira/browse/HDFS-13430
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 2.10.0, 2.8.4, 2.9.2
>
> Attachments: HDFS-13430.01.patch
>
>
> Unfortunately HADOOP-14445 had an HDFS test failure that's not caught in the 
> hadoop-common precommit runs.
> This is caught by our internal pre-commit using dist-test, and appears to be 
> the only failure.






[jira] [Commented] (HDFS-13430) Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445

2018-05-07 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466473#comment-16466473
 ] 

Xiao Chen commented on HDFS-13430:
--

Per 
[discussion|https://issues.apache.org/jira/browse/HADOOP-14445?focusedCommentId=16464600=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16464600]
in HADOOP-14445, it will be reverted due to its complexity. This change will be 
reverted along with it, and it will no longer be an issue once HADOOP-14445 is reverted.

> Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445
> --
>
> Key: HDFS-13430
> URL: https://issues.apache.org/jira/browse/HDFS-13430
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 2.10.0, 2.8.4, 2.9.2
>
> Attachments: HDFS-13430.01.patch
>
>
> Unfortunately HADOOP-14445 had an HDFS test failure that's not caught in the 
> hadoop-common precommit runs.
> This is caught by our internal pre-commit using dist-test, and appears to be 
> the only failure.






[jira] [Updated] (HDFS-13322) fuse dfs - uid persists when switching between ticket caches

2018-05-07 Thread Istvan Fajth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Fajth updated HDFS-13322:

Release Note: The FUSE lib now recognizes a change of the Kerberos ticket 
cache path (set via the KRB5CCNAME environment variable) between two file 
system accesses within the same local user session.
  Status: Patch Available  (was: Open)

> fuse dfs - uid persists when switching between ticket caches
> 
>
> Key: HDFS-13322
> URL: https://issues.apache.org/jira/browse/HDFS-13322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.6.0
> Environment: Linux xx.xx.xx.xxx 3.10.0-514.el7.x86_64 #1 SMP Wed 
> Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
>  
>Reporter: Alex Volskiy
>Assignee: Istvan Fajth
>Priority: Minor
> Attachments: HDFS-13322.001.patch, testHDFS-13322.sh, 
> test_after_patch.out, test_before_patch.out
>
>
> The symptoms of this issue are the same as described in HDFS-3608 except the 
> workaround that was applied (detect changes in UID ticket cache) doesn't 
> resolve the issue when multiple ticket caches are in use by the same user.
> Our use case requires that a job scheduler running as a specific uid obtain 
> separate kerberos sessions per job and that each of these sessions use a 
> separate cache. When switching sessions this way, no change is made to the 
> original ticket cache so the cached filesystem instance doesn't get 
> regenerated.
>  
> {{$ export KRB5CCNAME=/tmp/krb5cc_session1}}
> {{$ kinit user_a@domain}}
> {{$ touch /fuse_mount/tmp/testfile1}}
> {{$ ls -l /fuse_mount/tmp/testfile1}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile1*}}
> {{$ export KRB5CCNAME=/tmp/krb5cc_session2}}
> {{$ kinit user_b@domain}}
> {{$ touch /fuse_mount/tmp/testfile2}}
> {{$ ls -l /fuse_mount/tmp/testfile2}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile2*}}
> {{   }}{color:#d04437}*{{** expected owner to be user_b **}}*{color}






[jira] [Commented] (HDFS-13430) Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445

2018-05-07 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466465#comment-16466465
 ] 

Hudson commented on HDFS-13430:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14133 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14133/])
Revert "HDFS-13430. Fix TestEncryptionZonesWithKMS failure due to (xiao: rev 
118bd7580583e31bf643b642a2fbc9556177b906)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java


> Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445
> --
>
> Key: HDFS-13430
> URL: https://issues.apache.org/jira/browse/HDFS-13430
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Major
> Fix For: 2.10.0, 2.8.4, 3.2.0, 3.1.1, 2.9.2, 3.0.3
>
> Attachments: HDFS-13430.01.patch
>
>
> Unfortunately HADOOP-14445 had an HDFS test failure that's not caught in the 
> hadoop-common precommit runs.
> This is caught by our internal pre-commit using dist-test, and appears to be 
> the only failure.






[jira] [Updated] (HDFS-13322) fuse dfs - uid persists when switching between ticket caches

2018-05-07 Thread Istvan Fajth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Fajth updated HDFS-13322:

Attachment: test_before_patch.out
test_after_patch.out

> fuse dfs - uid persists when switching between ticket caches
> 
>
> Key: HDFS-13322
> URL: https://issues.apache.org/jira/browse/HDFS-13322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.6.0
> Environment: Linux xx.xx.xx.xxx 3.10.0-514.el7.x86_64 #1 SMP Wed 
> Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
>  
>Reporter: Alex Volskiy
>Assignee: Istvan Fajth
>Priority: Minor
> Attachments: HDFS-13322.001.patch, testHDFS-13322.sh, 
> test_after_patch.out, test_before_patch.out
>
>
> The symptoms of this issue are the same as described in HDFS-3608 except the 
> workaround that was applied (detect changes in UID ticket cache) doesn't 
> resolve the issue when multiple ticket caches are in use by the same user.
> Our use case requires that a job scheduler running as a specific uid obtain 
> separate kerberos sessions per job and that each of these sessions use a 
> separate cache. When switching sessions this way, no change is made to the 
> original ticket cache so the cached filesystem instance doesn't get 
> regenerated.
>  
> {{$ export KRB5CCNAME=/tmp/krb5cc_session1}}
> {{$ kinit user_a@domain}}
> {{$ touch /fuse_mount/tmp/testfile1}}
> {{$ ls -l /fuse_mount/tmp/testfile1}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile1*}}
> {{$ export KRB5CCNAME=/tmp/krb5cc_session2}}
> {{$ kinit user_b@domain}}
> {{$ touch /fuse_mount/tmp/testfile2}}
> {{$ ls -l /fuse_mount/tmp/testfile2}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile2*}}
> {{   }}{color:#d04437}*{{** expected owner to be user_b **}}*{color}






[jira] [Commented] (HDFS-13322) fuse dfs - uid persists when switching between ticket caches

2018-05-07 Thread Istvan Fajth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466459#comment-16466459
 ] 

Istvan Fajth commented on HDFS-13322:
-

As far as I can tell, the fuse tests are not being run: they are neither 
configured in CMake nor wired up to run the tests in the TestFuseDFS class. I 
have attached a script that demonstrates the behaviour.

Running testHDFS-13322.sh requires a Hadoop cluster to mount, the fuse package 
installed, two Kerberos principals, and a keytab file for those principals.

The patch fixes the behaviour. I am attaching example output of the test from 
before and after the patch.
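
To make the fix concrete, here is a conceptual sketch, in Java for illustration 
only (the real change is in the C fuse-dfs connect layer; the FsCacheSketch 
names below are assumptions, not actual code): the cached filesystem handle is 
keyed on the ticket cache path as well as the uid, so switching KRB5CCNAME in 
the same session yields a fresh connection.

{code:java}
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Conceptual illustration only; the real fix lives in the C fuse-dfs code.
// Caching per (uid, ticket-cache path) instead of per uid means a changed
// KRB5CCNAME is no longer served by a stale filesystem instance.
final class FsCacheSketch {

  static final class Key {
    final long uid;
    final String krb5ccPath; // value of KRB5CCNAME at access time

    Key(long uid, String krb5ccPath) {
      this.uid = uid;
      this.krb5ccPath = krb5ccPath;
    }

    @Override
    public boolean equals(Object o) {
      return o instanceof Key && ((Key) o).uid == uid
          && Objects.equals(((Key) o).krb5ccPath, krb5ccPath);
    }

    @Override
    public int hashCode() {
      return Objects.hash(uid, krb5ccPath);
    }
  }

  interface Fs { }

  private final Map<Key, Fs> cache = new ConcurrentHashMap<>();

  Fs get(long uid, String krb5ccPath, Supplier<Fs> connect) {
    // A new ticket cache path produces a new key, hence a new connection.
    return cache.computeIfAbsent(new Key(uid, krb5ccPath), k -> connect.get());
  }
}
{code}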

> fuse dfs - uid persists when switching between ticket caches
> 
>
> Key: HDFS-13322
> URL: https://issues.apache.org/jira/browse/HDFS-13322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.6.0
> Environment: Linux xx.xx.xx.xxx 3.10.0-514.el7.x86_64 #1 SMP Wed 
> Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
>  
>Reporter: Alex Volskiy
>Assignee: Istvan Fajth
>Priority: Minor
> Attachments: HDFS-13322.001.patch, testHDFS-13322.sh
>
>
> The symptoms of this issue are the same as described in HDFS-3608 except the 
> workaround that was applied (detect changes in UID ticket cache) doesn't 
> resolve the issue when multiple ticket caches are in use by the same user.
> Our use case requires that a job scheduler running as a specific uid obtain 
> separate kerberos sessions per job and that each of these sessions use a 
> separate cache. When switching sessions this way, no change is made to the 
> original ticket cache so the cached filesystem instance doesn't get 
> regenerated.
>  
> {{$ export KRB5CCNAME=/tmp/krb5cc_session1}}
> {{$ kinit user_a@domain}}
> {{$ touch /fuse_mount/tmp/testfile1}}
> {{$ ls -l /fuse_mount/tmp/testfile1}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile1*}}
> {{$ export KRB5CCNAME=/tmp/krb5cc_session2}}
> {{$ kinit user_b@domain}}
> {{$ touch /fuse_mount/tmp/testfile2}}
> {{$ ls -l /fuse_mount/tmp/testfile2}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile2*}}
> {{   }}{color:#d04437}*{{** expected owner to be user_b **}}*{color}






[jira] [Updated] (HDFS-13322) fuse dfs - uid persists when switching between ticket caches

2018-05-07 Thread Istvan Fajth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Fajth updated HDFS-13322:

Attachment: testHDFS-13322.sh
HDFS-13322.001.patch

> fuse dfs - uid persists when switching between ticket caches
> 
>
> Key: HDFS-13322
> URL: https://issues.apache.org/jira/browse/HDFS-13322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.6.0
> Environment: Linux xx.xx.xx.xxx 3.10.0-514.el7.x86_64 #1 SMP Wed 
> Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
>  
>Reporter: Alex Volskiy
>Assignee: Istvan Fajth
>Priority: Minor
> Attachments: HDFS-13322.001.patch, testHDFS-13322.sh
>
>
> The symptoms of this issue are the same as described in HDFS-3608 except the 
> workaround that was applied (detect changes in UID ticket cache) doesn't 
> resolve the issue when multiple ticket caches are in use by the same user.
> Our use case requires that a job scheduler running as a specific uid obtain 
> separate kerberos sessions per job and that each of these sessions use a 
> separate cache. When switching sessions this way, no change is made to the 
> original ticket cache so the cached filesystem instance doesn't get 
> regenerated.
>  
> {{$ export KRB5CCNAME=/tmp/krb5cc_session1}}
> {{$ kinit user_a@domain}}
> {{$ touch /fuse_mount/tmp/testfile1}}
> {{$ ls -l /fuse_mount/tmp/testfile1}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile1*}}
> {{$ export KRB5CCNAME=/tmp/krb5cc_session2}}
> {{$ kinit user_b@domain}}
> {{$ touch /fuse_mount/tmp/testfile2}}
> {{$ ls -l /fuse_mount/tmp/testfile2}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile2*}}
> {{   }}{color:#d04437}*{{** expected owner to be user_b **}}*{color}






[jira] [Updated] (HDDS-25) Simple async event processing for SCM

2018-05-07 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-25?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-25:
-
Status: Patch Available  (was: Open)

First version. Please let me know what you think about this approach (cc 
[~nandakumar131], [~msingh], [~anu]).

I removed the annotation-based implementation because it cannot guarantee 
compile-time type safety (it could be enforced with an annotation processor, but 
the IDE would not flag violations; with plain generic types the IDE shows the 
errors even before compilation).
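
To make the generic-types point concrete, here is a minimal sketch of a typed 
event queue (illustrative only, not the HDDS-25 patch; TypedEvent, EventHandler 
and EventQueue are assumed names): registering a handler whose payload type does 
not match the event is a compile error, which is exactly what the IDE can flag 
up front.

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal illustrative sketch of a typed async event queue; not the actual patch.
final class TypedEvent<PAYLOAD> {
  private final String name;
  TypedEvent(String name) { this.name = name; }
  @Override public String toString() { return name; }
}

interface EventHandler<PAYLOAD> {
  void onMessage(PAYLOAD payload);
}

final class EventQueue {
  private final Map<TypedEvent<?>, List<EventHandler<?>>> handlers = new HashMap<>();
  private final ExecutorService executor = Executors.newSingleThreadExecutor();

  // The shared type parameter ties the handler's payload type to the event,
  // so a mismatch is rejected at compile time.
  public <P> void addHandler(TypedEvent<P> event, EventHandler<P> handler) {
    handlers.computeIfAbsent(event, e -> new ArrayList<>()).add(handler);
  }

  @SuppressWarnings("unchecked")
  public <P> void fireEvent(TypedEvent<P> event, P payload) {
    for (EventHandler<?> handler : handlers.getOrDefault(event, Collections.emptyList())) {
      // Handlers run asynchronously on the queue's executor thread.
      executor.submit(() -> ((EventHandler<P>) handler).onMessage(payload));
    }
  }
}
{code}

Usage would look like {{queue.addHandler(event, handler)}} followed by 
{{queue.fireEvent(event, payload)}}; passing a payload of the wrong type simply 
does not compile.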

> Simple async event processing for SCM
> -
>
> Key: HDDS-25
> URL: https://issues.apache.org/jira/browse/HDDS-25
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-25.001.patch
>
>
> For implementing all the SCM status changes we need simple async event 
> processing.
> Our use case is very similar to an actor-based system: we would like to 
> communicate with fully async events/messages, process the different events on 
> different threads, and so on.
> But a full actor framework (such as Akka) would be overkill for this use 
> case. We don't need distributed actor systems, actor hierarchies or complex 
> resiliency.
> As a first approach we can use a very simple system where a common EventQueue 
> entry point routes events to the async event handlers.






[jira] [Updated] (HDDS-25) Simple async event processing for SCM

2018-05-07 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-25?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-25:
-
Attachment: HDDS-25.001.patch

> Simple async event processing for SCM
> -
>
> Key: HDDS-25
> URL: https://issues.apache.org/jira/browse/HDDS-25
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-25.001.patch
>
>
> For implementing all the SCM status changes we need simple async event 
> processing.
> Our use case is very similar to an actor-based system: we would like to 
> communicate with fully async events/messages, process the different events on 
> different threads, and so on.
> But a full actor framework (such as Akka) would be overkill for this use 
> case. We don't need distributed actor systems, actor hierarchies or complex 
> resiliency.
> As a first approach we can use a very simple system where a common EventQueue 
> entry point routes events to the async event handlers.






[jira] [Comment Edited] (HDDS-1) Remove SCM Block DB

2018-05-07 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466386#comment-16466386
 ] 

Xiaoyu Yao edited comment on HDDS-1 at 5/7/18 7:52 PM:
---

Thanks all for the review and testing. This final minor update fixes the 
Metadata Store prefix-matching logic and two more unit tests.

The remaining unit test failures listed below will be fixed in follow-up JIRAs.

 

HDDS failures (related but will fix later)
{code:java}
[*ERROR*]   *TestDeletedBlockLog.testDeletedBlockTransactions:349*

[*ERROR*]   *TestContainerCloser.testCleanupThreadRuns:197*

[*ERROR*]   *TestContainerCloser.testRepeatedClose:159 expected:<1> but was:<0>*

[*ERROR*]   *TestContainerSupervisor.testDetectOverReplica:221 expected:<2> but 
was:<0>*

[*ERROR*]   *TestContainerSupervisor.testDetectSingleContainerReplica:174 
expected:<9001> but was:<9002>*

[*ERROR*]   *TestStorageContainerManagerHttpServer.testHttpPolicy:105 » Bind 
Port in use: 0...*

 

[*ERROR*]   *TestContainerSupervisor.testAddingNewPoolWorks:266 » Timeout Timed 
out waiting...*

 

{code}
 

Ozone failures (seems unrelated to this change)
{code:java}
[*ERROR*]   *TestKeySpaceManagerHttpServer.testHttpPolicy:103 » Bind Port in 
use: 0.0.0.0:9...*

[*ERROR*]   *TestKeySpaceManagerHttpServer.testHttpPolicy:103 » Bind Port in 
use: 0.0.0.0:9...*

{code}


was (Author: xyao):
Minor update to fix the Metadata Store prefix matching logic and 2 more unit 
test fixes.

 

Remaining unit test failures listed below will be fixed with follow up JIRAs.

 

HDDS failures (related but will fix later)

{code}

[*ERROR*]   *TestDeletedBlockLog.testDeletedBlockTransactions:349*

[*ERROR*]   *TestContainerCloser.testCleanupThreadRuns:197*

[*ERROR*]   *TestContainerCloser.testRepeatedClose:159 expected:<1> but was:<0>*

[*ERROR*]   *TestContainerSupervisor.testDetectOverReplica:221 expected:<2> but 
was:<0>*

[*ERROR*]   *TestContainerSupervisor.testDetectSingleContainerReplica:174 
expected:<9001> but was:<9002>*

[*ERROR*]   *TestStorageContainerManagerHttpServer.testHttpPolicy:105 » Bind 
Port in use: 0...*

 

[*ERROR*]   *TestContainerSupervisor.testAddingNewPoolWorks:266 » Timeout Timed 
out waiting...*

 

{code}

 

Ozone failures (seems unrelated to this change)

{code}

[*ERROR*]   *TestKeySpaceManagerHttpServer.testHttpPolicy:103 » Bind Port in 
use: 0.0.0.0:9...*

[*ERROR*]   *TestKeySpaceManagerHttpServer.testHttpPolicy:103 » Bind Port in 
use: 0.0.0.0:9...*

{code}

> Remove SCM Block DB
> ---
>
> Key: HDDS-1
> URL: https://issues.apache.org/jira/browse/HDDS-1
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-1.002.patch, HDDS-1.003.patch, HDDS-1.004.patch, 
> HDDS-1.005.patch, HDFS-13504.001.patch
>
>
> The block/key information is maintained by Ozone Master (a.k.a. KSM). This 
> ticket is opened to remove the redundant block db at SCM. 






[jira] [Commented] (HDDS-1) Remove SCM Block DB

2018-05-07 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466386#comment-16466386
 ] 

Xiaoyu Yao commented on HDDS-1:
---

Minor update to fix the Metadata Store prefix matching logic and 2 more unit 
test fixes.

 

Remaining unit test failures listed below will be fixed with follow up JIRAs.

 

HDDS failures (related but will fix later)

{code}

[*ERROR*]   *TestDeletedBlockLog.testDeletedBlockTransactions:349*

[*ERROR*]   *TestContainerCloser.testCleanupThreadRuns:197*

[*ERROR*]   *TestContainerCloser.testRepeatedClose:159 expected:<1> but was:<0>*

[*ERROR*]   *TestContainerSupervisor.testDetectOverReplica:221 expected:<2> but 
was:<0>*

[*ERROR*]   *TestContainerSupervisor.testDetectSingleContainerReplica:174 
expected:<9001> but was:<9002>*

[*ERROR*]   *TestStorageContainerManagerHttpServer.testHttpPolicy:105 » Bind 
Port in use: 0...*

 

[*ERROR*]   *TestContainerSupervisor.testAddingNewPoolWorks:266 » Timeout Timed 
out waiting...*

 

{code}

 

Ozone failures (seems unrelated to this change)

{code}

[*ERROR*]   *TestKeySpaceManagerHttpServer.testHttpPolicy:103 » Bind Port in 
use: 0.0.0.0:9...*

[*ERROR*]   *TestKeySpaceManagerHttpServer.testHttpPolicy:103 » Bind Port in 
use: 0.0.0.0:9...*

{code}

> Remove SCM Block DB
> ---
>
> Key: HDDS-1
> URL: https://issues.apache.org/jira/browse/HDDS-1
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-1.002.patch, HDDS-1.003.patch, HDDS-1.004.patch, 
> HDDS-1.005.patch, HDFS-13504.001.patch
>
>
> The block/key information is maintained by Ozone Master (a.k.a. KSM). This 
> ticket is opened to remove the redundant block db at SCM. 






[jira] [Updated] (HDDS-1) Remove SCM Block DB

2018-05-07 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-1?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1:
--
Attachment: HDDS-1.005.patch

> Remove SCM Block DB
> ---
>
> Key: HDDS-1
> URL: https://issues.apache.org/jira/browse/HDDS-1
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-1.002.patch, HDDS-1.003.patch, HDDS-1.004.patch, 
> HDDS-1.005.patch, HDFS-13504.001.patch
>
>
> The block/key information is maintained by Ozone Master (a.k.a. KSM). This 
> ticket is opened to remove the redundant block db at SCM. 






[jira] [Commented] (HDFS-13403) libhdfs++: Use hdfs::IoService object rather than asio::io_service

2018-05-07 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466343#comment-16466343
 ] 

Bibin A Chundatt commented on HDFS-13403:
-

[~James C]

Native compilation fails with GCC 7:

[https://bugzilla.redhat.com/show_bug.cgi?id=1417383]

{code}
For reference, the , , and  headers used to include the 
whole of  (thousands of lines), but now they don't. This is a Good 
Thing™.

I'll make a note of this in the GCC 7 "porting to" doc.
{code}

It seems we have to add the missing include explicitly.

> libhdfs++: Use hdfs::IoService object rather than asio::io_service
> --
>
> Key: HDFS-13403
> URL: https://issues.apache.org/jira/browse/HDFS-13403
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: James Clampffer
>Assignee: James Clampffer
>Priority: Critical
> Attachments: HDFS-13403.000.patch, build_fixes.patch
>
>
> At the moment the hdfs::IoService is a simple wrapper over asio's io_service 
> object.  I'd like to make this smarter and have it do things like track which 
> tasks are queued, validate that dependencies of tasks exist, and monitor 
> ioservice throughput and contention.  In order to get there we need to use 
> have all components in the library to go through the hdfs::IoService rather 
> than directly interacting with the asio::io_service.  The only time the 
> asio::io_service should be used is when calling things like asio::async_write 
> that need an io_service&.  HDFS-11884 will be able get rid of those remaining 
> instances once this work is in place.
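
The wrapper idea translates roughly into the following Java-flavoured sketch 
(illustrative only; the real IoService wraps asio::io_service in C++, and the 
names below are assumptions): all task submission goes through the wrapper, so 
it can track queued and completed work instead of letting components touch the 
raw executor.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only; the real libhdfs++ IoService wraps asio::io_service in C++.
final class InstrumentedIoService {
  private final ExecutorService raw = Executors.newFixedThreadPool(4);
  private final AtomicLong queued = new AtomicLong();
  private final AtomicLong completed = new AtomicLong();

  // Components post work here rather than submitting to the raw executor,
  // which is what lets the wrapper observe throughput and backlog.
  void post(Runnable task) {
    queued.incrementAndGet();
    raw.submit(() -> {
      try {
        task.run();
      } finally {
        completed.incrementAndGet();
      }
    });
  }

  long outstandingTasks() {
    return queued.get() - completed.get();
  }
}
{code}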






[jira] [Commented] (HDDS-23) Remove SCMNodeAddressList from SCMRegisterRequestProto

2018-05-07 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-23?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466338#comment-16466338
 ] 

Ajay Kumar commented on HDDS-23:


LGTM

> Remove SCMNodeAddressList from SCMRegisterRequestProto
> --
>
> Key: HDDS-23
> URL: https://issues.apache.org/jira/browse/HDDS-23
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-23.000.patch
>
>
> {{SCMNodeAddressList}} in {{SCMRegisterRequestProto}} is not used by SCM, and 
> there is no need to send it in the datanode's register call. 
> {{SCMNodeAddressList}} can be removed from {{SCMRegisterRequestProto}}.






[jira] [Comment Edited] (HDFS-13530) NameNode: Fix NullPointerException when getQuotaUsageInt() invoked

2018-05-07 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466267#comment-16466267
 ] 

Ajay Kumar edited comment on HDFS-13530 at 5/7/18 6:13 PM:
---

[~liuhongtong] thanks for filing the jira and submitting the patch. The patch 
looks good; mind adding a test case?


was (Author: ajayydv):
[~liuhongtong] thanks for filing the jira and submitting the patch. mind adding 
a test case?

> NameNode: Fix NullPointerException when getQuotaUsageInt() invoked
> --
>
> Key: HDFS-13530
> URL: https://issues.apache.org/jira/browse/HDFS-13530
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, namenode
>Reporter: liuhongtong
>Priority: Major
> Attachments: HDFS-13530.001.patch
>
>
> If the directory does not exist, a getQuotaUsage RPC call runs into a 
> NullPointerException thrown by FSDirStatAndListingOp.getQuotaUsageInt().
> I think FSDirStatAndListingOp.getQuotaUsageInt() should throw 
> FileNotFoundException when the directory does not exist.
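
The requested behaviour amounts to a null check before the quota computation. A 
minimal self-contained sketch (the Directory and Node types below are 
illustrative stand-ins, not the real FSDirStatAndListingOp code):

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;

final class QuotaUsageSketch {

  // Hypothetical stand-ins for the NameNode internals; illustrative only.
  interface Directory { Node resolve(String path); }
  interface Node { long quotaUsage(); }

  // Translate a missing path into FileNotFoundException up front instead of
  // dereferencing null later and surfacing a NullPointerException to the caller.
  static long getQuotaUsage(Directory dir, String src) throws IOException {
    Node target = dir.resolve(src);
    if (target == null) {
      throw new FileNotFoundException("Directory does not exist: " + src);
    }
    return target.quotaUsage();
  }
}
{code}

A test for the fix would simply call getQuotaUsage on a path that was never 
created and assert that FileNotFoundException, not NullPointerException, is 
thrown.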






[jira] [Commented] (HDFS-13530) NameNode: Fix NullPointerException when getQuotaUsageInt() invoked

2018-05-07 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466267#comment-16466267
 ] 

Ajay Kumar commented on HDFS-13530:
---

[~liuhongtong] thanks for filing the jira and submitting the patch. mind adding 
a test case?

> NameNode: Fix NullPointerException when getQuotaUsageInt() invoked
> --
>
> Key: HDFS-13530
> URL: https://issues.apache.org/jira/browse/HDFS-13530
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation, namenode
>Reporter: liuhongtong
>Priority: Major
> Attachments: HDFS-13530.001.patch
>
>
> If the directory does not exist, a getQuotaUsage RPC call runs into a 
> NullPointerException thrown by FSDirStatAndListingOp.getQuotaUsageInt().
> I think FSDirStatAndListingOp.getQuotaUsageInt() should throw 
> FileNotFoundException when the directory does not exist.






[jira] [Updated] (HDDS-1) Remove SCM Block DB

2018-05-07 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-1?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1:
--
Attachment: HDDS-1.004.patch

> Remove SCM Block DB
> ---
>
> Key: HDDS-1
> URL: https://issues.apache.org/jira/browse/HDDS-1
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-1.002.patch, HDDS-1.003.patch, HDDS-1.004.patch, 
> HDFS-13504.001.patch
>
>
> The block/key information is maintained by Ozone Master (a.k.a. KSM). This 
> ticket is opened to remove the redundant block db at SCM. 






[jira] [Commented] (HDFS-13531) RBF: RouterAdmin supports to set mount table readonly/readwrite

2018-05-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HDFS-13531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466111#comment-16466111
 ] 

Íñigo Goiri commented on HDFS-13531:


Thanks [~liuhongtong] for [^HDFS-13531.001.patch].
I'm not sure we need the readWrite option in addMount; I think setting readOnly 
to false should suffice.
I'm OK with adding it to the CLI itself, though; not sure if there's a better 
way to reset it.


> RBF: RouterAdmin supports to set mount table readonly/readwrite
> ---
>
> Key: HDFS-13531
> URL: https://issues.apache.org/jira/browse/HDFS-13531
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: liuhongtong
>Priority: Minor
> Attachments: HDFS-13531.001.patch
>
>
> RouterAdmin only supports setting a mount point read-only.
> There is no way to reset the mount point back to read-write.






[jira] [Updated] (HDDS-7) Enable kerberos auth for Ozone client in hadoop rpc

2018-05-07 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-7?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-7:
--
Attachment: HDDS-4-HDDS-7-poc.patch

> Enable kerberos auth for Ozone client in hadoop rpc 
> 
>
> Key: HDDS-7
> URL: https://issues.apache.org/jira/browse/HDDS-7
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Client, SCM Client
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.3.0
>
> Attachments: HDDS-4-HDDS-7-poc.patch
>
>
> Enable kerberos auth for Ozone client in hadoop rpc.






[jira] [Updated] (HDDS-19) Ozone: Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)

2018-05-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-19?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-19:
-
Fix Version/s: 0.2.1

> Ozone: Update ozone to latest ratis snapshot build 
> (0.1.1-alpha-4309324-SNAPSHOT)
> -
>
> Key: HDDS-19
> URL: https://issues.apache.org/jira/browse/HDDS-19
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Fix For: 0.2.1
>
> Attachments: HDDS-19.001.patch, HDFS-13456-HDFS-7240.001.patch, 
> HDFS-13456-HDFS-7240.002.patch
>
>







[jira] [Updated] (HDDS-19) Ozone: Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)

2018-05-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-19?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-19:
-
Component/s: Ozone Datanode

> Ozone: Update ozone to latest ratis snapshot build 
> (0.1.1-alpha-4309324-SNAPSHOT)
> -
>
> Key: HDDS-19
> URL: https://issues.apache.org/jira/browse/HDDS-19
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Attachments: HDDS-19.001.patch, HDFS-13456-HDFS-7240.001.patch, 
> HDFS-13456-HDFS-7240.002.patch
>
>







[jira] [Updated] (HDDS-19) Ozone: Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)

2018-05-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-19?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-19:
-
Issue Type: Bug  (was: Improvement)

> Ozone: Update ozone to latest ratis snapshot build 
> (0.1.1-alpha-4309324-SNAPSHOT)
> -
>
> Key: HDDS-19
> URL: https://issues.apache.org/jira/browse/HDDS-19
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-19.001.patch, HDFS-13456-HDFS-7240.001.patch, 
> HDFS-13456-HDFS-7240.002.patch
>
>







[jira] [Updated] (HDDS-19) Ozone: Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)

2018-05-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-19?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-19:
-
Affects Version/s: 0.2.1

> Ozone: Update ozone to latest ratis snapshot build 
> (0.1.1-alpha-4309324-SNAPSHOT)
> -
>
> Key: HDDS-19
> URL: https://issues.apache.org/jira/browse/HDDS-19
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Attachments: HDDS-19.001.patch, HDFS-13456-HDFS-7240.001.patch, 
> HDFS-13456-HDFS-7240.002.patch
>
>







[jira] [Updated] (HDDS-19) Ozone: Update ozone to latest ratis snapshot build (0.1.1-alpha-4309324-SNAPSHOT)

2018-05-07 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-19?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-19:
-
Priority: Blocker  (was: Major)

> Ozone: Update ozone to latest ratis snapshot build 
> (0.1.1-alpha-4309324-SNAPSHOT)
> -
>
> Key: HDDS-19
> URL: https://issues.apache.org/jira/browse/HDDS-19
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.2.1
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Blocker
> Attachments: HDDS-19.001.patch, HDFS-13456-HDFS-7240.001.patch, 
> HDFS-13456-HDFS-7240.002.patch
>
>







[jira] [Commented] (HDDS-20) Ozone: Add support for rename key within a bucket for rpc client

2018-05-07 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-20?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466070#comment-16466070
 ] 

genericqa commented on HDDS-20:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 38m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 46m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
27m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
44s{color} | {color:red} hadoop-ozone/common in trunk has 2 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
24s{color} | {color:red} hadoop-ozone/ozone-manager in trunk has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 39m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 39m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 39m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
51s{color} | {color:red} hadoop-ozone/ozone-manager generated 1 new + 1 
unchanged - 0 fixed = 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} ozone-manager in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} objectstore-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 51s{color} 
| {color:red} integration-test in the 

[jira] [Created] (HDDS-25) Simple async event processing for SCM

2018-05-07 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-25:


 Summary: Simple async event processing for SCM
 Key: HDDS-25
 URL: https://issues.apache.org/jira/browse/HDDS-25
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: SCM
Reporter: Elek, Marton
Assignee: Elek, Marton
 Fix For: 0.2.1


For implementing all the SCM status changes we need simple async event 
processing.

Our use case is very similar to an actor-based system: we would like to 
communicate with fully async events/messages, process the different events on 
different threads, and so on.

But a full actor framework (such as Akka) would be overkill for this use case. 
We don't need distributed actor systems, actor hierarchies or complex resiliency.

As a first approach we can use a very simple system where a common EventQueue 
entry point routes events to the async event handlers.






[jira] [Commented] (HDDS-23) Remove SCMNodeAddressList from SCMRegisterRequestProto

2018-05-07 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-23?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465967#comment-16465967
 ] 

Nanda kumar commented on HDDS-23:
-

Thanks [~anu] for the review. 
>> we will use this when SCM HA is enabled...
For SCM HA it should be sufficient to have {{SCMNodeAddressList}} in the 
response of the register command, which is still there.
This jira only removes {{SCMNodeAddressList}} from the register request that the 
datanode sends to SCM.


> Remove SCMNodeAddressList from SCMRegisterRequestProto
> --
>
> Key: HDDS-23
> URL: https://issues.apache.org/jira/browse/HDDS-23
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode, SCM
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HDDS-23.000.patch
>
>
> {{SCMNodeAddressList}} in {{SCMRegisterRequestProto}} is not used by SCM, and 
> there is no need to send it in the datanode's register call. 
> {{SCMNodeAddressList}} can be removed from {{SCMRegisterRequestProto}}.






[jira] [Commented] (HDFS-13489) Get base snapshotable path if exists for a given path

2018-05-07 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465918#comment-16465918
 ] 

genericqa commented on HDFS-13489:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 17m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 1 
unchanged - 0 fixed = 2 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 14s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 32s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}228m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.federation.router.TestRouterQuota |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13489 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922247/HDFS-13489.006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  

[jira] [Commented] (HDFS-9924) [umbrella] Nonblocking HDFS Access

2018-05-07 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465872#comment-16465872
 ] 

Duo Zhang commented on HDFS-9924:
-

Linked a design doc. Not finished yet but I think we can discuss the rpc client 
first.

Thanks.

> [umbrella] Nonblocking HDFS Access
> --
>
> Key: HDFS-9924
> URL: https://issues.apache.org/jira/browse/HDFS-9924
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Duo Zhang
>Priority: Major
> Attachments: Async-HDFS-Performance-Report.pdf, 
> AsyncHdfs20160510.pdf, HDFS-9924-POC.patch
>
>
> This is an umbrella JIRA for supporting Nonblocking HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support nonblocking calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.
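
A hedged sketch of the usage pattern described above (the AsyncFs interface is 
an illustrative stand-in, not the actual API proposed in this JIRA):

{code:java}
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

final class NonblockingClientSketch {

  // Stand-in for a nonblocking file system client; illustrative only.
  interface AsyncFs {
    Future<Boolean> rename(String src, String dst);
  }

  static void issueIndependentCalls(AsyncFs fs)
      throws ExecutionException, InterruptedException {
    // Many independent calls are issued from a single thread without waiting
    // for each one to finish.
    Future<Boolean> first = fs.rename("/a", "/b");
    Future<Boolean> second = fs.rename("/c", "/d");

    // Results are collected later with the usual Future.get().
    boolean ok = first.get() && second.get();
    System.out.println("both renames succeeded: " + ok);
  }
}
{code}

The point of the umbrella JIRA is that the calls above can overlap on the wire 
instead of being serialized, without the client spawning one thread per 
outstanding request.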






[jira] [Assigned] (HDFS-13322) fuse dfs - uid persists when switching between ticket caches

2018-05-07 Thread Istvan Fajth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Fajth reassigned HDFS-13322:
---

Assignee: Istvan Fajth

> fuse dfs - uid persists when switching between ticket caches
> 
>
> Key: HDFS-13322
> URL: https://issues.apache.org/jira/browse/HDFS-13322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.6.0
> Environment: Linux xx.xx.xx.xxx 3.10.0-514.el7.x86_64 #1 SMP Wed 
> Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
>  
>Reporter: Alex Volskiy
>Assignee: Istvan Fajth
>Priority: Minor
>
> The symptoms of this issue are the same as described in HDFS-3608 except the 
> workaround that was applied (detect changes in UID ticket cache) doesn't 
> resolve the issue when multiple ticket caches are in use by the same user.
> Our use case requires that a job scheduler running as a specific uid obtain 
> separate kerberos sessions per job and that each of these sessions use a 
> separate cache. When switching sessions this way, no change is made to the 
> original ticket cache so the cached filesystem instance doesn't get 
> regenerated.
>  
> {{$ export KRB5CCNAME=/tmp/krb5cc_session1}}
> {{$ kinit user_a@domain}}
> {{$ touch /fuse_mount/tmp/testfile1}}
> {{$ ls -l /fuse_mount/tmp/testfile1}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile1*}}
> {{$ export KRB5CCNAME=/tmp/krb5cc_session2}}
> {{$ kinit user_b@domain}}
> {{$ touch /fuse_mount/tmp/testfile2}}
> {{$ ls -l /fuse_mount/tmp/testfile2}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile2*}}
> {{   }}{color:#d04437}*{{** expected owner to be user_b **}}*{color}






[jira] [Assigned] (HDFS-13322) fuse dfs - uid persists when switching between ticket caches

2018-05-07 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HDFS-13322:
-

Assignee: (was: Gabor Bota)

> fuse dfs - uid persists when switching between ticket caches
> 
>
> Key: HDFS-13322
> URL: https://issues.apache.org/jira/browse/HDFS-13322
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.6.0
> Environment: Linux xx.xx.xx.xxx 3.10.0-514.el7.x86_64 #1 SMP Wed 
> Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
>  
>Reporter: Alex Volskiy
>Priority: Minor
>
> The symptoms of this issue are the same as described in HDFS-3608, except that 
> the workaround applied there (detecting changes in the UID's ticket cache) 
> doesn't resolve the issue when multiple ticket caches are in use by the same 
> user.
> Our use case requires that a job scheduler running as a specific uid obtain a 
> separate Kerberos session per job, with each session using a separate ticket 
> cache. When switching sessions this way, no change is made to the original 
> ticket cache, so the cached filesystem instance doesn't get regenerated.
>  
> {{$ export KRB5CCNAME=/tmp/krb5cc_session1}}
> {{$ kinit user_a@domain}}
> {{$ touch /fuse_mount/tmp/testfile1}}
> {{$ ls -l /fuse_mount/tmp/testfile1}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile1*}}
> {{$ export KRB5CCNAME=/tmp/krb5cc_session2}}
> {{$ kinit user_b@domain}}
> {{$ touch /fuse_mount/tmp/testfile2}}
> {{$ ls -l /fuse_mount/tmp/testfile2}}
> {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile2*}}
> {{   }}{color:#d04437}*{{** expected owner to be user_b **}}*{color}






[jira] [Commented] (HDDS-20) Ozone: Add support for rename key within a bucket for rpc client

2018-05-07 Thread Lokesh Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HDDS-20?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465831#comment-16465831
 ] 

Lokesh Jain commented on HDDS-20:
-

[~msingh] Thanks for reviewing the patch! HDDS-20.002.patch addresses your 
comments.

Jiras HDDS-21 and HDDS-24 handle issues 3 and 4, respectively.

> Ozone: Add support for rename key within a bucket for rpc client
> 
>
> Key: HDDS-20
> URL: https://issues.apache.org/jira/browse/HDDS-20
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-20.001.patch, HDDS-20.002.patch, 
> HDFS-13228-HDFS-7240.001.patch
>
>
> This jira aims to implement a rename operation on a key within a bucket for 
> the rpc client. OzoneFilesystem currently rewrites a key on rename. Adding 
> this operation would simplify renames in OzoneFilesystem, as a rename would 
> become just a db update in ksm.
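A rough sketch of how a client might use such an operation once the rpc support lands; the class and method names below (OzoneClientFactory.getRpcClient, OzoneBucket.renameKey, and so on) are assumptions based on this description, not the committed API:

{code:java}
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.client.OzoneBucket;
import org.apache.hadoop.ozone.client.OzoneClient;
import org.apache.hadoop.ozone.client.OzoneClientFactory;

public class RenameKeyExample {
  public static void main(String[] args) throws Exception {
    // Assumed client bootstrap; the exact factory method and configuration class may differ.
    try (OzoneClient client = OzoneClientFactory.getRpcClient(new OzoneConfiguration())) {
      OzoneBucket bucket = client.getObjectStore()
          .getVolume("vol1")
          .getBucket("bucket1");
      // With server-side rename, this becomes a single metadata update in ksm
      // rather than a rewrite of the key data.
      bucket.renameKey("dir1/part-0000", "dir2/part-0000");
    }
  }
}
{code}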






[jira] [Created] (HDDS-24) Ozone: Rename directory in ozonefs should be atomic

2018-05-07 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-24:
---

 Summary: Ozone: Rename directory in ozonefs should be atomic
 Key: HDDS-24
 URL: https://issues.apache.org/jira/browse/HDDS-24
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Lokesh Jain
Assignee: Lokesh Jain


Currently, rename in ozonefs is not atomic. While a rename is in progress, 
another client might be adding a new file to the directory. Further, if the 
rename fails midway, the directory will be left in an inconsistent state.
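To make the race concrete, here is a hypothetical sketch (not ozonefs internals) of a directory rename implemented as a per-key loop over a listed prefix: a concurrent writer can add a key after the listing is taken, and a failure mid-loop leaves keys split between the old and new prefixes.

{code:java}
import java.util.List;
import java.util.concurrent.ConcurrentSkipListSet;
import java.util.stream.Collectors;

public class NonAtomicRenameSketch {
  // Illustrative only: a "directory" is just the set of key names sharing a prefix.
  static final ConcurrentSkipListSet<String> keys = new ConcurrentSkipListSet<>();

  static void renameDir(String from, String to) {
    List<String> snapshot = keys.stream()
        .filter(k -> k.startsWith(from))
        .collect(Collectors.toList());
    for (String k : snapshot) {
      // A concurrent client can add "from/newfile" at this point and it will be missed.
      // A crash here leaves the directory half renamed, i.e. in an inconsistent state.
      keys.remove(k);
      keys.add(to + k.substring(from.length()));
    }
  }

  public static void main(String[] args) {
    keys.add("dir1/a");
    keys.add("dir1/b");
    renameDir("dir1/", "dir2/");
    System.out.println(keys); // [dir2/a, dir2/b] only if no writer raced the loop
  }
}
{code}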






[jira] [Updated] (HDDS-20) Ozone: Add support for rename key within a bucket for rpc client

2018-05-07 Thread Lokesh Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-20?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain updated HDDS-20:

Attachment: HDDS-20.002.patch

> Ozone: Add support for rename key within a bucket for rpc client
> 
>
> Key: HDDS-20
> URL: https://issues.apache.org/jira/browse/HDDS-20
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-20.001.patch, HDDS-20.002.patch, 
> HDFS-13228-HDFS-7240.001.patch
>
>
> This jira aims to implement a rename operation on a key within a bucket for 
> the rpc client. OzoneFilesystem currently rewrites a key on rename. Adding 
> this operation would simplify renames in OzoneFilesystem, as a rename would 
> become just a db update in ksm.






[jira] [Commented] (HDFS-12981) HDFS renameSnapshot to Itself for Non Existent snapshot should throw error

2018-05-07 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465765#comment-16465765
 ] 

genericqa commented on HDFS-12981:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 31s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}172m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-12981 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922230/HDFS-12981.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bf45e1986dd7 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 67f239c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24144/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24144/testReport/ |
| Max. process+thread count | 3417 (vs. ulimit of 
