[jira] [Commented] (HDDS-2366) Remove ozone.enabled flag

2019-10-25 Thread YiSheng Lien (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16960262#comment-16960262
 ] 

YiSheng Lien commented on HDDS-2366:


Hi [~bharat], thanks for this jira.
Could we update the 
[Deployment|https://cwiki.apache.org/confluence/display/HADOOP/Single+Node+Deployment]
 page after the patch?

> Remove ozone.enabled flag
> -
>
> Key: HDDS-2366
> URL: https://issues.apache.org/jira/browse/HDDS-2366
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, when Ozone is started, the start-ozone.sh/stop-ozone.sh scripts 
> check whether this property is enabled before starting Ozone services. This 
> property and this check can now be removed.
>  
> This check was needed when Ozone was part of Hadoop and we did not want to 
> start Ozone services by default. There is no such requirement now.
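For context, the check being removed amounts to a single boolean configuration
guard. A minimal Java-side sketch (the ozone.enabled key comes from this issue;
the surrounding class is illustrative, and the same check also lives in the
start-ozone.sh/stop-ozone.sh scripts):

{code:java}
import org.apache.hadoop.hdds.conf.OzoneConfiguration;

final class OzoneEnabledGuard {
  // Before HDDS-2366: services refuse to start unless ozone.enabled is true.
  static void checkEnabled(OzoneConfiguration conf) {
    if (!conf.getBoolean("ozone.enabled", false)) {
      throw new IllegalStateException(
          "Ozone is not enabled; set ozone.enabled=true in ozone-site.xml");
    }
  }
  // After HDDS-2366: both the guard and the property itself go away.
}
{code}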






[jira] [Commented] (HDFS-14927) RBF: Add metrics for async callers thread pool

2019-10-25 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16960242#comment-16960242
 ] 

Hadoop QA commented on HDFS-14927:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m  
3s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14927 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12984063/HDFS-14927.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 07619b424cce 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7be5508 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28183/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28183/testReport/ |
| Max. process+thread count | 2775 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28183/console |
| Powered by | Apache Yetus 0.8.0 |

[jira] [Resolved] (HDDS-2056) Datanode unable to start command handler thread with security enabled

2019-10-25 Thread Mukul Kumar Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh resolved HDDS-2056.
-
Resolution: Duplicate

> Datanode unable to start command handler thread with security enabled
> -
>
> Key: HDDS-2056
> URL: https://issues.apache.org/jira/browse/HDDS-2056
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.5.0
>
>
>  
> {code:java}
> 2019-08-29 02:50:23,536 ERROR 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine: 
> Critical Error : Command processor thread encountered an error. Thread: 
> Thread[Command processor thread,5,main]
> java.lang.IllegalArgumentException: Null user
>         at 
> org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1269)
>         at 
> org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1256)
>         at 
> org.apache.hadoop.hdds.security.token.BlockTokenVerifier.verify(BlockTokenVerifier.java:116)
>         at 
> org.apache.hadoop.ozone.container.common.transport.server.XceiverServer.submitRequest(XceiverServer.java:68)
>         at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.submitRequest(XceiverServerRatis.java:482)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CloseContainerCommandHandler.handle(CloseContainerCommandHandler.java:109)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CommandDispatcher.handle(CommandDispatcher.java:93)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$initCommandHandlerThread$1(DatanodeStateMachine.java:432)
>         at java.lang.Thread.run(Thread.java:748)
> {code}
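The trace bottoms out in UserGroupInformation.createRemoteUser, which rejects a
null or empty user name. A small hedged illustration of that precondition (not
the actual fix; this issue was resolved as a duplicate):

{code:java}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

final class RemoteUserGuard {
  // createRemoteUser(null) throws IllegalArgumentException("Null user"), so
  // resolve a fallback identity when the token carries no owner.
  static UserGroupInformation resolve(String owner) throws IOException {
    if (owner == null || owner.isEmpty()) {
      return UserGroupInformation.getCurrentUser();
    }
    return UserGroupInformation.createRemoteUser(owner);
  }
}
{code}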






[jira] [Comment Edited] (HDFS-14912) Set dfs.image.string-tables.expanded default to false in branch-2.7

2019-10-25 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16960220#comment-16960220
 ] 

Wei-Chiu Chuang edited comment on HDFS-14912 at 10/26/19 12:07 AM:
---

+1, it's quite clearly a mistake.


was (Author: jojochuang):
+1

> Set dfs.image.string-tables.expanded default to false in branch-2.7
> ---
>
> Key: HDFS-14912
> URL: https://issues.apache.org/jira/browse/HDFS-14912
> Project: Hadoop HDFS
>  Issue Type: Task
>Affects Versions: 2.7.8
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-14912.001.branch-2.7.patch
>
>
> In the branch-2.7 patch for CVE-2018-11768 HDFS FSImage Corruption, 
> dfs.image.string-tables.expanded is set to true by default: 
> https://github.com/apache/hadoop/commit/109d44604ca843212bdf22b50e86a5a41e1d21da#diff-36b19e9d8816002ed9dff8580055d3fbR627
> This is different from all other branches, which set it to false by default.
> For instance, branch-2.8: 
> https://github.com/apache/hadoop/commit/f697f3c4fc0067bb82494e445900d86942685b09#diff-36b19e9d8816002ed9dff8580055d3fbR629
> Goal: Flip the dfs.image.string-tables.expanded default in branch-2.7 to 
> false to make it consistent with other branches.






[jira] [Commented] (HDFS-14912) Set dfs.image.string-tables.expanded default to false in branch-2.7

2019-10-25 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16960220#comment-16960220
 ] 

Wei-Chiu Chuang commented on HDFS-14912:


+1

> Set dfs.image.string-tables.expanded default to false in branch-2.7
> ---
>
> Key: HDFS-14912
> URL: https://issues.apache.org/jira/browse/HDFS-14912
> Project: Hadoop HDFS
>  Issue Type: Task
>Affects Versions: 2.7.8
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDFS-14912.001.branch-2.7.patch
>
>
> In the branch-2.7 patch for CVE-2018-11768 HDFS FSImage Corruption, 
> dfs.image.string-tables.expanded is set to true by default: 
> https://github.com/apache/hadoop/commit/109d44604ca843212bdf22b50e86a5a41e1d21da#diff-36b19e9d8816002ed9dff8580055d3fbR627
> This is different from all other branches, which set it to false by default.
> For instance, branch-2.8: 
> https://github.com/apache/hadoop/commit/f697f3c4fc0067bb82494e445900d86942685b09#diff-36b19e9d8816002ed9dff8580055d3fbR629
> Goal: Flip the dfs.image.string-tables.expanded default in branch-2.7 to 
> false to make it consistent with other branches.






[jira] [Commented] (HDFS-14927) RBF: Add metrics for async callers thread pool

2019-10-25 Thread Leon Gao (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16960221#comment-16960221
 ] 

Leon Gao commented on HDFS-14927:
-

Added a UT and updated the function name to getAsyncCallerPool.

> RBF: Add metrics for async callers thread pool
> --
>
> Key: HDFS-14927
> URL: https://issues.apache.org/jira/browse/HDFS-14927
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Minor
> Attachments: HDFS-14927.001.patch, HDFS-14927.002.patch
>
>
> It would be good to add some monitoring on the async caller thread pool that 
> handles fan-out RPC client requests, so we know its utilization and when to 
> bump up dfs.federation.router.client.thread-size.
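A hedged sketch of the kind of gauge this asks for, assuming the pool behind
the getAsyncCallerPool() getter mentioned in the comments is a
java.util.concurrent.ThreadPoolExecutor (the class and method names below are
illustrative, not the actual patch):

{code:java}
import java.util.concurrent.ThreadPoolExecutor;

final class AsyncCallerPoolMetrics {
  // Snapshot of pool utilization; when active/queued sit near the maximum for
  // long, it is time to bump dfs.federation.router.client.thread-size.
  static String snapshot(ThreadPoolExecutor pool) {
    return String.format("active=%d poolSize=%d max=%d queued=%d completed=%d",
        pool.getActiveCount(), pool.getPoolSize(), pool.getMaximumPoolSize(),
        pool.getQueue().size(), pool.getCompletedTaskCount());
  }
}
{code}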






[jira] [Updated] (HDFS-14927) RBF: Add metrics for async callers thread pool

2019-10-25 Thread Leon Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leon Gao updated HDFS-14927:

Attachment: HDFS-14927.002.patch

> RBF: Add metrics for async callers thread pool
> --
>
> Key: HDFS-14927
> URL: https://issues.apache.org/jira/browse/HDFS-14927
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Minor
> Attachments: HDFS-14927.001.patch, HDFS-14927.002.patch
>
>
> It would be good to add some monitoring on the async caller thread pool that 
> handles fan-out RPC client requests, so we know its utilization and when to 
> bump up dfs.federation.router.client.thread-size.






[jira] [Commented] (HDFS-14931) hdfs crypto commands limit column width

2019-10-25 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16960214#comment-16960214
 ] 

Wei-Chiu Chuang commented on HDFS-14931:


+1 LGTM

> hdfs crypto commands limit column width
> ---
>
> Key: HDFS-14931
> URL: https://issues.apache.org/jira/browse/HDFS-14931
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: HDFS-14931.001.patch
>
>
> {noformat}
> foo@bar$ hdfs crypto -listZones
> /projects/foo/bar/fizzgig/myprojectdirectorynameorsomethingcool1  encr
>   
> yptio
>   nzon
>   e1
> /projects/foo/bar/fizzgig/myprojectdirectorynameorsomethingcool2  encr
>   
> yptio
>   nzon
>   e2
> /projects/foo/bar/fizzgig/myprojectdirectorynameorsomethingcool3  encr
>   
> yptio
>   nzon
>   e3
> {noformat}
> The command output ends up looking really ugly, as above, when the path is 
> long. This also makes it very difficult to pipe the output into other 
> utilities, such as awk.






[jira] [Work logged] (HDDS-2366) Remove ozone.enabled flag

2019-10-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2366?focusedWorklogId=334414&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-334414
 ]

ASF GitHub Bot logged work on HDDS-2366:


Author: ASF GitHub Bot
Created on: 25/Oct/19 22:52
Start Date: 25/Oct/19 22:52
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #90: HDDS-2366. Remove 
ozone.enabled as a flag and config item.
URL: https://github.com/apache/hadoop-ozone/pull/90
 
 
   ## What changes were proposed in this pull request?
   
   Removed all checks for ozone.enabled and configuration items.
   
   ## What is the link to the Apache JIRA
   https://issues.apache.org/jira/browse/HDDS-2366
   
   ## How was this patch tested?
   Verified mvn install and checkstyle goals succeed.
 



Issue Time Tracking
---

Worklog Id: (was: 334414)
Remaining Estimate: 0h
Time Spent: 10m

> Remove ozone.enabled flag
> -
>
> Key: HDDS-2366
> URL: https://issues.apache.org/jira/browse/HDDS-2366
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, when Ozone is started, the start-ozone.sh/stop-ozone.sh scripts 
> check whether this property is enabled before starting Ozone services. This 
> property and this check can now be removed.
>  
> This check was needed when Ozone was part of Hadoop and we did not want to 
> start Ozone services by default. There is no such requirement now.






[jira] [Updated] (HDDS-2366) Remove ozone.enabled flag

2019-10-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2366:
-
Labels: newbie pull-request-available  (was: newbie)

> Remove ozone.enabled flag
> -
>
> Key: HDDS-2366
> URL: https://issues.apache.org/jira/browse/HDDS-2366
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: newbie, pull-request-available
>
> Currently, when Ozone is started, the start-ozone.sh/stop-ozone.sh scripts 
> check whether this property is enabled before starting Ozone services. This 
> property and this check can now be removed.
>  
> This check was needed when Ozone was part of Hadoop and we did not want to 
> start Ozone services by default. There is no such requirement now.






[jira] [Updated] (HDDS-2322) DoubleBuffer flush termination and OM shutdown's after that.

2019-10-25 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2322:
-
Status: Patch Available  (was: In Progress)

> DoubleBuffer flush termination and OM shutdown's after that.
> 
>
> Key: HDDS-2322
> URL: https://issues.apache.org/jira/browse/HDDS-2322
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> om1_1       | 2019-10-18 00:34:45,317 [OMDoubleBufferFlushThread] ERROR      
> - Terminating with exit status 2: OMDoubleBuffer flush 
> threadOMDoubleBufferFlushThreadencountered Throwable error
> om1_1       | java.util.ConcurrentModificationException
> om1_1       | at 
> java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1660)
> om1_1       | at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
> om1_1       | at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
> om1_1       | at 
> java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
> om1_1       | at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> om1_1       | at 
> java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
> om1_1       | at 
> org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup.getProtobuf(OmKeyLocationInfoGroup.java:65)
> om1_1       | at 
> java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
> om1_1       | at 
> java.base/java.util.Collections$2.tryAdvance(Collections.java:4745)
> om1_1       | at 
> java.base/java.util.Collections$2.forEachRemaining(Collections.java:4753)
> om1_1       | at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
> om1_1       | at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
> om1_1       | at 
> java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
> om1_1       | at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> om1_1       | at 
> java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
> om1_1       | at 
> org.apache.hadoop.ozone.om.helpers.OmKeyInfo.getProtobuf(OmKeyInfo.java:362)
> om1_1       | at 
> org.apache.hadoop.ozone.om.codec.OmKeyInfoCodec.toPersistedFormat(OmKeyInfoCodec.java:37)
> om1_1       | at 
> org.apache.hadoop.ozone.om.codec.OmKeyInfoCodec.toPersistedFormat(OmKeyInfoCodec.java:31)
> om1_1       | at 
> org.apache.hadoop.hdds.utils.db.CodecRegistry.asRawData(CodecRegistry.java:68)
> om1_1       | at 
> org.apache.hadoop.hdds.utils.db.TypedTable.putWithBatch(TypedTable.java:125)
> om1_1       | at 
> org.apache.hadoop.ozone.om.response.key.OMKeyCreateResponse.addToDBBatch(OMKeyCreateResponse.java:58)
> om1_1       | at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.lambda$flushTransactions$0(OzoneManagerDoubleBuffer.java:139)
> om1_1       | at 
> java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
> om1_1       | at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.flushTransactions(OzoneManagerDoubleBuffer.java:137)
> om1_1       | at java.base/java.lang.Thread.run(Thread.java:834)






[jira] [Commented] (HDDS-2356) Multipart upload report errors while writing to ozone Ratis pipeline

2019-10-25 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16960192#comment-16960192
 ] 

Bharat Viswanadham commented on HDDS-2356:
--

[~timmylicheng] Posted a patch for HDDS-2322. I have run the freon tests and I 
no longer see the error. If you get a chance, please run the multipart upload 
tests you have been running and let us know whether the issue is fixed.

> Multipart upload report errors while writing to ozone Ratis pipeline
> 
>
> Key: HDDS-2356
> URL: https://issues.apache.org/jira/browse/HDDS-2356
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
> Environment: Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM 
> on a separate VM
>Reporter: Li Cheng
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Fix For: 0.5.0
>
>
> Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM on a separate VM, say 
> it's VM0.
> I use goofys as a FUSE and enable the Ozone S3 gateway to mount Ozone to a 
> path on VM0, reading data from VM0's local disk and writing to the mount 
> path. The dataset has files of various sizes, from 0 bytes to GB-level, and 
> it has ~50,000 files. 
> The writing is slow (1GB for ~10 mins) and it stops after around 4GB. When I 
> look at the hadoop-root-om-VM_50_210_centos.out log, I see OM throwing errors 
> related to multipart upload. This error eventually causes the writing to 
> terminate and the OM to be closed. 
>  
> 2019-10-24 16:01:59,527 [OMDoubleBufferFlushThread] ERROR - Terminating with 
> exit status 2: OMDoubleBuffer flush 
> threadOMDoubleBufferFlushThreadencountered Throwable error
> java.util.ConcurrentModificationException
>  at java.util.TreeMap.forEach(TreeMap.java:1004)
>  at 
> org.apache.hadoop.ozone.om.helpers.OmMultipartKeyInfo.getProto(OmMultipartKeyInfo.java:111)
>  at 
> org.apache.hadoop.ozone.om.codec.OmMultipartKeyInfoCodec.toPersistedFormat(OmMultipartKeyInfoCodec.java:38)
>  at 
> org.apache.hadoop.ozone.om.codec.OmMultipartKeyInfoCodec.toPersistedFormat(OmMultipartKeyInfoCodec.java:31)
>  at 
> org.apache.hadoop.hdds.utils.db.CodecRegistry.asRawData(CodecRegistry.java:68)
>  at 
> org.apache.hadoop.hdds.utils.db.TypedTable.putWithBatch(TypedTable.java:125)
>  at 
> org.apache.hadoop.ozone.om.response.s3.multipart.S3MultipartUploadCommitPartResponse.addToDBBatch(S3MultipartUploadCommitPartResponse.java:112)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.lambda$flushTransactions$0(OzoneManagerDoubleBuffer.java:137)
>  at java.util.Iterator.forEachRemaining(Iterator.java:116)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.flushTransactions(OzoneManagerDoubleBuffer.java:135)
>  at java.lang.Thread.run(Thread.java:745)
> 2019-10-24 16:01:59,629 [shutdown-hook-0] INFO - SHUTDOWN_MSG:
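The ConcurrentModificationException in this trace comes from a TreeMap being
mutated while another thread iterates it during serialization. A tiny
self-contained illustration of the failure mode (not Ozone code):

{code:java}
import java.util.TreeMap;

public final class CmeDemo {
  public static void main(String[] args) {
    TreeMap<Integer, String> parts = new TreeMap<>();
    parts.put(1, "part-1");
    // Structurally modifying the map from inside forEach throws
    // java.util.ConcurrentModificationException, just as a concurrent writer
    // does while getProto() iterates the same map in the trace above.
    parts.forEach((k, v) -> parts.put(k + 1, "part-" + (k + 1)));
  }
}
{code}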






[jira] [Updated] (HDDS-2322) DoubleBuffer flush termination and OM shutdown's after that.

2019-10-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2322:
-
Labels: pull-request-available  (was: )

> DoubleBuffer flush termination and OM shutdown's after that.
> 
>
> Key: HDDS-2322
> URL: https://issues.apache.org/jira/browse/HDDS-2322
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> om1_1       | 2019-10-18 00:34:45,317 [OMDoubleBufferFlushThread] ERROR      
> - Terminating with exit status 2: OMDoubleBuffer flush 
> threadOMDoubleBufferFlushThreadencountered Throwable error
> om1_1       | java.util.ConcurrentModificationException
> om1_1       | at 
> java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1660)
> om1_1       | at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
> om1_1       | at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
> om1_1       | at 
> java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
> om1_1       | at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> om1_1       | at 
> java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
> om1_1       | at 
> org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup.getProtobuf(OmKeyLocationInfoGroup.java:65)
> om1_1       | at 
> java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
> om1_1       | at 
> java.base/java.util.Collections$2.tryAdvance(Collections.java:4745)
> om1_1       | at 
> java.base/java.util.Collections$2.forEachRemaining(Collections.java:4753)
> om1_1       | at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
> om1_1       | at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
> om1_1       | at 
> java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
> om1_1       | at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> om1_1       | at 
> java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
> om1_1       | at 
> org.apache.hadoop.ozone.om.helpers.OmKeyInfo.getProtobuf(OmKeyInfo.java:362)
> om1_1       | at 
> org.apache.hadoop.ozone.om.codec.OmKeyInfoCodec.toPersistedFormat(OmKeyInfoCodec.java:37)
> om1_1       | at 
> org.apache.hadoop.ozone.om.codec.OmKeyInfoCodec.toPersistedFormat(OmKeyInfoCodec.java:31)
> om1_1       | at 
> org.apache.hadoop.hdds.utils.db.CodecRegistry.asRawData(CodecRegistry.java:68)
> om1_1       | at 
> org.apache.hadoop.hdds.utils.db.TypedTable.putWithBatch(TypedTable.java:125)
> om1_1       | at 
> org.apache.hadoop.ozone.om.response.key.OMKeyCreateResponse.addToDBBatch(OMKeyCreateResponse.java:58)
> om1_1       | at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.lambda$flushTransactions$0(OzoneManagerDoubleBuffer.java:139)
> om1_1       | at 
> java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
> om1_1       | at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.flushTransactions(OzoneManagerDoubleBuffer.java:137)
> om1_1       | at java.base/java.lang.Thread.run(Thread.java:834)






[jira] [Work logged] (HDDS-2322) DoubleBuffer flush termination and OM shutdown's after that.

2019-10-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2322?focusedWorklogId=334410&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-334410
 ]

ASF GitHub Bot logged work on HDDS-2322:


Author: ASF GitHub Bot
Created on: 25/Oct/19 22:48
Start Date: 25/Oct/19 22:48
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #89: 
HDDS-2322. DoubleBuffer flush termination and OM shutdown's after that. Make 
entry returned from cache a new copy.
URL: https://github.com/apache/hadoop-ozone/pull/89
 
 
   ## What changes were proposed in this pull request?
   
   Whenever a value is returned from the cache, a copy of the object is 
returned, so that when the doubleBuffer flushes we don't see random 
ConcurrentModificationException errors like the one in the Jira description.
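   A minimal sketch of the defensive-copy idea (class and method names are 
illustrative, not the actual Ozone code): the cache hands out a copy, so a 
flush thread serializing the copy never races with a writer mutating the 
cached original.

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

// Sketch: a cache that returns a copy of the stored value on every get().
final class CopyOnGetCache<K, V> {
  private final Map<K, V> map = new HashMap<>();
  private final UnaryOperator<V> copier; // e.g. a copyObject()-style method

  CopyOnGetCache(UnaryOperator<V> copier) {
    this.copier = copier;
  }

  synchronized void put(K key, V value) {
    map.put(key, value);
  }

  synchronized V get(K key) {
    V value = map.get(key);
    return value == null ? null : copier.apply(value); // never expose the original
  }
}
{code}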
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2322
   
   ## How was this patch tested?
   
   Ran freon tests with 100K keys multiple times; I am no longer seeing the 
error given in the Jira description. Without this patch, we see the error. (If 
you are not seeing the error, run multiple times in non-HA, or just enable 
ozone.om.ratis.enable and you will see the error on the first run.)
   
   **Command used for testing:**
   ozone freon rk --numOfBuckets=1 --numOfVolumes=1 --numOfKeys=10 
--keySize=0
   
 



Issue Time Tracking
---

Worklog Id: (was: 334410)
Remaining Estimate: 0h
Time Spent: 10m

> DoubleBuffer flush termination and OM shutdown's after that.
> 
>
> Key: HDDS-2322
> URL: https://issues.apache.org/jira/browse/HDDS-2322
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> om1_1       | 2019-10-18 00:34:45,317 [OMDoubleBufferFlushThread] ERROR      
> - Terminating with exit status 2: OMDoubleBuffer flush 
> threadOMDoubleBufferFlushThreadencountered Throwable error
> om1_1       | java.util.ConcurrentModificationException
> om1_1       | at 
> java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1660)
> om1_1       | at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
> om1_1       | at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
> om1_1       | at 
> java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
> om1_1       | at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> om1_1       | at 
> java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
> om1_1       | at 
> org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup.getProtobuf(OmKeyLocationInfoGroup.java:65)
> om1_1       | at 
> java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
> om1_1       | at 
> java.base/java.util.Collections$2.tryAdvance(Collections.java:4745)
> om1_1       | at 
> java.base/java.util.Collections$2.forEachRemaining(Collections.java:4753)
> om1_1       | at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
> om1_1       | at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
> om1_1       | at 
> java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
> om1_1       | at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> om1_1       | at 
> java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
> om1_1       | at 
> org.apache.hadoop.ozone.om.helpers.OmKeyInfo.getProtobuf(OmKeyInfo.java:362)
> om1_1       | at 
> org.apache.hadoop.ozone.om.codec.OmKeyInfoCodec.toPersistedFormat(OmKeyInfoCodec.java:37)
> om1_1       | at 
> org.apache.hadoop.ozone.om.codec.OmKeyInfoCodec.toPersistedFormat(OmKeyInfoCodec.java:31)
> om1_1       | at 
> org.apache.hadoop.hdds.utils.db.CodecRegistry.asRawData(CodecRegistry.java:68)
> om1_1       | at 
> org.apache.hadoop.hdds.utils.db.TypedTable.putWithBatch(TypedTable.java:125)
> om1_1       | at 
> org.apache.hadoop.ozone.om.response.key.OMKeyCreateResponse.addToDBBatch(OMKeyCreateResponse.java:58)
> om1_1       | at 
> 

[jira] [Assigned] (HDDS-2366) Remove ozone.enabled flag

2019-10-25 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-2366:
-

Assignee: Siddharth Wagle

> Remove ozone.enabled flag
> -
>
> Key: HDDS-2366
> URL: https://issues.apache.org/jira/browse/HDDS-2366
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: newbie
>
> Currently, when Ozone is started, the start-ozone.sh/stop-ozone.sh scripts 
> check whether this property is enabled before starting Ozone services. This 
> property and this check can now be removed.
>  
> This check was needed when Ozone was part of Hadoop and we did not want to 
> start Ozone services by default. There is no such requirement now.






[jira] [Commented] (HDFS-14923) Remove dead code from HealthMonitor

2019-10-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16960176#comment-16960176
 ] 

Hudson commented on HDFS-14923:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17576 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17576/])
HDFS-14923. Remove dead code from HealthMonitor. Contributed by Fei Hui. 
(weichiu: rev 7be5508d9b35892f483ba6022b6aced7648b8fa3)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/HealthMonitor.java


> Remove dead code from HealthMonitor
> ---
>
> Key: HDFS-14923
> URL: https://issues.apache.org/jira/browse/HDFS-14923
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.0, 3.2.1, 3.1.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14923.001.patch
>
>
> Digging into the ZKFC source code, I found the dead code below:
> {code}
> public void removeCallback(Callback cb) {
>   callbacks.remove(cb);
> }
>
> public synchronized void removeServiceStateCallback(ServiceStateCallback cb) {
>   serviceStateCallbacks.remove(cb);
> }
>
> synchronized HAServiceStatus getLastServiceStatus() {
>   return lastServiceState;
> }
> {code}
> It is unused and should be deleted.






[jira] [Updated] (HDFS-14923) Remove dead code from HealthMonitor

2019-10-25 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14923:
---
Fix Version/s: 3.2.2
   3.1.4
   3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks!

> Remove dead code from HealthMonitor
> ---
>
> Key: HDFS-14923
> URL: https://issues.apache.org/jira/browse/HDFS-14923
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.0, 3.2.1, 3.1.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14923.001.patch
>
>
> Digging into the ZKFC source code, I found the dead code below:
> {code}
> public void removeCallback(Callback cb) {
>   callbacks.remove(cb);
> }
>
> public synchronized void removeServiceStateCallback(ServiceStateCallback cb) {
>   serviceStateCallbacks.remove(cb);
> }
>
> synchronized HAServiceStatus getLastServiceStatus() {
>   return lastServiceState;
> }
> {code}
> It is unused and should be deleted.






[jira] [Updated] (HDDS-2366) Remove ozone.enabled flag

2019-10-25 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2366:
-
Description: 
Currently, when Ozone is started, the start-ozone.sh/stop-ozone.sh scripts check 
whether this property is enabled before starting Ozone services. This property 
and this check can now be removed.

 

This check was needed when Ozone was part of Hadoop and we did not want to start 
Ozone services by default. There is no such requirement now.

  was:
Currently, when Ozone is started, the start-ozone.sh/stop-ozone.sh scripts check 
whether this property is enabled before starting Ozone services. This property 
and this check can now be removed.

 

This check was needed when Ozone was part of Hadoop and we did not want to start 
Ozone services by default.


> Remove ozone.enabled flag
> -
>
> Key: HDDS-2366
> URL: https://issues.apache.org/jira/browse/HDDS-2366
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Priority: Major
>
> Currently, when Ozone is started, the start-ozone.sh/stop-ozone.sh scripts 
> check whether this property is enabled before starting Ozone services. This 
> property and this check can now be removed.
>  
> This check was needed when Ozone was part of Hadoop and we did not want to 
> start Ozone services by default. There is no such requirement now.






[jira] [Created] (HDDS-2366) Remove ozone.enabled flag

2019-10-25 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2366:


 Summary: Remove ozone.enabled flag
 Key: HDDS-2366
 URL: https://issues.apache.org/jira/browse/HDDS-2366
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


Currently, when Ozone is started, the start-ozone.sh/stop-ozone.sh scripts check 
whether this property is enabled before starting Ozone services. This property 
and this check can now be removed.

 

This check was needed when Ozone was part of Hadoop and we did not want to start 
Ozone services by default.






[jira] [Updated] (HDDS-2366) Remove ozone.enabled flag

2019-10-25 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2366:
-
Labels: newbie  (was: )

> Remove ozone.enabled flag
> -
>
> Key: HDDS-2366
> URL: https://issues.apache.org/jira/browse/HDDS-2366
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> Currently, when Ozone is started, the start-ozone.sh/stop-ozone.sh scripts 
> check whether this property is enabled before starting Ozone services. This 
> property and this check can now be removed.
>  
> This check was needed when Ozone was part of Hadoop and we did not want to 
> start Ozone services by default. There is no such requirement now.






[jira] [Commented] (HDFS-14923) Remove dead code from HealthMonitor

2019-10-25 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16960159#comment-16960159
 ] 

Fei Hui commented on HDFS-14923:


[~weichiu] Thanks for the review!

> Remove dead code from HealthMonitor
> ---
>
> Key: HDFS-14923
> URL: https://issues.apache.org/jira/browse/HDFS-14923
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.0, 3.2.1, 3.1.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-14923.001.patch
>
>
> Digging into the ZKFC source code, I found the dead code below:
> {code}
> public void removeCallback(Callback cb) {
>   callbacks.remove(cb);
> }
>
> public synchronized void removeServiceStateCallback(ServiceStateCallback cb) {
>   serviceStateCallbacks.remove(cb);
> }
>
> synchronized HAServiceStatus getLastServiceStatus() {
>   return lastServiceState;
> }
> {code}
> It is unused and should be deleted.






[jira] [Commented] (HDFS-14768) EC : Busy DN replica should be consider in live replica check.

2019-10-25 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16960050#comment-16960050
 ] 

Surendra Singh Lilhore commented on HDFS-14768:
---

[~gjhkael], you can use this UT to reproduce it.

Here the problem is that DN decommission triggers reconstruction of the busy 
DN's block. That should not happen; only the decommissioned DN's block should 
be reconstructed.
{code:java}
  /**
   * DN decommission shouldn't reconstruct the busy DN's block.
   * @throws Exception
   */
  @Test
  public void testDecommissionWithBusyNode() throws Exception {
    byte busyDNIndex = 1;
    byte decommisionDNIndex = 0;
    //1. create EC file
    final Path ecFile = new Path(ecDir, "testDecommissionWithBusyNode");
    int writeBytes = cellSize * dataBlocks;
    writeStripedFile(dfs, ecFile, writeBytes);
    Assert.assertEquals(0, bm.numOfUnderReplicatedBlocks());
    FileChecksum fileChecksum1 = dfs.getFileChecksum(ecFile, writeBytes);

    //2. make one DN busy
    final INodeFile fileNode = cluster.getNamesystem().getFSDirectory()
        .getINode4Write(ecFile.toString()).asFile();
    BlockInfo firstBlock = fileNode.getBlocks()[0];
    DatanodeStorageInfo[] dnStorageInfos = bm.getStorages(firstBlock);
    DatanodeDescriptor busyNode =
        dnStorageInfos[busyDNIndex].getDatanodeDescriptor();
    for (int j = 0; j < 4; j++) {
      busyNode.incrementPendingReplicationWithoutTargets();
    }

    //3. decommission one node
    List<DatanodeInfo> decommisionNodes = new ArrayList<>();
    decommisionNodes.add(
        dnStorageInfos[decommisionDNIndex].getDatanodeDescriptor());
    decommissionNode(0, decommisionNodes, AdminStates.DECOMMISSIONED);
    assertEquals(decommisionNodes.size(), fsn.getNumDecomLiveDataNodes());

    //4. wait for the decommissioned block to replicate
    Thread.sleep(3000);
    DatanodeStorageInfo[] newDnStorageInfos = bm.getStorages(firstBlock);
    Assert.assertEquals("Busy DN shouldn't be reconstructed",
        dnStorageInfos[busyDNIndex].getStorageID(),
        newDnStorageInfos[busyDNIndex].getStorageID());

    //5. check the decommissioned DN's block index; it should be reconstructed again
    LocatedBlocks lbs = cluster.getNameNodeRpc().getBlockLocations(
        ecFile.toString(), 0, writeBytes);
    LocatedStripedBlock bg = (LocatedStripedBlock) (lbs.get(0));
    int decommissionBlockIndexCount = 0;
    for (byte index : bg.getBlockIndices()) {
      if (index == decommisionDNIndex) {
        decommissionBlockIndexCount++;
      }
    }

    Assert.assertEquals("Decommission DN block should be reconstructed", 2,
        decommissionBlockIndexCount);

    FileChecksum fileChecksum2 = dfs.getFileChecksum(ecFile, writeBytes);
    Assert.assertTrue("Checksum mismatches!",
        fileChecksum1.equals(fileChecksum2));
  }{code}
 

> EC : Busy DN replica should be consider in live replica check.
> --
>
> Key: HDFS-14768
> URL: https://issues.apache.org/jira/browse/HDFS-14768
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, erasure-coding, hdfs, namenode
>Affects Versions: 3.0.2
>Reporter: guojh
>Assignee: guojh
>Priority: Major
>  Labels: patch
> Attachments: 1568275810244.jpg, 1568276338275.jpg, 1568771471942.jpg, 
> HDFS-14768.000.patch, HDFS-14768.001.patch, HDFS-14768.002.patch, 
> HDFS-14768.003.patch, HDFS-14768.004.patch, HDFS-14768.005.patch, 
> HDFS-14768.006.patch, HDFS-14768.007.patch, HDFS-14768.008.patch, 
> HDFS-14768.jpg, guojh_UT_after_deomission.txt, 
> guojh_UT_before_deomission.txt, zhaoyiming_UT_after_deomission.txt, 
> zhaoyiming_UT_beofre_deomission.txt
>
>
> Policy is RS-6-3-1024K, version is hadoop 3.0.2;
> Suppose a file's block indices are [0,1,2,3,4,5,6,7,8], we decommission 
> indices [3,4], and we increase the index-6 datanode's 
> pendingReplicationWithoutTargets to make it larger than 
> replicationStreamsHardLimit (we set 14). Then, after the method 
> chooseSourceDatanodes of BlockManager, the liveBlockIndices is 
> [0,1,2,3,4,5,7,8], and the block counter is Live: 7, Decommission: 2. 
> In the method scheduleReconstruction of BlockManager, additionalReplRequired 
> is 9 - 7 = 2. After the Namenode chooses two target Datanodes, it will assign 
> an erasureCode task to the target datanodes.
> When the datanode gets the task, it builds targetIndices from 
> liveBlockIndices and the target length. The code is below.
> {code:java}
> // code placeholder
> targetIndices = new short[targets.length];
> private void initTargetIndices() { 
>   BitSet bitset = reconstructor.getLiveBitSet();
>   int m = 0; hasValidTargets = false; 
>   for (int i = 0; i < dataBlkNum + parityBlkNum; i++) {  
>     if (!bitset.get(i)) {
>       if (reconstructor.getBlockLen(i) > 0) {
>         if (m < targets.length) {
>           targetIndices[m++] = (short)i;
>          

[jira] [Commented] (HDFS-14308) DFSStripedInputStream curStripeBuf is not freed by unbuffer()

2019-10-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16960045#comment-16960045
 ] 

Hudson commented on HDFS-14308:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17575 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17575/])
HDFS-14308. DFSStripedInputStream curStripeBuf is not freed by (weichiu: rev 
30db895b59d250788d029cb2013bb4712ef9b546)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedInputStream.java


> DFSStripedInputStream curStripeBuf is not freed by unbuffer()
> -
>
> Key: HDFS-14308
> URL: https://issues.apache.org/jira/browse/HDFS-14308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.0.0
>Reporter: Joe McDonnell
>Assignee: Zhao Yi Ming
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: ec_heap_dump.png
>
>
> Some users of HDFS cache opened HDFS file handles to avoid repeated 
> roundtrips to the NameNode. For example, Impala caches up to 20,000 HDFS file 
> handles by default. Recent tests on erasure coded files show that the open 
> file handles can consume a large amount of memory when not in use.
> For example, here is output from Impala's JMX endpoint when 608 file handles 
> are cached
> {noformat}
> {
> "name": "java.nio:type=BufferPool,name=direct",
> "modelerType": "sun.management.ManagementFactoryHelper$1",
> "Name": "direct",
> "TotalCapacity": 1921048960,
> "MemoryUsed": 1921048961,
> "Count": 633,
> "ObjectName": "java.nio:type=BufferPool,name=direct"
> },{noformat}
> This shows direct buffer memory usage of 3MB per DFSStripedInputStream. 
> Attached is output from Eclipse MAT showing that the direct buffers come from 
> DFSStripedInputStream objects. Both Impala and HBase call unbuffer() when a 
> file handle is being cached and potentially unused for significant chunks of 
> time, yet this shows that the memory remains in use.
> To support caching file handles on erasure coded files, DFSStripedInputStream 
> should avoid holding buffers after the unbuffer() call. See HDFS-7694. 
> "unbuffer()" is intended to move an input stream to a lower memory state to 
> support these caching use cases. In particular, the curStripeBuf seems to be 
> allocated from the BUFFER_POOL on a resetCurStripeBuffer(true) call. It is 
> not freed until close().
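For reference, the caching pattern the description refers to, in a minimal
hedged sketch: FSDataInputStream.unbuffer() is the real CanUnbuffer API from
HDFS-7694, while the class and method names here are illustrative.

{code:java}
import org.apache.hadoop.fs.FSDataInputStream;

final class CachedHandles {
  // Called when a handle goes back into the file-handle cache: drop buffered
  // data so an idle handle holds no large direct buffers.
  static void park(FSDataInputStream in) {
    in.unbuffer();
  }
}
{code}

The expectation in this issue is that, for DFSStripedInputStream, unbuffer()
also returns curStripeBuf to BUFFER_POOL instead of keeping ~3MB of direct
memory per idle handle.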






[jira] [Resolved] (HDFS-14308) DFSStripedInputStream curStripeBuf is not freed by unbuffer()

2019-10-25 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-14308.

Fix Version/s: 3.2.2
   3.1.4
   3.3.0
   Resolution: Fixed

Thanks, [~zhaoyim]!

> DFSStripedInputStream curStripeBuf is not freed by unbuffer()
> -
>
> Key: HDFS-14308
> URL: https://issues.apache.org/jira/browse/HDFS-14308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.0.0
>Reporter: Joe McDonnell
>Assignee: Zhao Yi Ming
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: ec_heap_dump.png
>
>
> Some users of HDFS cache opened HDFS file handles to avoid repeated 
> roundtrips to the NameNode. For example, Impala caches up to 20,000 HDFS file 
> handles by default. Recent tests on erasure coded files show that the open 
> file handles can consume a large amount of memory when not in use.
> For example, here is output from Impala's JMX endpoint when 608 file handles 
> are cached
> {noformat}
> {
> "name": "java.nio:type=BufferPool,name=direct",
> "modelerType": "sun.management.ManagementFactoryHelper$1",
> "Name": "direct",
> "TotalCapacity": 1921048960,
> "MemoryUsed": 1921048961,
> "Count": 633,
> "ObjectName": "java.nio:type=BufferPool,name=direct"
> },{noformat}
> This shows direct buffer memory usage of 3MB per DFSStripedInputStream. 
> Attached is output from Eclipse MAT showing that the direct buffers come from 
> DFSStripedInputStream objects. Both Impala and HBase call unbuffer() when a 
> file handle is being cached and potentially unused for significant chunks of 
> time, yet this shows that the memory remains in use.
> To support caching file handles on erasure coded files, DFSStripedInputStream 
> should avoid holding buffers after the unbuffer() call. See HDFS-7694. 
> "unbuffer()" is intended to move an input stream to a lower memory state to 
> support these caching use cases. In particular, the curStripeBuf seems to be 
> allocated from the BUFFER_POOL on a resetCurStripeBuffer(true) call. It is 
> not freed until close().






[jira] [Commented] (HDFS-14908) LeaseManager should check parent-child relationship when filter open files.

2019-10-25 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16960015#comment-16960015
 ] 

Íñigo Goiri commented on HDFS-14908:


Just to clarify, which one is  [^HDFS-14908.001.patch] (startsWith? isParent?) 
and which one is  [^HDFS-14908.003.patch] (startsWithAndCharAt?)?

> LeaseManager should check parent-child relationship when filter open files.
> ---
>
> Key: HDFS-14908
> URL: https://issues.apache.org/jira/browse/HDFS-14908
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.1
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HDFS-14908.001.patch, HDFS-14908.002.patch, 
> HDFS-14908.003.patch, Test.java, TestV2.java, TestV3.java
>
>
> Now when doing listOpenFiles(), LeaseManager only checks whether the filter 
> path is the prefix of the open files. We should check whether the filter path 
> is the parent/ancestor of the open files.
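A minimal sketch of the distinction (illustrative names, not the actual patch;
root-path and trailing-slash handling are omitted): a plain prefix test
over-matches sibling paths, so the check needs a path-separator boundary. This
is the startsWith-plus-charAt flavor asked about in the comment above.

{code:java}
final class OpenFilesFilter {
  // "/a/b" is a string prefix of "/a/bc/file" but not its ancestor.
  static boolean isAncestor(String filter, String path) {
    if (!path.startsWith(filter)) {
      return false;
    }
    // Exact match, or the next character must be '/' to stay on a path boundary.
    return path.length() == filter.length()
        || path.charAt(filter.length()) == '/';
  }
}
{code}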






[jira] [Commented] (HDFS-14902) RBF: NullPointer When Misconfigured

2019-10-25 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16960013#comment-16960013
 ] 

Íñigo Goiri commented on HDFS-14902:


[^HDFS-14902.001.patch] looks good.
[~belugabehr] do you mind checking if [^HDFS-14902.001.patch] solves your issue?

> RBF: NullPointer When Misconfigured
> ---
>
> Key: HDFS-14902
> URL: https://issues.apache.org/jira/browse/HDFS-14902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: 3.2.0
>Reporter: David Mollitor
>Assignee: Takanobu Asanuma
>Priority: Minor
> Attachments: HDFS-14902.001.patch
>
>
> Admittedly the server was mis-configured, but this should be a bit more 
> elegant.
> {code:none}
> 2019-10-08 11:19:52,505 ERROR router.NamenodeHeartbeatService: Unhandled 
> exception updating NN registration for null:null
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos$NamenodeMembershipRecordProto$Builder.setServiceAddress(HdfsServerFederationProtos.java:3831)
>   at 
> org.apache.hadoop.hdfs.server.federation.store.records.impl.pb.MembershipStatePBImpl.setServiceAddress(MembershipStatePBImpl.java:119)
>   at 
> org.apache.hadoop.hdfs.server.federation.store.records.MembershipState.newInstance(MembershipState.java:108)
>   at 
> org.apache.hadoop.hdfs.server.federation.resolver.MembershipNamenodeResolver.registerNamenode(MembershipNamenodeResolver.java:259)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.updateState(NamenodeHeartbeatService.java:223)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.periodicInvoke(NamenodeHeartbeatService.java:159)
>   at 
> org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
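The NPE comes from calling setServiceAddress with a null value; protobuf
builders reject nulls. A hedged sketch of a guard (the builder type is taken
from the stack trace; the wrapper method is illustrative, not the actual patch):

{code:java}
import org.apache.hadoop.hdfs.federation.protocol.proto.HdfsServerFederationProtos.NamenodeMembershipRecordProto;

final class MembershipGuard {
  // Only set optional fields when present: Builder.setServiceAddress(null)
  // throws NullPointerException, as seen in the trace above.
  static void setServiceAddressIfPresent(
      NamenodeMembershipRecordProto.Builder builder, String serviceAddress) {
    if (serviceAddress != null) {
      builder.setServiceAddress(serviceAddress);
    }
  }
}
{code}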






[jira] [Comment Edited] (HDDS-1600) Add userName and IPAddress as part of OMRequest.

2019-10-25 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16959980#comment-16959980
 ] 

Bharat Viswanadham edited comment on HDDS-1600 at 10/25/19 6:44 PM:


[~cxorm]

There is a UT in the patch which verifies that UserInfo is set properly.

To test this end to end, you can enable ACLs with the NativeAuthorizer and 
verify that ACL functionality works correctly.  Right now I believe only the 
username/group is used in ACL validation in NativeAuthorizer.

 

The main reason behind this patch: validateAndUpdateCache in HA runs under the 
gRPC context, so we will not have the UGI object there. Hence we create 
UserInfo in preExecute (where we can get the UGI object) and use that UserInfo 
during ACL validation.


was (Author: bharatviswa):
[~cxorm]

There is a UT in the patch which verifies that UserInfo is set properly.

To test this end to end, you can enable ACLs with the NativeAuthorizer and 
verify that ACL functionality works correctly.  Right now I believe only the 
username is used in ACL validation in NativeAuthorizer.

 

The main reason behind this patch: validateAndUpdateCache in HA runs under the 
gRPC context, so we will not have the UGI object there. Hence we create 
UserInfo in preExecute (where we can get the UGI object) and use that UserInfo 
during ACL validation.

> Add userName and IPAddress as part of OMRequest.
> 
>
> Key: HDDS-1600
> URL: https://issues.apache.org/jira/browse/HDDS-1600
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> In OM HA, the actual execution of a request happens under the gRPC context, 
> so the UGI object which we retrieve from 
> ProtobufRpcEngine.Server.getRemoteUser() will not be available.
> The same applies to ProtobufRpcEngine.Server.getRemoteIp().
>  
> So, during preExecute (which happens under the RPC context), extract userName 
> and IPAddress, add them to the OMRequest, and then send the request to the 
> ratis server.
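
A minimal sketch of that flow (the UserInfo builder and setter names are 
illustrative, not necessarily the exact patch):
{code:java}
// Runs under the Hadoop RPC context, where UGI and the remote IP are available.
public OMRequest preExecute(OMRequest request) {
  UserGroupInformation ugi = ProtobufRpcEngine.Server.getRemoteUser();
  InetAddress remoteIp = ProtobufRpcEngine.Server.getRemoteIp();
  UserInfo userInfo = UserInfo.newBuilder()
      .setUserName(ugi.getUserName())
      .setRemoteAddress(remoteIp.getHostAddress())
      .build();
  // validateAndUpdateCache later reads this UserInfo for ACL validation.
  return request.toBuilder().setUserInfo(userInfo).build();
}
{code}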



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-1600) Add userName and IPAddress as part of OMRequest.

2019-10-25 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959980#comment-16959980
 ] 

Bharat Viswanadham edited comment on HDDS-1600 at 10/25/19 6:43 PM:


[~cxorm]

There is a UT in the patch which verifies that UserInfo is set properly.

To test this end to end, you can enable ACLs with the NativeAuthorizer and 
verify that ACL functionality works correctly.  Right now I believe only the 
username is used in ACL validation in NativeAuthorizer.

 

The main reason behind this patch: validateAndUpdateCache in HA runs under the 
gRPC context, so we will not have the UGI object there. Hence we create 
UserInfo in preExecute (where we can get the UGI object) and use that UserInfo 
during ACL validation.


was (Author: bharatviswa):
[~cxorm]

There is a UT in the patch which verifies that UserInfo is set properly.

To test this end to end, you can enable ACLs with the NativeAuthorizer and 
verify that ACL functionality works correctly.

 

The main reason behind this patch: validateAndUpdateCache in HA runs under the 
gRPC context, so we will not have the UGI object there. Hence we create 
UserInfo in preExecute (where we can get the UGI object) and use that UserInfo 
during ACL validation.

> Add userName and IPAddress as part of OMRequest.
> 
>
> Key: HDDS-1600
> URL: https://issues.apache.org/jira/browse/HDDS-1600
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> In OM HA, the actual execution of a request happens under the gRPC context, 
> so the UGI object which we retrieve from 
> ProtobufRpcEngine.Server.getRemoteUser() will not be available.
> The same applies to ProtobufRpcEngine.Server.getRemoteIp().
>  
> So, during preExecute (which happens under the RPC context), extract userName 
> and IPAddress, add them to the OMRequest, and then send the request to the 
> ratis server.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1600) Add userName and IPAddress as part of OMRequest.

2019-10-25 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959980#comment-16959980
 ] 

Bharat Viswanadham commented on HDDS-1600:
--

[~cxorm]

There is a UT in the patch which verifies that UserInfo is set properly.

To test this end to end, you can enable ACLs with the NativeAuthorizer and 
verify that ACL functionality works correctly.

 

The main reason behind this patch: validateAndUpdateCache in HA runs under the 
gRPC context, so we will not have the UGI object there. Hence we create 
UserInfo in preExecute (where we can get the UGI object) and use that UserInfo 
during ACL validation.

> Add userName and IPAddress as part of OMRequest.
> 
>
> Key: HDDS-1600
> URL: https://issues.apache.org/jira/browse/HDDS-1600
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> In OM HA, the actual execution of a request happens under the gRPC context, 
> so the UGI object which we retrieve from 
> ProtobufRpcEngine.Server.getRemoteUser() will not be available.
> The same applies to ProtobufRpcEngine.Server.getRemoteIp().
>  
> So, during preExecute (which happens under the RPC context), extract userName 
> and IPAddress, add them to the OMRequest, and then send the request to the 
> ratis server.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2272) Avoid buffer copying in GrpcReplicationClient

2019-10-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2272:
-
Labels: pull-request-available  (was: )

> Avoid buffer copying in GrpcReplicationClient
> -
>
> Key: HDDS-2272
> URL: https://issues.apache.org/jira/browse/HDDS-2272
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz-wo Sze
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> In StreamDownloader.onNext, CopyContainerResponseProto is copied to a byte[] 
> and then it is written out to the stream.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2272) Avoid buffer copying in GrpcReplicationClient

2019-10-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2272?focusedWorklogId=334296=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-334296
 ]

ASF GitHub Bot logged work on HDDS-2272:


Author: ASF GitHub Bot
Created on: 25/Oct/19 18:37
Start Date: 25/Oct/19 18:37
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #88: HDDS-2272. 
Avoid buffer copying in GrpcReplicationClient
URL: https://github.com/apache/hadoop-ozone/pull/88
 
 
   ## What changes were proposed in this pull request?
   
   Eliminate `BufferedOutputStream`, write `ByteString` directly to 
`FileStream` to avoid a buffer copy.
   Also, use `ByteString.writeTo(OutputStream)`, although it still seems to 
copy the byte array internally.
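   
   A simplified sketch of the resulting write path (error handling omitted; the 
method shown is illustrative, not the exact code):
   
   ```java
   // ByteString.writeTo avoids materializing an extra byte[] in our code,
   // though protobuf may still copy internally, as noted above.
   void writeChunk(CopyContainerResponseProto response, OutputStream out)
       throws IOException {
     response.getData().writeTo(out);
   }
   ```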
   
   https://issues.apache.org/jira/browse/HDDS-2272
   
   ## How was this patch tested?
   
   Tested closed container replication manually with a 300MB container.  
Verified that container is correctly replicated to other datanode.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 334296)
Remaining Estimate: 0h
Time Spent: 10m

> Avoid buffer copying in GrpcReplicationClient
> -
>
> Key: HDDS-2272
> URL: https://issues.apache.org/jira/browse/HDDS-2272
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz-wo Sze
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In StreamDownloader.onNext, CopyContainerResponseProto is copied to a byte[] 
> and then it is written out to the stream.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2273) Avoid buffer copying in GrpcReplicationService

2019-10-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2273:
-
Labels: pull-request-available  (was: )

> Avoid buffer copying in GrpcReplicationService
> --
>
> Key: HDDS-2273
> URL: https://issues.apache.org/jira/browse/HDDS-2273
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz-wo Sze
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> In GrpcOutputStream, it writes data to a ByteArrayOutputStream and copies 
> them to a ByteString.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2273) Avoid buffer copying in GrpcReplicationService

2019-10-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2273?focusedWorklogId=334248=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-334248
 ]

ASF GitHub Bot logged work on HDDS-2273:


Author: ASF GitHub Bot
Created on: 25/Oct/19 17:43
Start Date: 25/Oct/19 17:43
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #87: HDDS-2273. 
Avoid buffer copying in GrpcReplicationService
URL: https://github.com/apache/hadoop-ozone/pull/87
 
 
   ## What changes were proposed in this pull request?
   
   Use `ByteString.Output` stream instead of `ByteArrayOutputStream`.  Its 
initial size is configured to 1 MB (same as the previous buffer size), and it 
is flushed when that size is reached.  This helps to avoid allocating multiple 
buffers as well as a buffer copy when converting to `ByteString`.
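   
   Roughly (a sketch of the scheme, not the exact patch; `sendResponse` is an 
illustrative name):
   
   ```java
   private static final int BUFFER_SIZE = 1024 * 1024;  // 1 MB
   private ByteString.Output buffer = ByteString.newOutput(BUFFER_SIZE);
   
   public void write(byte[] b, int off, int len) {
     buffer.write(b, off, len);
     if (buffer.size() >= BUFFER_SIZE) {
       // toByteString() hands over the bytes without the extra array copy
       // that ByteArrayOutputStream.toByteArray() would incur.
       sendResponse(buffer.toByteString());
       buffer = ByteString.newOutput(BUFFER_SIZE);
     }
   }
   ```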
   
   https://issues.apache.org/jira/browse/HDDS-2273
   
   ## How was this patch tested?
   
   Tested closed container replication manually with a 300MB container.  
Verified that container is correctly replicated to other datanode.  Also 
verified that flush happens when buffer is full.
   
   ```
   datanode_1  | - Streaming container data (1) to other datanode
   datanode_1  | - Sending 1048576 bytes (of type LiteralByteString) for 
container 1
   datanode_1  | - Sending 530637 bytes (of type LiteralByteString) for 
container 1
   datanode_1  | - 1579213 bytes written to the rpc stream from container 1
   ...
   datanode_5  | - Container is downloaded to 
/tmp/container-copy/container-1.tar.gz
   datanode_5  | - Container 1 is downloaded, starting to import.
   datanode_5  | - Container 1 is replicated successfully
   datanode_5  | - Container 1 is replicated.
   ```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 334248)
Remaining Estimate: 0h
Time Spent: 10m

> Avoid buffer copying in GrpcReplicationService
> --
>
> Key: HDDS-2273
> URL: https://issues.apache.org/jira/browse/HDDS-2273
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz-wo Sze
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In GrpcOutputStream, it writes data to a ByteArrayOutputStream and copies 
> them to a ByteString.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2341) Validate tar entry path during extraction

2019-10-25 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-2341:

   Fix Version/s: 0.5.0
Target Version/s:   (was: 0.5.0)
  Resolution: Fixed
  Status: Resolved  (was: Patch Available)

+1 I've committed this. Thanks for the contribution [~adoroszlai] and the great 
test coverage.

> Validate tar entry path during extraction
> -
>
> Key: HDDS-2341
> URL: https://issues.apache.org/jira/browse/HDDS-2341
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Containers extracted from tar.gz should be validated to confine entries to 
> the archive's root directory.
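
A typical check of that kind looks roughly like this (illustrative, not the 
committed patch):
{code:java}
// Reject entries such as "../../etc/passwd" that would escape the root.
Path root = containerDir.toPath().normalize();
Path target = root.resolve(entry.getName()).normalize();
if (!target.startsWith(root)) {
  throw new IOException("Tar entry outside container directory: "
      + entry.getName());
}
{code}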



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2341) Validate tar entry path during extraction

2019-10-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2341?focusedWorklogId=334242=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-334242
 ]

ASF GitHub Bot logged work on HDDS-2341:


Author: ASF GitHub Bot
Created on: 25/Oct/19 17:22
Start Date: 25/Oct/19 17:22
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #72: HDDS-2341. Validate 
tar entry path during extraction
URL: https://github.com/apache/hadoop-ozone/pull/72
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 334242)
Time Spent: 20m  (was: 10m)

> Validate tar entry path during extraction
> -
>
> Key: HDDS-2341
> URL: https://issues.apache.org/jira/browse/HDDS-2341
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Containers extracted from tar.gz should be validated to confine entries to 
> the archive's root directory.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-13934) Multipart uploaders to be created through API call to FileSystem/FileContext, not service loader

2019-10-25 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HDFS-13934:
-

Assignee: Steve Loughran

> Multipart uploaders to be created through API call to FileSystem/FileContext, 
> not service loader
> 
>
> Key: HDFS-13934
> URL: https://issues.apache.org/jira/browse/HDFS-13934
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, fs/s3, hdfs
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> the Multipart Uploaders are created via service loaders. This is troublesome:
> # HADOOP-12636, HADOOP-13323, HADOOP-13625 highlight how the load process 
> forces the transient loading of dependencies.  If a dependent class cannot be 
> loaded (e.g. aws-sdk is not on the classpath), that service won't load. 
> Without error handling round the load process, this stops any uploader from 
> loading. Even with that error handling, the performance hit of that load, 
> especially with reshaded dependencies, hurts performance (HADOOP-13138).
> # it makes wrapping the load with any filter impossible, and stops transitive 
> binding through viewFS, mocking, etc.
> # It complicates security in a kerberized world. If you have an FS instance 
> of user A, then you should be able to create an MPU instance with that user's 
> permissions. Currently, if a service were to try to create one, you'd be 
> looking at doAs() games around the service loading, and a more complex bind 
> process.
> Proposed:
> # remove the service loader mechanism entirely
> # add to FS & FC a createMultipartUploader(path) call, which will create one 
> bound to the current FS, with its permissions, DTs, etc. (sketched below)
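
The proposed call could take roughly this shape (a sketch of the API direction, 
not a final signature):
{code:java}
// Default in FileSystem: no multipart support unless a subclass overrides it.
public MultipartUploader createMultipartUploader(Path basePath)
    throws IOException {
  throw new UnsupportedOperationException(
      getClass().getSimpleName() + " does not support multipart uploads");
}
{code}
A store like S3A would then override this, returning an uploader bound to that 
FS instance's credentials and delegation tokens.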



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2272) Avoid buffer copying in GrpcReplicationClient

2019-10-25 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2272 started by Attila Doroszlai.
--
> Avoid buffer copying in GrpcReplicationClient
> -
>
> Key: HDDS-2272
> URL: https://issues.apache.org/jira/browse/HDDS-2272
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz-wo Sze
>Assignee: Attila Doroszlai
>Priority: Major
>
> In StreamDownloader.onNext, CopyContainerResponseProto is copied to a byte[] 
> and then it is written out to the stream.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2273) Avoid buffer copying in GrpcReplicationService

2019-10-25 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2273 started by Attila Doroszlai.
--
> Avoid buffer copying in GrpcReplicationService
> --
>
> Key: HDDS-2273
> URL: https://issues.apache.org/jira/browse/HDDS-2273
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz-wo Sze
>Assignee: Attila Doroszlai
>Priority: Major
>
> In GrpcOutputStream, it writes data to a ByteArrayOutputStream and copies 
> them to a ByteString.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2206) Separate handling for OMException and IOException in the Ozone Manager

2019-10-25 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-2206:

Fix Version/s: (was: 0.5.0)

> Separate handling for OMException and IOException in the Ozone Manager
> --
>
> Key: HDDS-2206
> URL: https://issues.apache.org/jira/browse/HDDS-2206
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> As part of improving error propagation from the OM for ease of 
> troubleshooting and diagnosis, the proposal is to handle IOExceptions 
> separately from the business exceptions which are thrown as OMExceptions.
> Handling for OMExceptions will not be changed in this jira.
> Handling for IOExceptions will include logging the stacktrace on the server, 
> and propagation to the client under the control of a config parameter.
> Similar handling is also proposed for SCMException.
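
The separation could look roughly like this (a sketch; the helper methods and 
the client-exposure flag are illustrative, not the actual patch):
{code:java}
try {
  return handler.handle(request);
} catch (OMException e) {
  // Business error: returned to the client as today, no server stack trace.
  return errorResponse(request, e);
} catch (IOException e) {
  // Unexpected error: log the stack trace on the server...
  LOG.error("Error handling request {}", request.getCmdType(), e);
  // ...and expose details to the client only if configured to do so.
  return errorResponse(request,
      exposeStackTraceToClient ? e : new IOException("Internal error"));
}
{code}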



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2206) Separate handling for OMException and IOException in the Ozone Manager

2019-10-25 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-2206:

Target Version/s: 0.5.0

> Separate handling for OMException and IOException in the Ozone Manager
> --
>
> Key: HDDS-2206
> URL: https://issues.apache.org/jira/browse/HDDS-2206
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> As part of improving error propagation from the OM for ease of 
> troubleshooting and diagnosis, the proposal is to handle IOExceptions 
> separately from the business exceptions which are thrown as OMExceptions.
> Handling for OMExceptions will not be changed in this jira.
> Handling for IOExceptions will include logging the stacktrace on the server, 
> and propagation to the client under the control of a config parameter.
> Similar handling is also proposed for SCMException.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDDS-2206) Separate handling for OMException and IOException in the Ozone Manager

2019-10-25 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HDDS-2206:
-

Reverted this based on offline conversation with [~aengineer].

Anu has requested we add a config key to control this behavior.

> Separate handling for OMException and IOException in the Ozone Manager
> --
>
> Key: HDDS-2206
> URL: https://issues.apache.org/jira/browse/HDDS-2206
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> As part of improving error propagation from the OM for ease of 
> troubleshooting and diagnosis, the proposal is to handle IOExceptions 
> separately from the business exceptions which are thrown as OMExceptions.
> Handling for OMExceptions will not be changed in this jira.
> Handling for IOExceptions will include logging the stacktrace on the server, 
> and propagation to the client under the control of a config parameter.
> Similar handling is also proposed for SCMException.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2206) Separate handling for OMException and IOException in the Ozone Manager

2019-10-25 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-2206:

Labels:   (was: pull-request-available)

> Separate handling for OMException and IOException in the Ozone Manager
> --
>
> Key: HDDS-2206
> URL: https://issues.apache.org/jira/browse/HDDS-2206
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> As part of improving error propagation from the OM for ease of 
> troubleshooting and diagnosis, the proposal is to handle IOExceptions 
> separately from the business exceptions which are thrown as OMExceptions.
> Handling for OMExceptions will not be changed in this jira.
> Handling for IOExceptions will include logging the stacktrace on the server, 
> and propagation to the client under the control of a config parameter.
> Similar handling is also proposed for SCMException.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14931) hdfs crypto commands limit column width

2019-10-25 Thread Eric Badger (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959904#comment-16959904
 ] 

Eric Badger commented on HDFS-14931:


I ran TestDistributedFileSystem locally and it didn't fail for me. I don't 
believe it is related to this patch.

> hdfs crypto commands limit column width
> ---
>
> Key: HDFS-14931
> URL: https://issues.apache.org/jira/browse/HDFS-14931
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: HDFS-14931.001.patch
>
>
> {noformat}
> foo@bar$ hdfs crypto -listZones
> /projects/foo/bar/fizzgig/myprojectdirectorynameorsomethingcool1  encr
>   
> yptio
>   nzon
>   e1
> /projects/foo/bar/fizzgig/myprojectdirectorynameorsomethingcool2  encr
>   
> yptio
>   nzon
>   e2
> /projects/foo/bar/fizzgig/myprojectdirectorynameorsomethingcool3  encr
>   
> yptio
>   nzon
>   e3
> {noformat}
> The command ends up looking something really ugly like this when the path is 
> long. This also makes it very difficult to pipe the output into other 
> utilities, such as awk.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14935) Refactor DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959894#comment-16959894
 ] 

Ayush Saxena commented on HDFS-14935:
-

Here the checks which throw IllegalArgumentException will be executed again to 
no effect, because if they were going to fail, execution would not have reached 
this point.

> Refactor DFSNetworkTopology#isNodeInScope
> -
>
> Key: HDFS-14935
> URL: https://issues.apache.org/jira/browse/HDFS-14935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14935.001.patch, HDFS-14935.002.patch
>
>
> {code:java}
> private boolean isNodeInScope(Node node, String scope) {
>   if (!scope.endsWith("/")) {
>     scope += "/";
>   }
>   String nodeLocation = node.getNetworkLocation() + "/";
>   return nodeLocation.startsWith(scope);
> }
> {code}
> NodeBase#normalize() is used to normalize scope.
> so i refator DFSNetworkTopology#isNodeInScope.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2329) Destroy pipelines on any decommission or maintenance nodes

2019-10-25 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDDS-2329:

Status: Patch Available  (was: Open)

> Destroy pipelines on any decommission or maintenance nodes
> --
>
> Key: HDDS-2329
> URL: https://issues.apache.org/jira/browse/HDDS-2329
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When a node is marked for decommission or maintenance, the first step in 
> taking the node out of service is to destroy any pipelines the node is 
> involved in and confirm they have been destroyed before getting the container 
> list for the node.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2329) Destroy pipelines on any decommission or maintenance nodes

2019-10-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2329:
-
Labels: pull-request-available  (was: )

> Destroy pipelines on any decommission or maintenance nodes
> --
>
> Key: HDDS-2329
> URL: https://issues.apache.org/jira/browse/HDDS-2329
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>
> When a node is marked for decommission or maintenance, the first step in 
> taking the node out of service is to destroy any pipelines the node is 
> involved in and confirm they have been destroyed before getting the container 
> list for the node.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2329) Destroy pipelines on any decommission or maintenance nodes

2019-10-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2329?focusedWorklogId=334204=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-334204
 ]

ASF GitHub Bot logged work on HDDS-2329:


Author: ASF GitHub Bot
Created on: 25/Oct/19 15:38
Start Date: 25/Oct/19 15:38
Worklog Time Spent: 10m 
  Work Description: sodonnel commented on pull request #86: HDDS-2329 
Destroy pipelines on any decommission or maintenance nodes
URL: https://github.com/apache/hadoop-ozone/pull/86
 
 
   ## What changes were proposed in this pull request?
   
   When a node is marked for decommission or maintenance, the first step in 
taking the node out of service is to destroy any pipelines the node is involved 
in and confirm they have been destroyed before getting the container list for 
the node.
   
   This commit adds a new class called DatanodeAdminMonitor, which is 
responsible for tracking nodes as they go through the decommission workflow.
   
   When a node is marked for decommission, it gets added to a queue in this 
monitor. The monitor runs periodically (every 30 seconds by default) and 
processes any queued nodes. After processing, they are tracked inside the 
monitor as the decommission workflow progresses (closing pipelines, getting 
the container list, replicating the containers, etc).
   
   With this commit, a node can be added to the monitor for decommission or 
maintenance and it will have its pipelines closed.
   
   It will not make any further progress after the pipelines have been closed; 
further commits will address the next states.
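   
   A rough sketch of the monitor's periodic pass (state and method names are 
illustrative, not the exact code):
   
   ```java
   public void run() {
     moveQueuedNodesToTracking();
     for (TrackedNode node : trackedNodes) {
       switch (node.getState()) {
       case CLOSE_PIPELINES:
         closePipelines(node);             // destroy pipelines on the node
         node.setState(AWAIT_PIPELINE_CLOSE);
         break;
       case AWAIT_PIPELINE_CLOSE:
         if (allPipelinesClosed(node)) {
           node.setState(GET_CONTAINERS);  // next states: later commits
         }
         break;
       default:
         break;
       }
     }
   }
   ```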
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2329
   
   ## How was this patch tested?
   
   Some manual tests and new unit tests have been added.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 334204)
Remaining Estimate: 0h
Time Spent: 10m

> Destroy pipelines on any decommission or maintenance nodes
> --
>
> Key: HDDS-2329
> URL: https://issues.apache.org/jira/browse/HDDS-2329
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When a node is marked for decommission or maintenance, the first step in 
> taking the node out of service is to destroy any pipelines the node is 
> involved in and confirm they have been destroyed before getting the container 
> list for the node.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14935) Refactor DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959623#comment-16959623
 ] 

Lisheng Sun edited comment on HDFS-14935 at 10/25/19 3:27 PM:
--

hi [~ayushtkn]

this Jira refactors the code so that we do not need to write repetitive code.

It uses the existing code as follows:
{code:java}
scope = NodeBase.normalize(scope);

public static String normalize(String path) {
  if (path == null) {
    throw new IllegalArgumentException("Network Location is null ");
  }

  if (path.length() == 0) {
    return ROOT;
  }

  if (path.charAt(0) != PATH_SEPARATOR) {
    throw new IllegalArgumentException(
        "Network Location path does not start with "
        + PATH_SEPARATOR_STR + ": " + path);
  }

  int len = path.length();
  if (path.charAt(len - 1) == PATH_SEPARATOR) {
    return path.substring(0, len - 1);
  }
  return path;
}
{code}
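
With that, the refactored method might read (a sketch, not necessarily the 
attached patch):
{code:java}
private boolean isNodeInScope(Node node, String scope) {
  scope = NodeBase.normalize(scope) + NodeBase.PATH_SEPARATOR_STR;
  String nodeLocation = node.getNetworkLocation() + NodeBase.PATH_SEPARATOR_STR;
  return nodeLocation.startsWith(scope);
}
{code}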


was (Author: leosun08):
hi [~ayushtkn]

this Jira optimizes the code so that we do not need to write repetitive code.

It uses the existing code as follows:
{code:java}
scope = NodeBase.normalize(scope);

public static String normalize(String path) {
  if (path == null) {
    throw new IllegalArgumentException("Network Location is null ");
  }

  if (path.length() == 0) {
    return ROOT;
  }

  if (path.charAt(0) != PATH_SEPARATOR) {
    throw new IllegalArgumentException(
        "Network Location path does not start with "
        + PATH_SEPARATOR_STR + ": " + path);
  }

  int len = path.length();
  if (path.charAt(len - 1) == PATH_SEPARATOR) {
    return path.substring(0, len - 1);
  }
  return path;
}
{code}

> Refactor DFSNetworkTopology#isNodeInScope
> -
>
> Key: HDFS-14935
> URL: https://issues.apache.org/jira/browse/HDFS-14935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14935.001.patch, HDFS-14935.002.patch
>
>
> {code:java}
> private boolean isNodeInScope(Node node, String scope) {
>   if (!scope.endsWith("/")) {
>     scope += "/";
>   }
>   String nodeLocation = node.getNetworkLocation() + "/";
>   return nodeLocation.startsWith(scope);
> }
> {code}
> NodeBase#normalize() is used to normalize scope.
> so I refactor DFSNetworkTopology#isNodeInScope.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14935) Refactor DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14935:
---
Summary: Refactor DFSNetworkTopology#isNodeInScope  (was: Optimize 
DFSNetworkTopology#isNodeInScope)

> Refactor DFSNetworkTopology#isNodeInScope
> -
>
> Key: HDFS-14935
> URL: https://issues.apache.org/jira/browse/HDFS-14935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14935.001.patch, HDFS-14935.002.patch
>
>
> {code:java}
> private boolean isNodeInScope(Node node, String scope) {
>   if (!scope.endsWith("/")) {
>     scope += "/";
>   }
>   String nodeLocation = node.getNetworkLocation() + "/";
>   return nodeLocation.startsWith(scope);
> }
> {code}
> NodeBase#normalize() is used to normalize scope.
> so I refactor 
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14935) Refactor DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14935:
---
Description: 
{code:java}
private boolean isNodeInScope(Node node, String scope) {
  if (!scope.endsWith("/")) {
    scope += "/";
  }
  String nodeLocation = node.getNetworkLocation() + "/";
  return nodeLocation.startsWith(scope);
}
{code}
NodeBase#normalize() is used to normalize scope.

so I refactor DFSNetworkTopology#isNodeInScope.

 

 

  was:
{code:java}
private boolean isNodeInScope(Node node, String scope) {
  if (!scope.endsWith("/")) {
    scope += "/";
  }
  String nodeLocation = node.getNetworkLocation() + "/";
  return nodeLocation.startsWith(scope);
}
{code}
NodeBase#normalize() is used to normalize scope.

so I refactor isNodeInScope

 

 


> Refactor DFSNetworkTopology#isNodeInScope
> -
>
> Key: HDFS-14935
> URL: https://issues.apache.org/jira/browse/HDFS-14935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14935.001.patch, HDFS-14935.002.patch
>
>
> {code:java}
> private boolean isNodeInScope(Node node, String scope) {
>   if (!scope.endsWith("/")) {
>     scope += "/";
>   }
>   String nodeLocation = node.getNetworkLocation() + "/";
>   return nodeLocation.startsWith(scope);
> }
> {code}
> NodeBase#normalize() is used to normalize scope.
> so I refactor DFSNetworkTopology#isNodeInScope.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14935) Refactor DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14935:
---
Description: 
{code:java}
private boolean isNodeInScope(Node node, String scope) {
  if (!scope.endsWith("/")) {
    scope += "/";
  }
  String nodeLocation = node.getNetworkLocation() + "/";
  return nodeLocation.startsWith(scope);
}
{code}
NodeBase#normalize() is used to normalize scope.

so I refactor isNodeInScope

 

 

  was:
{code:java}
private boolean isNodeInScope(Node node, String scope) {
  if (!scope.endsWith("/")) {
    scope += "/";
  }
  String nodeLocation = node.getNetworkLocation() + "/";
  return nodeLocation.startsWith(scope);
}
{code}
NodeBase#normalize() is used to normalize scope.

so I refactor 

 

 


> Refactor DFSNetworkTopology#isNodeInScope
> -
>
> Key: HDFS-14935
> URL: https://issues.apache.org/jira/browse/HDFS-14935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14935.001.patch, HDFS-14935.002.patch
>
>
> {code:java}
> private boolean isNodeInScope(Node node, String scope) {
>   if (!scope.endsWith("/")) {
>     scope += "/";
>   }
>   String nodeLocation = node.getNetworkLocation() + "/";
>   return nodeLocation.startsWith(scope);
> }
> {code}
> NodeBase#normalize() is used to normalize scope.
> so I refactor isNodeInScope
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14935) Optimize DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14935:
---
Description: 
{code:java}
private boolean isNodeInScope(Node node, String scope) {
  if (!scope.endsWith("/")) {
scope += "/";
  }
  String nodeLocation = node.getNetworkLocation() + "/";
  return nodeLocation.startsWith(scope);
}
{code}
NodeBase#normalize() is used to normalize scope.

so I refactor 

 

 

> Optimize DFSNetworkTopology#isNodeInScope
> -
>
> Key: HDFS-14935
> URL: https://issues.apache.org/jira/browse/HDFS-14935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14935.001.patch, HDFS-14935.002.patch
>
>
> {code:java}
> private boolean isNodeInScope(Node node, String scope) {
>   if (!scope.endsWith("/")) {
>     scope += "/";
>   }
>   String nodeLocation = node.getNetworkLocation() + "/";
>   return nodeLocation.startsWith(scope);
> }
> {code}
> NodeBase#normalize() is used to normalize scope.
> so I refactor 
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14907) [Dynamometer] DataNode can't find junit jar when using Hadoop-3 binary

2019-10-25 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959833#comment-16959833
 ] 

Erik Krogen commented on HDFS-14907:


We already have some classpath manipulation in 
[{{start-component.sh}}|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra/src/main/resources/start-component.sh#L110];
 I think we can add it there?

> [Dynamometer] DataNode can't find junit jar when using Hadoop-3 binary
> --
>
> Key: HDFS-14907
> URL: https://issues.apache.org/jira/browse/HDFS-14907
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Priority: Major
>
> When executing {{start-dynamometer-cluster.sh}} with Hadoop-3 binary, 
> datanodes fail to run with the following log and 
> {{start-dynamometer-cluster.sh}} fails.
> {noformat}
> LogType:stderr
> LogLastModifiedTime:Wed Oct 09 15:03:09 +0900 2019
> LogLength:1386
> LogContents:
> Exception in thread "main" java.lang.NoClassDefFoundError: org/junit/Assert
> at 
> org.apache.hadoop.test.GenericTestUtils.assertExists(GenericTestUtils.java:299)
> at 
> org.apache.hadoop.test.GenericTestUtils.getTestDir(GenericTestUtils.java:243)
> at 
> org.apache.hadoop.test.GenericTestUtils.getTestDir(GenericTestUtils.java:252)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.getBaseDirectory(MiniDFSCluster.java:2982)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.determineDfsBaseDir(MiniDFSCluster.java:2972)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.formatDataNodeDirs(MiniDFSCluster.java:2834)
> at 
> org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.run(SimulatedDataNodes.java:123)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at 
> org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.main(SimulatedDataNodes.java:88)
> Caused by: java.lang.ClassNotFoundException: org.junit.Assert
> at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 9 more
> ./start-component.sh: line 317: kill: (2261) - No such process
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2361) Ozone Manager init & start command prints out unnecessary line in the beginning.

2019-10-25 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2361 started by YiSheng Lien.
--
> Ozone Manager init & start command prints out unnecessary line in the 
> beginning.
> 
>
> Key: HDDS-2361
> URL: https://issues.apache.org/jira/browse/HDDS-2361
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Assignee: YiSheng Lien
>Priority: Major
>
> {code}
> [root@avijayan-om-1 ozone-0.5.0-SNAPSHOT]# bin/ozone --daemon start om
> Ozone Manager classpath extended by
> {code}
> We could probably print this line only when extra elements are added to OM 
> classpath or skip printing this line altogether.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2361) Ozone Manager init & start command prints out unnecessary line in the beginning.

2019-10-25 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien updated HDDS-2361:
---
Description: 
{code}
[root@avijayan-om-1 ozone-0.5.0-SNAPSHOT]# bin/ozone --daemon start om
Ozone Manager classpath extended by
{code}

We could probably print this line only when extra elements are added to OM 
classpath or skip printing this line altogether.

  was:
{code}
[root@avijayan-om-1 ozone-0.5.0-SNAPSHOT]# bin/ozone --daemon start om
Ozone Manager classpath extended by
{code}

We could probably print this line only when extra elements are added to OM 
classpathor skip printing this line altogether.


> Ozone Manager init & start command prints out unnecessary line in the 
> beginning.
> 
>
> Key: HDDS-2361
> URL: https://issues.apache.org/jira/browse/HDDS-2361
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Aravindan Vijayan
>Assignee: YiSheng Lien
>Priority: Major
>
> {code}
> [root@avijayan-om-1 ozone-0.5.0-SNAPSHOT]# bin/ozone --daemon start om
> Ozone Manager classpath extended by
> {code}
> We could probably print this line only when extra elements are added to OM 
> classpath or skip printing this line altogether.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2219) Move all the ozone dist scripts/configs to one location

2019-10-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2219:
-
Labels: newbe pull-request-available  (was: newbe)

> Move all the ozone dist scripts/configs to one location
> ---
>
> Key: HDDS-2219
> URL: https://issues.apache.org/jira/browse/HDDS-2219
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: build
>Reporter: Marton Elek
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbe, pull-request-available
>
> The hadoop distribution tar file contains jar files, scripts, and default 
> configuration files.
> The scripts and configuration files are stored in multiple separate projects 
> for no particular reason:
> {code:java}
> ls hadoop-hdds/common/src/main/bin/
> hadoop-config.cmd  hadoop-config.sh  hadoop-daemons.sh  hadoop-functions.sh  
> workers.sh
> ls hadoop-ozone/common/src/main/bin 
> ozone  ozone-config.sh  start-ozone.sh  stop-ozone.sh
> ls hadoop-ozone/common/src/main/shellprofile.d 
> hadoop-ozone.sh
> ls hadoop-ozone/dist/src/main/conf 
> dn-audit-log4j2.properties  log4j.properties  om-audit-log4j2.properties  
> ozone-shell-log4j.properties  ozone-site.xml  scm-audit-log4j2.properties
>  {code}
> All of these scripts can be moved to the hadoop-ozone/dist/src/shell
> hadoop-ozone/dist/dev-support/bin/dist-layout-stitching also should be 
> updated to copy all of them to the right place in the tar.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2219) Move all the ozone dist scripts/configs to one location

2019-10-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2219?focusedWorklogId=334132=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-334132
 ]

ASF GitHub Bot logged work on HDDS-2219:


Author: ASF GitHub Bot
Created on: 25/Oct/19 13:13
Start Date: 25/Oct/19 13:13
Worklog Time Spent: 10m 
  Work Description: cxorm commented on pull request #85: HDDS-2219. Move 
all the ozone dist scripts/configs to one location
URL: https://github.com/apache/hadoop-ozone/pull/85
 
 
   ## What changes were proposed in this pull request?
   Relocate the separate scripts and configuration files to the same directory,
   and modify ```dist-layout-stitching``` to lay out these files in the right 
directory in the distribution.
   
   ## What is the link to the Apache JIRA
   https://issues.apache.org/jira/browse/HDDS-2219
   
   ## How was this patch tested?
   Just ran the command in ```hadoop-ozone/```:
   ```mvn clean package -Pdist -Dtar -DskipTests```
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 334132)
Remaining Estimate: 0h
Time Spent: 10m

> Move all the ozone dist scripts/configs to one location
> ---
>
> Key: HDDS-2219
> URL: https://issues.apache.org/jira/browse/HDDS-2219
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: build
>Reporter: Marton Elek
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbe, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The hadoop distribution tar file contains jar files, scripts, and default 
> configuration files.
> The scripts and configuration files are stored in multiple separate projects 
> for no particular reason:
> {code:java}
> ls hadoop-hdds/common/src/main/bin/
> hadoop-config.cmd  hadoop-config.sh  hadoop-daemons.sh  hadoop-functions.sh  
> workers.sh
> ls hadoop-ozone/common/src/main/bin 
> ozone  ozone-config.sh  start-ozone.sh  stop-ozone.sh
> ls hadoop-ozone/common/src/main/shellprofile.d 
> hadoop-ozone.sh
> ls hadoop-ozone/dist/src/main/conf 
> dn-audit-log4j2.properties  log4j.properties  om-audit-log4j2.properties  
> ozone-shell-log4j.properties  ozone-site.xml  scm-audit-log4j2.properties
>  {code}
> All of these scripts can be moved to the hadoop-ozone/dist/src/shell
> hadoop-ozone/dist/dev-support/bin/dist-layout-stitching also should be 
> updated to copy all of them to the right place in the tar.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2219) Move all the ozone dist scripts/configs to one location

2019-10-25 Thread YiSheng Lien (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959708#comment-16959708
 ] 

YiSheng Lien commented on HDDS-2219:


Hi [~elek], thanks for this JIRA.
A little question: could we also relocate the files in 
hadoop-hdds/common/src/main/conf/?

> Move all the ozone dist scripts/configs to one location
> ---
>
> Key: HDDS-2219
> URL: https://issues.apache.org/jira/browse/HDDS-2219
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: build
>Reporter: Marton Elek
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbe
>
> The hadoop distribution tar file contains jar files, scripts, and default 
> configuration files.
> The scripts and configuration files are stored in multiple separate projects 
> for no particular reason:
> {code:java}
> ls hadoop-hdds/common/src/main/bin/
> hadoop-config.cmd  hadoop-config.sh  hadoop-daemons.sh  hadoop-functions.sh  
> workers.sh
> ls hadoop-ozone/common/src/main/bin 
> ozone  ozone-config.sh  start-ozone.sh  stop-ozone.sh
> ls hadoop-ozone/common/src/main/shellprofile.d 
> hadoop-ozone.sh
> ls hadoop-ozone/dist/src/main/conf 
> dn-audit-log4j2.properties  log4j.properties  om-audit-log4j2.properties  
> ozone-shell-log4j.properties  ozone-site.xml  scm-audit-log4j2.properties
>  {code}
> All of these scripts can be moved to the hadoop-ozone/dist/src/shell
> hadoop-ozone/dist/dev-support/bin/dist-layout-stitching also should be 
> updated to copy all of them to the right place in the tar.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14922) On StartUp , Snapshot modification time got changed

2019-10-25 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959700#comment-16959700
 ] 

Hadoop QA commented on HDFS-14922:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 51s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 2 new + 622 unchanged - 1 fixed = 624 total (was 623) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
|   | hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14922 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12984012/HDFS-14922.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 685fabb0a20c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8625265 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28182/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28182/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Commented] (HDFS-14935) Optimize DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959690#comment-16959690
 ] 

Hadoop QA commented on HDFS-14935:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
53s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}111m 21s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}182m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.hdfs.server.balancer.TestBalancer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14935 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12984010/HDFS-14935.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d03ef3b4520c 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8625265 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28181/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28181/testReport/ |
| Max. process+thread count | 2547 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Commented] (HDFS-14745) Backport HDFS persistent memory read cache support to branch-3.1

2019-10-25 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959666#comment-16959666
 ] 

Hadoop QA commented on HDFS-14745:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 22m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.1 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
13s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
16s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
27s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
42s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
42s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
27s{color} | {color:green} branch-3.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 12m 29s{color} 
| {color:red} root generated 5 new + 1269 unchanged - 5 fixed = 1274 total (was 
1274) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 22s{color} | {color:orange} root: The patch generated 1 new + 772 unchanged 
- 11 fixed = 773 total (was 783) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
10s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}175m 53s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
57s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}343m 36s{color} | 
{color:black} {color} |
\\
\\
|| 

[jira] [Commented] (HDDS-1847) Datanode Kerberos principal and keytab config key looks inconsistent

2019-10-25 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959648#comment-16959648
 ] 

Hadoop QA commented on HDDS-1847:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-hdds in trunk failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
40s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-hdds in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
13s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-hdds in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 17m 
21s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-hdds in trunk failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
21s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 21s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 15s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-hdds: The patch generated 43 new + 0 
unchanged - 0 fixed = 43 total (was 0) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-ozone: The patch generated 5 new + 0 
unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| 

[jira] [Work logged] (HDDS-1847) Datanode Kerberos principal and keytab config key looks inconsistent

2019-10-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1847?focusedWorklogId=334068=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-334068
 ]

ASF GitHub Bot logged work on HDDS-1847:


Author: ASF GitHub Bot
Created on: 25/Oct/19 10:54
Start Date: 25/Oct/19 10:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1678: HDDS-1847: 
Datanode Kerberos principal and keytab config key looks inconsistent
URL: https://github.com/apache/hadoop/pull/1678#issuecomment-546306818
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 77 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 59 | Maven dependency ordering for branch |
   | -1 | mvninstall | 36 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 40 | hadoop-ozone in trunk failed. |
   | -1 | compile | 18 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 56 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 955 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 18 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1041 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 30 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 17 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | -1 | mvninstall | 32 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 35 | hadoop-ozone in the patch failed. |
   | -1 | compile | 21 | hadoop-hdds in the patch failed. |
   | -1 | compile | 15 | hadoop-ozone in the patch failed. |
   | -1 | javac | 21 | hadoop-hdds in the patch failed. |
   | -1 | javac | 15 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 26 | hadoop-hdds: The patch generated 43 new + 0 
unchanged - 0 fixed = 43 total (was 0) |
   | -0 | checkstyle | 27 | hadoop-ozone: The patch generated 5 new + 0 
unchanged - 0 fixed = 5 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 818 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 16 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 29 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 16 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 24 | hadoop-hdds in the patch failed. |
   | -1 | unit | 23 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 2608 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.4 Server=19.03.4 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1678/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1678 |
   | JIRA Issue | HDDS-1847 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 5736d78e1d6f 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8625265 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1678/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1678/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1678/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1678/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1678/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1678/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1678/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1678/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | 

[jira] [Work logged] (HDDS-1847) Datanode Kerberos principal and keytab config key looks inconsistent

2019-10-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1847?focusedWorklogId=334061=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-334061
 ]

ASF GitHub Bot logged work on HDDS-1847:


Author: ASF GitHub Bot
Created on: 25/Oct/19 10:16
Start Date: 25/Oct/19 10:16
Worklog Time Spent: 10m 
  Work Description: christeoh commented on issue #1678: HDDS-1847: Datanode 
Kerberos principal and keytab config key looks inconsistent
URL: https://github.com/apache/hadoop/pull/1678#issuecomment-546295286
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 334061)
Time Spent: 0.5h  (was: 20m)

> Datanode Kerberos principal and keytab config key looks inconsistent
> 
>
> Key: HDDS-1847
> URL: https://issues.apache.org/jira/browse/HDDS-1847
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Eric Yang
>Assignee: Chris Teoh
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Ozone Kerberos configuration can be very confusing:
> | config name | Description |
> | hdds.scm.kerberos.principal | SCM service principal |
> | hdds.scm.kerberos.keytab.file | SCM service keytab file |
> | ozone.om.kerberos.principal | Ozone Manager service principal |
> | ozone.om.kerberos.keytab.file | Ozone Manager keytab file |
> | hdds.scm.http.kerberos.principal | SCM service spnego principal |
> | hdds.scm.http.kerberos.keytab.file | SCM service spnego keytab file |
> | ozone.om.http.kerberos.principal | Ozone Manager spnego principal |
> | ozone.om.http.kerberos.keytab.file | Ozone Manager spnego keytab file |
> | hdds.datanode.http.kerberos.keytab | Datanode spnego keytab file |
> | hdds.datanode.http.kerberos.principal | Datanode spnego principal |
> | dfs.datanode.kerberos.principal | Datanode service principal |
> | dfs.datanode.keytab.file | Datanode service keytab file |
> The prefixes are very different for each of the datanode configurations. It 
> would be nice to have some consistency for the datanode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14935) Optimize DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959623#comment-16959623
 ] 

Lisheng Sun edited comment on HDFS-14935 at 10/25/19 9:59 AM:
--

Hi [~ayushtkn],

This Jira is a code cleanup: it avoids writing repetitive code.

We can reuse the existing method as follows:
{code:java}
// Call site in DFSNetworkTopology#isNodeInScope:
scope = NodeBase.normalize(scope);

// Existing NodeBase#normalize:
public static String normalize(String path) {
  if (path == null) {
    throw new IllegalArgumentException("Network Location is null ");
  }

  if (path.length() == 0) {
    return ROOT;
  }

  if (path.charAt(0) != PATH_SEPARATOR) {
    throw new IllegalArgumentException(
        "Network Location path does not start with "
            + PATH_SEPARATOR_STR + ": " + path);
  }

  int len = path.length();
  if (path.charAt(len - 1) == PATH_SEPARATOR) {
    return path.substring(0, len - 1);
  }
  return path;
}
{code}
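
For reference, a minimal usage sketch of the normalization behavior shown above (illustrative only: the class name is hypothetical, while NodeBase and its ROOT/separator constants are the existing Hadoop ones):
{code:java}
import org.apache.hadoop.net.NodeBase;

public class NormalizeSketch {
  public static void main(String[] args) {
    // An empty scope normalizes to ROOT (the empty string).
    System.out.println("[" + NodeBase.normalize("") + "]");   // prints []
    // A trailing separator is stripped.
    System.out.println(NodeBase.normalize("/default-rack/")); // prints /default-rack
    // An already-normalized scope is returned unchanged.
    System.out.println(NodeBase.normalize("/default-rack"));  // prints /default-rack
    // A scope that does not start with '/' throws IllegalArgumentException.
  }
}
{code}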


was (Author: leosun08):
Hi [~ayushtkn],

This Jira is a code cleanup: it avoids writing repetitive code.

We can reuse the existing method as follows:
{code:java}
// Call site in DFSNetworkTopology#isNodeInScope:
scope = NodeBase.normalize(scope);

// Existing NodeBase#normalize:
public static String normalize(String path) {
  if (path == null) {
    throw new IllegalArgumentException("Network Location is null ");
  }

  if (path.length() == 0) {
    return ROOT;
  }

  if (path.charAt(0) != PATH_SEPARATOR) {
    throw new IllegalArgumentException(
        "Network Location path does not start with "
            + PATH_SEPARATOR_STR + ": " + path);
  }

  int len = path.length();
  if (path.charAt(len - 1) == PATH_SEPARATOR) {
    return path.substring(0, len - 1);
  }
  return path;
}
{code}
 https://issues.apache.org/jira/secure/attachment/12984010/HDFS-14935.002.patch

> Optimize DFSNetworkTopology#isNodeInScope
> -
>
> Key: HDFS-14935
> URL: https://issues.apache.org/jira/browse/HDFS-14935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14935.001.patch, HDFS-14935.002.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14935) Optimize DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959623#comment-16959623
 ] 

Lisheng Sun edited comment on HDFS-14935 at 10/25/19 9:58 AM:
--

Hi [~ayushtkn],

This Jira is a code cleanup: it avoids writing repetitive code.

We can reuse the existing method as follows:
{code:java}
// Call site in DFSNetworkTopology#isNodeInScope:
scope = NodeBase.normalize(scope);

// Existing NodeBase#normalize:
public static String normalize(String path) {
  if (path == null) {
    throw new IllegalArgumentException("Network Location is null ");
  }

  if (path.length() == 0) {
    return ROOT;
  }

  if (path.charAt(0) != PATH_SEPARATOR) {
    throw new IllegalArgumentException(
        "Network Location path does not start with "
            + PATH_SEPARATOR_STR + ": " + path);
  }

  int len = path.length();
  if (path.charAt(len - 1) == PATH_SEPARATOR) {
    return path.substring(0, len - 1);
  }
  return path;
}
{code}
 


was (Author: leosun08):
Hi [~ayushtkn],

This Jira is a code cleanup: it avoids writing repetitive code.
https://issues.apache.org/jira/secure/attachment/12984010/HDFS-14935.002.patch

We can reuse the existing method as follows:
{code:java}
// Call site in DFSNetworkTopology#isNodeInScope:
scope = NodeBase.normalize(scope);

// Existing NodeBase#normalize:
public static String normalize(String path) {
  if (path == null) {
    throw new IllegalArgumentException("Network Location is null ");
  }

  if (path.length() == 0) {
    return ROOT;
  }

  if (path.charAt(0) != PATH_SEPARATOR) {
    throw new IllegalArgumentException(
        "Network Location path does not start with "
            + PATH_SEPARATOR_STR + ": " + path);
  }

  int len = path.length();
  if (path.charAt(len - 1) == PATH_SEPARATOR) {
    return path.substring(0, len - 1);
  }
  return path;
}
{code}
 

> Optimize DFSNetworkTopology#isNodeInScope
> -
>
> Key: HDFS-14935
> URL: https://issues.apache.org/jira/browse/HDFS-14935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14935.001.patch, HDFS-14935.002.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14935) Optimize DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959623#comment-16959623
 ] 

Lisheng Sun edited comment on HDFS-14935 at 10/25/19 9:58 AM:
--

Hi [~ayushtkn],

This Jira is a code cleanup: it avoids writing repetitive code.

We can reuse the existing method as follows:
{code:java}
// Call site in DFSNetworkTopology#isNodeInScope:
scope = NodeBase.normalize(scope);

// Existing NodeBase#normalize:
public static String normalize(String path) {
  if (path == null) {
    throw new IllegalArgumentException("Network Location is null ");
  }

  if (path.length() == 0) {
    return ROOT;
  }

  if (path.charAt(0) != PATH_SEPARATOR) {
    throw new IllegalArgumentException(
        "Network Location path does not start with "
            + PATH_SEPARATOR_STR + ": " + path);
  }

  int len = path.length();
  if (path.charAt(len - 1) == PATH_SEPARATOR) {
    return path.substring(0, len - 1);
  }
  return path;
}
{code}
 https://issues.apache.org/jira/secure/attachment/12984010/HDFS-14935.002.patch


was (Author: leosun08):
Hi [~ayushtkn],

This Jira is a code cleanup: it avoids writing repetitive code.

We can reuse the existing method as follows:
{code:java}
// Call site in DFSNetworkTopology#isNodeInScope:
scope = NodeBase.normalize(scope);

// Existing NodeBase#normalize:
public static String normalize(String path) {
  if (path == null) {
    throw new IllegalArgumentException("Network Location is null ");
  }

  if (path.length() == 0) {
    return ROOT;
  }

  if (path.charAt(0) != PATH_SEPARATOR) {
    throw new IllegalArgumentException(
        "Network Location path does not start with "
            + PATH_SEPARATOR_STR + ": " + path);
  }

  int len = path.length();
  if (path.charAt(len - 1) == PATH_SEPARATOR) {
    return path.substring(0, len - 1);
  }
  return path;
}
{code}
 

> Optimize DFSNetworkTopology#isNodeInScope
> -
>
> Key: HDFS-14935
> URL: https://issues.apache.org/jira/browse/HDFS-14935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14935.001.patch, HDFS-14935.002.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14935) Optimize DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959623#comment-16959623
 ] 

Lisheng Sun edited comment on HDFS-14935 at 10/25/19 9:57 AM:
--

Hi [~ayushtkn],

This Jira is a code cleanup: it avoids writing repetitive code.
https://issues.apache.org/jira/secure/attachment/12984010/HDFS-14935.002.patch

We can reuse the existing method as follows:
{code:java}
// Call site in DFSNetworkTopology#isNodeInScope:
scope = NodeBase.normalize(scope);

// Existing NodeBase#normalize:
public static String normalize(String path) {
  if (path == null) {
    throw new IllegalArgumentException("Network Location is null ");
  }

  if (path.length() == 0) {
    return ROOT;
  }

  if (path.charAt(0) != PATH_SEPARATOR) {
    throw new IllegalArgumentException(
        "Network Location path does not start with "
            + PATH_SEPARATOR_STR + ": " + path);
  }

  int len = path.length();
  if (path.charAt(len - 1) == PATH_SEPARATOR) {
    return path.substring(0, len - 1);
  }
  return path;
}
{code}
 


was (Author: leosun08):
Hi [~ayushtkn],

This Jira is a code cleanup: it avoids writing repetitive code.

We can reuse the existing method as follows:
{code:java}
// Call site in DFSNetworkTopology#isNodeInScope:
scope = NodeBase.normalize(scope);

// Existing NodeBase#normalize:
public static String normalize(String path) {
  if (path == null) {
    throw new IllegalArgumentException("Network Location is null ");
  }

  if (path.length() == 0) {
    return ROOT;
  }

  if (path.charAt(0) != PATH_SEPARATOR) {
    throw new IllegalArgumentException(
        "Network Location path does not start with "
            + PATH_SEPARATOR_STR + ": " + path);
  }

  int len = path.length();
  if (path.charAt(len - 1) == PATH_SEPARATOR) {
    return path.substring(0, len - 1);
  }
  return path;
}
{code}
 

> Optimize DFSNetworkTopology#isNodeInScope
> -
>
> Key: HDFS-14935
> URL: https://issues.apache.org/jira/browse/HDFS-14935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14935.001.patch, HDFS-14935.002.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14935) Optimize DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959623#comment-16959623
 ] 

Lisheng Sun edited comment on HDFS-14935 at 10/25/19 9:55 AM:
--

Hi [~ayushtkn],

This Jira is a code cleanup: it avoids writing repetitive code.

We can reuse the existing method as follows:
{code:java}
// Call site in DFSNetworkTopology#isNodeInScope:
scope = NodeBase.normalize(scope);

// Existing NodeBase#normalize:
public static String normalize(String path) {
  if (path == null) {
    throw new IllegalArgumentException("Network Location is null ");
  }

  if (path.length() == 0) {
    return ROOT;
  }

  if (path.charAt(0) != PATH_SEPARATOR) {
    throw new IllegalArgumentException(
        "Network Location path does not start with "
            + PATH_SEPARATOR_STR + ": " + path);
  }

  int len = path.length();
  if (path.charAt(len - 1) == PATH_SEPARATOR) {
    return path.substring(0, len - 1);
  }
  return path;
}
{code}
 


was (Author: leosun08):
This Jira is a code cleanup: it avoids writing repetitive code.

We can reuse the existing method as follows:
{code:java}
// Call site in DFSNetworkTopology#isNodeInScope:
scope = NodeBase.normalize(scope);

// Existing NodeBase#normalize:
public static String normalize(String path) {
  if (path == null) {
    throw new IllegalArgumentException("Network Location is null ");
  }

  if (path.length() == 0) {
    return ROOT;
  }

  if (path.charAt(0) != PATH_SEPARATOR) {
    throw new IllegalArgumentException(
        "Network Location path does not start with "
            + PATH_SEPARATOR_STR + ": " + path);
  }

  int len = path.length();
  if (path.charAt(len - 1) == PATH_SEPARATOR) {
    return path.substring(0, len - 1);
  }
  return path;
}
{code}
 

> Optimize DFSNetworkTopology#isNodeInScope
> -
>
> Key: HDFS-14935
> URL: https://issues.apache.org/jira/browse/HDFS-14935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14935.001.patch, HDFS-14935.002.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14935) Optimize DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959623#comment-16959623
 ] 

Lisheng Sun commented on HDFS-14935:


This Jira is a code cleanup: it avoids writing repetitive code.

We can reuse the existing method as follows:
{code:java}
// Call site in DFSNetworkTopology#isNodeInScope:
scope = NodeBase.normalize(scope);

// Existing NodeBase#normalize:
public static String normalize(String path) {
  if (path == null) {
    throw new IllegalArgumentException("Network Location is null ");
  }

  if (path.length() == 0) {
    return ROOT;
  }

  if (path.charAt(0) != PATH_SEPARATOR) {
    throw new IllegalArgumentException(
        "Network Location path does not start with "
            + PATH_SEPARATOR_STR + ": " + path);
  }

  int len = path.length();
  if (path.charAt(len - 1) == PATH_SEPARATOR) {
    return path.substring(0, len - 1);
  }
  return path;
}
{code}
 

> Optimize DFSNetworkTopology#isNodeInScope
> -
>
> Key: HDFS-14935
> URL: https://issues.apache.org/jira/browse/HDFS-14935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14935.001.patch, HDFS-14935.002.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1847) Datanode Kerberos principal and keytab config key looks inconsistent

2019-10-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1847?focusedWorklogId=334042=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-334042
 ]

ASF GitHub Bot logged work on HDDS-1847:


Author: ASF GitHub Bot
Created on: 25/Oct/19 09:42
Start Date: 25/Oct/19 09:42
Worklog Time Spent: 10m 
  Work Description: christeoh commented on issue #1678: HDDS-1847: Datanode 
Kerberos principal and keytab config key looks inconsistent
URL: https://github.com/apache/hadoop/pull/1678#issuecomment-546284039
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 334042)
Time Spent: 20m  (was: 10m)

> Datanode Kerberos principal and keytab config key looks inconsistent
> 
>
> Key: HDDS-1847
> URL: https://issues.apache.org/jira/browse/HDDS-1847
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Eric Yang
>Assignee: Chris Teoh
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Ozone Kerberos configuration can be very confusing:
> | config name | Description |
> | hdds.scm.kerberos.principal | SCM service principal |
> | hdds.scm.kerberos.keytab.file | SCM service keytab file |
> | ozone.om.kerberos.principal | Ozone Manager service principal |
> | ozone.om.kerberos.keytab.file | Ozone Manager keytab file |
> | hdds.scm.http.kerberos.principal | SCM service spnego principal |
> | hdds.scm.http.kerberos.keytab.file | SCM service spnego keytab file |
> | ozone.om.http.kerberos.principal | Ozone Manager spnego principal |
> | ozone.om.http.kerberos.keytab.file | Ozone Manager spnego keytab file |
> | hdds.datanode.http.kerberos.keytab | Datanode spnego keytab file |
> | hdds.datanode.http.kerberos.principal | Datanode spnego principal |
> | dfs.datanode.kerberos.principal | Datanode service principal |
> | dfs.datanode.keytab.file | Datanode service keytab file |
> The prefixes are very different for each of the datanode configurations. It 
> would be nice to have some consistency for the datanode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1847) Datanode Kerberos principal and keytab config key looks inconsistent

2019-10-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1847:
-
Labels: newbie pull-request-available  (was: newbie)

> Datanode Kerberos principal and keytab config key looks inconsistent
> 
>
> Key: HDDS-1847
> URL: https://issues.apache.org/jira/browse/HDDS-1847
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Eric Yang
>Assignee: Chris Teoh
>Priority: Major
>  Labels: newbie, pull-request-available
>
> Ozone Kerberos configuration can be very confusing:
> | config name | Description |
> | hdds.scm.kerberos.principal | SCM service principal |
> | hdds.scm.kerberos.keytab.file | SCM service keytab file |
> | ozone.om.kerberos.principal | Ozone Manager service principal |
> | ozone.om.kerberos.keytab.file | Ozone Manager keytab file |
> | hdds.scm.http.kerberos.principal | SCM service spnego principal |
> | hdds.scm.http.kerberos.keytab.file | SCM service spnego keytab file |
> | ozone.om.http.kerberos.principal | Ozone Manager spnego principal |
> | ozone.om.http.kerberos.keytab.file | Ozone Manager spnego keytab file |
> | hdds.datanode.http.kerberos.keytab | Datanode spnego keytab file |
> | hdds.datanode.http.kerberos.principal | Datanode spnego principal |
> | dfs.datanode.kerberos.principal | Datanode service principal |
> | dfs.datanode.keytab.file | Datanode service keytab file |
> The prefixes are very different for each of the datanode configurations. It 
> would be nice to have some consistency for the datanode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1847) Datanode Kerberos principal and keytab config key looks inconsistent

2019-10-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1847?focusedWorklogId=334040=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-334040
 ]

ASF GitHub Bot logged work on HDDS-1847:


Author: ASF GitHub Bot
Created on: 25/Oct/19 09:40
Start Date: 25/Oct/19 09:40
Worklog Time Spent: 10m 
  Work Description: christeoh commented on pull request #1678: HDDS-1847: 
Datanode Kerberos principal and keytab config key looks inconsistent
URL: https://github.com/apache/hadoop/pull/1678
 
 
   Refactored the following configurations out of ScmConfigKeys into Java-based 
configuration classes (see the sketch after this list):
   
   - HDDS_SCM_KERBEROS_KEYTAB_FILE_KEY
   - HDDS_SCM_KERBEROS_PRINCIPAL_KEY
   - HDDS_SCM_HTTP_KERBEROS_PRINCIPAL_KEY
   - HDDS_SCM_HTTP_KERBEROS_KEYTAB_FILE_KEY
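
   A hedged sketch of what such a Java-based configuration class could look like, assuming the hdds @Config/@ConfigGroup annotation facility; the class name, field names, and tag choices below are illustrative, not the actual patch:
{code:java}
import org.apache.hadoop.hdds.conf.Config;
import org.apache.hadoop.hdds.conf.ConfigGroup;
import org.apache.hadoop.hdds.conf.ConfigTag;

// Illustrative only: grouping the SCM Kerberos keys under one prefix so the
// full names become hdds.scm.kerberos.principal and
// hdds.scm.kerberos.keytab.file.
@ConfigGroup(prefix = "hdds.scm")
public class ScmKerberosConfig {

  private String principal;
  private String keytabFile;

  @Config(key = "kerberos.principal",
      defaultValue = "",
      tags = {ConfigTag.SECURITY},
      description = "Kerberos principal used by the SCM service.")
  public void setPrincipal(String principal) {
    this.principal = principal;
  }

  @Config(key = "kerberos.keytab.file",
      defaultValue = "",
      tags = {ConfigTag.SECURITY},
      description = "Keytab file for the SCM service principal.")
  public void setKeytabFile(String keytabFile) {
    this.keytabFile = keytabFile;
  }

  public String getPrincipal() {
    return principal;
  }

  public String getKeytabFile() {
    return keytabFile;
  }
}
{code}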
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 334040)
Remaining Estimate: 0h
Time Spent: 10m

> Datanode Kerberos principal and keytab config key looks inconsistent
> 
>
> Key: HDDS-1847
> URL: https://issues.apache.org/jira/browse/HDDS-1847
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Eric Yang
>Assignee: Chris Teoh
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Ozone Kerberos configuration can be very confusing:
> | config name | Description |
> | hdds.scm.kerberos.principal | SCM service principal |
> | hdds.scm.kerberos.keytab.file | SCM service keytab file |
> | ozone.om.kerberos.principal | Ozone Manager service principal |
> | ozone.om.kerberos.keytab.file | Ozone Manager keytab file |
> | hdds.scm.http.kerberos.principal | SCM service spnego principal |
> | hdds.scm.http.kerberos.keytab.file | SCM service spnego keytab file |
> | ozone.om.http.kerberos.principal | Ozone Manager spnego principal |
> | ozone.om.http.kerberos.keytab.file | Ozone Manager spnego keytab file |
> | hdds.datanode.http.kerberos.keytab | Datanode spnego keytab file |
> | hdds.datanode.http.kerberos.principal | Datanode spnego principal |
> | dfs.datanode.kerberos.principal | Datanode service principal |
> | dfs.datanode.keytab.file | Datanode service keytab file |
> The prefixes are very different for each of the datanode configurations. It 
> would be nice to have some consistency for the datanode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14308) DFSStripedInputStream curStripeBuf is not freed by unbuffer()

2019-10-25 Thread Zhao Yi Ming (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959599#comment-16959599
 ] 

Zhao Yi Ming commented on HDFS-14308:
-

[~lindongdong] Thanks for your review! Yes, you are right; I changed the code 
based on your comments. Since we use the GitHub commits to track the code 
review rather than the Jira patch, it would be great if you could add your 
comments on GitHub. Thanks again!

> DFSStripedInputStream curStripeBuf is not freed by unbuffer()
> -
>
> Key: HDFS-14308
> URL: https://issues.apache.org/jira/browse/HDFS-14308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.0.0
>Reporter: Joe McDonnell
>Assignee: Zhao Yi Ming
>Priority: Major
> Attachments: ec_heap_dump.png
>
>
> Some users of HDFS cache opened HDFS file handles to avoid repeated 
> roundtrips to the NameNode. For example, Impala caches up to 20,000 HDFS file 
> handles by default. Recent tests on erasure coded files show that the open 
> file handles can consume a large amount of memory when not in use.
> For example, here is output from Impala's JMX endpoint when 608 file handles 
> are cached
> {noformat}
> {
> "name": "java.nio:type=BufferPool,name=direct",
> "modelerType": "sun.management.ManagementFactoryHelper$1",
> "Name": "direct",
> "TotalCapacity": 1921048960,
> "MemoryUsed": 1921048961,
> "Count": 633,
> "ObjectName": "java.nio:type=BufferPool,name=direct"
> },{noformat}
> This shows direct buffer memory usage of 3MB per DFSStripedInputStream. 
> Attached is output from Eclipse MAT showing that the direct buffers come from 
> DFSStripedInputStream objects. Both Impala and HBase call unbuffer() when a 
> file handle is being cached and potentially unused for significant chunks of 
> time, yet this shows that the memory remains in use.
> To support caching file handles on erasure coded files, DFSStripedInputStream 
> should avoid holding buffers after the unbuffer() call. See HDFS-7694. 
> "unbuffer()" is intended to move an input stream to a lower memory state to 
> support these caching use cases. In particular, the curStripeBuf seems to be 
> allocated from the BUFFER_POOL on a resetCurStripeBuffer(true) call. It is 
> not freed until close().
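
A minimal sketch of the caching pattern described above (illustrative only: the class name and file path are hypothetical, while FSDataInputStream#unbuffer is the existing API introduced by HDFS-7694):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CachedHandleSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FSDataInputStream in = fs.open(new Path("/ec/dir/file"));
    byte[] buf = new byte[4096];
    in.read(0, buf, 0, buf.length); // positioned read; handle stays reusable
    // Move the stream to a low-memory state while it sits in a handle cache.
    // For striped (EC) streams this should release curStripeBuf, but per this
    // issue the buffer is only returned to BUFFER_POOL on close().
    in.unbuffer();
    in.close();
  }
}
{code}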



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14922) On StartUp , Snapshot modification time got changed

2019-10-25 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14922:
-
Attachment: HDFS-14922.001.patch
Status: Patch Available  (was: Open)

> On StartUp , Snapshot modification time got changed
> ---
>
> Key: HDFS-14922
> URL: https://issues.apache.org/jira/browse/HDFS-14922
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14922.001.patch
>
>
> Snapshot modification time got changed on namenode restart



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2365) TestRatisPipelineProvider#testCreatePipelinesDnExclude is flaky

2019-10-25 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2365:
---
Status: Patch Available  (was: In Progress)

> TestRatisPipelineProvider#testCreatePipelinesDnExclude is flaky
> ---
>
> Key: HDDS-2365
> URL: https://issues.apache.org/jira/browse/HDDS-2365
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> TestRatisPipelineProvider#testCreatePipelinesDnExclude is flaky, failing in 
> CI intermittently:
> * 
> https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2360-9pxww/integration/hadoop-ozone/integration-test/org.apache.hadoop.hdds.scm.pipeline.TestRatisPipelineProvider.txt
> * 
> https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2352-cxhw9/integration/hadoop-ozone/integration-test/org.apache.hadoop.hdds.scm.pipeline.TestRatisPipelineProvider.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14935) Optimize DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959566#comment-16959566
 ] 

Ayush Saxena commented on HDFS-14935:
-

How does this optimize the code or improve performance?

> Optimize DFSNetworkTopology#isNodeInScope
> -
>
> Key: HDFS-14935
> URL: https://issues.apache.org/jira/browse/HDFS-14935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14935.001.patch, HDFS-14935.002.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14935) Optimize DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14935:
---
Attachment: HDFS-14935.002.patch

> Optimize DFSNetworkTopology#isNodeInScope
> -
>
> Key: HDFS-14935
> URL: https://issues.apache.org/jira/browse/HDFS-14935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14935.001.patch, HDFS-14935.002.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14934) [SBN Read] Standby NN throws many InterruptedExceptions when dfs.ha.tail-edits.period is 0

2019-10-25 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959563#comment-16959563
 ] 

Takanobu Asanuma commented on HDFS-14934:
-

Thanks for your reply, [~ayushtkn]. We have seen the issue with 
trunk (3f89084ac756c9296d412821d76ff2bee57d0c2f), which includes HDFS-14655.

> [SBN Read] Standby NN throws many InterruptedExceptions when 
> dfs.ha.tail-edits.period is 0
> --
>
> Key: HDFS-14934
> URL: https://issues.apache.org/jira/browse/HDFS-14934
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Takanobu Asanuma
>Priority: Major
>
> When dfs.ha.tail-edits.period is 0 ms (or a very short time), there are many 
> warn logs in the standby NN.
> {noformat}
> 2019-10-25 16:25:46,945 [Logger channel (from parallel executor) to <NN 
> hostname>/<ip address>:<port>] WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(55)) - Thread 
> (Thread[Logger channel (from parallel executor) to <NN hostname>/<ip 
> address>:<port>,5,main]) interrupted: 
> java.lang.InterruptedException
>   at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:509)
>   at 
> com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:82)
>   at 
> org.apache.hadoop.util.concurrent.ExecutorHelper.logThrowableFromAfterExecute(ExecutorHelper.java:48)
>   at 
> org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor.afterExecute(HadoopThreadPoolExecutor.java:90)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1157)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
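
A small illustrative snippet for reproducing the setup described above (the class name is hypothetical; the key constant is the existing DFSConfigKeys entry for dfs.ha.tail-edits.period):
{code:java}
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

public class TailEditsPeriodSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Tail edits continuously on the standby: the configuration that
    // produces the InterruptedException warnings shown above.
    conf.setTimeDuration(DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_KEY,
        0, TimeUnit.MILLISECONDS);
    System.out.println(conf.get(DFSConfigKeys.DFS_HA_TAILEDITS_PERIOD_KEY));
  }
}
{code}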



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14935) Optimize DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14935:
---
Summary: Optimize DFSNetworkTopology#isNodeInScope  (was: Unified constant 
in DFSNetworkTopology#isNodeInScope)

> Optimize DFSNetworkTopology#isNodeInScope
> -
>
> Key: HDFS-14935
> URL: https://issues.apache.org/jira/browse/HDFS-14935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14935.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14935) Unified constant in DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14935:
---
Attachment: (was: HDFS-14935.001.patch)

> Unified constant in DFSNetworkTopology#isNodeInScope
> 
>
> Key: HDFS-14935
> URL: https://issues.apache.org/jira/browse/HDFS-14935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14935.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14935) Unified constant in DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14935:
---
Attachment: HDFS-14935.001.patch

> Unified constant in DFSNetworkTopology#isNodeInScope
> 
>
> Key: HDFS-14935
> URL: https://issues.apache.org/jira/browse/HDFS-14935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14935.001.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14935) Unified constant in DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun reassigned HDFS-14935:
--

Assignee: Lisheng Sun

> Unified constant in DFSNetworkTopology#isNodeInScope
> 
>
> Key: HDFS-14935
> URL: https://issues.apache.org/jira/browse/HDFS-14935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14935.001.patch
>
>







[jira] [Updated] (HDFS-14935) Unified constant in DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14935:
---
Attachment: HDFS-14935.001.patch
Status: Patch Available  (was: Open)

> Unified constant in DFSNetworkTopology#isNodeInScope
> 
>
> Key: HDFS-14935
> URL: https://issues.apache.org/jira/browse/HDFS-14935
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14935.001.patch
>
>







[jira] [Created] (HDFS-14935) Unified constant in DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Lisheng Sun (Jira)
Lisheng Sun created HDFS-14935:
--

 Summary: Unified constant in DFSNetworkTopology#isNodeInScope
 Key: HDFS-14935
 URL: https://issues.apache.org/jira/browse/HDFS-14935
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Lisheng Sun
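
Since the ticket has no description yet, here is a rough sketch of the kind of change the summary (and the later rename to "Optimize DFSNetworkTopology#isNodeInScope") suggests: unify the hard-coded "/" path-separator literals in the scope check behind the shared NodeBase constant. The method body below is illustrative, not the actual patch:

{code:java}
import org.apache.hadoop.net.Node;
import org.apache.hadoop.net.NodeBase;

// Sketch only: routes the "/" literal through NodeBase.PATH_SEPARATOR_STR
// in a scope-membership check. Verify against the real
// DFSNetworkTopology#isNodeInScope before assuming this matches the patch.
public class ScopeCheckSketch {
  static boolean isNodeInScope(Node node, String scope) {
    if (!scope.endsWith(NodeBase.PATH_SEPARATOR_STR)) {
      scope += NodeBase.PATH_SEPARATOR_STR;
    }
    // Appending the separator prevents "/rack1" from matching "/rack10".
    String nodeLocation =
        node.getNetworkLocation() + NodeBase.PATH_SEPARATOR_STR;
    return nodeLocation.startsWith(scope);
  }
}
{code}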









[jira] [Commented] (HDFS-14934) [SBN Read] Standby NN throws many InterruptedExceptions when dfs.ha.tail-edits.period is 0

2019-10-25 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959551#comment-16959551
 ] 

Ayush Saxena commented on HDFS-14934:
-

Well, we had this too. As I said in
https://issues.apache.org/jira/browse/HDFS-14655?focusedCommentId=16920126=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16920126
which version are you seeing this on?

> [SBN Read] Standby NN throws many InterruptedExceptions when 
> dfs.ha.tail-edits.period is 0
> --
>
> Key: HDFS-14934
> URL: https://issues.apache.org/jira/browse/HDFS-14934
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Takanobu Asanuma
>Priority: Major
>
> When dfs.ha.tail-edits.period is 0ms (or a very short time), there are many 
> warn logs in the standby NN.
> {noformat}
> 2019-10-25 16:25:46,945 [Logger channel (from parallel executor) to <hostname>/<IP address>:<port>] WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(55)) - Thread 
> (Thread[Logger channel (from parallel executor) to <hostname>/<IP address>:<port>,5,main]) interrupted: 
> java.lang.InterruptedException
>   at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:509)
>   at 
> com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:82)
>   at 
> org.apache.hadoop.util.concurrent.ExecutorHelper.logThrowableFromAfterExecute(ExecutorHelper.java:48)
>   at 
> org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor.afterExecute(HadoopThreadPoolExecutor.java:90)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1157)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}






[jira] [Commented] (HDFS-14775) Add Timestamp for longest FSN write/read lock held log

2019-10-25 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959549#comment-16959549
 ] 

Hadoop QA commented on HDFS-14775:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}146m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14775 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12983992/HDFS-14775.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 56ee260a540d 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0db0f1e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28179/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28179/testReport/ |
| Max. process+thread count | 3279 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28179/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |



[jira] [Commented] (HDFS-14934) [SBN Read] Standby NN throws many InterruptedExceptions when dfs.ha.tail-edits.period is 0

2019-10-25 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959530#comment-16959530
 ] 

Takanobu Asanuma commented on HDFS-14934:
-

This seems related to HDFS-14655.

> [SBN Read] Standby NN throws many InterruptedExceptions when 
> dfs.ha.tail-edits.period is 0
> --
>
> Key: HDFS-14934
> URL: https://issues.apache.org/jira/browse/HDFS-14934
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Takanobu Asanuma
>Priority: Major
>
> When dfs.ha.tail-edits.period is 0ms (or a very short time), there are many 
> warn logs in the standby NN.
> {noformat}
> 2019-10-25 16:25:46,945 [Logger channel (from parallel executor) to <hostname>/<IP address>:<port>] WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(55)) - Thread 
> (Thread[Logger channel (from parallel executor) to <hostname>/<IP address>:<port>,5,main]) interrupted: 
> java.lang.InterruptedException
>   at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:509)
>   at 
> com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:82)
>   at 
> org.apache.hadoop.util.concurrent.ExecutorHelper.logThrowableFromAfterExecute(ExecutorHelper.java:48)
>   at 
> org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor.afterExecute(HadoopThreadPoolExecutor.java:90)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1157)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}






[jira] [Commented] (HDFS-14655) [SBN Read] Namenode crashes if one of the JNs is down

2019-10-25 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959529#comment-16959529
 ] 

Takanobu Asanuma commented on HDFS-14655:
-

This issue may cause HDFS-14934.

> [SBN Read] Namenode crashes if one of the JNs is down
> 
>
> Key: HDFS-14655
> URL: https://issues.apache.org/jira/browse/HDFS-14655
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Harshakiran Reddy
>Assignee: Ayush Saxena
>Priority: Critical
> Fix For: 2.10.0, 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14655-01.patch, HDFS-14655-02.patch, 
> HDFS-14655-03.patch, HDFS-14655-04.patch, HDFS-14655-05.patch, 
> HDFS-14655-06.patch, HDFS-14655-07.patch, HDFS-14655-08.patch, 
> HDFS-14655-branch-2-01.patch, HDFS-14655-branch-2-02.patch, 
> HDFS-14655.poc.patch
>
>
> {noformat}
> 2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
> XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 
> 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
> sleepTime=1000 MILLISECONDS) | Client.java:975
> 2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
> while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:717)
>   at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
>   at 
> com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
>   at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> 2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
> java.lang.OutOfMemoryError: unable to create new native thread | 
> ExitUtil.java:210
> {noformat}
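
Reading the trace: each tailing iteration submits getJournaledEdits calls to a parallel executor, and while the dead JN keeps every call stuck in its 10-retry sleep loop, submissions outrun completions and the pool keeps spawning threads until native thread creation fails. The sketch below is not the committed HDFS-14655 patch, only the generic mitigation it points at: bound the executor so a stuck journal cannot drive unbounded thread creation.

{code:java}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Generic mitigation sketch (not the HDFS-14655 patch): cap threads and
// backlog per journal channel, so excess submissions fail fast with
// RejectedExecutionException instead of exhausting native threads.
public class BoundedJournalExecutorSketch {
  public static void main(String[] args) {
    ThreadPoolExecutor executor = new ThreadPoolExecutor(
        1,                        // core: one in-flight call per journal
        4,                        // hard cap on threads for this channel
        60L, TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>(16),  // bounded backlog
        new ThreadPoolExecutor.AbortPolicy()); // reject instead of grow

    for (int i = 0; i < 100; i++) {
      try {
        executor.submit(() -> {
          try {
            Thread.sleep(1_000);  // stands in for a slow/stuck JN RPC
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
          }
        });
      } catch (RejectedExecutionException e) {
        // A real client would treat the journal as temporarily down here.
        System.err.println("backlog full, dropping call " + i);
      }
    }
    executor.shutdownNow();
  }
}
{code}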






[jira] [Created] (HDFS-14934) [SBN Read] Standby NN throws many InterruptedExceptions when dfs.ha.tail-edits.period is 0

2019-10-25 Thread Takanobu Asanuma (Jira)
Takanobu Asanuma created HDFS-14934:
---

 Summary: [SBN Read] Standby NN throws many InterruptedExceptions 
when dfs.ha.tail-edits.period is 0
 Key: HDFS-14934
 URL: https://issues.apache.org/jira/browse/HDFS-14934
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Takanobu Asanuma


When dfs.ha.tail-edits.period is 0ms (or a very short time), there are many warn 
logs in the standby NN.

{noformat}
2019-10-25 16:25:46,945 [Logger channel (from parallel executor) to <hostname>/<IP address>:<port>] WARN  concurrent.ExecutorHelper 
(ExecutorHelper.java:logThrowableFromAfterExecute(55)) - Thread (Thread[Logger 
channel (from parallel executor) to <hostname>/<IP address>:<port>,5,main]) 
interrupted: 
java.lang.InterruptedException
at 
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:509)
at 
com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:82)
at 
org.apache.hadoop.util.concurrent.ExecutorHelper.logThrowableFromAfterExecute(ExecutorHelper.java:48)
at 
org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor.afterExecute(HadoopThreadPoolExecutor.java:90)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1157)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{noformat}
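
For context on where the warning comes from: the afterExecute hook in HadoopThreadPoolExecutor calls get() on each finished task's future so that otherwise-swallowed Throwables get logged, and with a 0ms tail period the logger-channel tasks are interrupted constantly, producing one log line per task. Below is a standalone sketch of that logging pattern using plain JDK classes; note a JDK future reports a cancelled task as CancellationException, whereas the Guava AbstractFuture in the real stack turns the worker's leftover interrupt flag into the InterruptedException seen above.

{code:java}
import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch of the ExecutorHelper/HadoopThreadPoolExecutor pattern: after
// each task, get() the Future so failures are logged rather than dropped.
public class AfterExecuteLoggingSketch {
  public static void main(String[] args) throws Exception {
    ThreadPoolExecutor pool = new ThreadPoolExecutor(1, 1, 0L,
        TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>()) {
      @Override
      protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        if (t == null && r instanceof Future<?> && ((Future<?>) r).isDone()) {
          try {
            ((Future<?>) r).get();
          } catch (InterruptedException ie) {
            // In the NN log this branch fires once per interrupted
            // logger-channel task, flooding the output.
            System.err.println("Thread interrupted: " + ie);
            Thread.currentThread().interrupt();
          } catch (CancellationException | ExecutionException e) {
            System.err.println("Task did not complete normally: " + e);
          }
        }
      }
    };
    Future<?> f = pool.submit(() -> {
      try {
        Thread.sleep(10_000);
      } catch (InterruptedException ie) {
        // task interrupted by cancel(true) below
      }
    });
    f.cancel(true);
    pool.shutdown();
    pool.awaitTermination(5, TimeUnit.SECONDS);
  }
}
{code}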






[jira] [Commented] (HDFS-14908) LeaseManager should check parent-child relationship when filtering open files.

2019-10-25 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959520#comment-16959520
 ] 

Xiaoqiao He commented on HDFS-14908:


Thanks [~LiJinglun] for your work and the rigorous benchmark. IMO, we should keep 
the code simple and readable, since the performance difference is small and this 
path is only used by DFSAdmin. I prefer the v1 patch. I would like to hear more 
suggestions from [~elgoiri] and [~weichiu]. Thanks [~LiJinglun].

> LeaseManager should check parent-child relationship when filtering open files.
> ---
>
> Key: HDFS-14908
> URL: https://issues.apache.org/jira/browse/HDFS-14908
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.0.1
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Minor
> Attachments: HDFS-14908.001.patch, HDFS-14908.002.patch, 
> HDFS-14908.003.patch, Test.java, TestV2.java, TestV3.java
>
>
> Currently, when doing listOpenFiles(), LeaseManager only checks whether the filter 
> path is a string prefix of the open file paths. We should instead check whether 
> the filter path is a parent/ancestor of the open files.
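
The distinction is easy to trip over: a plain string-prefix test claims /a/b "contains" /a/bc. Below is an illustrative helper (not the patch code) for the ancestor check the description asks for: require either an exact match or a path-separator boundary immediately after the filter path.

{code:java}
// Illustrative only: a bare startsWith() wrongly matches "/a/bc"
// against filter "/a/b", so also require a '/' boundary.
public class AncestorCheckSketch {
  static boolean isAncestorOf(String filter, String openFile) {
    if (filter.equals("/")) {
      return true;  // root is an ancestor of everything
    }
    return openFile.startsWith(filter)
        && (openFile.length() == filter.length()
            || openFile.charAt(filter.length()) == '/');
  }

  public static void main(String[] args) {
    System.out.println(isAncestorOf("/a/b", "/a/b/c")); // true
    System.out.println(isAncestorOf("/a/b", "/a/bc"));  // false: prefix only
    System.out.println(isAncestorOf("/a/b", "/a/b"));   // true: the file itself
  }
}
{code}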






[jira] [Comment Edited] (HDDS-1600) Add userName and IPAddress as part of OMRequest.

2019-10-25 Thread YiSheng Lien (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959485#comment-16959485
 ] 

YiSheng Lien edited comment on HDDS-1600 at 10/25/19 6:57 AM:
--

Hello [~bharat], thanks for the patch.
Could you show us how you tested the patch?


was (Author: cxorm):
Hello [~bharat], thanks the patch,
Would you show us the method of testing the patch

> Add userName and IPAddress as part of OMRequest.
> 
>
> Key: HDDS-1600
> URL: https://issues.apache.org/jira/browse/HDDS-1600
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> In OM HA, the actual execution of the request happens under the GRPC context, so 
> the UGI object we retrieve from ProtobufRpcEngine.Server.getRemoteUser() will 
> not be available.
> The same applies to ProtobufRpcEngine.Server.getRemoteIp().
>  
> So, during preExecute (which happens under the RPC context), extract the userName 
> and IPAddress, add them to the OMRequest, and then send the request to the Ratis 
> server.






[jira] [Commented] (HDDS-1600) Add userName and IPAddress as part of OMRequest.

2019-10-25 Thread YiSheng Lien (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959485#comment-16959485
 ] 

YiSheng Lien commented on HDDS-1600:


Hello [~bharat], thanks for the patch.
Could you show us how you tested the patch?

> Add userName and IPAddress as part of OMRequest.
> 
>
> Key: HDDS-1600
> URL: https://issues.apache.org/jira/browse/HDDS-1600
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1, 0.5.0
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> In OM HA, the actual execution of the request happens under the GRPC context, so 
> the UGI object we retrieve from ProtobufRpcEngine.Server.getRemoteUser() will 
> not be available.
> The same applies to ProtobufRpcEngine.Server.getRemoteIp().
>  
> So, during preExecute (which happens under the RPC context), extract the userName 
> and IPAddress, add them to the OMRequest, and then send the request to the Ratis 
> server.
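
Mechanically, the description reduces to: capture the caller's identity while still on the RPC handler thread, then carry it inside the request so the Ratis side can use it. A sketch of that preExecute step follows; the two Server calls are the ones named in the description, while OMRequestInfo and its fields are hypothetical stand-ins for the real OMRequest proto's user-info fields.

{code:java}
import java.net.InetAddress;
import org.apache.hadoop.ipc.ProtobufRpcEngine;
import org.apache.hadoop.security.UserGroupInformation;

// Sketch: on the RPC handler thread, ProtobufRpcEngine.Server still
// knows the remote caller, so capture user name and IP here and attach
// them to the outgoing request. OMRequestInfo is a hypothetical
// stand-in for the OMRequest proto's user-info fields.
public class PreExecuteUserInfoSketch {
  static final class OMRequestInfo {
    String userName;
    String ipAddress;
  }

  static OMRequestInfo captureCaller() {
    OMRequestInfo info = new OMRequestInfo();
    // Both calls return null once execution moves off the RPC handler
    // thread, which is exactly why this must happen in preExecute.
    UserGroupInformation ugi = ProtobufRpcEngine.Server.getRemoteUser();
    if (ugi != null) {
      info.userName = ugi.getUserName();
    }
    InetAddress addr = ProtobufRpcEngine.Server.getRemoteIp();
    if (addr != null) {
      info.ipAddress = addr.getHostAddress();
    }
    return info;
  }
}
{code}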






[jira] [Commented] (HDFS-14933) Fixing a typo in documentation of Observer NameNode

2019-10-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959470#comment-16959470
 ] 

Hudson commented on HDFS-14933:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17573 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17573/])
HDFS-14933. Fixing a typo in documentation of Observer NameNode. (tasanuma: rev 
862526530a376524551805b8e32cc7f66ba6f03e)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/ObserverNameNode.md


> Fixing a typo in documentation of Observer NameNode
> ---
>
> Key: HDFS-14933
> URL: https://issues.apache.org/jira/browse/HDFS-14933
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Xieming Li
>Assignee: Xieming Li
>Priority: Trivial
> Fix For: 3.3.0
>
> Attachments: HDFS-14933.001.patch
>
>
> Fix a typo in the documentation of Observer NameNode:
> https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-project-dist/hadoop-hdfs/ObserverNameNode.html
> This 
> {code}
>   <property>
>     <name>dfs.ha.tail-edits.period</name>
>     <value>10s</value>
>   </property>
> {code}
> should be changed to 
> {code}
>   <property>
>     <name>dfs.ha.tail-edits.period.backoff-max</name>
>     <value>10s</value>
>   </property>
> {code}






[jira] [Work started] (HDDS-2219) Move all the ozone dist scripts/configs to one location

2019-10-25 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2219 started by YiSheng Lien.
--
> Move all the ozone dist scripts/configs to one location
> ---
>
> Key: HDDS-2219
> URL: https://issues.apache.org/jira/browse/HDDS-2219
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: build
>Reporter: Marton Elek
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: newbe
>
> The hadoop distribution tar file contains jar files, scripts, and default 
> configuration files.
> The scripts and configuration files are stored in multiple separate projects 
> for no particular reason:
> {code:java}
> ls hadoop-hdds/common/src/main/bin/
> hadoop-config.cmd  hadoop-config.sh  hadoop-daemons.sh  hadoop-functions.sh  
> workers.sh
> ls hadoop-ozone/common/src/main/bin 
> ozone  ozone-config.sh  start-ozone.sh  stop-ozone.sh
> ls hadoop-ozone/common/src/main/shellprofile.d 
> hadoop-ozone.sh
> ls hadoop-ozone/dist/src/main/conf 
> dn-audit-log4j2.properties  log4j.properties  om-audit-log4j2.properties  
> ozone-shell-log4j.properties  ozone-site.xml  scm-audit-log4j2.properties
>  {code}
> All of these scripts can be moved to hadoop-ozone/dist/src/shell.
> hadoop-ozone/dist/dev-support/bin/dist-layout-stitching should also be 
> updated to copy all of them to the right place in the tar.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2356) Multipart upload reports errors while writing to ozone Ratis pipeline

2019-10-25 Thread Li Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959468#comment-16959468
 ] 

Li Cheng commented on HDDS-2356:


[~bharat] Yeah, you are right. It does happen randomly; I'm seeing it again. When 
will HDDS-2322 be merged into master?

> Multipart upload reports errors while writing to ozone Ratis pipeline
> 
>
> Key: HDDS-2356
> URL: https://issues.apache.org/jira/browse/HDDS-2356
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
> Environment: Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM 
> on a separate VM
>Reporter: Li Cheng
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Fix For: 0.5.0
>
>
> Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM on a separate VM, say 
> it's VM0.
> I use goofys as a FUSE client and enable the ozone S3 gateway to mount ozone to 
> a path on VM0, reading data from VM0's local disk and writing to the mount path. 
> The dataset has files of various sizes, from 0 bytes to GB-level, and contains 
> ~50,000 files. 
> Writing is slow (1GB in ~10 mins) and it stops after around 4GB. Looking at the 
> hadoop-root-om-VM_50_210_centos.out log, I see the OM throwing errors related to 
> multipart upload. This error eventually causes the writing to terminate and the 
> OM to shut down. 
>  
> 2019-10-24 16:01:59,527 [OMDoubleBufferFlushThread] ERROR - Terminating with 
> exit status 2: OMDoubleBuffer flush thread OMDoubleBufferFlushThread 
> encountered Throwable error
> java.util.ConcurrentModificationException
>  at java.util.TreeMap.forEach(TreeMap.java:1004)
>  at 
> org.apache.hadoop.ozone.om.helpers.OmMultipartKeyInfo.getProto(OmMultipartKeyInfo.java:111)
>  at 
> org.apache.hadoop.ozone.om.codec.OmMultipartKeyInfoCodec.toPersistedFormat(OmMultipartKeyInfoCodec.java:38)
>  at 
> org.apache.hadoop.ozone.om.codec.OmMultipartKeyInfoCodec.toPersistedFormat(OmMultipartKeyInfoCodec.java:31)
>  at 
> org.apache.hadoop.hdds.utils.db.CodecRegistry.asRawData(CodecRegistry.java:68)
>  at 
> org.apache.hadoop.hdds.utils.db.TypedTable.putWithBatch(TypedTable.java:125)
>  at 
> org.apache.hadoop.ozone.om.response.s3.multipart.S3MultipartUploadCommitPartResponse.addToDBBatch(S3MultipartUploadCommitPartResponse.java:112)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.lambda$flushTransactions$0(OzoneManagerDoubleBuffer.java:137)
>  at java.util.Iterator.forEachRemaining(Iterator.java:116)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerDoubleBuffer.flushTransactions(OzoneManagerDoubleBuffer.java:135)
>  at java.lang.Thread.run(Thread.java:745)
> 2019-10-24 16:01:59,629 [shutdown-hook-0] INFO - SHUTDOWN_MSG:
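
The trace is a textbook TreeMap.forEach failure: the double-buffer flush thread serializes the multipart key info (OmMultipartKeyInfo.getProto iterates the part map) while a commit-part request on another thread mutates the same TreeMap. The standalone reproduction below shows the race (timing-dependent, so it may need a rerun); the trailing comment sketches the usual fix shape of snapshotting before iterating, which may or may not match what HDDS-2322 actually does.

{code:java}
import java.util.ConcurrentModificationException;
import java.util.TreeMap;

// Standalone reproduction of the failure mode in the stack trace: one
// thread iterates a TreeMap with forEach (the serialization step) while
// another inserts into it (the commit-part step).
public class TreeMapRaceSketch {
  public static void main(String[] args) throws InterruptedException {
    TreeMap<Integer, String> parts = new TreeMap<>();
    for (int i = 0; i < 1_000; i++) {
      parts.put(i, "part-" + i);
    }
    Thread writer = new Thread(() -> {
      for (int i = 1_000; i < 2_000; i++) {
        parts.put(i, "part-" + i);  // concurrent mutation
      }
    });
    writer.start();
    try {
      parts.forEach((k, v) -> { });  // stands in for getProto()
      System.out.println("no CME this run; try again");
    } catch (ConcurrentModificationException e) {
      System.err.println("reproduced: " + e);
    }
    writer.join();
    // Usual fix shape: snapshot under a lock before iterating, e.g.
    //   TreeMap<Integer, String> copy;
    //   synchronized (lock) { copy = new TreeMap<>(parts); }
    //   copy.forEach(...);
  }
}
{code}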






[jira] [Updated] (HDFS-14933) Fixing a typo in documentation of Observer NameNode

2019-10-25 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14933:

Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks for your contribution, [~risyomei]!

> Fixing a typo in documentation of Observer NameNode
> ---
>
> Key: HDFS-14933
> URL: https://issues.apache.org/jira/browse/HDFS-14933
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Xieming Li
>Assignee: Xieming Li
>Priority: Trivial
> Fix For: 3.3.0
>
> Attachments: HDFS-14933.001.patch
>
>
> Fix a typo in the documentation of Observer NameNode:
> https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-project-dist/hadoop-hdfs/ObserverNameNode.html
> This 
> {code}
>   <property>
>     <name>dfs.ha.tail-edits.period</name>
>     <value>10s</value>
>   </property>
> {code}
> should be changed to 
> {code}
>   <property>
>     <name>dfs.ha.tail-edits.period.backoff-max</name>
>     <value>10s</value>
>   </property>
> {code}






[jira] [Updated] (HDFS-14933) Fixing a typo in documentation of Observer NameNode

2019-10-25 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14933:

Summary: Fixing a typo in documentation of Observer NameNode  (was: Fixing 
a typo in documentaion of Observer NameNode)

> Fixing a typo in documentation of Observer NameNode
> ---
>
> Key: HDFS-14933
> URL: https://issues.apache.org/jira/browse/HDFS-14933
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Xieming Li
>Assignee: Xieming Li
>Priority: Trivial
> Attachments: HDFS-14933.001.patch
>
>
> Fix a typo in the documentation of Observer NameNode:
> https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-project-dist/hadoop-hdfs/ObserverNameNode.html
> This 
> {code}
>   <property>
>     <name>dfs.ha.tail-edits.period</name>
>     <value>10s</value>
>   </property>
> {code}
> should be changed to 
> {code}
>   <property>
>     <name>dfs.ha.tail-edits.period.backoff-max</name>
>     <value>10s</value>
>   </property>
> {code}






[jira] [Resolved] (HDDS-2296) ozoneperf compose cluster shouldn't start freon by default

2019-10-25 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2296.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> ozoneperf compose cluster shouldn't start freon by default
> -
>
> Key: HDDS-2296
> URL: https://issues.apache.org/jira/browse/HDDS-2296
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: docker
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> During the original creation of the compose/ozoneperf cluster we added an example 
> freon execution to make it clear how the data can be generated. This freon 
> process starts every time the ozoneperf cluster is started (usually I 
> notice it when my CPU starts to use 100% of the available resources).
> Since the creation of this cluster definition we have implemented multiple types 
> of freon tests, and it's hard to predict which tests should be executed. I propose 
> removing the default execution of the random key generation while keeping the 
> option to run any of the tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14933) Fixing a typo in documentaion of Observer NameNode

2019-10-25 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959461#comment-16959461
 ] 

Takanobu Asanuma commented on HDFS-14933:
-

+1 on [^HDFS-14933.001.patch].

> Fixing a typo in documentaion of Observer NameNode
> --
>
> Key: HDFS-14933
> URL: https://issues.apache.org/jira/browse/HDFS-14933
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Reporter: Xieming Li
>Assignee: Xieming Li
>Priority: Trivial
> Attachments: HDFS-14933.001.patch
>
>
> Fix a typo in the documentation of Observer NameNode:
> https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-project-dist/hadoop-hdfs/ObserverNameNode.html
> This 
> {code}
>   <property>
>     <name>dfs.ha.tail-edits.period</name>
>     <value>10s</value>
>   </property>
> {code}
> should be changed to 
> {code}
>   <property>
>     <name>dfs.ha.tail-edits.period.backoff-max</name>
>     <value>10s</value>
>   </property>
> {code}





