[jira] [Commented] (HDFS-14090) RBF: Improved isolation for downstream name nodes.

2019-06-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16859117#comment-16859117
 ] 

Hadoop QA commented on HDFS-14090:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
18s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch 
generated 8 new + 0 unchanged - 0 fixed = 8 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
13s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-rbf generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 25m 23s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-rbf |
|  |  Invocation of toString on namenodeSplit in 
org.apache.hadoop.hdfs.server.federation.fairness.DefaultFairnessPolicyController.assignHandlersToNameservices(Configuration)
  At DefaultFairnessPolicyController.java:in 
org.apache.hadoop.hdfs.server.federation.fairness.DefaultFairnessPolicyController.assignHandlersToNameservices(Configuration)
  At DefaultFairnessPolicyController.java:[line 86] |
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRBFConfigFields 
|
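
The FindBugs hit above is the classic array-toString pattern: string-concatenating a `String[]` invokes the array's default `toString()`, which prints something like `[Ljava.lang.String;@1a2b3c` instead of the contents. A minimal, hypothetical Java sketch of the bug shape and the usual fix (names are illustrative, not the actual HDFS-14090 code):

```java
import java.util.Arrays;

class ArrayToStringDemo {
    // The buggy shape: implicit toString on an array inside a log message.
    static String badMessage(String[] namenodeSplit) {
        return "Assigning handlers for " + namenodeSplit;
    }

    // The usual fix: Arrays.toString renders the elements.
    static String goodMessage(String[] namenodeSplit) {
        return "Assigning handlers for " + Arrays.toString(namenodeSplit);
    }
}
```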
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14090 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12971211/HDFS-14090-HDFS-13891.002.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8ac26f19ed61 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HDDS-1657) Fix parallelStream usage in volume and key native acl.

2019-06-07 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16859111#comment-16859111
 ] 

Hudson commented on HDDS-1657:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16708 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16708/])
HDDS-1657. Fix parallelStream usage in volume and key native acl. (xyao: rev 
9deac3b6bf46ff8875cdf2dfa6f7064f9379bccd)
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocolPB/OzoneManagerProtocolClientSideTranslatorPB.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java


> Fix parallelStream usage in volume and key native acl.
> --
>
> Key: HDDS-1657
> URL: https://issues.apache.org/jira/browse/HDDS-1657
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1657.00.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Fix bug in volume and key native acl.
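
The issue title points at `parallelStream()` usage; as a hedged illustration of why that class of usage is risky (the actual fix is in the files listed in the commit above, and this example is invented, not taken from the Ozone code): side-effecting a shared, non-thread-safe collection from a parallel stream can drop or reorder entries, whereas a sequential stream with a collector is deterministic.

```java
import java.util.List;
import java.util.stream.Collectors;

class AclStreamDemo {
    // Sequential stream + collector: safe and order-preserving, unlike a
    // parallelStream() that mutates a shared ArrayList from its lambda.
    static List<String> normalizeAcls(List<String> acls) {
        return acls.stream().map(String::trim).collect(Collectors.toList());
    }
}
```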



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1543) Implement addAcl,removeAcl,setAcl,getAcl for Prefix

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1543?focusedWorklogId=256373&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256373
 ]

ASF GitHub Bot logged work on HDDS-1543:


Author: ASF GitHub Bot
Created on: 08/Jun/19 05:01
Start Date: 08/Jun/19 05:01
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #927: HDDS-1543. 
Implement addAcl,removeAcl,setAcl,getAcl for Prefix. Contr…
URL: https://github.com/apache/hadoop/pull/927#issuecomment-500095435
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 49 | Maven dependency ordering for branch |
   | +1 | mvninstall | 521 | trunk passed |
   | +1 | compile | 302 | trunk passed |
   | +1 | checkstyle | 98 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 975 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | trunk passed |
   | 0 | spotbugs | 357 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 558 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for patch |
   | +1 | mvninstall | 484 | the patch passed |
   | +1 | compile | 284 | the patch passed |
   | +1 | cc | 284 | the patch passed |
   | +1 | javac | 284 | the patch passed |
   | +1 | checkstyle | 84 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 734 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 179 | the patch passed |
   | +1 | findbugs | 616 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 166 | hadoop-hdds in the patch failed. |
   | -1 | unit | 52 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 5511 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.om.exceptions.TestResultCodes |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-927/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/927 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux aacb4fffcf39 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 76b94c2 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-927/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-927/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-927/2/testReport/ |
   | Max. process+thread count | 340 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-927/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 256373)
Time Spent: 40m  (was: 0.5h)

> Implement addAcl,removeAcl,setAcl,getAcl  for Prefix
> 
>
> Key: HDDS-1543
> URL: https://issues.apache.org/jira/browse/HDDS-1543
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 

[jira] [Work logged] (HDDS-1657) Fix parallelStream usage in volume and key native acl.

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1657?focusedWorklogId=256370&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256370
 ]

ASF GitHub Bot logged work on HDDS-1657:


Author: ASF GitHub Bot
Created on: 08/Jun/19 04:47
Start Date: 08/Jun/19 04:47
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #926: HDDS-1657. 
Fix parallelStream usage in volume and key native acl. Contributed by Ajay 
Kumar.
URL: https://github.com/apache/hadoop/pull/926
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 256370)
Time Spent: 50m  (was: 40m)

> Fix parallelStream usage in volume and key native acl.
> --
>
> Key: HDDS-1657
> URL: https://issues.apache.org/jira/browse/HDDS-1657
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1657.00.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Fix bug in volume and key native acl.






[jira] [Work logged] (HDDS-1657) Fix parallelStream usage in volume and key native acl.

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1657?focusedWorklogId=256372&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256372
 ]

ASF GitHub Bot logged work on HDDS-1657:


Author: ASF GitHub Bot
Created on: 08/Jun/19 04:47
Start Date: 08/Jun/19 04:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #926: HDDS-1657. Fix 
parallelStream usage in volume and key native acl. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/926#issuecomment-500094406
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 13 | https://github.com/apache/hadoop/pull/926 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/926 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-926/2/console |
   | versions | git=2.7.4 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 256372)
Time Spent: 1h  (was: 50m)

> Fix parallelStream usage in volume and key native acl.
> --
>
> Key: HDDS-1657
> URL: https://issues.apache.org/jira/browse/HDDS-1657
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1657.00.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Fix bug in volume and key native acl.






[jira] [Updated] (HDDS-1657) Fix parallelStream usage in volume and key native acl.

2019-06-07 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1657:
-
   Resolution: Fixed
Fix Version/s: 0.4.1
   Status: Resolved  (was: Patch Available)

Thanks [~ajayydv] for the contribution. I've committed the patch to trunk. 

> Fix parallelStream usage in volume and key native acl.
> --
>
> Key: HDDS-1657
> URL: https://issues.apache.org/jira/browse/HDDS-1657
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
> Attachments: HDDS-1657.00.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Fix bug in volume and key native acl.






[jira] [Updated] (HDDS-1657) Fix parallelStream usage in volume and key native acl.

2019-06-07 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1657:
-
Summary: Fix parallelStream usage in volume and key native acl.  (was: Fix 
bug in volume and key native acl.)

> Fix parallelStream usage in volume and key native acl.
> --
>
> Key: HDDS-1657
> URL: https://issues.apache.org/jira/browse/HDDS-1657
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1657.00.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Fix bug in volume and key native acl.






[jira] [Work logged] (HDDS-1543) Implement addAcl,removeAcl,setAcl,getAcl for Prefix

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1543?focusedWorklogId=256369&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256369
 ]

ASF GitHub Bot logged work on HDDS-1543:


Author: ASF GitHub Bot
Created on: 08/Jun/19 04:42
Start Date: 08/Jun/19 04:42
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #927: HDDS-1543. Implement 
addAcl,removeAcl,setAcl,getAcl for Prefix. Contr…
URL: https://github.com/apache/hadoop/pull/927#issuecomment-500094173
 
 
   /retest
 



Issue Time Tracking
---

Worklog Id: (was: 256369)
Time Spent: 0.5h  (was: 20m)

> Implement addAcl,removeAcl,setAcl,getAcl  for Prefix
> 
>
> Key: HDDS-1543
> URL: https://issues.apache.org/jira/browse/HDDS-1543
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl  for Prefix






[jira] [Updated] (HDFS-14090) RBF: Improved isolation for downstream name nodes.

2019-06-07 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14090:
---
Status: Patch Available  (was: Open)

> RBF: Improved isolation for downstream name nodes.
> --
>
> Key: HDFS-14090
> URL: https://issues.apache.org/jira/browse/HDFS-14090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14090-HDFS-13891.001.patch, 
> HDFS-14090-HDFS-13891.002.patch, RBF_ Isolation design.pdf
>
>
> Router is a gateway to underlying name nodes. Gateway architectures should 
> help minimize the impact of unhealthy clusters on clients connecting to 
> healthy ones.
> For example - if there are 2 name nodes downstream and one of them is 
> heavily loaded with calls spiking rpc queue times, due to back pressure the 
> same will start reflecting on the router. As a result, clients 
> connecting to healthy/faster name nodes will also slow down, as the same rpc 
> queue is maintained for all calls at the router layer. Essentially the same 
> IPC thread pool is used by the router to connect to all name nodes.
> Currently the router uses a single rpc queue for all calls. Let's discuss how 
> we can change the architecture and add some throttling logic for 
> unhealthy/slow/overloaded name nodes.
> One way could be to read from the current call queue, immediately identify the 
> downstream name node, and maintain a separate queue for each underlying name 
> node. Another, simpler way is to maintain some sort of rate limiter configured 
> for each name node and let routers drop/reject/error requests after a 
> certain threshold. 
> This won't be a simple change, as the router's 'Server' layer would need a 
> redesign and reimplementation. Currently this layer is the same as the name node's.
> Opening this ticket to discuss, design and implement this feature.
>  
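
The per-nameservice throttling idea sketched in the description could look roughly like the following. This is a hypothetical sketch only; the class and method names are invented for illustration and do not come from the attached HDFS-14090 patch.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

// Caps how many router handler threads a single downstream nameservice may
// occupy, so one slow/overloaded namenode cannot exhaust the shared pool.
class HandlerPermitLimiter {
    private final Map<String, Semaphore> permits = new ConcurrentHashMap<>();
    private final int permitsPerNameservice;

    HandlerPermitLimiter(int permitsPerNameservice) {
        this.permitsPerNameservice = permitsPerNameservice;
    }

    /** Reserve a handler for the nameservice; false means throttle/reject. */
    boolean acquire(String nsId) {
        return permits
            .computeIfAbsent(nsId, k -> new Semaphore(permitsPerNameservice))
            .tryAcquire();
    }

    /** Return the handler permit once the downstream call completes. */
    void release(String nsId) {
        Semaphore s = permits.get(nsId);
        if (s != null) {
            s.release();
        }
    }
}
```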






[jira] [Commented] (HDDS-1636) Tracing id is not propagated via async datanode grpc call

2019-06-07 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16859095#comment-16859095
 ] 

Hudson commented on HDDS-1636:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16707 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16707/])
HDDS-1636. Tracing id is not propagated via async datanode grpc call (xyao: rev 
46b23c11b033c76b25897d61de53e9e36bb2b4b5)
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/client/ContainerOperationClient.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestContainerStateMachineIdempotency.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestGetCommittedBlockLengthAndPutKey.java
* (edit) 
hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/TestChunkInputStream.java
* (edit) 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/storage/DistributedStorageHandler.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockOutputStream.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/tracing/StringCodec.java
* (edit) 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/storage/ContainerProtocolCalls.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestContainerSmallFile.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
* (edit) 
hadoop-hdds/client/src/test/java/org/apache/hadoop/hdds/scm/storage/TestBlockInputStream.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntry.java
* (edit) 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/KeyInputStream.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/scm/TestXceiverClientManager.java
* (edit) 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/BlockInputStream.java
* (edit) 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/TestChunkStreams.java


> Tracing id is not propagated via async datanode grpc call
> -
>
> Key: HDDS-1636
> URL: https://issues.apache.org/jira/browse/HDDS-1636
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Recently a new exception became visible in the datanode logs when using standard 
> freon (STANDALONE)
> {code}
> datanode_2  | 2019-06-03 12:18:21 WARN  
> PropagationRegistry$ExceptionCatchingExtractorDecorator:60 - Error when 
> extracting SpanContext from carrier. Handling gracefully.
> datanode_2  | 
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 7576cabf-37a4-4232-9729-939a3fdb68c4WriteChunk150a8a848a951784256ca0801f7d9cf8b_stream_ed583cee-9552-4f1a-8c77-63f7d07b755f_chunk_1
> datanode_2  | at 
> org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:49)
> datanode_2  | at 
> org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:34)
> datanode_2  | at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
> datanode_2  | at 
> io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
> datanode_2  | at 
> io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
> datanode_2  | at 
> io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
> datanode_2  | at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:102)
> datanode_2  | at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
> datanode_2  | at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:73)
> datanode_2  | at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:61)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:248)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
> datanode_2  | at 
> 

[jira] [Updated] (HDDS-1636) Tracing id is not propagated via async datanode grpc call

2019-06-07 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1636:
-
   Resolution: Fixed
Fix Version/s: 0.4.1
   Status: Resolved  (was: Patch Available)

Thanks [~elek] for the contribution and all for the reviews. I've committed the 
fix to trunk.

> Tracing id is not propagated via async datanode grpc call
> -
>
> Key: HDDS-1636
> URL: https://issues.apache.org/jira/browse/HDDS-1636
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Recently a new exception became visible in the datanode logs when using standard 
> freon (STANDALONE)
> {code}
> datanode_2  | 2019-06-03 12:18:21 WARN  
> PropagationRegistry$ExceptionCatchingExtractorDecorator:60 - Error when 
> extracting SpanContext from carrier. Handling gracefully.
> datanode_2  | 
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 7576cabf-37a4-4232-9729-939a3fdb68c4WriteChunk150a8a848a951784256ca0801f7d9cf8b_stream_ed583cee-9552-4f1a-8c77-63f7d07b755f_chunk_1
> datanode_2  | at 
> org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:49)
> datanode_2  | at 
> org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:34)
> datanode_2  | at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
> datanode_2  | at 
> io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
> datanode_2  | at 
> io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
> datanode_2  | at 
> io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
> datanode_2  | at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:102)
> datanode_2  | at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
> datanode_2  | at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:73)
> datanode_2  | at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:61)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:248)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
> datanode_2  | at 
> org.apache.hadoop.hdds.tracing.GrpcServerInterceptor$1.onMessage(GrpcServerInterceptor.java:46)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:263)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:686)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
> datanode_2  | at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> datanode_2  | at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> {code}
> It turned out that the traceId propagation between XceiverClient and Server 
> doesn't work very well (in the case of Standalone and async commands):
>  1. There are many places (on the client side) where the traceId is filled with 
> UUID.randomUUID().toString();
>  2. This random id is propagated between the Output/InputStream and different 
> parts of the clients
>  3. It is unnecessary, because in XceiverClientGrpc 
> the traceId field is overridden with the real opentracing id anyway 
> (sendCommand/sendCommandAsync)
>  4. Except in XceiverClientGrpc.sendCommandAsync, where this part is 
> accidentally missing.
> Things to fix:
>  1. Fix XceiverClientGrpc.sendCommandAsync (replace any existing traceId with 
> the good one)
>  2. 
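
The fix direction described in point 1 above can be sketched roughly as follows: before an async send, overwrite whatever traceId the caller set with the serialized opentracing context, as the synchronous path already does. All types and method names here are stand-ins, not the real Hadoop/Ozone classes.

```java
class TraceIdSketch {
    // Stand-in for a TracingUtil-style span export.
    static String exportCurrentSpan() {
        return "trace-id-from-opentracing";
    }

    // Rebuild the request with the real tracing id, discarding any random
    // UUID the caller put there.
    static ContainerCommandRequest withTracing(ContainerCommandRequest request) {
        return request.toBuilder().setTraceID(exportCurrentSpan()).build();
    }

    // Minimal stand-in for a protobuf-style immutable request with a builder.
    static class ContainerCommandRequest {
        final String traceID;
        ContainerCommandRequest(String traceID) { this.traceID = traceID; }
        Builder toBuilder() { return new Builder(traceID); }

        static class Builder {
            private String traceID;
            Builder(String traceID) { this.traceID = traceID; }
            Builder setTraceID(String traceID) { this.traceID = traceID; return this; }
            ContainerCommandRequest build() { return new ContainerCommandRequest(traceID); }
        }
    }
}
```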

[jira] [Work logged] (HDDS-1636) Tracing id is not propagated via async datanode grpc call

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1636?focusedWorklogId=256355&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256355
 ]

ASF GitHub Bot logged work on HDDS-1636:


Author: ASF GitHub Bot
Created on: 08/Jun/19 03:40
Start Date: 08/Jun/19 03:40
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #895: HDDS-1636. 
Tracing id is not propagated via async datanode grpc call
URL: https://github.com/apache/hadoop/pull/895
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 256355)
Time Spent: 2h 10m  (was: 2h)

> Tracing id is not propagated via async datanode grpc call
> -
>
> Key: HDDS-1636
> URL: https://issues.apache.org/jira/browse/HDDS-1636
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Recently a new exception became visible in the datanode logs when using 
> standard freon (STANDALONE):
> {code}
> datanode_2  | 2019-06-03 12:18:21 WARN  
> PropagationRegistry$ExceptionCatchingExtractorDecorator:60 - Error when 
> extracting SpanContext from carrier. Handling gracefully.
> datanode_2  | 
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 7576cabf-37a4-4232-9729-939a3fdb68c4WriteChunk150a8a848a951784256ca0801f7d9cf8b_stream_ed583cee-9552-4f1a-8c77-63f7d07b755f_chunk_1
> datanode_2  | at 
> org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:49)
> datanode_2  | at 
> org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:34)
> datanode_2  | at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
> datanode_2  | at 
> io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
> datanode_2  | at 
> io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
> datanode_2  | at 
> io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
> datanode_2  | at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:102)
> datanode_2  | at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
> datanode_2  | at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:73)
> datanode_2  | at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:61)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:248)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
> datanode_2  | at 
> org.apache.hadoop.hdds.tracing.GrpcServerInterceptor$1.onMessage(GrpcServerInterceptor.java:46)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:263)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:686)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
> datanode_2  | at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> datanode_2  | at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> {code}
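The exception above is raised because StringCodec receives a client-generated UUID-plus-operation string instead of a jaeger trace state. The mismatch can be sketched with a format check (the regex is only an illustration of jaeger's textual `trace-id:span-id:parent-span-id:flags` format, not Ozone's actual parser):

```python
import re

# Jaeger's textual trace context format (what StringCodec.extract expects):
#   {trace-id}:{span-id}:{parent-span-id}:{flags}, all lowercase hex fields.
JAEGER_FORMAT = re.compile(r"^[0-9a-f]+:[0-9a-f]+:[0-9a-f]+:[0-9a-f]+$")

def is_valid_trace_state(s: str) -> bool:
    """Return True if s looks like a jaeger trace state string."""
    return bool(JAEGER_FORMAT.match(s))

# The id the datanode actually received: a client-side random UUID plus the
# command name and chunk info, not a trace context at all.
bad = ("7576cabf-37a4-4232-9729-939a3fdb68c4WriteChunk150a8a848a951784256ca0801f7d9cf8b"
       "_stream_ed583cee-9552-4f1a-8c77-63f7d07b755f_chunk_1")
good = "ac0fee1211a8b33d:ac0fee1211a8b33d:0:1"  # hypothetical well-formed id

print(is_valid_trace_state(bad))   # False -> MalformedTracerStateStringException
print(is_valid_trace_state(good))  # True
```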

[jira] [Work logged] (HDDS-1651) Create a http.policy config for Ozone

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1651?focusedWorklogId=256354&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256354
 ]

ASF GitHub Bot logged work on HDDS-1651:


Author: ASF GitHub Bot
Created on: 08/Jun/19 03:39
Start Date: 08/Jun/19 03:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #929: HDDS-1651 Create 
a http.policy config for Ozone
URL: https://github.com/apache/hadoop/pull/929#issuecomment-500090647
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 47 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1626 | trunk passed |
   | +1 | compile | 1283 | trunk passed |
   | +1 | checkstyle | 255 | trunk passed |
   | +1 | mvnsite | 316 | trunk passed |
   | +1 | shadedclient | 1394 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 259 | trunk passed |
   | 0 | spotbugs | 108 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 576 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 258 | the patch passed |
   | +1 | compile | 1530 | the patch passed |
   | +1 | javac | 1530 | the patch passed |
   | +1 | checkstyle | 226 | root: The patch generated 0 new + 230 unchanged - 
15 fixed = 230 total (was 245) |
   | +1 | mvnsite | 310 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 763 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 257 | the patch passed |
   | +1 | findbugs | 607 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 537 | hadoop-common in the patch passed. |
   | +1 | unit | 131 | hadoop-hdfs-client in the patch passed. |
   | -1 | unit | 5731 | hadoop-hdfs in the patch failed. |
   | +1 | unit | 108 | common in the patch passed. |
   | +1 | asflicense | 71 | The patch does not generate ASF License warnings. |
   | | | 16164 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.web.TestWebHdfsTimeouts |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-929/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/929 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 1e5cee6c89b1 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 76b94c2 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-929/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-929/1/testReport/ |
   | Max. process+thread count | 2739 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdds/common U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-929/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 256354)
Time Spent: 1h  (was: 50m)

> Create a http.policy config for Ozone
> -
>
> Key: HDDS-1651
> URL: 

[jira] [Commented] (HDFS-14545) RBF: Router should support GetUserMappingsProtocol

2019-06-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16859070#comment-16859070
 ] 

Hadoop QA commented on HDFS-14545:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
12s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 55s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14545 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12971220/HDFS-14545-HDFS-13891-10.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 69f8448a3948 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / bee9fff |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26923/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26923/testReport/ |
| Max. process+thread count | 1062 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 

[jira] [Commented] (HDFS-14548) Cannot create snapshot when the snapshotCounter reaches MaxSnapshotID

2019-06-07 Thread zhangqianqiong (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16859068#comment-16859068
 ] 

zhangqianqiong commented on HDFS-14548:
---

[~ayushtkn] [~arpaga]

If I shut down the filesystem and reset the snapshotCounter to zero by 
modifying the fsimage, will it cause serious trouble?

> Cannot create snapshot when the snapshotCounter reaches MaxSnapshotID
> -
>
> Key: HDFS-14548
> URL: https://issues.apache.org/jira/browse/HDFS-14548
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: zhangqianqiong
>Priority: Major
> Attachments: 1559717485296.jpg
>
>
> When a new snapshot is created, the snapshotCounter increments, but when a 
> snapshot is deleted, the counter does not decrement. Over time, when the 
> snapshotCounter reaches MaxSnapshotID, new snapshots can no longer be 
> created.
> By the way, how can I reset the snapshotCounter?
>  
>  
>  
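The exhaustion described in the report can be illustrated with a toy allocator (a sketch of the reported behavior, not the NameNode's actual SnapshotManager code; the max value is simplified):

```python
class SnapshotIdAllocator:
    """Toy model of the reported behavior: snapshot ids are handed out
    from a monotonically increasing counter and are never reclaimed
    when a snapshot is deleted."""

    def __init__(self, max_id: int):
        self.max_id = max_id
        self.counter = 0

    def create_snapshot(self) -> int:
        if self.counter >= self.max_id:
            raise RuntimeError("snapshot limit reached: cannot allocate new id")
        self.counter += 1
        return self.counter

    def delete_snapshot(self, snapshot_id: int) -> None:
        # Deletion does NOT decrement the counter, so the id space
        # is consumed permanently.
        pass

alloc = SnapshotIdAllocator(max_id=3)
for _ in range(3):
    sid = alloc.create_snapshot()
    alloc.delete_snapshot(sid)   # delete each snapshot immediately...
# ...yet a fourth create still fails, because the counter never went down:
try:
    alloc.create_snapshot()
except RuntimeError as e:
    print(e)
```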



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1545) Cli to add,remove,get and delete acls for Ozone objects

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1545?focusedWorklogId=256321&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256321
 ]

ASF GitHub Bot logged work on HDDS-1545:


Author: ASF GitHub Bot
Created on: 08/Jun/19 01:39
Start Date: 08/Jun/19 01:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #920: HDDS-1545. Cli to 
add,remove,get and delete acls for Ozone objects. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/920#issuecomment-500082704
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 43 | Maven dependency ordering for branch |
   | +1 | mvninstall | 511 | trunk passed |
   | +1 | compile | 312 | trunk passed |
   | +1 | checkstyle | 94 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 900 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 192 | trunk passed |
   | 0 | spotbugs | 396 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 585 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 457 | the patch passed |
   | +1 | compile | 296 | the patch passed |
   | +1 | cc | 296 | the patch passed |
   | +1 | javac | 296 | the patch passed |
   | -0 | checkstyle | 50 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 686 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 177 | the patch passed |
   | +1 | findbugs | 539 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 159 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1029 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 60 | The patch does not generate ASF License warnings. |
   | | | 6358 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-920/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/920 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 6c92a8374f83 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 76b94c2 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-920/3/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-920/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-920/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-920/3/testReport/ |
   | Max. process+thread count | 5011 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/dist hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-920/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 256321)
Time Spent: 1h 40m  (was: 

[jira] [Work logged] (HDDS-1651) Create a http.policy config for Ozone

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1651?focusedWorklogId=256312&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256312
 ]

ASF GitHub Bot logged work on HDDS-1651:


Author: ASF GitHub Bot
Created on: 08/Jun/19 01:04
Start Date: 08/Jun/19 01:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #930: HDDS-1651. Create 
a http.policy config for Ozone
URL: https://github.com/apache/hadoop/pull/930#issuecomment-500080229
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 527 | trunk passed |
   | +1 | compile | 294 | trunk passed |
   | +1 | checkstyle | 90 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 890 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 188 | trunk passed |
   | 0 | spotbugs | 333 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 524 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 458 | the patch passed |
   | +1 | compile | 292 | the patch passed |
   | +1 | javac | 292 | the patch passed |
   | +1 | checkstyle | 97 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 694 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 181 | the patch passed |
   | +1 | findbugs | 538 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 148 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1337 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 57 | The patch does not generate ASF License warnings. |
   | | | 6546 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-930/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/930 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 5a57ea44076b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 76b94c2 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-930/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-930/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-930/1/testReport/ |
   | Max. process+thread count | 5361 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common U: hadoop-hdds/common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-930/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 256312)
Time Spent: 50m  (was: 40m)

> Create a http.policy config for Ozone
> -
>
> Key: HDDS-1651
> URL: 

[jira] [Commented] (HDFS-14545) RBF: Router should support GetUserMappingsProtocol

2019-06-07 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16859061#comment-16859061
 ] 

Ayush Saxena commented on HDFS-14545:
-

Thanx [~lukmajercak] for the review.
Handled the comments as part of v10.

> RBF: Router should support GetUserMappingsProtocol
> --
>
> Key: HDFS-14545
> URL: https://issues.apache.org/jira/browse/HDFS-14545
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14545-HDFS-13891-01.patch, 
> HDFS-14545-HDFS-13891-02.patch, HDFS-14545-HDFS-13891-03.patch, 
> HDFS-14545-HDFS-13891-04.patch, HDFS-14545-HDFS-13891-05.patch, 
> HDFS-14545-HDFS-13891-06.patch, HDFS-14545-HDFS-13891-07.patch, 
> HDFS-14545-HDFS-13891-08.patch, HDFS-14545-HDFS-13891-09.patch, 
> HDFS-14545-HDFS-13891-10.patch, HDFS-14545-HDFS-13891.000.patch
>
>
> We should be able to check the groups for a user from a Router.






[jira] [Updated] (HDFS-14545) RBF: Router should support GetUserMappingsProtocol

2019-06-07 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14545:

Attachment: HDFS-14545-HDFS-13891-10.patch

> RBF: Router should support GetUserMappingsProtocol
> --
>
> Key: HDFS-14545
> URL: https://issues.apache.org/jira/browse/HDFS-14545
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14545-HDFS-13891-01.patch, 
> HDFS-14545-HDFS-13891-02.patch, HDFS-14545-HDFS-13891-03.patch, 
> HDFS-14545-HDFS-13891-04.patch, HDFS-14545-HDFS-13891-05.patch, 
> HDFS-14545-HDFS-13891-06.patch, HDFS-14545-HDFS-13891-07.patch, 
> HDFS-14545-HDFS-13891-08.patch, HDFS-14545-HDFS-13891-09.patch, 
> HDFS-14545-HDFS-13891-10.patch, HDFS-14545-HDFS-13891.000.patch
>
>
> We should be able to check the groups for a user from a Router.






[jira] [Work logged] (HDDS-1543) Implement addAcl,removeAcl,setAcl,getAcl for Prefix

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1543?focusedWorklogId=256298&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256298
 ]

ASF GitHub Bot logged work on HDDS-1543:


Author: ASF GitHub Bot
Created on: 08/Jun/19 00:01
Start Date: 08/Jun/19 00:01
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #927: HDDS-1543. 
Implement addAcl,removeAcl,setAcl,getAcl for Prefix. Contr…
URL: https://github.com/apache/hadoop/pull/927#issuecomment-500073723
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 34 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 46 | Maven dependency ordering for branch |
   | +1 | mvninstall | 574 | trunk passed |
   | +1 | compile | 308 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 829 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 190 | trunk passed |
   | 0 | spotbugs | 340 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 528 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 485 | the patch passed |
   | +1 | compile | 313 | the patch passed |
   | +1 | cc | 313 | the patch passed |
   | +1 | javac | 313 | the patch passed |
   | -0 | checkstyle | 49 | hadoop-ozone: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 689 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 180 | the patch passed |
   | -1 | findbugs | 348 | hadoop-ozone generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   ||| _ Other Tests _ |
   | -1 | unit | 162 | hadoop-hdds in the patch failed. |
   | -1 | unit | 60 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 58 | The patch does not generate ASF License warnings. |
   | | | 5438 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  Potentially dangerous use of non-short-circuit logic in 
org.apache.hadoop.ozone.om.PrefixManagerImpl.removeAcl(OzoneObj, OzoneAcl)  At 
PrefixManagerImpl.java:logic in 
org.apache.hadoop.ozone.om.PrefixManagerImpl.removeAcl(OzoneObj, OzoneAcl)  At 
PrefixManagerImpl.java:[line 163] |
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.om.exceptions.TestResultCodes |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-927/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/927 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 19b79af66244 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 76b94c2 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-927/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-927/1/artifact/out/new-findbugs-hadoop-ozone.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-927/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-927/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-927/1/testReport/ |
   | Max. process+thread count | 440 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-927/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 


[jira] [Comment Edited] (HDDS-1567) Define a set of environment variables to configure Ozone docker image

2019-06-07 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16859044#comment-16859044
 ] 

Eric Yang edited comment on HDDS-1567 at 6/7/19 11:46 PM:
--

The existing set of configs used by docker-config is readily available in the 
source code, but some runtime operations cannot be applied via the config-file 
method, or it is tedious to populate the same config in multiple config files. 
It would be useful for the docker image to support global environment settings.

The important configs that may need to be passed in at deploy time, to apply to 
multiple config files or CLI commands:
{code}
OZONE_INIT = true|false
OZONE_DATA_DIR = /data
OZONE_RATIS_QUORUM
OZONE_OM_HOSTS
OZONE_SCM_HOSTS
OZONE_HTTP_KERBEROS_KEYTAB
OZONE_HTTP_KERBEROS_PRINCIPAL
OZONE_KERBEROS_KEYTAB
OZONE_KERBEROS_PRINCIPAL
OZONE_TLS_TRUSTSTORE
OZONE_TLS_KEYSTORE
{code}

This is the minimum set of configs that a system admin would want to control 
when bootstrapping a secure Ozone cluster.


was (Author: eyang):
The existing set of configs used by docker-config is readily available in the 
source code, but some runtime operations cannot be applied through the 
config-file method, and it is tedious to populate the same config in multiple 
config files.  It would be useful for the docker image to support global 
environment settings.

The important configs may need to be passed in at deploy time and applied to 
multiple config files or CLI commands:
{code}
OZONE_INIT = true|false
OZONE_DATA_DIR = /data
OZONE_RATIS_QUORUM
OZONE_OM_HOSTS
OZONE_SCM_HOSTS
OZONE_KERBEROS_KEYTAB
OZONE_KERBEROS_PRINCIPAL
OZONE_TLS_TRUSTSTORE
OZONE_TLS_KEYSTORE
{code}

This is the minimum set of configs that a system admin would want to control 
when bootstrapping a secure Ozone cluster.

> Define a set of environment variables to configure Ozone docker image
> -
>
> Key: HDDS-1567
> URL: https://issues.apache.org/jira/browse/HDDS-1567
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Priority: Major
>
> For a developer trying to set up the docker image by hand for testing 
> purposes, it would be nice to predefine a set of environment variables that 
> can be passed to the Ozone docker image to supply the minimum configuration 
> needed to start Ozone containers.  There is a python script that converts 
> environment variables to config, but the documentation does not show which 
> settings can be passed to configure the system.  This task would be a good 
> starting point to document the available configuration knobs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1567) Define a set of environment variables to configure Ozone docker image

2019-06-07 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16859045#comment-16859045
 ] 

Eric Yang commented on HDDS-1567:
-

[~anu] [~elek] I am all for config-less Ozone, but a few security or 
operational settings cannot be automated.  Let me know if there are more 
settings that are not covered in the list above.

> Define a set of environment variables to configure Ozone docker image
> -
>
> Key: HDDS-1567
> URL: https://issues.apache.org/jira/browse/HDDS-1567
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Priority: Major
>
> For a developer trying to set up the docker image by hand for testing 
> purposes, it would be nice to predefine a set of environment variables that 
> can be passed to the Ozone docker image to supply the minimum configuration 
> needed to start Ozone containers.  There is a python script that converts 
> environment variables to config, but the documentation does not show which 
> settings can be passed to configure the system.  This task would be a good 
> starting point to document the available configuration knobs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1567) Define a set of environment variables to configure Ozone docker image

2019-06-07 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16859044#comment-16859044
 ] 

Eric Yang commented on HDDS-1567:
-

The existing set of configs used by docker-config is readily available in the 
source code, but some runtime operations cannot be applied through the 
config-file method, and it is tedious to populate the same config in multiple 
config files.  It would be useful for the docker image to support global 
environment settings.

The important configs may need to be passed in at deploy time and applied to 
multiple config files or CLI commands:
{code}
OZONE_INIT = true|false
OZONE_DATA_DIR = /data
OZONE_RATIS_QUORUM
OZONE_OM_HOSTS
OZONE_SCM_HOSTS
OZONE_KERBEROS_KEYTAB
OZONE_KERBEROS_PRINCIPAL
OZONE_TLS_TRUSTSTORE
OZONE_TLS_KEYSTORE
{code}

This is the minimum set of configs that a system admin would want to control 
when bootstrapping a secure Ozone cluster.
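A minimal sketch of how such variables could be consumed, assuming the names in the list above; the defaults and the helper itself are illustrative only, not an official Ozone interface:

```python
import os

# Hypothetical defaults for two of the proposed OZONE_* variables.
DEFAULTS = {
    "OZONE_INIT": "false",
    "OZONE_DATA_DIR": "/data",
}

def env_or_default(name, env=None):
    """Return the variable's value from the environment, else its default."""
    env = os.environ if env is None else env
    return env.get(name) or DEFAULTS.get(name)

# With OZONE_INIT set and OZONE_DATA_DIR unset, the default applies:
print(env_or_default("OZONE_INIT", {"OZONE_INIT": "true"}))  # true
print(env_or_default("OZONE_DATA_DIR", {}))                  # /data
```

A bootstrap script could use such a helper to render the same value into several config files at deploy time, which is the duplication the comment describes.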

> Define a set of environment variables to configure Ozone docker image
> -
>
> Key: HDDS-1567
> URL: https://issues.apache.org/jira/browse/HDDS-1567
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Priority: Major
>
> For a developer trying to set up the docker image by hand for testing 
> purposes, it would be nice to predefine a set of environment variables that 
> can be passed to the Ozone docker image to supply the minimum configuration 
> needed to start Ozone containers.  There is a python script that converts 
> environment variables to config, but the documentation does not show which 
> settings can be passed to configure the system.  This task would be a good 
> starting point to document the available configuration knobs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1651) Create a http.policy config for Ozone

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1651?focusedWorklogId=256290&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256290
 ]

ASF GitHub Bot logged work on HDDS-1651:


Author: ASF GitHub Bot
Created on: 07/Jun/19 23:14
Start Date: 07/Jun/19 23:14
Worklog Time Spent: 10m 
  Work Description: shwetayakkali commented on pull request #930: 
HDDS-1651. Create a http.policy config for Ozone
URL: https://github.com/apache/hadoop/pull/930
 
 
   Change-Id: Ia284f685f6d39a512124e6055537615d325ae96b
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 256290)
Time Spent: 40m  (was: 0.5h)

> Create a http.policy config for Ozone
> -
>
> Key: HDDS-1651
> URL: https://issues.apache.org/jira/browse/HDDS-1651
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Shweta
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Ozone currently uses dfs.http.policy for its HTTP policy. Ozone should have 
> its own ozone.http.policy configuration and, if it is undefined, fall back 
> to dfs.http.policy.
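A minimal sketch of the proposed fallback chain, with the config represented as a plain dict rather than the actual Hadoop Configuration class; the HTTP_ONLY default mirrors HDFS's documented default but is an assumption here:

```python
def http_policy(conf):
    # Prefer the proposed ozone.http.policy key; fall back to dfs.http.policy,
    # then to HTTP_ONLY if neither is set.
    return conf.get("ozone.http.policy") or conf.get("dfs.http.policy", "HTTP_ONLY")

conf = {"dfs.http.policy": "HTTPS_ONLY"}
print(http_policy(conf))                  # HTTPS_ONLY (fallback applies)
conf["ozone.http.policy"] = "HTTP_AND_HTTPS"
print(http_policy(conf))                  # HTTP_AND_HTTPS
```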



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1651) Create a http.policy config for Ozone

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1651?focusedWorklogId=256289&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256289
 ]

ASF GitHub Bot logged work on HDDS-1651:


Author: ASF GitHub Bot
Created on: 07/Jun/19 23:12
Start Date: 07/Jun/19 23:12
Worklog Time Spent: 10m 
  Work Description: hanishakoneru commented on issue #929: HDDS-1651 Create 
a http.policy config for Ozone
URL: https://github.com/apache/hadoop/pull/929#issuecomment-500066933
 
 
   Hi @shwetayakkali, there seem to be some other commits also included in 
this PR. Could you please remove those?
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 256289)
Time Spent: 0.5h  (was: 20m)

> Create a http.policy config for Ozone
> -
>
> Key: HDDS-1651
> URL: https://issues.apache.org/jira/browse/HDDS-1651
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Shweta
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Ozone currently uses dfs.http.policy for its HTTP policy. Ozone should have 
> its own ozone.http.policy configuration and, if it is undefined, fall back 
> to dfs.http.policy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1651) Create a http.policy config for Ozone

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1651?focusedWorklogId=256287&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256287
 ]

ASF GitHub Bot logged work on HDDS-1651:


Author: ASF GitHub Bot
Created on: 07/Jun/19 23:09
Start Date: 07/Jun/19 23:09
Worklog Time Spent: 10m 
  Work Description: shwetayakkali commented on pull request #929: HDDS-1651 
Create a http.policy config for Ozone
URL: https://github.com/apache/hadoop/pull/929
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 256287)
Time Spent: 20m  (was: 10m)

> Create a http.policy config for Ozone
> -
>
> Key: HDDS-1651
> URL: https://issues.apache.org/jira/browse/HDDS-1651
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Shweta
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Ozone currently uses dfs.http.policy for its HTTP policy. Ozone should have 
> its own ozone.http.policy configuration and, if it is undefined, fall back 
> to dfs.http.policy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1651) Create a http.policy config for Ozone

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1651:
-
Labels: newbie pull-request-available  (was: newbie)

> Create a http.policy config for Ozone
> -
>
> Key: HDDS-1651
> URL: https://issues.apache.org/jira/browse/HDDS-1651
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Shweta
>Priority: Major
>  Labels: newbie, pull-request-available
>
> Ozone currently uses dfs.http.policy for its HTTP policy. Ozone should have 
> its own ozone.http.policy configuration and, if it is undefined, fall back 
> to dfs.http.policy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1651) Create a http.policy config for Ozone

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1651?focusedWorklogId=256286&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256286
 ]

ASF GitHub Bot logged work on HDDS-1651:


Author: ASF GitHub Bot
Created on: 07/Jun/19 23:08
Start Date: 07/Jun/19 23:08
Worklog Time Spent: 10m 
  Work Description: shwetayakkali commented on pull request #929: HDDS-1651 
Create a http.policy config for Ozone
URL: https://github.com/apache/hadoop/pull/929
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 256286)
Time Spent: 10m
Remaining Estimate: 0h

> Create a http.policy config for Ozone
> -
>
> Key: HDDS-1651
> URL: https://issues.apache.org/jira/browse/HDDS-1651
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Shweta
>Priority: Major
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Ozone currently uses dfs.http.policy for its HTTP policy. Ozone should have 
> its own ozone.http.policy configuration and, if it is undefined, fall back 
> to dfs.http.policy.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1657) Fix bug in volume and key native acl.

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1657?focusedWorklogId=256282&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256282
 ]

ASF GitHub Bot logged work on HDDS-1657:


Author: ASF GitHub Bot
Created on: 07/Jun/19 22:51
Start Date: 07/Jun/19 22:51
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on issue #926: HDDS-1657. Fix bug in 
volume and key native acl. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/926#issuecomment-500063144
 
 
   /retest
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 256282)
Time Spent: 40m  (was: 0.5h)

> Fix bug in volume and key native acl.
> -
>
> Key: HDDS-1657
> URL: https://issues.apache.org/jira/browse/HDDS-1657
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1657.00.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Fix bug in volume and key native acl.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1620) Implement Volume Write Requests to use Cache and DoubleBuffer

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1620?focusedWorklogId=256280&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256280
 ]

ASF GitHub Bot logged work on HDDS-1620:


Author: ASF GitHub Bot
Created on: 07/Jun/19 22:44
Start Date: 07/Jun/19 22:44
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #884: 
HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r291774709
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMClientRequest.java
 ##
 @@ -180,6 +181,17 @@ protected OMResponse 
createErrorOMResponse(OMResponse.Builder omResponse,
 return omResponse.build();
   }
 
+
+  /*
+   * This method sets the omRequest. This method will be called when
 
 Review comment:
   Also, preExecute will now be implemented by requests only when there is a 
need to change the OMRequest. So I changed preExecute from an abstract method 
and implemented it in OMClientRequest, which is the base class for all write 
requests.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 256280)
Time Spent: 2.5h  (was: 2h 20m)

> Implement Volume Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1620
> URL: https://issues.apache.org/jira/browse/HDDS-1620
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Implement volume write requests to use the OM cache and double buffer. 
> This Jira will add the changes to implement the volume operations; HA and 
> non-HA will have different code paths, but once all requests are implemented 
> there will be a single code path.
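A toy sketch of the double-buffer idea referenced in the issue title: writes land in a current buffer while the previously swapped-out batch is flushed, so writers are only briefly blocked. The class and names are illustrative, not the actual Ozone Manager implementation:

```python
import threading

class DoubleBuffer:
    """Toy double buffer: add() appends to the current buffer; flush()
    atomically swaps in an empty buffer, then writes out the old batch."""
    def __init__(self, flush_fn):
        self.current = []
        self.flush_fn = flush_fn
        self.lock = threading.Lock()

    def add(self, txn):
        with self.lock:
            self.current.append(txn)

    def flush(self):
        # Swap buffers under the lock, then flush the swapped-out batch
        # outside the lock so new writers are not blocked by I/O.
        with self.lock:
            batch, self.current = self.current, []
        self.flush_fn(batch)
        return len(batch)

flushed = []
buf = DoubleBuffer(flushed.extend)
buf.add("createVolume /vol1")
buf.add("setOwner /vol1 alice")
print(buf.flush())   # 2 entries flushed in one batch
print(flushed)
```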



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1620) Implement Volume Write Requests to use Cache and DoubleBuffer

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1620?focusedWorklogId=256274&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256274
 ]

ASF GitHub Bot logged work on HDDS-1620:


Author: ASF GitHub Bot
Created on: 07/Jun/19 22:41
Start Date: 07/Jun/19 22:41
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #884: 
HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r291774200
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMClientRequest.java
 ##
 @@ -180,6 +181,17 @@ protected OMResponse 
createErrorOMResponse(OMResponse.Builder omResponse,
 return omResponse.build();
   }
 
+
+  /*
+   * This method sets the omRequest. This method will be called when
 
 Review comment:
   This is added to help in the non-HA case, by avoiding the creation of the 
OMClientRequest object multiple times. This is not mandatory for HA.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 256274)
Time Spent: 2h 10m  (was: 2h)

> Implement Volume Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1620
> URL: https://issues.apache.org/jira/browse/HDDS-1620
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Implement volume write requests to use the OM cache and double buffer. 
> This Jira will add the changes to implement the volume operations; HA and 
> non-HA will have different code paths, but once all requests are implemented 
> there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1620) Implement Volume Write Requests to use Cache and DoubleBuffer

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1620?focusedWorklogId=256276&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256276
 ]

ASF GitHub Bot logged work on HDDS-1620:


Author: ASF GitHub Bot
Created on: 07/Jun/19 22:41
Start Date: 07/Jun/19 22:41
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #884: 
HDDS-1620. Implement Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r291774200
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMClientRequest.java
 ##
 @@ -180,6 +181,17 @@ protected OMResponse 
createErrorOMResponse(OMResponse.Builder omResponse,
 return omResponse.build();
   }
 
+
+  /*
+   * This method sets the omRequest. This method will be called when
 
 Review comment:
   This is added to help in the non-HA case, by avoiding the creation of the 
OMClientRequest object multiple times. This is not mandatory for HA.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 256276)
Time Spent: 2h 20m  (was: 2h 10m)

> Implement Volume Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1620
> URL: https://issues.apache.org/jira/browse/HDDS-1620
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Implement volume write requests to use the OM cache and double buffer. 
> This Jira will add the changes to implement the volume operations; HA and 
> non-HA will have different code paths, but once all requests are implemented 
> there will be a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1543) Implement addAcl,removeAcl,setAcl,getAcl for Prefix

2019-06-07 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-1543:
-
Status: Patch Available  (was: In Progress)

> Implement addAcl,removeAcl,setAcl,getAcl  for Prefix
> 
>
> Key: HDDS-1543
> URL: https://issues.apache.org/jira/browse/HDDS-1543
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl  for Prefix



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1543) Implement addAcl,removeAcl,setAcl,getAcl for Prefix

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1543?focusedWorklogId=256263&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256263
 ]

ASF GitHub Bot logged work on HDDS-1543:


Author: ASF GitHub Bot
Created on: 07/Jun/19 22:29
Start Date: 07/Jun/19 22:29
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #927: HDDS-1543. 
Implement addAcl,removeAcl,setAcl,getAcl for Prefix. Contr…
URL: https://github.com/apache/hadoop/pull/927
 
 
   …ibuted by Xiaoyu Yao.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 256263)
Time Spent: 10m
Remaining Estimate: 0h

> Implement addAcl,removeAcl,setAcl,getAcl  for Prefix
> 
>
> Key: HDDS-1543
> URL: https://issues.apache.org/jira/browse/HDDS-1543
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Implement addAcl,removeAcl,setAcl,getAcl  for Prefix



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1543) Implement addAcl,removeAcl,setAcl,getAcl for Prefix

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1543:
-
Labels: pull-request-available  (was: )

> Implement addAcl,removeAcl,setAcl,getAcl  for Prefix
> 
>
> Key: HDDS-1543
> URL: https://issues.apache.org/jira/browse/HDDS-1543
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>
> Implement addAcl,removeAcl,setAcl,getAcl  for Prefix



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1661) Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project

2019-06-07 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reassigned HDDS-1661:
---

Assignee: Bharat Viswanadham

> Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project
> --
>
> Key: HDDS-1661
> URL: https://issues.apache.org/jira/browse/HDDS-1661
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Bharat Viswanadham
>Priority: Major
>
> The Ozone source code is somewhat fragmented within the Hadoop source tree.  
> The current layout looks like:
> {code}
> hadoop/pom.ozone.xml
> ├── hadoop-hdds
> └── hadoop-ozone
> {code}
> It would be helpful to consolidate the project into a high-level grouping 
> such as:
> {code}
> hadoop
> └── hadoop-ozone-project/pom.xml
> └── hadoop-ozone-project/hadoop-hdds
> └── hadoop-ozone-project/hadoop-ozone
> {code}
> This allows users to build Ozone from the hadoop-ozone-project directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1660) Use Picocli for Ozone Manager

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1660?focusedWorklogId=256235&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256235
 ]

ASF GitHub Bot logged work on HDDS-1660:


Author: ASF GitHub Bot
Created on: 07/Jun/19 21:34
Start Date: 07/Jun/19 21:34
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #925: HDDS-1660 Use 
Picocli for Ozone Manager
URL: https://github.com/apache/hadoop/pull/925#issuecomment-500046405
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 17 | Maven dependency ordering for branch |
   | +1 | mvninstall | 526 | trunk passed |
   | +1 | compile | 316 | trunk passed |
   | +1 | checkstyle | 94 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 885 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 187 | trunk passed |
   | 0 | spotbugs | 349 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 547 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for patch |
   | +1 | mvninstall | 492 | the patch passed |
   | +1 | compile | 356 | the patch passed |
   | +1 | javac | 356 | the patch passed |
   | +1 | checkstyle | 125 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 26 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 771 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 185 | the patch passed |
   | +1 | findbugs | 564 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 201 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1546 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 62 | The patch does not generate ASF License warnings. |
   | | | 7279 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-925/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/925 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs xml 
compile javac javadoc mvninstall shadedclient findbugs checkstyle |
   | uname | Linux e74689f0819e 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4e38daf |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-925/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-925/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-925/2/testReport/ |
   | Max. process+thread count | 4285 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test hadoop-ozone/tools U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-925/2/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 256235)
Time 

[jira] [Work logged] (HDDS-1620) Implement Volume Write Requests to use Cache and DoubleBuffer

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1620?focusedWorklogId=256214&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256214
 ]

ASF GitHub Bot logged work on HDDS-1620:


Author: ASF GitHub Bot
Created on: 07/Jun/19 21:15
Start Date: 07/Jun/19 21:15
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #884: HDDS-1620. 
Implement Volume Write Requests to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/884#discussion_r291756985
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/OMClientRequest.java
 ##
 @@ -180,6 +181,17 @@ protected OMResponse 
createErrorOMResponse(OMResponse.Builder omResponse,
 return omResponse.build();
   }
 
+
+  /*
+   * This method sets the omRequest. This method will be called when
 
 Review comment:
   @bharatviswa504 can we eliminate the requirement that implementors call 
this method? Since preExecute always returns an OM request, the caller can 
take the result and update omRequest itself.
   
   It would be a good idea to minimize the work done by 
preExecute/validateAndUpdateCache and make as much of it common as possible.
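The pattern suggested in the review can be sketched with small stand-in classes (all names here are hypothetical illustrations, not the actual Ozone `OMRequest`/`OMClientRequest` types): `preExecute` returns the new request, and a single caller applies the result instead of every implementor invoking a setter.

```java
// Sketch of the suggested pattern using stand-in classes, not the real
// Ozone types: preExecute returns a (possibly modified) request, and the
// caller takes that return value and updates its own reference, so
// implementors no longer need to call a setter themselves.
public class PreExecuteSketch {

    /** Minimal stand-in for an OM request message. */
    static final class Request {
        final String payload;
        Request(String payload) { this.payload = payload; }
    }

    /** Stand-in client request wrapping the current request. */
    static final class ClientRequest {
        private Request omRequest;

        ClientRequest(Request omRequest) { this.omRequest = omRequest; }

        /** Returns a new request rather than mutating state in place. */
        Request preExecute() {
            return new Request(omRequest.payload + ":validated");
        }

        Request getOmRequest() { return omRequest; }

        /** The one place where preExecute's result is applied. */
        void runPreExecute() {
            this.omRequest = preExecute();
        }
    }

    public static void main(String[] args) {
        ClientRequest req = new ClientRequest(new Request("createVolume"));
        req.runPreExecute();
        System.out.println(req.getOmRequest().payload);
    }
}
```

This keeps the mutation in one common code path, which is in the spirit of the comment about minimizing the work each preExecute/validateAndUpdateCache implementation has to do.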
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 256214)
Time Spent: 2h  (was: 1h 50m)

> Implement Volume Write Requests to use Cache and DoubleBuffer
> -
>
> Key: HDDS-1620
> URL: https://issues.apache.org/jira/browse/HDDS-1620
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Implement volume write requests to use the OM cache and double buffer. 
> In this Jira we will add the changes to implement volume operations. HA and 
> non-HA will have different code paths for now, but once all requests are 
> implemented they will share a single code path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14090) RBF: Improved isolation for downstream name nodes.

2019-06-07 Thread CR Hota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

CR Hota updated HDFS-14090:
---
Attachment: HDFS-14090-HDFS-13891.002.patch

> RBF: Improved isolation for downstream name nodes.
> --
>
> Key: HDFS-14090
> URL: https://issues.apache.org/jira/browse/HDFS-14090
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
> Attachments: HDFS-14090-HDFS-13891.001.patch, 
> HDFS-14090-HDFS-13891.002.patch, RBF_ Isolation design.pdf
>
>
> Router is a gateway to underlying name nodes. Gateway architectures should 
> help minimize the impact on clients of connecting to healthy vs. unhealthy 
> clusters.
> For example, if there are 2 name nodes downstream and one of them is 
> heavily loaded, with calls spiking RPC queue times, the back pressure will 
> start reflecting on the router. As a result, clients connecting to 
> healthy/faster name nodes will also slow down, since the same RPC queue is 
> maintained for all calls at the router layer. Essentially, the router uses 
> the same IPC thread pool to connect to all name nodes.
> Currently the router uses one single RPC queue for all calls. Let's discuss 
> how we can change the architecture and add some throttling logic for 
> unhealthy/slow/overloaded name nodes.
> One way could be to read from the current call queue, immediately identify 
> the downstream name node, and maintain a separate queue for each underlying 
> name node. Another, simpler way is to maintain some sort of rate limiter 
> configured for each name node and let routers drop/reject/return errors for 
> requests beyond a certain threshold.
> This won't be a simple change, as the router's 'Server' layer would need 
> redesign and reimplementation. Currently this layer is the same as the name 
> node's.
> Opening this ticket to discuss, design and implement this feature.
>  
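The simpler rate-limiter idea from the description can be sketched as a per-name-node permit pool. This is a rough illustration only, not the actual RBF router code; the class and method names are hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

// Rough sketch of per-name-node throttling at the router: each downstream
// name node gets its own permit pool, so an overloaded or slow name node
// exhausts only its own permits while calls to healthy name nodes proceed.
public class PerNameNodeLimiter {
    private final Map<String, Semaphore> permits = new ConcurrentHashMap<>();
    private final int permitsPerNameNode;

    public PerNameNodeLimiter(int permitsPerNameNode) {
        this.permitsPerNameNode = permitsPerNameNode;
    }

    /** Try to admit a call to the given name node; false means throttle/reject. */
    public boolean tryAcquire(String nameNodeId) {
        return permits
            .computeIfAbsent(nameNodeId, id -> new Semaphore(permitsPerNameNode))
            .tryAcquire();
    }

    /** Release the permit once the downstream call completes. */
    public void release(String nameNodeId) {
        Semaphore s = permits.get(nameNodeId);
        if (s != null) {
            s.release();
        }
    }
}
```

A saturated name node then only rejects its own traffic: `tryAcquire("nn1")` can fail while `tryAcquire("nn2")` still succeeds, which is the isolation property the ticket is after.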



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1661) Consolidate hadoop-hdds and hadoop-ozone into hadoop-ozone-project

2019-06-07 Thread Eric Yang (JIRA)
Eric Yang created HDDS-1661:
---

 Summary: Consolidate hadoop-hdds and hadoop-ozone into 
hadoop-ozone-project
 Key: HDDS-1661
 URL: https://issues.apache.org/jira/browse/HDDS-1661
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Eric Yang


Ozone source code is somewhat fragmented within the Hadoop source tree.  The 
current code looks like:

{code}
hadoop/pom.ozone.xml
├── hadoop-hdds
└── hadoop-ozone
{code}

It would be helpful to consolidate the project into a high-level grouping such as:
{code}
hadoop
└── hadoop-ozone-project/pom.xml
└── hadoop-ozone-project/hadoop-hdds
└── hadoop-ozone-project/hadoop-ozone
{code}

This allows users to build Ozone from the hadoop-ozone-project directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1554) Create disk tests for fault injection test

2019-06-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16858957#comment-16858957
 ] 

Hadoop QA commented on HDDS-1554:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} yamllint {color} | {color:blue}  0m  
0s{color} | {color:blue} yamllint was not available. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
1s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 30 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  5m 
43s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
14s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 53s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 27m 57s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher |
|   | 

[jira] [Work logged] (HDDS-1657) Fix bug in volume and key native acl.

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1657?focusedWorklogId=256190&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256190
 ]

ASF GitHub Bot logged work on HDDS-1657:


Author: ASF GitHub Bot
Created on: 07/Jun/19 20:02
Start Date: 07/Jun/19 20:02
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on issue #926: HDDS-1657. Fix bug in 
volume and key native acl. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/926#issuecomment-500020281
 
 
   all test failures reported in jenkins pass locally.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 256190)
Time Spent: 0.5h  (was: 20m)

> Fix bug in volume and key native acl.
> -
>
> Key: HDDS-1657
> URL: https://issues.apache.org/jira/browse/HDDS-1657
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1657.00.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Fix bug in volume and key native acl.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1622) Use picocli for StorageContainerManager

2019-06-07 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1622:
---
Fix Version/s: 0.4.1

> Use picocli for StorageContainerManager
> ---
>
> Key: HDDS-1622
> URL: https://issues.apache.org/jira/browse/HDDS-1622
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.4.1
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> Recently we switched to using PicoCli for (almost) all of our daemons (e.g. 
> S3 Gateway, Freon, etc.).
> PicoCli has better output, can generate nice help, and is easier to use: 
> it's enough to add a few annotations, and we don't need all the boilerplate 
> code to print out help, etc.
> StorageContainerManager and OzoneManager are not yet supported. The previous 
> issue, HDDS-453, was closed, but since then we have improved the GenericCli 
> parser (e.g. in HDDS-1192), so I think we are ready to move.
> The main idea is to create a starter Java class similar to 
> org.apache.hadoop.ozone.s3.Gateway from which we can start 
> StorageContainerManager.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14358) Provide LiveNode and DeadNode filter in DataNode UI

2019-06-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16858923#comment-16858923
 ] 

Hadoop QA commented on HDFS-14358:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HDFS-14358 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14358 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12971202/HDFS-14358.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26922/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Provide LiveNode and DeadNode filter in DataNode UI
> ---
>
> Key: HDFS-14358
> URL: https://issues.apache.org/jira/browse/HDFS-14358
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.2
>Reporter: Ravuri Sushma sree
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14358.002.patch, HDFS-14358.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14358) Provide LiveNode and DeadNode filter in DataNode UI

2019-06-07 Thread hemanthboyina (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14358:
-
Attachment: HDFS-14358.002.patch

> Provide LiveNode and DeadNode filter in DataNode UI
> ---
>
> Key: HDFS-14358
> URL: https://issues.apache.org/jira/browse/HDFS-14358
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.2
>Reporter: Ravuri Sushma sree
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14358.002.patch, HDFS-14358.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1657) Fix bug in volume and key native acl.

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1657?focusedWorklogId=256165&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256165
 ]

ASF GitHub Bot logged work on HDDS-1657:


Author: ASF GitHub Bot
Created on: 07/Jun/19 19:15
Start Date: 07/Jun/19 19:15
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #926: HDDS-1657. Fix 
bug in volume and key native acl. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/926#issuecomment-56694
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for branch |
   | +1 | mvninstall | 495 | trunk passed |
   | +1 | compile | 297 | trunk passed |
   | +1 | checkstyle | 88 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 876 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 180 | trunk passed |
   | 0 | spotbugs | 329 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 518 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 465 | the patch passed |
   | +1 | compile | 304 | the patch passed |
   | +1 | javac | 304 | the patch passed |
   | +1 | checkstyle | 94 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 686 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | the patch passed |
   | +1 | findbugs | 538 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 154 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1216 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 6414 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.web.client.TestBuckets |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-926/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/926 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 4900dfcd8f25 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4e38daf |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-926/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-926/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-926/1/testReport/ |
   | Max. process+thread count | 4962 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-926/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 256165)
Time Spent: 20m  (was: 10m)

> Fix bug in volume and key native acl.
> -
>
> 

[jira] [Commented] (HDFS-14545) RBF: Router should support GetUserMappingsProtocol

2019-06-07 Thread Lukas Majercak (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16858912#comment-16858912
 ] 

Lukas Majercak commented on HDFS-14545:
---

ConnectionPool lines 410-411: it would be nice to either rename "clazz0" to 
something like "clazzProtoPb" or remove these variables altogether. 

Nitpicks: 
- RouterRpcServer line 361: missing space before "=" 
- RouterUserProtocol line 45: you don't need .getName() there? Also, maybe 
visually separate the static and non-static members.
- TestRouterUserMappings line 295: asserting length == 2 seems kind of vague; 
can we pass in the actual groups?


> RBF: Router should support GetUserMappingsProtocol
> --
>
> Key: HDFS-14545
> URL: https://issues.apache.org/jira/browse/HDFS-14545
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14545-HDFS-13891-01.patch, 
> HDFS-14545-HDFS-13891-02.patch, HDFS-14545-HDFS-13891-03.patch, 
> HDFS-14545-HDFS-13891-04.patch, HDFS-14545-HDFS-13891-05.patch, 
> HDFS-14545-HDFS-13891-06.patch, HDFS-14545-HDFS-13891-07.patch, 
> HDFS-14545-HDFS-13891-08.patch, HDFS-14545-HDFS-13891-09.patch, 
> HDFS-14545-HDFS-13891.000.patch
>
>
> We should be able to check the groups for a user from a Router.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14545) RBF: Router should support GetUserMappingsProtocol

2019-06-07 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16858873#comment-16858873
 ] 

Íñigo Goiri commented on HDFS-14545:


Let's go for this then.
[~lukmajercak] can you take a final look?

> RBF: Router should support GetUserMappingsProtocol
> --
>
> Key: HDFS-14545
> URL: https://issues.apache.org/jira/browse/HDFS-14545
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14545-HDFS-13891-01.patch, 
> HDFS-14545-HDFS-13891-02.patch, HDFS-14545-HDFS-13891-03.patch, 
> HDFS-14545-HDFS-13891-04.patch, HDFS-14545-HDFS-13891-05.patch, 
> HDFS-14545-HDFS-13891-06.patch, HDFS-14545-HDFS-13891-07.patch, 
> HDFS-14545-HDFS-13891-08.patch, HDFS-14545-HDFS-13891-09.patch, 
> HDFS-14545-HDFS-13891.000.patch
>
>
> We should be able to check the groups for a user from a Router.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1554) Create disk tests for fault injection test

2019-06-07 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16858871#comment-16858871
 ] 

Eric Yang commented on HDDS-1554:
-

Made a mistake with patch 003.  It contained some Dockerfile modifications 
specific to my development environment.  Revised patch 004 removes the 
environment-specific changes.

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch
>
>
> The current plan for fault injection disk tests are:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1554) Create disk tests for fault injection test

2019-06-07 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1554:

Attachment: HDDS-1554.004.patch

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch, HDDS-1554.004.patch
>
>
> The current plan for fault injection disk tests are:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1554) Create disk tests for fault injection test

2019-06-07 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16858868#comment-16858868
 ] 

Eric Yang commented on HDDS-1554:
-

Patch 003 fixes the checkstyle and whitespace issues.  The failed unit tests 
don't seem to be related to this patch.

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch
>
>
> The current plan for fault injection disk tests are:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1554) Create disk tests for fault injection test

2019-06-07 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1554:

Attachment: HDDS-1554.003.patch

> Create disk tests for fault injection test
> --
>
> Key: HDDS-1554
> URL: https://issues.apache.org/jira/browse/HDDS-1554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1554.001.patch, HDDS-1554.002.patch, 
> HDDS-1554.003.patch
>
>
> The current plan for fault injection disk tests are:
>  # Scenario 1 - Read/Write test
>  ## Run docker-compose to bring up a cluster
>  ## Initialize scm and om
>  ## Upload data to Ozone cluster
>  ## Verify data is correct
>  ## Shutdown cluster
>  # Scenario 2 - Read/Only test
>  ## Repeat Scenario 1
>  ## Mount data disk as read only
>  ## Try to write data to Ozone cluster
>  ## Validate error message is correct
>  ## Shutdown cluster
>  # Scenario 3 - Corruption test
>  ## Repeat Scenario 2
>  ## Shutdown cluster
>  ## Modify data disk data
>  ## Restart cluster
>  ## Validate error message for read from corrupted data
>  ## Validate error message for write to corrupted volume



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1660) Use Picocli for Ozone Manager

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1660?focusedWorklogId=256118&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256118
 ]

ASF GitHub Bot logged work on HDDS-1660:


Author: ASF GitHub Bot
Created on: 07/Jun/19 17:51
Start Date: 07/Jun/19 17:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #925: HDDS-1660 Use 
Picocli for Ozone Manager
URL: https://github.com/apache/hadoop/pull/925#issuecomment-499978782
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 37 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 6 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 47 | Maven dependency ordering for branch |
   | +1 | mvninstall | 545 | trunk passed |
   | +1 | compile | 300 | trunk passed |
   | +1 | checkstyle | 93 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 812 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 184 | trunk passed |
   | 0 | spotbugs | 386 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 592 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 493 | the patch passed |
   | +1 | compile | 312 | the patch passed |
   | +1 | javac | 312 | the patch passed |
   | +1 | checkstyle | 96 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 27 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 678 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 192 | the patch passed |
   | +1 | findbugs | 594 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 184 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2465 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 72 | The patch does not generate ASF License warnings. |
   | | | 8059 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.web.client.TestBuckets |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-925/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/925 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs xml 
compile javac javadoc mvninstall shadedclient findbugs checkstyle |
   | uname | Linux 22430c5e1af7 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 14552d1 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-925/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-925/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-925/1/testReport/ |
   | Max. process+thread count | 4204 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test hadoop-ozone/tools U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-925/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[jira] [Updated] (HDDS-1657) Fix bug in volume and key native acl.

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1657:
-
Labels: pull-request-available  (was: )

> Fix bug in volume and key native acl.
> -
>
> Key: HDDS-1657
> URL: https://issues.apache.org/jira/browse/HDDS-1657
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1657.00.patch
>
>
> Fix bug in volume and key native acl.






[jira] [Work logged] (HDDS-1659) Define the process to add proposal/design docs to the Ozone subproject

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1659?focusedWorklogId=256091=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256091
 ]

ASF GitHub Bot logged work on HDDS-1659:


Author: ASF GitHub Bot
Created on: 07/Jun/19 17:28
Start Date: 07/Jun/19 17:28
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #922: HDDS-1659. Define 
the process to add proposal/design docs to the Ozone subproject
URL: https://github.com/apache/hadoop/pull/922#issuecomment-499970928
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 106 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | yamllint | 1 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 529 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1364 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 445 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 730 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 2899 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-922/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/922 |
   | Optional Tests | dupname asflicense mvnsite yamllint |
   | uname | Linux 658bedf076e5 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8547957 |
   | Max. process+thread count | 306 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/docs U: hadoop-hdds/docs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-922/2/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 256091)
Time Spent: 1h 10m  (was: 1h)

> Define the process to add proposal/design docs to the Ozone subproject
> --
>
> Key: HDDS-1659
> URL: https://issues.apache.org/jira/browse/HDDS-1659
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We think it would be more effective to collect all the design docs in 
> one place and make it easier for the community to review them.
> We propose to follow an approach where the proposals are committed to the 
> hadoop-hdds/docs project and the review can be the same as the review of a PR.






[jira] [Work logged] (HDDS-1657) Fix bug in volume and key native acl.

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1657?focusedWorklogId=256090=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256090
 ]

ASF GitHub Bot logged work on HDDS-1657:


Author: ASF GitHub Bot
Created on: 07/Jun/19 17:28
Start Date: 07/Jun/19 17:28
Worklog Time Spent: 10m 
  Work Description: ajayydv commented on pull request #926: HDDS-1657. Fix 
bug in volume and key native acl. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/926
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 256090)
Time Spent: 10m
Remaining Estimate: 0h

> Fix bug in volume and key native acl.
> -
>
> Key: HDDS-1657
> URL: https://issues.apache.org/jira/browse/HDDS-1657
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1657.00.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Fix bug in volume and key native acl.






[jira] [Commented] (HDFS-14545) RBF: Router should support GetUserMappingsProtocol

2019-06-07 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16858830#comment-16858830
 ] 

Ayush Saxena commented on HDFS-14545:
-

Thanx [~elgoiri].
We weren't handling this as an exception earlier; these exceptions were 
introduced to make the code generic. Ideally this path shouldn't throw an 
exception, since all of its inputs are predefined and hard-coded by us and 
there is no user-level input, so I guess we can leave it as is. We do log the 
error message; the most we could do is wrap that message into an IOException 
and throw it back. But frankly we already have a proto check beforehand, and 
all the corresponding classes are ones we define ourselves, so they aren't 
going to change. I suggest we leave this as is, or we still have the option to 
go back and follow the same four-method pattern if something feels wrong here.

I also tried this on an actual cluster, and it worked as expected. You may 
give it a try too. We can hold off and rethink for some time if you suspect we 
have missed something critical.
Let me know what can be done. :)

 

> RBF: Router should support GetUserMappingsProtocol
> --
>
> Key: HDFS-14545
> URL: https://issues.apache.org/jira/browse/HDFS-14545
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14545-HDFS-13891-01.patch, 
> HDFS-14545-HDFS-13891-02.patch, HDFS-14545-HDFS-13891-03.patch, 
> HDFS-14545-HDFS-13891-04.patch, HDFS-14545-HDFS-13891-05.patch, 
> HDFS-14545-HDFS-13891-06.patch, HDFS-14545-HDFS-13891-07.patch, 
> HDFS-14545-HDFS-13891-08.patch, HDFS-14545-HDFS-13891-09.patch, 
> HDFS-14545-HDFS-13891.000.patch
>
>
> We should be able to check the groups for a user from a Router.






[jira] [Work logged] (HDDS-1654) Ensure container state on datanode gets synced to disk whenever state change happens

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1654?focusedWorklogId=256079=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256079
 ]

ASF GitHub Bot logged work on HDDS-1654:


Author: ASF GitHub Bot
Created on: 07/Jun/19 17:11
Start Date: 07/Jun/19 17:11
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #923: 
HDDS-1654. Ensure container state on datanode gets synced to disk whenever 
state change happens.
URL: https://github.com/apache/hadoop/pull/923#discussion_r291679260
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -249,6 +249,9 @@ public long takeSnapshot() throws IOException {
   LOG.info("Taking a snapshot to file {}", snapshotFile);
   try (FileOutputStream fos = new FileOutputStream(snapshotFile)) {
 persistContainerSet(fos);
+fos.flush();
+// make sure the snapshot file is synced
 
 Review comment:
Why do we need to make sure this is flushed to disk?
Asking because in OM we also take a snapshot with the lastAppliedIndex 
value, so I'm trying to understand whether we need to do the same over there.
I'd also like to understand why we need this; has it caused any issues?
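The pattern being reviewed here (flush the stream, then sync the file descriptor) can be sketched in isolation as follows. This is a minimal standalone sketch, not the actual ContainerStateMachine code; the class, method, and file names are illustrative:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class SnapshotSync {

    // Write a snapshot and force it to durable storage. flush() only drains
    // user-space stream buffers to the OS; getFD().sync() (fsync) asks the OS
    // to push the data down to the physical device, so the file survives a
    // crash or power loss right after the write.
    public static void writeDurably(File snapshotFile, byte[] data)
            throws IOException {
        try (FileOutputStream fos = new FileOutputStream(snapshotFile)) {
            fos.write(data);
            fos.flush();        // stream buffers -> OS page cache
            fos.getFD().sync(); // OS page cache -> disk
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("snapshot", ".tmp");
        f.deleteOnExit();
        writeDurably(f, "container-state".getBytes());
        System.out.println(f.length()); // 15
    }
}
```

Without the sync() call the data can sit in the OS page cache and be lost on power failure, which is presumably the motivation for adding it after persistContainerSet().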
 



Issue Time Tracking
---

Worklog Id: (was: 256079)
Time Spent: 50m  (was: 40m)

> Ensure container state on datanode gets synced to disk whenever state change 
> happens
> 
>
> Key: HDDS-1654
> URL: https://issues.apache.org/jira/browse/HDDS-1654
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently, whenever there is a container state change, it updates the 
> container but doesn't sync.
> The idea here is to force-sync the state to disk every time there is a 
> state change.






[jira] [Commented] (HDFS-12914) Block report leases cause missing blocks until next report

2019-06-07 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16858809#comment-16858809
 ] 

Íñigo Goiri commented on HDFS-12914:


I would like somebody with a little more experience with this to give a good 
review, but for now a couple of minor comments:
* {{checkBlockReportLease()}} could check for {{context == null}} at the 
beginning and return true there right away; then the final return would be just 
the {{checkLease()}}.
* When {{NameNodeRpcServer}} catches the {{UnregisteredNodeException}} we 
probably want to log that.
* We could use a lambda for {{runBlockOp()}}.
* Use {{assertNotNull()}} instead of {{assertTrue(datanodeCommand != null)}}; 
actually can we check for the actual command?
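The first bullet (hoisting the null check) could look roughly like this. The class, field, and method bodies below are hypothetical stand-ins for the NameNode code; only the control-flow shape follows the suggestion:

```java
// Sketch of the suggested early-return guard: check context == null at the
// top and return true right away, so the final return is just checkLease().
public class LeaseCheckSketch {

    static class Context {
        long leaseId;
    }

    // Stand-in for the real lease validation in BlockReportLeaseManager.
    static boolean checkLease(Context ctx) {
        return ctx.leaseId > 0;
    }

    static boolean checkBlockReportLease(Context context) {
        if (context == null) {
            // No lease context attached to the report: nothing to validate.
            return true;
        }
        return checkLease(context);
    }

    public static void main(String[] args) {
        System.out.println(checkBlockReportLease(null)); // true
    }
}
```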

> Block report leases cause missing blocks until next report
> --
>
> Key: HDFS-12914
> URL: https://issues.apache.org/jira/browse/HDFS-12914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 2.9.2
>Reporter: Daryn Sharp
>Assignee: Santosh Marella
>Priority: Critical
> Attachments: HDFS-12914-branch-2.001.patch, 
> HDFS-12914-trunk.00.patch, HDFS-12914-trunk.01.patch, HDFS-12914.005.patch, 
> HDFS-12914.006.patch
>
>
> {{BlockReportLeaseManager#checkLease}} will reject FBRs from DNs for 
> conditions such as "unknown datanode", "not in pending set", "lease has 
> expired", wrong lease id, etc.  Lease rejection does not throw an exception; 
> it returns false, which bubbles up to {{NameNodeRpcServer#blockReport}} and is 
> interpreted as {{noStaleStorages}}.
> A re-registering node whose FBR is rejected due to an invalid lease becomes 
> active with _no blocks_.  A replication storm ensues, possibly causing DNs to 
> temporarily go dead (HDFS-12645), leading to more FBR lease rejections on 
> re-registration.  The cluster will have many "missing blocks" until the DNs' 
> next FBR is sent and/or forced.






[jira] [Commented] (HDFS-14513) FSImage which is saving should be clean while NameNode shutdown

2019-06-07 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16858804#comment-16858804
 ] 

Íñigo Goiri commented on HDFS-14513:


[^HDFS-14513.007.patch] LGTM.
+1

> FSImage which is saving should be clean while NameNode shutdown
> ---
>
> Key: HDFS-14513
> URL: https://issues.apache.org/jira/browse/HDFS-14513
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14513.001.patch, HDFS-14513.002.patch, 
> HDFS-14513.003.patch, HDFS-14513.004.patch, HDFS-14513.005.patch, 
> HDFS-14513.006.patch, HDFS-14513.007.patch
>
>
> Checkpointer/FSImageSaver are regular tasks that dump the NameNode meta to 
> disk, at most once per hour by default. If the NameNode receives a command 
> (e.g. transition to active in HA mode) it will cancel the checkpoint and 
> delete the tmp files using {{FSImage#deleteCancelledCheckpoint}}. However, if 
> the NameNode shuts down during a checkpoint, the tmp files are never cleaned 
> up.
> Consider a namespace with 500m inodes+blocks: a single checkpoint could take 
> 5~10 min, so if we shut down the NameNode during checkpointing, the fsimage 
> checkpoint file is never cleaned. Over time there could be many useless 
> checkpoint files, so I propose that we add a hook to clean them up on shutdown.
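The proposed shutdown-time cleanup could be sketched as a JVM shutdown hook. The directory layout and the `.ckpt` suffix here are assumptions for illustration, not the actual HDFS checkpoint naming:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class CheckpointCleanup {

    // Delete half-written checkpoint files left behind by an interrupted
    // checkpoint; returns how many files were removed. The ".ckpt" suffix
    // is illustrative, not the exact HDFS naming scheme.
    static int cleanTmpCheckpoints(File checkpointDir) {
        File[] tmp = checkpointDir.listFiles((d, name) -> name.endsWith(".ckpt"));
        int removed = 0;
        if (tmp != null) {
            for (File f : tmp) {
                if (f.delete()) {
                    removed++;
                }
            }
        }
        return removed;
    }

    // The proposed hook: run the same cleanup when the JVM shuts down,
    // so a NameNode killed mid-checkpoint does not leak tmp images.
    static void registerCleanupHook(File checkpointDir) {
        Runtime.getRuntime().addShutdownHook(
            new Thread(() -> cleanTmpCheckpoints(checkpointDir)));
    }

    public static void main(String[] args) throws IOException {
        File dir = Files.createTempDirectory("nn-ckpt-demo").toFile();
        new File(dir, "fsimage_0001.ckpt").createNewFile();
        registerCleanupHook(dir);
        System.out.println(cleanTmpCheckpoints(dir)); // 1
    }
}
```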






[jira] [Commented] (HDFS-14545) RBF: Router should support GetUserMappingsProtocol

2019-06-07 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16858802#comment-16858802
 ] 

Íñigo Goiri commented on HDFS-14545:


Thanks [~ayushtkn] for the updates.
I'm good with [^HDFS-14545-HDFS-13891-09.patch], but I want to bring up a 
couple of things:
* The coverage is OK but could be improved. Is there anything easy we can do? 
Otherwise we can leave it as is.
* The exception handling for the constructor is kind of weird, but I'm not sure 
we can do much better. The main issue is that if we hit an exception we end up 
returning null, and we don't handle that case. It may be good to throw the 
exception or improve the logging.

> RBF: Router should support GetUserMappingsProtocol
> --
>
> Key: HDFS-14545
> URL: https://issues.apache.org/jira/browse/HDFS-14545
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14545-HDFS-13891-01.patch, 
> HDFS-14545-HDFS-13891-02.patch, HDFS-14545-HDFS-13891-03.patch, 
> HDFS-14545-HDFS-13891-04.patch, HDFS-14545-HDFS-13891-05.patch, 
> HDFS-14545-HDFS-13891-06.patch, HDFS-14545-HDFS-13891-07.patch, 
> HDFS-14545-HDFS-13891-08.patch, HDFS-14545-HDFS-13891-09.patch, 
> HDFS-14545-HDFS-13891.000.patch
>
>
> We should be able to check the groups for a user from a Router.






[jira] [Work logged] (HDDS-1659) Define the process to add proposal/design docs to the Ozone subproject

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1659?focusedWorklogId=256048=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256048
 ]

ASF GitHub Bot logged work on HDDS-1659:


Author: ASF GitHub Bot
Created on: 07/Jun/19 16:39
Start Date: 07/Jun/19 16:39
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #922: HDDS-1659. Define 
the process to add proposal/design docs to the Ozone subproject
URL: https://github.com/apache/hadoop/pull/922#discussion_r291669157
 
 

 ##
 File path: hadoop-hdds/docs/content/design/ozone-enhancement-proposals.md
 ##
 @@ -0,0 +1,97 @@
+---
+title: Ozone Enhancement Proposals
+summary: Definition of the process to share new technical proposals with the 
Ozone community.
+date: 2019-06-07
+jira: HDDS-1659
+status: current
+author: Anu Engineer, Marton Elek
+---
+
+## Problem statement
+
+Some of the bigger features require well-defined plans before the 
implementation. Until now this was managed by uploading PDF design docs to 
selected JIRAs. There are multiple problems with the current practice:
+
+ 1. There is no easy way to find existing up-to-date and outdated design docs.
+ 2. Design docs usually have a better description of the problem than the user 
docs.
+ 3. We need better tools to discuss the design docs in the development phase 
of the doc.
+
+We propose to follow the same process we have now, but instead of 
uploading a PDF to the JIRA, create a PR to merge the proposal document into 
the documentation project.
+
+## Non-goals
+
+ * Modify the existing workflow or approval process
+ * Migrate existing documents
+ * Make it harder to create design docs (it should be easy to support the 
creation of proposals for any kind of tasks)
+
+## Proposed solution
+
+ * Open a dedicated Jira (`HDDS-*` but with specific component)
+ * Use a standard name prefix in the jira (easy to filter on the mailing list): 
`[OEP]`
+ * Create a PR to merge the design doc (markdown) to 
`hadoop-hdds/docs/content/proposal` (will be part of the docs)
 
 Review comment:
   Update: I removed the HTTP/template changes.
 



Issue Time Tracking
---

Worklog Id: (was: 256048)
Time Spent: 1h  (was: 50m)

> Define the process to add proposal/design docs to the Ozone subproject
> --
>
> Key: HDDS-1659
> URL: https://issues.apache.org/jira/browse/HDDS-1659
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We think it would be more effective to collect all the design docs in 
> one place and make it easier for the community to review them.
> We propose to follow an approach where the proposals are committed to the 
> hadoop-hdds/docs project and the review can be the same as the review of a PR.






[jira] [Work logged] (HDDS-1659) Define the process to add proposal/design docs to the Ozone subproject

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1659?focusedWorklogId=256044=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256044
 ]

ASF GitHub Bot logged work on HDDS-1659:


Author: ASF GitHub Bot
Created on: 07/Jun/19 16:35
Start Date: 07/Jun/19 16:35
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #922: HDDS-1659. Define 
the process to add proposal/design docs to the Ozone subproject
URL: https://github.com/apache/hadoop/pull/922#discussion_r291667783
 
 

 ##
 File path: hadoop-hdds/docs/content/design/ozone-enhancement-proposals.md
 ##
 @@ -0,0 +1,97 @@
+---
+title: Ozone Enhancement Proposals
+summary: Definition of the process to share new technical proposals with the 
Ozone community.
+date: 2019-06-07
+jira: HDDS-1659
+status: current
+author: Anu Engineer, Marton Elek
+---
+
+## Problem statement
+
+Some of the bigger features require well-defined plans before the 
implementation. Until now this was managed by uploading PDF design docs to 
selected JIRAs. There are multiple problems with the current practice:
+
+ 1. There is no easy way to find existing up-to-date and outdated design docs.
+ 2. Design docs usually have a better description of the problem than the user 
docs.
+ 3. We need better tools to discuss the design docs in the development phase 
of the doc.
+
+We propose to follow the same process we have now, but instead of 
uploading a PDF to the JIRA, create a PR to merge the proposal document into 
the documentation project.
+
+## Non-goals
+
+ * Modify the existing workflow or approval process
+ * Migrate existing documents
+ * Make it harder to create design docs (it should be easy to support the 
creation of proposals for any kind of tasks)
+
+## Proposed solution
+
+ * Open a dedicated Jira (`HDDS-*` but with specific component)
+ * Use a standard name prefix in the jira (easy to filter on the mailing list): 
`[OEP]`
+ * Create a PR to merge the design doc (markdown) to 
`hadoop-hdds/docs/content/proposal` (will be part of the docs)
 
 Review comment:
   Note: this is a complex PR because it contains the changes to show design 
docs on the docs page. A normal design doc would contain just one markdown file.
 



Issue Time Tracking
---

Worklog Id: (was: 256044)
Time Spent: 50m  (was: 40m)

> Define the process to add proposal/design docs to the Ozone subproject
> --
>
> Key: HDDS-1659
> URL: https://issues.apache.org/jira/browse/HDDS-1659
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> We think it would be more effective to collect all the design docs in 
> one place and make it easier for the community to review them.
> We propose to follow an approach where the proposals are committed to the 
> hadoop-hdds/docs project and the review can be the same as the review of a PR.






[jira] [Work logged] (HDDS-1659) Define the process to add proposal/design docs to the Ozone subproject

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1659?focusedWorklogId=256037=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256037
 ]

ASF GitHub Bot logged work on HDDS-1659:


Author: ASF GitHub Bot
Created on: 07/Jun/19 16:31
Start Date: 07/Jun/19 16:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #922: HDDS-1659. Define 
the process to add proposal/design docs to the Ozone subproject
URL: https://github.com/apache/hadoop/pull/922#issuecomment-499833695
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 76 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | yamllint | 1 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 550 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1404 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 479 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 711 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 2899 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-922/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/922 |
   | Optional Tests | dupname asflicense mvnsite yamllint |
   | uname | Linux b807df69b643 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a91d24f |
   | Max. process+thread count | 327 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/docs U: hadoop-hdds/docs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-922/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 256037)
Time Spent: 40m  (was: 0.5h)

> Define the process to add proposal/design docs to the Ozone subproject
> --
>
> Key: HDDS-1659
> URL: https://issues.apache.org/jira/browse/HDDS-1659
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> We think it would be more effective to collect all the design docs in 
> one place and make it easier for the community to review them.
> We propose to follow an approach where the proposals are committed to the 
> hadoop-hdds/docs project and the review can be the same as the review of a PR.






[jira] [Work logged] (HDDS-1659) Define the process to add proposal/design docs to the Ozone subproject

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1659?focusedWorklogId=256036=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256036
 ]

ASF GitHub Bot logged work on HDDS-1659:


Author: ASF GitHub Bot
Created on: 07/Jun/19 16:30
Start Date: 07/Jun/19 16:30
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #922: HDDS-1659. Define 
the process to add proposal/design docs to the Ozone subproject
URL: https://github.com/apache/hadoop/pull/922#discussion_r291666382
 
 

 ##
 File path: hadoop-hdds/docs/content/design/ozone-enhancement-proposals.md
 ##
 @@ -0,0 +1,97 @@
+---
+title: Ozone Enhancement Proposals
+summary: Definition of the process to share new technical proposals with the 
Ozone community.
+date: 2019-06-07
+jira: HDDS-1659
+status: current
+author: Anu Engineer, Marton Elek
+---
+
+## Problem statement
+
+Some of the biggers features requires well defined plans before the 
implementation. Until now it was managed by uploading PDF design docs to 
selected JIRA. There are multiple problems with the current practice.
+
+ 1. There is no easy way to find existing up-to-date and outdated design docs.
+ 2. Design docs usually have a better description of the problem than the user 
docs do.
+ 3. We need better tools to discuss the design docs during the development 
phase of the doc.
+
+We propose to follow the same process as we have now, but instead of 
uploading a PDF to the JIRA, create a PR to merge the proposal document into the 
documentation project.
+
+## Non-goals
+
+ * Modify the existing workflow or approval process
+ * Migrate existing documents
+ * Make it harder to create design docs (it should be easy to support the 
creation of proposals for any kind of task)
+
+## Proposed solution
+
+ * Open a dedicated Jira (`HDDS-*` but with specific component)
+ * Use a standard name prefix, `[OEP]`, in the jira title (easy to filter on 
the mailing list)
+ * Create a PR to merge the design doc (markdown) to 
`hadoop-hdds/docs/content/proposal` (will be part of the docs)
 
 Review comment:
   During an offline discussion @arp7 had some concerns about the usability of 
reviewing markdown files in PR vs. reviewing google docs.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 256036)
Time Spent: 0.5h  (was: 20m)

> Define the process to add proposal/design docs to the Ozone subproject
> --
>
> Key: HDDS-1659
> URL: https://issues.apache.org/jira/browse/HDDS-1659
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We think that it would be more effective to collect all the design docs in 
> one place and make it easier to review them by the community.
> We propose to follow an approach where the proposals are committed to the 
> hadoop-hdds/docs project and the review can be the same as a review of a PR






[jira] [Commented] (HDDS-1622) Use picocli for StorageContainerManager

2019-06-07 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16858781#comment-16858781
 ] 

Hudson commented on HDDS-1622:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16704 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16704/])
HDDS-1622. Use picocli for StorageContainerManager (elek: rev 
85479577da1b8934cfbd97fa815985399f19d933)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestSecureOzoneCluster.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/TestStorageContainerManager.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManagerStarter.java
* (add) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMStarterInterface.java
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneClusterImpl.java
* (edit) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/HddsTestUtils.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
* (add) 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/server/TestStorageContainerManagerStarter.java
* (edit) hadoop-ozone/common/src/main/bin/ozone


> Use picocli for StorageContainerManager
> ---
>
> Key: HDDS-1622
> URL: https://issues.apache.org/jira/browse/HDDS-1622
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> Recently we switched to PicoCli for (almost) all of our daemons (e.g. S3 
> Gateway, Freon, etc.).
> PicoCli has better output, it can generate nice help text, and it is easier to 
> use: it's enough to add a few annotations, and we don't need all the 
> boilerplate code to print out help, etc.
> StorageContainerManager and OzoneManager are not yet supported. The previous 
> issue, HDDS-453, was closed, but since then we improved the GenericCli parser 
> (e.g. in HDDS-1192), so I think we are ready to move.
> The main idea is to create a starter class similar to 
> org.apache.hadoop.ozone.s3.Gateway and start StorageContainerManager 
> from there.
>  
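The annotation-driven style described above can be illustrated with a toy sketch. This is not picocli's actual API (real code would use picocli's @CommandLine.Option annotations and Ozone's GenericCli base class); it only mimics the idea that a starter class declares its options as annotated fields and a generic parser handles them, with no hand-written argument loop per daemon. All names here are hypothetical.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

/** Toy illustration of annotation-driven CLI parsing in the spirit of picocli. */
public class ScmStarterSketch {

  /** Stand-in for picocli's option annotation (hypothetical). */
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.FIELD)
  @interface Option {
    String name();
  }

  /** Hypothetical starter: fields describe the CLI, no per-option parsing code. */
  public static class StarterCommand {
    @Option(name = "--init")
    boolean init;

    @Option(name = "--clusterid")
    String clusterId;
  }

  /** Minimal reflective parser: walks the annotated fields instead of boilerplate if/else. */
  static StarterCommand parse(String... args) throws Exception {
    StarterCommand cmd = new StarterCommand();
    for (int i = 0; i < args.length; i++) {
      for (Field f : StarterCommand.class.getDeclaredFields()) {
        Option opt = f.getAnnotation(Option.class);
        if (opt != null && opt.name().equals(args[i])) {
          if (f.getType() == boolean.class) {
            f.setBoolean(cmd, true);   // flag option, no value
          } else {
            f.set(cmd, args[++i]);     // option followed by a value
          }
        }
      }
    }
    return cmd;
  }

  public static void main(String[] args) throws Exception {
    StarterCommand cmd = parse("--init", "--clusterid", "CID-1234");
    System.out.println("init=" + cmd.init + " clusterId=" + cmd.clusterId);
  }
}
```

With picocli the parse loop disappears entirely; the library generates it (plus help text) from the annotations, which is the boilerplate saving the description refers to.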






[jira] [Work logged] (HDDS-1622) Use picocli for StorageContainerManager

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1622?focusedWorklogId=256005=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-256005
 ]

ASF GitHub Bot logged work on HDDS-1622:


Author: ASF GitHub Bot
Created on: 07/Jun/19 15:57
Start Date: 07/Jun/19 15:57
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #918: HDDS-1622 Use 
picocli for StorageContainerManager
URL: https://github.com/apache/hadoop/pull/918
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 256005)
Time Spent: 4h 20m  (was: 4h 10m)

> Use picocli for StorageContainerManager
> ---
>
> Key: HDDS-1622
> URL: https://issues.apache.org/jira/browse/HDDS-1622
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> Recently we switched to PicoCli for (almost) all of our daemons (e.g. S3 
> Gateway, Freon, etc.).
> PicoCli has better output, it can generate nice help text, and it is easier to 
> use: it's enough to add a few annotations, and we don't need all the 
> boilerplate code to print out help, etc.
> StorageContainerManager and OzoneManager are not yet supported. The previous 
> issue, HDDS-453, was closed, but since then we improved the GenericCli parser 
> (e.g. in HDDS-1192), so I think we are ready to move.
> The main idea is to create a starter class similar to 
> org.apache.hadoop.ozone.s3.Gateway and start StorageContainerManager 
> from there.
>  






[jira] [Resolved] (HDDS-1622) Use picocli for StorageContainerManager

2019-06-07 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton resolved HDDS-1622.

Resolution: Fixed

> Use picocli for StorageContainerManager
> ---
>
> Key: HDDS-1622
> URL: https://issues.apache.org/jira/browse/HDDS-1622
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> Recently we switched to PicoCli for (almost) all of our daemons (e.g. S3 
> Gateway, Freon, etc.).
> PicoCli has better output, it can generate nice help text, and it is easier to 
> use: it's enough to add a few annotations, and we don't need all the 
> boilerplate code to print out help, etc.
> StorageContainerManager and OzoneManager are not yet supported. The previous 
> issue, HDDS-453, was closed, but since then we improved the GenericCli parser 
> (e.g. in HDDS-1192), so I think we are ready to move.
> The main idea is to create a starter class similar to 
> org.apache.hadoop.ozone.s3.Gateway and start StorageContainerManager 
> from there.
>  






[jira] [Work logged] (HDDS-1660) Use Picocli for Ozone Manager

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1660?focusedWorklogId=255990=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-255990
 ]

ASF GitHub Bot logged work on HDDS-1660:


Author: ASF GitHub Bot
Created on: 07/Jun/19 15:36
Start Date: 07/Jun/19 15:36
Worklog Time Spent: 10m 
  Work Description: sodonnel commented on issue #925: HDDS-1660 Use Picocli 
for Ozone Manager
URL: https://github.com/apache/hadoop/pull/925#issuecomment-499933530
 
 
   /label ozone
 



Issue Time Tracking
---

Worklog Id: (was: 255990)
Time Spent: 20m  (was: 10m)

> Use Picocli for Ozone Manager
> -
>
> Key: HDDS-1660
> URL: https://issues.apache.org/jira/browse/HDDS-1660
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Replicate the changes made in HDDS-1622 for the StorageContainerManager to 
> the Ozone Manager, so it also uses Picocli for the command line interface.






[jira] [Updated] (HDDS-1660) Use Picocli for Ozone Manager

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1660:
-
Labels: pull-request-available  (was: )

> Use Picocli for Ozone Manager
> -
>
> Key: HDDS-1660
> URL: https://issues.apache.org/jira/browse/HDDS-1660
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>
> Replicate the changes made in HDDS-1622 for the StorageContainerManager to 
> the Ozone Manager, so it also uses Picocli for the command line interface.






[jira] [Work logged] (HDDS-1660) Use Picocli for Ozone Manager

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1660?focusedWorklogId=255989=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-255989
 ]

ASF GitHub Bot logged work on HDDS-1660:


Author: ASF GitHub Bot
Created on: 07/Jun/19 15:36
Start Date: 07/Jun/19 15:36
Worklog Time Spent: 10m 
  Work Description: sodonnel commented on pull request #925: HDDS-1660 Use 
Picocli for Ozone Manager
URL: https://github.com/apache/hadoop/pull/925
 
 
   Replicate the changes made in HDDS-1622 for the StorageContainerManager to 
the Ozone Manager, so it also uses Picocli for the command line interface.
 



Issue Time Tracking
---

Worklog Id: (was: 255989)
Time Spent: 10m
Remaining Estimate: 0h

> Use Picocli for Ozone Manager
> -
>
> Key: HDDS-1660
> URL: https://issues.apache.org/jira/browse/HDDS-1660
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Replicate the changes made in HDDS-1622 for the StorageContainerManager to 
> the Ozone Manager, so it also uses Picocli for the command line interface.






[jira] [Work logged] (HDDS-1636) Tracing id is not propagated via async datanode grpc call

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1636?focusedWorklogId=255982=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-255982
 ]

ASF GitHub Bot logged work on HDDS-1636:


Author: ASF GitHub Bot
Created on: 07/Jun/19 15:23
Start Date: 07/Jun/19 15:23
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #895: HDDS-1636. 
Tracing id is not propagated via async datanode grpc call
URL: https://github.com/apache/hadoop/pull/895#issuecomment-499928283
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 7 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for branch |
   | +1 | mvninstall | 588 | trunk passed |
   | +1 | compile | 318 | trunk passed |
   | +1 | checkstyle | 88 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 966 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 171 | trunk passed |
   | 0 | spotbugs | 363 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 555 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 503 | the patch passed |
   | +1 | compile | 309 | the patch passed |
   | +1 | javac | 309 | the patch passed |
   | +1 | checkstyle | 83 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 723 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 169 | the patch passed |
   | +1 | findbugs | 616 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 175 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1653 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 7252 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-895/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/895 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 68a40c51a9aa 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon 
Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 14552d1 |
   | Default Java | 1.8.0_212 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-895/6/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-895/6/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-895/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-895/6/testReport/ |
   | Max. process+thread count | 5143 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client hadoop-hdds/common hadoop-ozone/client 
hadoop-ozone/integration-test hadoop-ozone/objectstore-service 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-895/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: 

[jira] [Work logged] (HDDS-1636) Tracing id is not propagated via async datanode grpc call

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1636?focusedWorklogId=255981=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-255981
 ]

ASF GitHub Bot logged work on HDDS-1636:


Author: ASF GitHub Bot
Created on: 07/Jun/19 15:23
Start Date: 07/Jun/19 15:23
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #895: HDDS-1636. 
Tracing id is not propagated via async datanode grpc call
URL: https://github.com/apache/hadoop/pull/895#discussion_r291641173
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
 ##
 @@ -76,12 +75,11 @@
 
   private static final int EOF = -1;
 
-  ChunkInputStream(ChunkInfo chunkInfo, BlockID blockId,
-  String traceId, XceiverClientSpi xceiverClient, boolean verifyChecksum) {
+  ChunkInputStream(ChunkInfo chunkInfo, BlockID blockId, 
 
 Review comment:
   whitespace:end of line
   
 



Issue Time Tracking
---

Worklog Id: (was: 255981)
Time Spent: 1h 50m  (was: 1h 40m)

> Tracing id is not propagated via async datanode grpc call
> -
>
> Key: HDDS-1636
> URL: https://issues.apache.org/jira/browse/HDDS-1636
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Recently a new exception become visible in the datanode logs, using standard 
> freon (STANDLAONE)
> {code}
> datanode_2  | 2019-06-03 12:18:21 WARN  
> PropagationRegistry$ExceptionCatchingExtractorDecorator:60 - Error when 
> extracting SpanContext from carrier. Handling gracefully.
> datanode_2  | 
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 7576cabf-37a4-4232-9729-939a3fdb68c4WriteChunk150a8a848a951784256ca0801f7d9cf8b_stream_ed583cee-9552-4f1a-8c77-63f7d07b755f_chunk_1
> datanode_2  | at 
> org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:49)
> datanode_2  | at 
> org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:34)
> datanode_2  | at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
> datanode_2  | at 
> io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
> datanode_2  | at 
> io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
> datanode_2  | at 
> io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
> datanode_2  | at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:102)
> datanode_2  | at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
> datanode_2  | at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:73)
> datanode_2  | at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:61)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:248)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
> datanode_2  | at 
> org.apache.hadoop.hdds.tracing.GrpcServerInterceptor$1.onMessage(GrpcServerInterceptor.java:46)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:263)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:686)
> datanode_2  | at 
> org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
> datanode_2  | at 
> 
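The MalformedTracerStateStringException in the log above is raised because the string handed to the tracer is not in Jaeger's trace-context format, which is "{trace-id}:{span-id}:{parent-span-id}:{flags}" with hexadecimal fields. The concatenated request id in the log has no such separators. The following is an illustrative, self-contained sketch of that format check; the real parsing lives in io.jaegertracing's codecs and Hadoop's org.apache.hadoop.hdds.tracing.StringCodec, not in this code.

```java
/**
 * Toy check mirroring why Jaeger rejects the string in the log above:
 * a Jaeger trace-context string must look like
 * "{trace-id}:{span-id}:{parent-span-id}:{flags}" with hex fields.
 * Illustrative only, not the actual Jaeger or Hadoop implementation.
 */
public class TracerStateCheck {

  static boolean looksLikeJaegerState(String s) {
    String[] parts = s.split(":");
    if (parts.length != 4) {
      return false;                      // must have exactly four fields
    }
    for (String p : parts) {
      if (!p.matches("[0-9a-fA-F]+")) {
        return false;                    // each field is hexadecimal
      }
    }
    return true;
  }

  public static void main(String[] args) {
    // A well-formed context string passes the shape check...
    System.out.println(looksLikeJaegerState("7576cabf37a44232:9729939a:0:1"));
    // ...but the concatenated id from the log has no ':' separators at all,
    // so context extraction fails with MalformedTracerStateStringException.
    System.out.println(looksLikeJaegerState(
        "7576cabf-37a4-4232-9729-939a3fdb68c4WriteChunk150a8a84"));
  }
}
```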

[jira] [Commented] (HDFS-14532) Datanode's BlockSender checksum buffer is too big

2019-06-07 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16858722#comment-16858722
 ] 

Kihwal Lee commented on HDFS-14532:
---

Assuming a 4-byte checksum per 512-byte data chunk, a 128kB checksum buffer will 
hold checksum data for 16MB of data.  It seems wasteful, even more so if reads 
are short and seek-heavy. 
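The arithmetic in the comment above checks out and can be sketched directly (assuming the stated 4-byte checksum per 512-byte chunk; method and class names here are illustrative, not HDFS code):

```java
/** Back-of-the-envelope check of how much data a 128 kB checksum buffer covers. */
public class ChecksumCoverage {

  /** Bytes of block data covered by a checksum buffer of bufBytes. */
  static long coveredBytes(long bufBytes, int bytesPerChecksum, int checksumSize) {
    long checksums = bufBytes / checksumSize;  // how many checksums fit in the buffer
    return checksums * bytesPerChecksum;       // data bytes each checksum covers
  }

  public static void main(String[] args) {
    // 128 kB / 4 B = 32768 checksums; 32768 * 512 B = 16 MiB of data.
    long covered = coveredBytes(128L * 1024, 512, 4);
    System.out.println(covered == 16L * 1024 * 1024);  // prints "true"
  }
}
```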

> Datanode's BlockSender checksum buffer is too big
> -
>
> Key: HDFS-14532
> URL: https://issues.apache.org/jira/browse/HDFS-14532
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Daryn Sharp
>Priority: Major
> Attachments: Screen Shot 2019-05-31 at 12.32.06 PM.png
>
>
> The BlockSender uses an excessively large 128K buffered input stream, which 
> accounts for ~99% of the memory of the entire instance.






[jira] [Commented] (HDFS-14531) Datanode's ScanInfo requires excessive memory

2019-06-07 Thread Nathan Roberts (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16858709#comment-16858709
 ] 

Nathan Roberts commented on HDFS-14531:
---

Actually, maybe disabling the DirectoryScanner is more than a workaround. Maybe 
that should be the default. What is this really protecting against these days? 
For large disks it's super expensive memory-wise and if there are enough blocks 
or enough system memory pressure it can cause tons of I/O as well.

 

> Datanode's ScanInfo requires excessive memory
> -
>
> Key: HDFS-14531
> URL: https://issues.apache.org/jira/browse/HDFS-14531
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Priority: Major
> Attachments: Screen Shot 2019-05-31 at 12.25.54 PM.png
>
>
> The DirectoryScanner's ScanInfo map consumes ~4.5X as much memory per replica 
> as the replica map.  For 1.1M replicas: the replica map is ~91M while the scan 
> info is ~405M.






[jira] [Work logged] (HDDS-1635) Maintain docker entrypoint and envtoconf inside ozone project

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1635?focusedWorklogId=255944=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-255944
 ]

ASF GitHub Bot logged work on HDDS-1635:


Author: ASF GitHub Bot
Created on: 07/Jun/19 14:40
Start Date: 07/Jun/19 14:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #894: HDDS-1635. 
Maintain docker entrypoint and envtoconf inside ozone project
URL: https://github.com/apache/hadoop/pull/894#issuecomment-499911531
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 37 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 562 | trunk passed |
   | +1 | compile | 330 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 804 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 201 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 486 | the patch passed |
   | +1 | compile | 328 | the patch passed |
   | +1 | javac | 328 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | pylint | 1 | Error running pylint. Please check pylint stderr files. |
   | +1 | pylint | 2 | There were no new pylint issues. |
   | +1 | shellcheck | 1 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 704 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 197 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 150 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1216 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 69 | The patch does not generate ASF License warnings. |
   | | | 5365 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-894/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/894 |
   | Optional Tests | dupname asflicense shellcheck shelldocs mvnsite unit 
compile javac javadoc mvninstall shadedclient pylint |
   | uname | Linux f2ba87fb78a6 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 14552d1 |
   | Default Java | 1.8.0_212 |
   | pylint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-894/3/artifact/out/patch-pylint-stderr.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-894/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-894/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-894/3/testReport/ |
   | Max. process+thread count | 4434 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-894/3/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 pylint=1.9.2 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 255944)
Time Spent: 2h  (was: 1h 50m)

> Maintain docker entrypoint and envtoconf inside ozone project
> -
>
> Key: HDDS-1635
> URL: https://issues.apache.org/jira/browse/HDDS-1635
> 

[jira] [Created] (HDFS-14554) Avoid processing duplicate block reports when namenode is in startup safemode

2019-06-07 Thread He Xiaoqiao (JIRA)
He Xiaoqiao created HDFS-14554:
--

 Summary: Avoid processing duplicate block reports when namenode is 
in startup safemode
 Key: HDFS-14554
 URL: https://issues.apache.org/jira/browse/HDFS-14554
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: He Xiaoqiao
Assignee: He Xiaoqiao


When the NameNode restarts and enters startup safemode, its load can be very 
high because of the block report request storm from all datanodes at once. 
Sometimes a datanode may send its block report multiple times because the 
NameNode doesn't respond in time, so the DataNode hits a timeout exception and 
retries, which further increases the load on the NameNode.
So we should detect and filter duplicate block report requests from the same 
datanode to reduce the load on the NameNode.
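One minimal sketch of such a duplicate filter: remember an identifier for the last block report accepted from each datanode and drop retransmissions that carry the same identifier. This is illustrative only, not the actual NameNode code, and the names (datanodeUuid, reportId) are hypothetical stand-ins for whatever the real patch would key on.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Illustrative sketch of the proposed check: track the id of the last block
 * report seen from each datanode and skip retries that repeat the same id.
 */
public class DuplicateReportFilter {
  private final Map<String, Long> lastReport = new ConcurrentHashMap<>();

  /** Returns true if the report should be processed, false if it is a retry duplicate. */
  public boolean accept(String datanodeUuid, long reportId) {
    Long prev = lastReport.put(datanodeUuid, reportId);
    return prev == null || prev != reportId;  // same id again => duplicate retry
  }

  public static void main(String[] args) {
    DuplicateReportFilter f = new DuplicateReportFilter();
    System.out.println(f.accept("dn-1", 42));  // first report: process
    System.out.println(f.accept("dn-1", 42));  // retry with same id: skip
    System.out.println(f.accept("dn-1", 43));  // new report id: process
  }
}
```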






[jira] [Commented] (HDFS-14531) Datanode's ScanInfo requires excessive memory

2019-06-07 Thread Kihwal Lee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16858706#comment-16858706
 ] 

Kihwal Lee commented on HDFS-14531:
---

One workaround is to disable the DirectoryScanner by setting 
"dfs.datanode.directoryscan.interval" to -1.
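As a sketch, the workaround above would look like this in hdfs-site.xml (note that in shipped Hadoop releases the property key is spelled dfs.datanode.directoryscan.interval, and a negative value disables the periodic scan):

```xml
<!-- hdfs-site.xml: disable the periodic DirectoryScanner -->
<property>
  <name>dfs.datanode.directoryscan.interval</name>
  <value>-1</value>
</property>
```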

> Datanode's ScanInfo requires excessive memory
> -
>
> Key: HDFS-14531
> URL: https://issues.apache.org/jira/browse/HDFS-14531
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.0.0-alpha
>Reporter: Daryn Sharp
>Priority: Major
> Attachments: Screen Shot 2019-05-31 at 12.25.54 PM.png
>
>
> The DirectoryScanner's ScanInfo map consumes ~4.5X as much memory per replica 
> as the replica map.  For 1.1M replicas: the replica map is ~91M while the scan 
> info is ~405M.






[jira] [Created] (HDFS-14553) Make queue size of BlockReportProcessingThread configurable

2019-06-07 Thread He Xiaoqiao (JIRA)
He Xiaoqiao created HDFS-14553:
--

 Summary: Make queue size of BlockReportProcessingThread 
configurable
 Key: HDFS-14553
 URL: https://issues.apache.org/jira/browse/HDFS-14553
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: He Xiaoqiao
Assignee: He Xiaoqiao


The ArrayBlockingQueue used by BlockReportProcessingThread currently has a 
hard-coded size of 1024. I propose to make this queue size configurable.
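The change proposed above could be sketched as below. The configuration key and default are assumptions for illustration (the actual key is whatever the patch introduces), and plain `java.util.Properties` stands in for Hadoop's `Configuration` class:

```java
import java.util.Properties;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/**
 * Sketch: size the block-report processing queue from configuration instead
 * of the hard-coded 1024. Key name and default are illustrative assumptions.
 */
public class BlockReportQueueFactory {
    static final String QUEUE_SIZE_KEY = "dfs.namenode.blockreport.queue.size";
    static final int QUEUE_SIZE_DEFAULT = 1024;

    static BlockingQueue<Runnable> create(Properties conf) {
        // Fall back to the old hard-coded value when the key is unset.
        int size = Integer.parseInt(
                conf.getProperty(QUEUE_SIZE_KEY, String.valueOf(QUEUE_SIZE_DEFAULT)));
        return new ArrayBlockingQueue<>(size);
    }
}
```

Keeping 1024 as the default preserves existing behavior for clusters that do not set the new key.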






[jira] [Commented] (HDFS-12914) Block report leases cause missing blocks until next report

2019-06-07 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16858574#comment-16858574
 ] 

He Xiaoqiao commented on HDFS-12914:


Ran TestClientProtocolForPipelineRecovery, TestBootstrapAliasmap, and 
TestReconstructStripedFile locally and all tests pass. I don't think the 
TestWebHdfsTimeouts failure is related to this patch.
Ping [~elgoiri],[~xkrogen],[~jojochuang],[~daryn],[~kihwal], would you mind 
giving it another review?

> Block report leases cause missing blocks until next report
> --
>
> Key: HDFS-12914
> URL: https://issues.apache.org/jira/browse/HDFS-12914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 2.9.2
>Reporter: Daryn Sharp
>Assignee: Santosh Marella
>Priority: Critical
> Attachments: HDFS-12914-branch-2.001.patch, 
> HDFS-12914-trunk.00.patch, HDFS-12914-trunk.01.patch, HDFS-12914.005.patch, 
> HDFS-12914.006.patch
>
>
> {{BlockReportLeaseManager#checkLease}} will reject FBRs from DNs for 
> conditions such as "unknown datanode", "not in pending set", "lease has 
> expired", wrong lease id, etc.  Lease rejection does not throw an exception; 
> it returns false, which bubbles up to {{NameNodeRpcServer#blockReport}} and 
> is interpreted as {{noStaleStorages}}.
> A re-registering node whose FBR is rejected due to an invalid lease becomes 
> active with _no blocks_.  A replication storm ensues, possibly causing DNs to 
> temporarily go dead (HDFS-12645), leading to more FBR lease rejections on 
> re-registration.  The cluster will have many "missing blocks" until the DNs' 
> next FBR is sent and/or forced.
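The failure mode described above can be reduced to a small sketch. This is simplified, hypothetical code, not the HDFS implementation: it only shows why a boolean return value conflates "lease rejected, report dropped" with "report applied, no stale storages", so the caller cannot react to the rejection.

```java
/**
 * Sketch of the API problem described above (not HDFS code): with a boolean
 * return, a rejected lease looks identical to a cleanly applied report.
 */
public class LeaseCheckSketch {
    enum ReportResult { APPLIED, REJECTED_INVALID_LEASE }

    // Boolean-style API: rejection is indistinguishable from
    // success-with-no-stale-storages, so the caller silently drops the FBR.
    static boolean processWithBoolean(boolean leaseValid) {
        if (!leaseValid) {
            return false; // dropped; caller reads this as noStaleStorages
        }
        return false;     // report applied, no stale storages
    }

    // Explicit-result API: the caller can see the rejection and, for example,
    // force the datanode to send a fresh FBR instead of going active empty.
    static ReportResult processWithResult(boolean leaseValid) {
        return leaseValid ? ReportResult.APPLIED
                          : ReportResult.REJECTED_INVALID_LEASE;
    }
}
```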






[jira] [Work logged] (HDDS-1622) Use picocli for StorageContainerManager

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1622?focusedWorklogId=255876=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-255876
 ]

ASF GitHub Bot logged work on HDDS-1622:


Author: ASF GitHub Bot
Created on: 07/Jun/19 12:20
Start Date: 07/Jun/19 12:20
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #918: HDDS-1622 Use 
picocli for StorageContainerManager
URL: https://github.com/apache/hadoop/pull/918#issuecomment-499863601
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 79 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 89 | Maven dependency ordering for branch |
   | +1 | mvninstall | 695 | trunk passed |
   | +1 | compile | 315 | trunk passed |
   | +1 | checkstyle | 96 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 879 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 188 | trunk passed |
   | 0 | spotbugs | 349 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 550 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 40 | Maven dependency ordering for patch |
   | +1 | mvninstall | 485 | the patch passed |
   | +1 | compile | 318 | the patch passed |
   | +1 | javac | 318 | the patch passed |
   | +1 | checkstyle | 102 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 25 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 758 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 187 | the patch passed |
   | +1 | findbugs | 564 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 201 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1383 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 60 | The patch does not generate ASF License warnings. |
   | | | 7328 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-918/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/918 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle shellcheck shelldocs |
   | uname | Linux 7c1a006390c6 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a91d24f |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-918/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-918/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-918/3/testReport/ |
   | Max. process+thread count | 3956 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm hadoop-ozone/common 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-918/3/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 255876)
Time Spent: 4h 10m  (was: 4h)

> Use picocli for StorageContainerManager
> ---
>
> Key: HDDS-1622
> URL: 

[jira] [Work logged] (HDDS-1654) Ensure container state on datanode gets synced to disk whenever state change happens

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1654?focusedWorklogId=255875=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-255875
 ]

ASF GitHub Bot logged work on HDDS-1654:


Author: ASF GitHub Bot
Created on: 07/Jun/19 12:18
Start Date: 07/Jun/19 12:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #923: HDDS-1654. Ensure 
container state on datanode gets synced to disk whenever state change happens.
URL: https://github.com/apache/hadoop/pull/923#issuecomment-499862998
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 510 | trunk passed |
   | +1 | compile | 293 | trunk passed |
   | +1 | checkstyle | 90 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 902 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 177 | trunk passed |
   | 0 | spotbugs | 332 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 519 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 462 | the patch passed |
   | +1 | compile | 304 | the patch passed |
   | +1 | javac | 304 | the patch passed |
   | +1 | checkstyle | 94 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 688 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 181 | the patch passed |
   | +1 | findbugs | 538 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 152 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1482 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
   | | | 6662 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-923/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/923 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2a24167232fc 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a91d24f |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-923/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-923/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-923/3/testReport/ |
   | Max. process+thread count | 5284 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-923/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 255875)
Time Spent: 40m  (was: 0.5h)

> Ensure container state on datanode gets synced to disk whenever state change 
> happens
> 
>
> Key: HDDS-1654
> URL: 

[jira] [Work logged] (HDDS-1654) Ensure container state on datanode gets synced to disk whenever state change happens

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1654?focusedWorklogId=255873=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-255873
 ]

ASF GitHub Bot logged work on HDDS-1654:


Author: ASF GitHub Bot
Created on: 07/Jun/19 12:10
Start Date: 07/Jun/19 12:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #923: HDDS-1654. Ensure 
container state on datanode gets synced to disk whenever state change happens.
URL: https://github.com/apache/hadoop/pull/923#issuecomment-499860903
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 518 | trunk passed |
   | +1 | compile | 281 | trunk passed |
   | +1 | checkstyle | 80 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 922 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 166 | trunk passed |
   | 0 | spotbugs | 337 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 528 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 458 | the patch passed |
   | +1 | compile | 289 | the patch passed |
   | +1 | javac | 289 | the patch passed |
   | +1 | checkstyle | 84 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 724 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 166 | the patch passed |
   | +1 | findbugs | 548 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 167 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1191 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 6360 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-923/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/923 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2413c00c2d31 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a91d24f |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-923/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-923/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-923/2/testReport/ |
   | Max. process+thread count | 4171 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-923/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 255873)
Time Spent: 0.5h  (was: 20m)

> Ensure container state on datanode gets synced to disk whenever state change 
> happens
> 
>
> Key: HDDS-1654
> URL: 

[jira] [Work logged] (HDDS-1654) Ensure container state on datanode gets synced to disk whenever state change happens

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1654?focusedWorklogId=255871=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-255871
 ]

ASF GitHub Bot logged work on HDDS-1654:


Author: ASF GitHub Bot
Created on: 07/Jun/19 12:01
Start Date: 07/Jun/19 12:01
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #923: HDDS-1654. Ensure 
container state on datanode gets synced to disk whenever state change happens.
URL: https://github.com/apache/hadoop/pull/923#issuecomment-499858506
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 496 | trunk passed |
   | +1 | compile | 275 | trunk passed |
   | +1 | checkstyle | 78 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 812 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 155 | trunk passed |
   | 0 | spotbugs | 330 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 511 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 448 | the patch passed |
   | +1 | compile | 273 | the patch passed |
   | +1 | javac | 273 | the patch passed |
   | +1 | checkstyle | 73 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 704 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 178 | the patch passed |
   | +1 | findbugs | 544 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 152 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1145 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 58 | The patch does not generate ASF License warnings. |
   | | | 6111 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.container.common.impl.TestHddsDispatcher 
|
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-923/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/923 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 5e37e1335ebb 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a91d24f |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-923/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-923/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-923/1/testReport/ |
   | Max. process+thread count | 5190 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-923/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 255871)
Time Spent: 20m  (was: 10m)

> Ensure container state on datanode gets synced to disk whenever state change 
> happens
> 
>
> 

[jira] [Created] (HDDS-1660) Use Picocli for Ozone Manager

2019-06-07 Thread Stephen O'Donnell (JIRA)
Stephen O'Donnell created HDDS-1660:
---

 Summary: Use Picocli for Ozone Manager
 Key: HDDS-1660
 URL: https://issues.apache.org/jira/browse/HDDS-1660
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Manager
Reporter: Stephen O'Donnell
Assignee: Stephen O'Donnell


Replicate the changes made in HDDS-1622 for the StorageContainerManager to the 
Ozone Manager, so it also uses Picocli for the command line interface.






[jira] [Commented] (HDFS-12914) Block report leases cause missing blocks until next report

2019-06-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16858465#comment-16858465
 ] 

Hadoop QA commented on HDFS-12914:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap |
|   | hadoop.hdfs.TestReconstructStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-12914 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12971152/HDFS-12914.006.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8ced204bed2e 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a91d24f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26921/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26921/testReport/ |
| Max. process+thread count | 4946 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| 

[jira] [Work logged] (HDDS-1622) Use picocli for StorageContainerManager

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1622?focusedWorklogId=255782=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-255782
 ]

ASF GitHub Bot logged work on HDDS-1622:


Author: ASF GitHub Bot
Created on: 07/Jun/19 10:30
Start Date: 07/Jun/19 10:30
Worklog Time Spent: 10m 
  Work Description: sodonnel commented on issue #918: HDDS-1622 Use picocli 
for StorageContainerManager
URL: https://github.com/apache/hadoop/pull/918#issuecomment-499837048
 
 
   We have agreed to drop --init etc. and replace them with "init"; however, 
that makes this a breaking change that will impact the Docker builds.
   
   Discussed with @elek, and we think it would be best to commit this change as 
it is (with the existing "--init") and then create a new Jira to make the 
switch to "init" in a follow-up change, after we also update the Ozone Manager 
with a similar change to this one. Are you ok with that approach @anuengineer ?
 



Issue Time Tracking
---

Worklog Id: (was: 255782)
Time Spent: 4h  (was: 3h 50m)

> Use picocli for StorageContainerManager
> ---
>
> Key: HDDS-1622
> URL: https://issues.apache.org/jira/browse/HDDS-1622
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Elek, Marton
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Recently we switched to PicoCli for (almost) all of our daemons (eg. S3 
> Gateway, Freon, etc.)
> PicoCli has better output, can generate nice help, and is easier to use, as 
> it's enough to add a few annotations and we don't need all the boilerplate 
> code to print out help, etc.
> StorageContainerManager and OzoneManager are not yet supported. The previous 
> issue (HDDS-453) was closed, but since then we have improved the GenericCli 
> parser (eg. in HDDS-1192), so I think we are ready to move.
> The main idea is to create a starter java class similar to 
> org.apache.hadoop.ozone.s3.Gateway and start StorageContainerManager 
> from there.
>  






[jira] [Updated] (HDDS-1654) Ensure container state on datanode gets synced to disk whenever state change happens

2019-06-07 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-1654:
--
Status: Patch Available  (was: Open)

> Ensure container state on datanode gets synced to disk whenever state change 
> happens
> 
>
> Key: HDDS-1654
> URL: https://issues.apache.org/jira/browse/HDDS-1654
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, whenever there is a container state change, the datanode updates 
> the container file but doesn't sync it.
> The idea here is to force-sync the state to disk every time there is a 
> state change.
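The write-then-sync pattern described above can be sketched with plain `java.nio`. This is an illustrative sketch under stated assumptions, not the HDDS implementation: it assumes the container state lives in a single small file and uses a write-temp, `force`, atomic-rename sequence so a crash right after a state change cannot leave a stale or partially written state behind.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

/**
 * Sketch: durably replace a state file on every state change.
 * Write to a temp file, force it to the device, then atomically rename.
 */
public class SyncedStateWriter {
    static void writeState(Path stateFile, String state) throws IOException {
        Path tmp = stateFile.resolveSibling(stateFile.getFileName() + ".tmp");
        Files.write(tmp, state.getBytes(StandardCharsets.UTF_8));
        // Force data to the device before the rename makes it visible.
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            ch.force(true);
        }
        Files.move(tmp, stateFile, StandardCopyOption.ATOMIC_MOVE);
    }
}
```

The cost is one extra fsync per state change, which is acceptable here because container state transitions are rare compared to data writes.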






[jira] [Updated] (HDDS-1654) Ensure container state on datanode gets synced to disk whenever state change happens

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1654:
-
Labels: pull-request-available  (was: )

> Ensure container state on datanode gets synced to disk whenever state change 
> happens
> 
>
> Key: HDDS-1654
> URL: https://issues.apache.org/jira/browse/HDDS-1654
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>
> Currently, whenever there is a container state change, the datanode updates
> the container file but does not sync it to disk.
> The idea here is to force-sync the state to disk every time there is a state
> change.






[jira] [Work logged] (HDDS-1654) Ensure container state on datanode gets synced to disk whenever state change happens

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1654?focusedWorklogId=255778=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-255778
 ]

ASF GitHub Bot logged work on HDDS-1654:


Author: ASF GitHub Bot
Created on: 07/Jun/19 10:18
Start Date: 07/Jun/19 10:18
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on pull request #923: HDDS-1654. 
Ensure container state on datanode gets synced to disk whenever state change 
happens.
URL: https://github.com/apache/hadoop/pull/923
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 255778)
Time Spent: 10m
Remaining Estimate: 0h

> Ensure container state on datanode gets synced to disk whenever state change 
> happens
> 
>
> Key: HDDS-1654
> URL: https://issues.apache.org/jira/browse/HDDS-1654
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, whenever there is a container state change, the datanode updates
> the container file but does not sync it to disk.
> The idea here is to force-sync the state to disk every time there is a state
> change.






[jira] [Work logged] (HDDS-1659) Define the process to add proposal/design docs to the Ozone subproject

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1659?focusedWorklogId=255776=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-255776
 ]

ASF GitHub Bot logged work on HDDS-1659:


Author: ASF GitHub Bot
Created on: 07/Jun/19 10:17
Start Date: 07/Jun/19 10:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #922: HDDS-1659. Define 
the process to add proposal/design docs to the Ozone subproject
URL: https://github.com/apache/hadoop/pull/922#issuecomment-499833695
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 76 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | yamllint | 1 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 550 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1404 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 479 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 711 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 2899 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-922/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/922 |
   | Optional Tests | dupname asflicense mvnsite yamllint |
   | uname | Linux b807df69b643 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a91d24f |
   | Max. process+thread count | 327 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/docs U: hadoop-hdds/docs |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-922/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 255776)
Time Spent: 20m  (was: 10m)

> Define the process to add proposal/design docs to the Ozone subproject
> --
>
> Key: HDDS-1659
> URL: https://issues.apache.org/jira/browse/HDDS-1659
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We think it would be more effective to collect all the design docs in one
> place and make it easier for the community to review them.
> We propose an approach where proposals are committed to the hadoop-hdds/docs
> project, so reviewing a design doc follows the same process as reviewing a PR.






[jira] [Updated] (HDDS-1659) Define the process to add proposal/design docs to the Ozone subproject

2019-06-07 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1659:
---
Status: Patch Available  (was: Open)

> Define the process to add proposal/design docs to the Ozone subproject
> --
>
> Key: HDDS-1659
> URL: https://issues.apache.org/jira/browse/HDDS-1659
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We think it would be more effective to collect all the design docs in one
> place and make it easier for the community to review them.
> We propose an approach where proposals are committed to the hadoop-hdds/docs
> project, so reviewing a design doc follows the same process as reviewing a PR.






[jira] [Assigned] (HDDS-1642) Avoid shell references relative to the current script path

2019-06-07 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton reassigned HDDS-1642:
--

Assignee: Elek, Marton

> Avoid shell references relative to the current script path
> --
>
> Key: HDDS-1642
> URL: https://issues.apache.org/jira/browse/HDDS-1642
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Eric Yang
>Assignee: Elek, Marton
>Priority: Minor
>
> This is based on the review comment from [~eyang]:
> bq. You might need pwd -P to resolve symlinks. I don't recommend using the 
> script location to decide where binaries are supposed to be, because someone 
> else can make a newbie mistake and refactor your script in a way that 
> invalidates the original coding intent. See this blog for the right way to get 
> the directory of a bash script. This is the reason I used OZONE_HOME as the 
> base reference so frequently.
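
The recommendation above combines into a common bash pattern: resolve the physical (symlink-free) script directory, but prefer an explicitly configured base directory when one exists. A minimal sketch, with illustrative variable names (not the actual Ozone launcher scripts):

```shell
#!/usr/bin/env bash
# Physical directory of this script, following symlinks via `cd -P`/`pwd -P`.
SCRIPT_DIR="$(cd -P "$(dirname "${BASH_SOURCE[0]:-.}")" >/dev/null 2>&1 && pwd -P)"

# Prefer an environment-provided base dir; fall back to the script location
# only as a last resort, since refactors can silently move the script.
OZONE_HOME="${OZONE_HOME:-$SCRIPT_DIR/..}"

echo "script dir: $SCRIPT_DIR"
echo "ozone home: $OZONE_HOME"
```

Plain `pwd` can report a logical path containing symlinks; `pwd -P` (and `cd -P`) return the physical path, which is what matters when locating binaries relative to an installation root.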






[jira] [Work logged] (HDDS-1659) Define the process to add proposal/design docs to the Ozone subproject

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1659?focusedWorklogId=255762=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-255762
 ]

ASF GitHub Bot logged work on HDDS-1659:


Author: ASF GitHub Bot
Created on: 07/Jun/19 09:27
Start Date: 07/Jun/19 09:27
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #922: HDDS-1659. Define 
the process to add proposal/design docs to the Ozone subproject
URL: https://github.com/apache/hadoop/pull/922
 
 
   We think it would be more effective to collect all the design docs in one 
place and make it easier for the community to review them.
   
   We propose an approach where proposals are committed to the hadoop-hdds/docs 
project, so reviewing a design doc follows the same process as reviewing a PR.
   
   See: https://issues.apache.org/jira/browse/HDDS-1659
 



Issue Time Tracking
---

Worklog Id: (was: 255762)
Time Spent: 10m
Remaining Estimate: 0h

> Define the process to add proposal/design docs to the Ozone subproject
> --
>
> Key: HDDS-1659
> URL: https://issues.apache.org/jira/browse/HDDS-1659
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We think it would be more effective to collect all the design docs in one
> place and make it easier for the community to review them.
> We propose an approach where proposals are committed to the hadoop-hdds/docs
> project, so reviewing a design doc follows the same process as reviewing a PR.






[jira] [Updated] (HDDS-1659) Define the process to add proposal/design docs to the Ozone subproject

2019-06-07 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1659:
-
Labels: pull-request-available  (was: )

> Define the process to add proposal/design docs to the Ozone subproject
> --
>
> Key: HDDS-1659
> URL: https://issues.apache.org/jira/browse/HDDS-1659
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>
> We think it would be more effective to collect all the design docs in one
> place and make it easier for the community to review them.
> We propose an approach where proposals are committed to the hadoop-hdds/docs
> project, so reviewing a design doc follows the same process as reviewing a PR.






[jira] [Commented] (HDFS-14513) FSImage which is saving should be clean while NameNode shutdown

2019-06-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16858413#comment-16858413
 ] 

Hadoop QA commented on HDFS-14513:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}128m 
54s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}190m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14513 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12971139/HDFS-14513.007.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 900dc534e82e 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a91d24f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26920/testReport/ |
| Max. process+thread count | 2885 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26920/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> FSImage which is saving should be clean while 

[jira] [Commented] (HDFS-14550) RBF: Failed to get statistics from NameNodes before 2.9.0

2019-06-07 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16858394#comment-16858394
 ] 

He Xiaoqiao commented on HDFS-14550:


Thanks everyone [~aajisaka], [~elgoiri], [~crh], [~ayushtkn] for the discussion, 
review, and commit. Sorry there is no unit test to verify this; I will prepare 
one later in another issue.

> RBF: Failed to get statistics from NameNodes before 2.9.0
> -
>
> Key: HDFS-14550
> URL: https://issues.apache.org/jira/browse/HDFS-14550
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: He Xiaoqiao
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14550-HDFS-13891.001.patch
>
>
> DFSRouter fails to get stats from NameNodes that do not have HDFS-7877
> {noformat}
> 2019-06-03 17:40:15,407 ERROR 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService: 
> Cannot get stat from nn1:nn01:8022 using JMX
> org.codehaus.jettison.json.JSONException: 
> JSONObject["NumInMaintenanceLiveDataNodes"] not found.
> at org.codehaus.jettison.json.JSONObject.get(JSONObject.java:360)
> at org.codehaus.jettison.json.JSONObject.getInt(JSONObject.java:421)
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.updateJMXParameters(NamenodeHeartbeatService.java:345)
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.getNamenodeStatusReport(NamenodeHeartbeatService.java:278)
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.updateState(NamenodeHeartbeatService.java:206)
> at 
> org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.periodicInvoke(NamenodeHeartbeatService.java:160)
> at 
> org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
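
The stack trace above comes from reading a JMX key that older NameNodes (without HDFS-7877) simply do not export. A defensive-read sketch of the fix idea, using a plain `Map` to stand in for the Jettison `JSONObject` (the real code would use `JSONObject#has` or `#optInt` instead of `#getInt`; the helper name here is hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of falling back to a default when a downstream NameNode does not
 * export a metric, instead of throwing on the missing key.
 */
public class JmxStatsSketch {

    /** Returns the metric if present and numeric, otherwise a safe default. */
    public static int getIntOrDefault(Map<String, Object> jmx,
                                      String key, int dflt) {
        Object v = jmx.get(key);
        return (v instanceof Number) ? ((Number) v).intValue() : dflt;
    }

    public static void main(String[] args) {
        Map<String, Object> oldNameNodeBean = new HashMap<>();
        oldNameNodeBean.put("NumLiveDataNodes", 42);
        // Key introduced by HDFS-7877 is absent on older releases:
        int maint = getIntOrDefault(oldNameNodeBean,
                "NumInMaintenanceLiveDataNodes", 0);
        System.out.println(maint); // prints 0 instead of throwing
    }
}
```

With `getInt`, a single absent key aborts the whole heartbeat update; defaulting lets the Router keep the statistics that are available.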






[jira] [Commented] (HDFS-12914) Block report leases cause missing blocks until next report

2019-06-07 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16858392#comment-16858392
 ] 

He Xiaoqiao commented on HDFS-12914:


[^HDFS-12914.006.patch] fixes checkstyle only.
I ran #TestDataNodeHotSwapVolumes and #TestDirectoryScanner locally and both 
passed. I believe #TestWebHdfsTimeouts is unrelated to this patch.

> Block report leases cause missing blocks until next report
> --
>
> Key: HDFS-12914
> URL: https://issues.apache.org/jira/browse/HDFS-12914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 2.9.2
>Reporter: Daryn Sharp
>Assignee: Santosh Marella
>Priority: Critical
> Attachments: HDFS-12914-branch-2.001.patch, 
> HDFS-12914-trunk.00.patch, HDFS-12914-trunk.01.patch, HDFS-12914.005.patch, 
> HDFS-12914.006.patch
>
>
> {{BlockReportLeaseManager#checkLease}} will reject FBRs from DNs for 
> conditions such as "unknown datanode", "not in pending set", "lease has 
> expired", wrong lease id, etc.  Lease rejection does not throw an exception.  
> It returns false, which bubbles up to {{NameNodeRpcServer#blockReport}} and is 
> interpreted as {{noStaleStorages}}.
> A re-registering node whose FBR is rejected from an invalid lease becomes 
> active with _no blocks_.  A replication storm ensues possibly causing DNs to 
> temporarily go dead (HDFS-12645), leading to more FBR lease rejections on 
> re-registration.  The cluster will have many "missing blocks" until the DNs 
> next FBR is sent and/or forced.
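
The core failure mode in the description is a boolean rejection being conflated with an unrelated condition. A compact sketch of that anti-pattern (class and method names are illustrative, not the actual {{BlockReportLeaseManager}} code):

```java
/**
 * Sketch of the bug pattern: lease rejection is signalled by returning
 * false rather than throwing, and the caller reads that false as a
 * different, benign condition.
 */
public class LeaseCheckSketch {

    /** Returns false on any rejection -- no exception, no reason. */
    static boolean checkLease(long leaseId, long expectedId) {
        return leaseId == expectedId;
    }

    /** Caller that loses the distinction, as in the reported bug. */
    static boolean processReport(long leaseId, long expectedId) {
        if (!checkLease(leaseId, expectedId)) {
            // Bug: this false bubbles up as "no stale storages", so the
            // node is treated as registered with zero blocks.
            return false;
        }
        return true; // full block report actually applied
    }
}
```

Because rejection and "nothing stale" share one return value, the caller cannot tell a discarded FBR from a processed one; throwing a typed exception (or returning a status enum) would keep the two paths distinct.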






[jira] [Updated] (HDFS-12914) Block report leases cause missing blocks until next report

2019-06-07 Thread He Xiaoqiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Xiaoqiao updated HDFS-12914:
---
Attachment: HDFS-12914.006.patch

> Block report leases cause missing blocks until next report
> --
>
> Key: HDFS-12914
> URL: https://issues.apache.org/jira/browse/HDFS-12914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 2.9.2
>Reporter: Daryn Sharp
>Assignee: Santosh Marella
>Priority: Critical
> Attachments: HDFS-12914-branch-2.001.patch, 
> HDFS-12914-trunk.00.patch, HDFS-12914-trunk.01.patch, HDFS-12914.005.patch, 
> HDFS-12914.006.patch
>
>
> {{BlockReportLeaseManager#checkLease}} will reject FBRs from DNs for 
> conditions such as "unknown datanode", "not in pending set", "lease has 
> expired", wrong lease id, etc.  Lease rejection does not throw an exception.  
> It returns false, which bubbles up to {{NameNodeRpcServer#blockReport}} and is 
> interpreted as {{noStaleStorages}}.
> A re-registering node whose FBR is rejected from an invalid lease becomes 
> active with _no blocks_.  A replication storm ensues possibly causing DNs to 
> temporarily go dead (HDFS-12645), leading to more FBR lease rejections on 
> re-registration.  The cluster will have many "missing blocks" until the DNs 
> next FBR is sent and/or forced.






[jira] [Commented] (HDFS-14513) FSImage which is saving should be clean while NameNode shutdown

2019-06-07 Thread He Xiaoqiao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16858383#comment-16858383
 ] 

He Xiaoqiao commented on HDFS-14513:


[^HDFS-14513.006.patch] caught an unexpected IOException in #saveNamespace, 
which caused TestSaveNamespace to fail. [^HDFS-14513.007.patch] updates this and 
I verified TestSaveNamespace again locally. Thanks [~elgoiri].

> FSImage which is saving should be clean while NameNode shutdown
> ---
>
> Key: HDFS-14513
> URL: https://issues.apache.org/jira/browse/HDFS-14513
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: He Xiaoqiao
>Assignee: He Xiaoqiao
>Priority: Major
> Attachments: HDFS-14513.001.patch, HDFS-14513.002.patch, 
> HDFS-14513.003.patch, HDFS-14513.004.patch, HDFS-14513.005.patch, 
> HDFS-14513.006.patch, HDFS-14513.007.patch
>
>
> Checkpointer/FSImageSaver are regular tasks that dump NameNode metadata to 
> disk, at most once per hour by default. If a task receives some command (e.g. 
> transition to active in HA mode) it cancels the checkpoint and deletes the tmp 
> files using {{FSImage#deleteCancelledCheckpoint}}. However, if the NameNode 
> shuts down during a checkpoint, the tmp files are never cleaned up.
> Consider a namespace with 500M inodes+blocks: one checkpoint can take 5~10 
> minutes, and if the NameNode is shut down mid-checkpoint, the fsimage 
> checkpoint file is never cleaned. Over time many useless checkpoint files can 
> accumulate, so I propose adding a hook to clean them up on shutdown.
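
The proposed hook can be sketched with a plain JVM shutdown hook that removes the in-progress checkpoint's temporary file (the ".ckpt" suffix and class names here are illustrative assumptions, not the actual {{FSImage}} code):

```java
import java.io.File;
import java.io.IOException;

/**
 * Sketch of registering a shutdown hook that deletes a half-written
 * checkpoint image so it is not leaked across restarts.
 */
public class CheckpointCleanupSketch {

    /** Registers the hook and returns it so callers can deregister. */
    public static Thread registerCleanup(File tmpCheckpoint) {
        Thread hook = new Thread(() -> {
            // Runs on orderly JVM shutdown; removes the partial image.
            if (tmpCheckpoint.exists() && !tmpCheckpoint.delete()) {
                System.err.println("Failed to delete " + tmpCheckpoint);
            }
        });
        Runtime.getRuntime().addShutdownHook(hook);
        return hook;
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("fsimage", ".ckpt");
        registerCleanup(tmp);
        // On exit the hook deletes the fsimage*.ckpt instead of leaking it.
    }
}
```

Note a shutdown hook only covers orderly shutdown (SIGTERM, System.exit); a kill -9 still leaks the file, so a startup-time sweep of stale `.ckpt` files would be a natural complement.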





