[jira] [Work logged] (HDDS-2185) createmrenv failure not reflected in acceptance test result

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2185?focusedWorklogId=319955&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319955
 ]

ASF GitHub Bot logged work on HDDS-2185:


Author: ASF GitHub Bot
Created on: 28/Sep/19 05:40
Start Date: 28/Sep/19 05:40
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1533: HDDS-2185. 
createmrenv failure not reflected in acceptance test result
URL: https://github.com/apache/hadoop/pull/1533#issuecomment-536156279
 
 
   Thanks @elek for reviewing and committing it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319955)
Time Spent: 1h 10m  (was: 1h)

> createmrenv failure not reflected in acceptance test result
> ---
>
> Key: HDDS-2185
> URL: https://issues.apache.org/jira/browse/HDDS-2185
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Some of the MR tests fail, but this is not reflected in the test report, which 
> shows all green.
> {noformat:title=https://github.com/elek/ozone-ci/blob/679228c146628cd4d1a416e1ffc9c513d19fb43d/pr/pr-hdds-2179-9bnxk/acceptance/output.log#L718-L730}
> ==
> hadoop31-createmrenv :: Create directories required for MR test   
> ==
> Create test volume, bucket and key| PASS |
> --
> Create user dir for hadoop| FAIL |
> 1 != 0
> --
> hadoop31-createmrenv :: Create directories required for MR test   | FAIL |
> 2 critical tests, 1 passed, 1 failed
> 2 tests total, 1 passed, 1 failed
> ==
> Output:  
> /tmp/smoketest/hadoop31/result/robot-hadoop31-hadoop31-createmrenv-scm.xml
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2202) Remove unused import in OmUtils

2019-09-27 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2202:

Assignee: Dinesh Chitlangia
  Status: Patch Available  (was: Open)

> Remove unused import in OmUtils
> ---
>
> Key: HDDS-2202
> URL: https://issues.apache.org/jira/browse/HDDS-2202
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Fix hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
> Remove L49: Unused import - org.apache.hadoop.ozone.om.OMMetadataManager;
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2202) Remove unused import in OmUtils

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2202:
-
Labels: newbie pull-request-available  (was: newbie)

> Remove unused import in OmUtils
> ---
>
> Key: HDDS-2202
> URL: https://issues.apache.org/jira/browse/HDDS-2202
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie, pull-request-available
>
> Fix hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
> Remove L49: Unused import - org.apache.hadoop.ozone.om.OMMetadataManager;
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2202) Remove unused import in OmUtils

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2202?focusedWorklogId=319954&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319954
 ]

ASF GitHub Bot logged work on HDDS-2202:


Author: ASF GitHub Bot
Created on: 28/Sep/19 05:38
Start Date: 28/Sep/19 05:38
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1543: HDDS-2202. 
Remove unused import in OmUtils
URL: https://github.com/apache/hadoop/pull/1543#issuecomment-536156198
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319954)
Time Spent: 20m  (was: 10m)

> Remove unused import in OmUtils
> ---
>
> Key: HDDS-2202
> URL: https://issues.apache.org/jira/browse/HDDS-2202
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Fix hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
> Remove L49: Unused import - org.apache.hadoop.ozone.om.OMMetadataManager;
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2202) Remove unused import in OmUtils

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2202?focusedWorklogId=319953&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319953
 ]

ASF GitHub Bot logged work on HDDS-2202:


Author: ASF GitHub Bot
Created on: 28/Sep/19 05:38
Start Date: 28/Sep/19 05:38
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #1543: 
HDDS-2202. Remove unused import in OmUtils
URL: https://github.com/apache/hadoop/pull/1543
 
 
   Removed unused import.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319953)
Remaining Estimate: 0h
Time Spent: 10m

> Remove unused import in OmUtils
> ---
>
> Key: HDDS-2202
> URL: https://issues.apache.org/jira/browse/HDDS-2202
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Dinesh Chitlangia
>Priority: Minor
>  Labels: newbie, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Fix hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
> Remove L49: Unused import - org.apache.hadoop.ozone.om.OMMetadataManager;
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2202) Remove unused import in OmUtils

2019-09-27 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2202:
---

 Summary: Remove unused import in OmUtils
 Key: HDDS-2202
 URL: https://issues.apache.org/jira/browse/HDDS-2202
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Dinesh Chitlangia


Fix hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java

Remove L49: Unused import - org.apache.hadoop.ozone.om.OMMetadataManager;

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2140) Add robot test for GDPR feature

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2140?focusedWorklogId=319951&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319951
 ]

ASF GitHub Bot logged work on HDDS-2140:


Author: ASF GitHub Bot
Created on: 28/Sep/19 05:22
Start Date: 28/Sep/19 05:22
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1542: HDDS-2140. 
Add robot test for GDPR feature
URL: https://github.com/apache/hadoop/pull/1542#issuecomment-536155378
 
 
   @elek Request your review pls. Thanks!
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319951)
Time Spent: 0.5h  (was: 20m)

> Add robot test for GDPR feature
> ---
>
> Key: HDDS-2140
> URL: https://issues.apache.org/jira/browse/HDDS-2140
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Add robot test for GDPR feature so it can be run during smoke tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2140) Add robot test for GDPR feature

2019-09-27 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2140:

Target Version/s: 0.5.0

> Add robot test for GDPR feature
> ---
>
> Key: HDDS-2140
> URL: https://issues.apache.org/jira/browse/HDDS-2140
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Add robot test for GDPR feature so it can be run during smoke tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2140) Add robot test for GDPR feature

2019-09-27 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2140:

Component/s: test

> Add robot test for GDPR feature
> ---
>
> Key: HDDS-2140
> URL: https://issues.apache.org/jira/browse/HDDS-2140
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Add robot test for GDPR feature so it can be run during smoke tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2140) Add robot test for GDPR feature

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2140?focusedWorklogId=319950&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319950
 ]

ASF GitHub Bot logged work on HDDS-2140:


Author: ASF GitHub Bot
Created on: 28/Sep/19 05:19
Start Date: 28/Sep/19 05:19
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on issue #1542: HDDS-2140. 
Add robot test for GDPR feature
URL: https://github.com/apache/hadoop/pull/1542#issuecomment-536155219
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319950)
Time Spent: 20m  (was: 10m)

> Add robot test for GDPR feature
> ---
>
> Key: HDDS-2140
> URL: https://issues.apache.org/jira/browse/HDDS-2140
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Add robot test for GDPR feature so it can be run during smoke tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2140) Add robot test for GDPR feature

2019-09-27 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2140:

Status: Patch Available  (was: In Progress)

> Add robot test for GDPR feature
> ---
>
> Key: HDDS-2140
> URL: https://issues.apache.org/jira/browse/HDDS-2140
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Add robot test for GDPR feature so it can be run during smoke tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2140) Add robot test for GDPR feature

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2140?focusedWorklogId=319949&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319949
 ]

ASF GitHub Bot logged work on HDDS-2140:


Author: ASF GitHub Bot
Created on: 28/Sep/19 05:17
Start Date: 28/Sep/19 05:17
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #1542: 
HDDS-2140. Add robot test for GDPR feature
URL: https://github.com/apache/hadoop/pull/1542
 
 
   Tested using test-single script.
   
   `$ ../test-single.sh om gdpr/gdpr.robot
   
   ozone-gdpr :: Smoketest Ozone GDPR Feature
   
   Test GDPR(disabled) without explicit options   | PASS |
   --
   Test GDPR with --enforcegdpr=true              | PASS |
   --
   Test GDPR with -g=true                         | PASS |
   --
   Test GDPR with -g=false                        | PASS |
   --
   ozone-gdpr :: Smoketest Ozone GDPR Feature     | PASS |
   4 critical tests, 4 passed, 0 failed
   4 tests total, 4 passed, 0 failed
   
   Output:  /tmp/smoketest/ozone/result/robot-ozone-ozone-gdpr-om.xml
   Log:     ~/apache/hadoop/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone/result/log.html
   Report:  ~/apache/hadoop/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone/result/report.html`
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319949)
Remaining Estimate: 0h
Time Spent: 10m

> Add robot test for GDPR feature
> ---
>
> Key: HDDS-2140
> URL: https://issues.apache.org/jira/browse/HDDS-2140
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add robot test for GDPR feature so it can be run during smoke tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2140) Add robot test for GDPR feature

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2140:
-
Labels: pull-request-available  (was: )

> Add robot test for GDPR feature
> ---
>
> Key: HDDS-2140
> URL: https://issues.apache.org/jira/browse/HDDS-2140
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
>
> Add robot test for GDPR feature so it can be run during smoke tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2183) Container and pipeline subcommands of scmcli should be grouped

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2183?focusedWorklogId=319942&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319942
 ]

ASF GitHub Bot logged work on HDDS-2183:


Author: ASF GitHub Bot
Created on: 28/Sep/19 04:22
Start Date: 28/Sep/19 04:22
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1532: HDDS-2183. 
Container and pipeline subcommands of scmcli should be grouped.
URL: https://github.com/apache/hadoop/pull/1532#issuecomment-536152232
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 72 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 30 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 32 | hadoop-ozone in trunk failed. |
   | -1 | compile | 18 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 47 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 915 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 999 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 27 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 17 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 32 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 33 | hadoop-ozone in the patch failed. |
   | -1 | compile | 22 | hadoop-hdds in the patch failed. |
   | -1 | compile | 15 | hadoop-ozone in the patch failed. |
   | -1 | javac | 22 | hadoop-hdds in the patch failed. |
   | -1 | javac | 15 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 55 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 861 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 17 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 33 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 20 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 28 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 2537 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1532/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1532 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux dceccfae9a7e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 14b4fbc |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1532/3/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1532/3/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1532/3/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1532/3/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1532/3/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1532/3/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1532/3/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1532/3/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1532/3/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 

[jira] [Commented] (HDDS-2175) Propagate System Exceptions from the OzoneManager

2019-09-27 Thread Supratim Deka (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939841#comment-16939841
 ] 

Supratim Deka commented on HDDS-2175:
-

Note from [~aengineer] posted on the github PR:

Also, are these call stacks something that the end user should ever see? As a 
user I have always found a call stack useless; it might be useful to the 
developer for debugging purposes, but clients are generally used by real users. 
If these stacks are not already logged in ozone.log, maybe we can log them 
there, provided we guard them via a config key that is off by default.


> Propagate System Exceptions from the OzoneManager
> -
>
> Key: HDDS-2175
> URL: https://issues.apache.org/jira/browse/HDDS-2175
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Exceptions encountered while processing requests on the OM are categorized as 
> business exceptions and system exceptions. All of the business exceptions are 
> captured as OMException and have an associated status code which is returned 
> to the client. The handling of these is not going to be changed.
> Currently system exceptions are returned as INTERNAL ERROR to the client with 
> a one-line message string from the exception. The scope of this jira is to 
> capture system exceptions and propagate the related information (including the 
> complete stack trace) back to the client.
> There are 3 sub-tasks required to achieve this:
> 1. Separate capture and handling for OMException and the other 
> exceptions (IOException). For system exceptions, use the Hadoop IPC 
> ServiceException mechanism to send the stack trace to the client.
> 2. Track and propagate exceptions inside the Ratis OzoneManagerStateMachine 
> up to the OzoneManager layer (on the leader). Currently, these 
> exceptions are not being tracked.
> 3. Handle and propagate exceptions from Ratis.
> A jira will be raised for each sub-task.
>   
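
For sub-task 1, a minimal sketch of the wrapping pattern, assuming the usual 
Hadoop IPC convention that a com.google.protobuf.ServiceException carries its 
cause (and stack trace) back to the caller; the dispatcher and helper names 
below are illustrative, not the actual OM code:

{code:java}
import java.io.IOException;
import com.google.protobuf.ServiceException;

// Hedged sketch: business errors stay as OMException (status code only),
// while unexpected IOExceptions are wrapped so the full stack trace travels
// back over Hadoop IPC to the client.
public class OmRequestDispatcher {
  Object dispatch(Object request) throws ServiceException {
    try {
      return handle(request);
    } catch (OMException e) {
      // Business exception: keep the existing status-code handling.
      return errorResponse(e.getResult());
    } catch (IOException e) {
      // System exception: propagate with the complete stack trace.
      throw new ServiceException(e);
    }
  }

  // Placeholders standing in for the real OM request handling:
  private Object handle(Object request) throws IOException { return request; }
  private Object errorResponse(Object status) { return status; }
  static class OMException extends IOException {
    Object getResult() { return "STATUS_CODE"; }
  }
}
{code}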



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is not present

2019-09-27 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14853:

Attachment: HDFS-14853-branch-2-Addendum-01.patch

> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is not present
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 2.10.0, 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14853-branch-2-Addendum-01.patch, 
> HDFS-14853.001.patch, HDFS-14853.002.patch, HDFS-14853.003.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}
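
For context, the NPE occurs when the excluded node has already left the 
topology, so the lookup inside chooseRandomWithStorageType() returns null and 
is then dereferenced. A hedged standalone demo of that failure pattern and the 
guard (not the committed patch):

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hedged demo: looking up an excluded node that is no longer registered
// yields null, and dereferencing that null produces the
// NullPointerException quoted above.
public class ExcludedNodeGuardDemo {
  private final Map<String, String> nodesByLocation = new HashMap<>();

  String chooseRandomExcluding(String excludedLocation) {
    String excluded = nodesByLocation.get(excludedLocation);
    if (excluded == null) {
      // Excluded node is gone: exclude nothing instead of dereferencing null.
      return chooseRandom();
    }
    return chooseRandomAvoiding(excluded);
  }

  private String chooseRandom() { return "any-datanode"; }

  private String chooseRandomAvoiding(String excluded) {
    return "some-other-datanode";
  }
}
{code}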



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is not present

2019-09-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939836#comment-16939836
 ] 

Ayush Saxena commented on HDFS-14853:
-

Just realized branch-2 doesn't have {{DatanodeID.EMPTY_DATANODE_ID}}; replacing 
it with the actual definition in the branch-2 addendum.

> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is not present
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 2.10.0, 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14853.001.patch, HDFS-14853.002.patch, 
> HDFS-14853.003.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2183) Container and pipeline subcommands of scmcli should be grouped

2019-09-27 Thread YiSheng Lien (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939835#comment-16939835
 ] 

YiSheng Lien commented on HDDS-2183:


Hello everyone,

The attachments are some figures illustrating the fix.

> Container and pipeline subcommands of scmcli should be grouped
> -
>
> Key: HDDS-2183
> URL: https://issues.apache.org/jira/browse/HDDS-2183
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Once upon a time we had only a few subcommands under `ozone scmcli` to 
> manage containers.
>  
> Now we have many admin commands; some of them are grouped into a subcommand 
> (e.g. safemode, replicationmanager) and some are not.
>  
> I propose to group the container and pipeline related commands:
>  
> Instead of "ozone scmcli info" use "ozone scmcli container info"
> Instead of "ozone scmcli list" use "ozone scmcli container list"
> Instead of "ozone scmcli listPipelines" use "ozone scmcli pipeline list"
>  
> And so on...
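
To make the grouping concrete, a hedged picocli sketch (assuming scmcli's 
picocli-style CLI; class and command names are simplified, not the actual 
patch):

{code:java}
import picocli.CommandLine;
import picocli.CommandLine.Command;

// Hedged sketch of grouping: "scmcli container info" / "scmcli pipeline list"
// instead of the flat "scmcli info" / "scmcli listPipelines".
@Command(name = "scmcli",
    subcommands = {ContainerCommands.class, PipelineCommands.class})
class ScmCli implements Runnable {
  public void run() { new CommandLine(this).usage(System.out); }

  public static void main(String[] args) {
    new CommandLine(new ScmCli()).execute(args);
  }
}

@Command(name = "container", subcommands = {ContainerInfo.class})
class ContainerCommands implements Runnable {
  public void run() { new CommandLine(this).usage(System.out); }
}

@Command(name = "info", description = "Show info about one container")
class ContainerInfo implements Runnable {
  public void run() { System.out.println("container info ..."); }
}

@Command(name = "pipeline")
class PipelineCommands implements Runnable {
  public void run() { new CommandLine(this).usage(System.out); }
}
{code}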



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is not present

2019-09-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939829#comment-16939829
 ] 

Ayush Saxena commented on HDFS-14853:
-

Thanks [~xkrogen], cherry-picked this to branch-3.2, branch-3.1 and branch-2.


> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is not present
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14853.001.patch, HDFS-14853.002.patch, 
> HDFS-14853.003.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14853) NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode is not present

2019-09-27 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14853:

Fix Version/s: 3.2.2
   3.1.4
   2.10.0

> NPE in DFSNetworkTopology#chooseRandomWithStorageType() when the excludedNode 
> is not present
> 
>
> Key: HDFS-14853
> URL: https://issues.apache.org/jira/browse/HDFS-14853
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ranith Sardar
>Assignee: Ranith Sardar
>Priority: Major
> Fix For: 2.10.0, 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14853.001.patch, HDFS-14853.002.patch, 
> HDFS-14853.003.patch
>
>
>  
> {{org.apache.hadoop.ipc.RemoteException(java.lang.NullPointerException): 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:229)
>   at 
> org.apache.hadoop.hdfs.net.DFSNetworkTopology.chooseRandomWithStorageType(DFSNetworkTopology.java:77)}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14850) Optimize FileSystemAccessService#getFileSystemConfiguration

2019-09-27 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939823#comment-16939823
 ] 

Lisheng Sun commented on HDFS-14850:


Hi [~elgoiri], should we commit this patch to trunk? Thank you.

> Optimize FileSystemAccessService#getFileSystemConfiguration
> ---
>
> Key: HDFS-14850
> URL: https://issues.apache.org/jira/browse/HDFS-14850
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs, performance
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14850.001.patch, HDFS-14850.002.patch, 
> HDFS-14850.003.patch, HDFS-14850.004(2).patch, HDFS-14850.004.patch, 
> HDFS-14850.005.patch
>
>
> {code:java}
>  @Override
>   public Configuration getFileSystemConfiguration() {
> Configuration conf = new Configuration(true);
> ConfigurationUtils.copy(serviceHadoopConf, conf);
> conf.setBoolean(FILE_SYSTEM_SERVICE_CREATED, true);
> // Force-clear server-side umask to make HttpFS match WebHDFS behavior
> conf.set(FsPermission.UMASK_LABEL, "000");
> return conf;
>   }
> {code}
> As the code above shows, every call to 
> FileSystemAccessService#getFileSystemConfiguration creates a new 
> Configuration.
> That is unnecessary and affects performance. I think we only need to create 
> the Configuration once in FileSystemAccessService#init and have 
> FileSystemAccessService#getFileSystemConfiguration return it.
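
A hedged sketch of that proposal, written as a fragment of 
FileSystemAccessService in the style of the snippet above (the helper name 
initFileSystemConf is an assumption, not the actual patch):

{code:java}
// Hedged sketch: build the Configuration once and hand out the cached
// instance, instead of copying serviceHadoopConf on every call.
private volatile Configuration fileSystemConf;

// Called once from FileSystemAccessService#init():
private void initFileSystemConf() {
  Configuration conf = new Configuration(true);
  ConfigurationUtils.copy(serviceHadoopConf, conf);
  conf.setBoolean(FILE_SYSTEM_SERVICE_CREATED, true);
  // Force-clear server-side umask to make HttpFS match WebHDFS behavior
  conf.set(FsPermission.UMASK_LABEL, "000");
  this.fileSystemConf = conf;
}

@Override
public Configuration getFileSystemConfiguration() {
  // Return the cached instance instead of rebuilding it on every call.
  return fileSystemConf;
}
{code}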



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6524) Choosing datanode retry times considering block replica number

2019-09-27 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939822#comment-16939822
 ] 

Lisheng Sun commented on HDFS-6524:
---

[~elgoiri] Could you find time to continue reviewing this patch? Thank you.

> Choosing datanode retry times considering block replica number
> --
>
> Key: HDFS-6524
> URL: https://issues.apache.org/jira/browse/HDFS-6524
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha1
>Reporter: Liang Xie
>Assignee: Lisheng Sun
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6524.001.patch, HDFS-6524.002.patch, 
> HDFS-6524.003.patch, HDFS-6524.004.patch, HDFS-6524.005(2).patch, 
> HDFS-6524.005.patch, HDFS-6524.006.patch, HDFS-6524.txt
>
>
> Currently chooseDataNode() retries based on the setting 
> dfsClientConf.maxBlockAcquireFailures, which by default is 3 
> (DFS_CLIENT_MAX_BLOCK_ACQUIRE_FAILURES_DEFAULT = 3). It would be better to 
> have another option that considers the block replication factor, e.g. for a 
> cluster configured with only two block replicas, or a Reed-Solomon encoding 
> solution with one replica. This helps to reduce the long tail latency.
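
A hedged sketch of that idea (helper and class names are illustrative, not 
the DFSClient patch): cap the retry budget by the replication factor, so a 
two-replica or one-replica read does not retry more often than there are 
replicas to try.

{code:java}
// Hedged sketch: derive an effective retry budget from both the configured
// maximum and the block's replication factor.
public class RetryBudget {
  static int effectiveRetries(int maxBlockAcquireFailures, int replication) {
    // Retrying more times than there are distinct replicas buys nothing.
    return Math.max(1, Math.min(maxBlockAcquireFailures, replication));
  }

  public static void main(String[] args) {
    // maxBlockAcquireFailures=3 (the default), replication=2 -> 2 retries;
    // replication=1 (e.g. one Reed-Solomon replica) -> 1 retry.
    System.out.println(effectiveRetries(3, 2)); // 2
    System.out.println(effectiveRetries(3, 1)); // 1
  }
}
{code}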



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14814) RBF: RouterQuotaUpdateService supports inherited rule.

2019-09-27 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939814#comment-16939814
 ] 

Íñigo Goiri commented on HDFS-14814:


If you want to return a List of Entry, just create a new Map.

> RBF: RouterQuotaUpdateService supports inherited rule.
> --
>
> Key: HDFS-14814
> URL: https://issues.apache.org/jira/browse/HDFS-14814
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: HDFS-14814.001.patch, HDFS-14814.002.patch, 
> HDFS-14814.003.patch, HDFS-14814.004.patch, HDFS-14814.005.patch, 
> HDFS-14814.006.patch, HDFS-14814.007.patch, HDFS-14814.008.patch, 
> HDFS-14814.009.patch
>
>
> I want to add a rule *'The quota should be set the same as the nearest 
> parent'* to Global Quota. Supposing we have the mount table below.
> M1: /dir-a                            ns0->/dir-a     \{nquota=10,squota=20}
> M2: /dir-a/dir-b                 ns1->/dir-b     \{nquota=-1,squota=30}
> M3: /dir-a/dir-b/dir-c       ns2->/dir-c     \{nquota=-1,squota=-1}
> M4: /dir-d                           ns3->/dir-d     \{nquota=-1,squota=-1}
>  
> The quota for the remote locations on the namespaces should be:
>  ns0->/dir-a     \{nquota=10,squota=20}
>  ns1->/dir-b     \{nquota=10,squota=30}
>  ns2->/dir-c      \{nquota=10,squota=30}
>  ns3->/dir-d     \{nquota=-1,squota=-1}
>  
> The quota of the remote location is set the same as the corresponding 
> MountTable, and if there is no quota of the MountTable then the quota is set 
> to the nearest parent MountTable with quota.
>  
> It's easy to implement. In RouterQuotaUpdateService, each time we compute 
> the currentQuotaUsage we can get the quota info for each MountTable. We can 
> then check and fix every MountTable whose quota doesn't match the rule above.
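
To illustrate the rule, a hedged standalone sketch of nearest-parent quota 
resolution (simplified types; not the RouterQuotaUpdateService patch):

{code:java}
import java.util.Map;
import java.util.TreeMap;

// Hedged sketch: resolve the effective nsQuota of a mount point by walking up
// to the nearest parent mount entry that has a quota set (-1 means unset).
public class NearestParentQuota {
  static long effectiveNsQuota(String path, Map<String, Long> nsQuotaByMount) {
    for (String p = path; !p.isEmpty(); p = parent(p)) {
      Long q = nsQuotaByMount.get(p);
      if (q != null && q != -1) {
        return q;           // nearest ancestor (or self) with a quota
      }
    }
    return -1;              // no ancestor sets a quota
  }

  private static String parent(String path) {
    int i = path.lastIndexOf('/');
    return i <= 0 ? "" : path.substring(0, i);
  }

  public static void main(String[] args) {
    Map<String, Long> quotas = new TreeMap<>();
    quotas.put("/dir-a", 10L);             // M1: nquota=10
    quotas.put("/dir-a/dir-b", -1L);       // M2: nquota unset
    quotas.put("/dir-a/dir-b/dir-c", -1L); // M3: nquota unset
    // Prints 10: /dir-a/dir-b/dir-c inherits from /dir-a, as in the example.
    System.out.println(effectiveNsQuota("/dir-a/dir-b/dir-c", quotas));
  }
}
{code}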



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make OM Generic related configuration support HA style config

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=319914&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319914
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 28/Sep/19 00:45
Start Date: 28/Sep/19 00:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1511: HDDS-2162. Make 
OM Generic related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#issuecomment-536135612
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 148 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | -1 | mvninstall | 55 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 28 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-hdds in trunk failed. |
   | -1 | compile | 17 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 62 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1103 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 25 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 26 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1212 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 35 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 18 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | -1 | mvninstall | 38 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 42 | hadoop-ozone in the patch failed. |
   | -1 | compile | 25 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 25 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 70 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 906 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 22 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 34 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 19 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 30 | hadoop-hdds in the patch failed. |
   | -1 | unit | 24 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 3011 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1511 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux deed36415153 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 14b4fbc |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/4/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/4/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/4/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/4/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/4/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/4/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/4/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/4/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1511/4/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 

[jira] [Created] (HDDS-2201) Rename VolumeList to UserVolumeInfo

2019-09-27 Thread Anu Engineer (Jira)
Anu Engineer created HDDS-2201:
--

 Summary: Rename VolumeList to UserVolumeInfo
 Key: HDDS-2201
 URL: https://issues.apache.org/jira/browse/HDDS-2201
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Manager
Reporter: Anu Engineer
Assignee: Anu Engineer


Under Ozone Manager, Volume points to a structure called VolumeInfo, Bucket 
points to BucketInfo, and Key points to KeyInfo. However, User points to 
VolumeList. duh?

This JIRA proposes to rename VolumeList to UserVolumeInfo. Why not UserInfo? 
Because that structure is already taken by the security work of Ozone 
Manager.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2162) Make OM Generic related configuration support HA style config

2019-09-27 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2162:
-
Description: 
To have a single configuration to use across the OM cluster, a few of the 
configs like 

-OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,-

-OZONE_OM_KERBEROS_PRINCIPAL_KEY,-

-OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,-

-OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support configs which append with 
service id and node id.-

 

Addressed OM_DB_DIRS, OZONE_OM_ADDRESS_KEY also in this patch.

 

This Jira is to fix the above configs.

 

  was:
To have a single configuration to use across the OM cluster, a few of the 
configs like 

OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,

OZONE_OM_KERBEROS_PRINCIPAL_KEY,

OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,

OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support configs which append with 
service id and node id.

 

Addressed OM_DB_DIRS, OZONE_OM_ADDRESS_KEY also in this patch.

 

This Jira is to fix the above configs.

 


> Make OM Generic related configuration support HA style config
> -
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> To have a single configuration to use across the OM cluster, a few of the 
> configs like 
> -OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,-
> -OZONE_OM_KERBEROS_PRINCIPAL_KEY,-
> -OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,-
> -OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support configs which append 
> with service id and node id.-
>  
> Addressed OM_DB_DIRS, OZONE_OM_ADDRESS_KEY also in this patch.
>  
> This Jira is to fix the above configs.
>  
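
For illustration, a hedged sketch of what the HA-style suffixing looks like in 
practice (key names follow the pattern shown in the linked PR's test diff; 
service id, node ids, and host/port values are made up):

{code:java}
import org.apache.hadoop.hdds.conf.OzoneConfiguration;

// Hedged sketch: one config file drives an HA OM cluster by suffixing keys
// with ".<serviceId>" and ".<serviceId>.<nodeId>".
public class HaStyleConfigExample {
  public static void main(String[] args) {
    OzoneConfiguration conf = new OzoneConfiguration();
    conf.set("ozone.om.service.ids", "service1");
    conf.set("ozone.om.nodes.service1", "omNode1,omNode2");
    // Per-node address keys, appended with service id and node id:
    conf.set("ozone.om.address.service1.omNode1", "host1:9862");
    conf.set("ozone.om.address.service1.omNode2", "host2:9862");
  }
}
{code}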



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2162) Make OM Generic related configuration support HA style config

2019-09-27 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2162:
-
Summary: Make OM Generic related configuration support HA style config  
(was: Make Kerberos related configuration support HA style config)

> Make OM Generic related configuration support HA style config
> -
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> To have a single configuration to use across the OM cluster, a few of the 
> configs like 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
> OZONE_OM_KERBEROS_PRINCIPAL_KEY,
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support configs which append 
> with service id and node id.
>  
> Addressed OM_DB_DIRS, OZONE_OM_ADDRESS_KEY also in this patch.
>  
> This Jira is to fix the above configs.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2162) Make Kerberos related configuration support HA style config

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2162?focusedWorklogId=319898&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319898
 ]

ASF GitHub Bot logged work on HDDS-2162:


Author: ASF GitHub Bot
Created on: 27/Sep/19 23:53
Start Date: 27/Sep/19 23:53
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1511: 
HDDS-2162. Make Kerberos related configuration support HA style config.
URL: https://github.com/apache/hadoop/pull/1511#discussion_r329284058
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerConfiguration.java
 ##
 @@ -119,10 +119,13 @@ public void testDefaultPortIfNotSpecified() throws 
Exception {
 String omNode1Id = "omNode1";
 String omNode2Id = "omNode2";
 String omNodesKeyValue = omNode1Id + "," + omNode2Id;
-conf.set(OMConfigKeys.OZONE_OM_NODES_KEY, omNodesKeyValue);
+String serviceID = "service1";
+conf.set(OMConfigKeys.OZONE_OM_SERVICE_IDS_KEY, serviceID);
+conf.set(OMConfigKeys.OZONE_OM_NODES_KEY + "." + serviceID,
+omNodesKeyValue);
 
-String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode1Id);
-String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(null, omNode2Id);
+String omNode1RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode1Id);
+String omNode2RpcAddrKey = getOMAddrKeyWithSuffix(serviceID, omNode2Id);
 
 
 Review comment:
   As discussed offline for now removed Kerberos config.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319898)
Time Spent: 5.5h  (was: 5h 20m)

> Make Kerberos related configuration support HA style config
> ---
>
> Key: HDDS-2162
> URL: https://issues.apache.org/jira/browse/HDDS-2162
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> To have a single configuration to use across the OM cluster, a few of the 
> configs like 
> OZONE_OM_KERBEROS_KEYTAB_FILE_KEY,
> OZONE_OM_KERBEROS_PRINCIPAL_KEY,
> OZONE_OM_HTTP_KERBEROS_KEYTAB_FILE,
> OZONE_OM_HTTP_KERBEROS_PRINCIPAL_KEY need to support configs which append 
> with service id and node id.
>  
> Addressed OM_DB_DIRS, OZONE_OM_ADDRESS_KEY also in this patch.
>  
> This Jira is to fix the above configs.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-09-27 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939793#comment-16939793
 ] 

Konstantin Shvachko commented on HDFS-14305:


Attached v07 patch. This one goes after reverting the current commit (v06).
In the v07 patch:
# Ensure we do not use negative numbers as the initial serialNo.
# Added an assert verifying the new serialNo is always in the range.
# Added a unit test, which tests the range conditions.
# Removed double initialization of the keys - {{generateKeys()}} was called 
twice.
# Added logging of the range when the NN starts; this should cover HDFS-14793.

If there are no objections to reverting, I'll do it over the weekend.

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Reporter: Chao Sun
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14305-007.patch, HDFS-14305.001.patch, 
> HDFS-14305.002.patch, HDFS-14305.003.patch, HDFS-14305.004.patch, 
> HDFS-14305.005.patch, HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then use this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> where {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNodes could have overlapping ranges 
> for serial number. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key which 
> will cause clients to fail because of {{InvalidToken}} error.
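
A hedged worked example of the formula above (plain Java, using the toy 
MAX_VALUE of 100 from the description):

{code:java}
// Hedged demo of the overlap described above, using maxValue = 100 and 2 NNs.
public class SerialRangeDemo {
  public static void main(String[] args) {
    int maxValue = 100;                          // stand-in for Integer.MAX_VALUE
    int numNNs = 2;
    int intRange = maxValue / numNNs;            // 50
    for (int nnIndex = 0; nnIndex < numNNs; nnIndex++) {
      int nnRangeStart = intRange * nnIndex;     // 0 for nn1, 50 for nn2
      // serialNo starts as a random int, possibly negative:
      int serialNo = -73;
      serialNo = (serialNo % intRange) + nnRangeStart;
      // Java's % keeps the sign of the dividend, so nn1 can land anywhere in
      // [-49, 49] while nn2 lands in [1, 99]; these are the overlapping
      // ranges quoted above.
      System.out.println("nn" + (nnIndex + 1) + " example serialNo: " + serialNo);
    }
  }
}
{code}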



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14305) Serial number in BlockTokenSecretManager could overlap between different namenodes

2019-09-27 Thread Konstantin Shvachko (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HDFS-14305:
---
Attachment: HDFS-14305-007.patch

> Serial number in BlockTokenSecretManager could overlap between different 
> namenodes
> --
>
> Key: HDFS-14305
> URL: https://issues.apache.org/jira/browse/HDFS-14305
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, security
>Reporter: Chao Sun
>Assignee: Xiaoqiao He
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14305-007.patch, HDFS-14305.001.patch, 
> HDFS-14305.002.patch, HDFS-14305.003.patch, HDFS-14305.004.patch, 
> HDFS-14305.005.patch, HDFS-14305.006.patch
>
>
> Currently, a {{BlockTokenSecretManager}} starts with a random integer as the 
> initial serial number, and then use this formula to rotate it:
> {code:java}
> this.intRange = Integer.MAX_VALUE / numNNs;
> this.nnRangeStart = intRange * nnIndex;
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
>  {code}
> where {{numNNs}} is the total number of NameNodes in the cluster, and 
> {{nnIndex}} is the index of the current NameNode specified in the 
> configuration {{dfs.ha.namenodes.}}.
> However, with this approach, different NameNodes could have overlapping ranges 
> for serial number. For simplicity, let's assume {{Integer.MAX_VALUE}} is 100, 
> and we have 2 NameNodes {{nn1}} and {{nn2}} in configuration. Then the ranges 
> for these two are:
> {code}
> nn1 -> [-49, 49]
> nn2 -> [1, 99]
> {code}
> This is because the initial serial number could be any negative integer.
> Moreover, when the keys are updated, the serial number will again be updated 
> with the formula:
> {code}
> this.serialNo = (this.serialNo % intRange) + (nnRangeStart);
> {code}
> which means the new serial number could be updated to a range that belongs to 
> a different NameNode, thus increasing the chance of collision again.
> When the collision happens, DataNodes could overwrite an existing key which 
> will cause clients to fail because of {{InvalidToken}} error.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14730) Remove unused configuration dfs.web.authentication.filter

2019-09-27 Thread Chen Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-14730:
--
Status: Patch Available  (was: Open)

> Remove unused configuration dfs.web.authentication.filter 
> --
>
> Key: HDFS-14730
> URL: https://issues.apache.org/jira/browse/HDFS-14730
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14730.001.patch, HDFS-14730.002.patch
>
>
> After HADOOP-16314, this configuration is not used anywhere, so I propose to 
> deprecate it to avoid misuse.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2164) om.db.checkpoints is filling up fast

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2164?focusedWorklogId=319880&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319880
 ]

ASF GitHub Bot logged work on HDDS-2164:


Author: ASF GitHub Bot
Created on: 27/Sep/19 22:56
Start Date: 27/Sep/19 22:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1536: HDDS-2164 : 
om.db.checkpoints is filling up fast.
URL: https://github.com/apache/hadoop/pull/1536#issuecomment-536120797
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 77 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 70 | Maven dependency ordering for branch |
   | -1 | mvninstall | 37 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 42 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 60 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 946 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1037 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 31 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 17 | hadoop-ozone in trunk failed. |
   | -0 | patch | 1065 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | -1 | mvninstall | 31 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 34 | hadoop-ozone in the patch failed. |
   | -1 | compile | 20 | hadoop-hdds in the patch failed. |
   | -1 | compile | 16 | hadoop-ozone in the patch failed. |
   | -1 | javac | 20 | hadoop-hdds in the patch failed. |
   | -1 | javac | 16 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 24 | The patch passed checkstyle in hadoop-hdds |
   | +1 | checkstyle | 27 | hadoop-ozone: The patch generated 0 new + 0 
unchanged - 1 fixed = 0 total (was 1) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 793 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 16 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 30 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 16 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 24 | hadoop-hdds in the patch failed. |
   | -1 | unit | 23 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 2594 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1536/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1536 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux fa8a6c6e8a26 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 13b427f |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1536/2/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1536/2/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1536/2/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1536/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1536/2/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1536/2/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1536/2/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 

[jira] [Work logged] (HDDS-2019) Handle Set DtService of token in S3Gateway for OM HA

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2019?focusedWorklogId=319879=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319879
 ]

ASF GitHub Bot logged work on HDDS-2019:


Author: ASF GitHub Bot
Created on: 27/Sep/19 22:53
Start Date: 27/Sep/19 22:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1489: HDDS-2019. 
Handle Set DtService of token in S3Gateway for OM HA.
URL: https://github.com/apache/hadoop/pull/1489#issuecomment-536120276
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 37 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 36 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 44 | hadoop-ozone in trunk failed. |
   | -1 | compile | 21 | hadoop-hdds in trunk failed. |
   | -1 | compile | 16 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 865 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 25 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 20 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 983 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 47 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 34 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 37 | hadoop-ozone in the patch failed. |
   | -1 | compile | 26 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 59 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 730 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 21 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 21 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 27 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 2416 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1489/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1489 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux edb0fc8744f3 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 13b427f |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1489/3/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1489/3/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1489/3/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1489/3/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1489/3/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1489/3/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1489/3/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1489/3/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1489/3/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1489/3/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 

[jira] [Commented] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939788#comment-16939788
 ] 

Hudson commented on HDDS-1146:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17409 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17409/])
HDDS-1146. Adding container related metrics in SCM. (#1541) (github: rev 
14b4fbc019c98e982466083838226af8790a53a8)
* (edit) 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/container/metrics/TestSCMContainerManagerMetrics.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/IncrementalContainerReportHandler.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/AbstractContainerReportHandler.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/metrics/SCMContainerManagerMetrics.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerManager.java
* (edit) 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerReportHandler.java


> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> Above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?focusedWorklogId=319861=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319861
 ]

ASF GitHub Bot logged work on HDDS-1146:


Author: ASF GitHub Bot
Created on: 27/Sep/19 22:26
Start Date: 27/Sep/19 22:26
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1541: 
HDDS-1146. Adding container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1541#discussion_r329271150
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/IncrementalContainerReportHandler.java
 ##
 @@ -66,16 +67,25 @@ public void onMessage(final IncrementalContainerReportFromDatanode report,
           nodeManager.addContainer(dd, id);
           processContainerReplica(dd, replicaProto);
         } catch (ContainerNotFoundException e) {
+          success = false;
           LOG.warn("Container {} not found!", replicaProto.getContainerID());
         } catch (NodeNotFoundException ex) {
+          success = false;
           LOG.error("Received ICR from unknown datanode {} {}",
               report.getDatanodeDetails(), ex);
         } catch (IOException e) {
+          success = false;
           LOG.error("Exception while processing ICR for container {}",
               replicaProto.getContainerID());
         }
       }
 
+      if (success) {
+        getContainerManager().notifyContainerReportProcessing(false, true);
+      } else {
 
 Review comment:
   As discussed offline, we will change this in a later jira. For now we 
shall go with the current way.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319861)
Time Spent: 2h 50m  (was: 2h 40m)

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> Above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?focusedWorklogId=319859=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319859
 ]

ASF GitHub Bot logged work on HDDS-1146:


Author: ASF GitHub Bot
Created on: 27/Sep/19 22:24
Start Date: 27/Sep/19 22:24
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1541: HDDS-1146. 
Adding container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1541#issuecomment-536114953
 
 
   Test failures are not related to this patch.
   Thank you @anuengineer and @avijayanhwx for the review.
   I will commit this to the trunk.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319859)
Time Spent: 2.5h  (was: 2h 20m)

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> Above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?focusedWorklogId=319860=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319860
 ]

ASF GitHub Bot logged work on HDDS-1146:


Author: ASF GitHub Bot
Created on: 27/Sep/19 22:24
Start Date: 27/Sep/19 22:24
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1541: 
HDDS-1146. Adding container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1541
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319860)
Time Spent: 2h 40m  (was: 2.5h)

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> Above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1146:
-
Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> Above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?focusedWorklogId=319858=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319858
 ]

ASF GitHub Bot logged work on HDDS-1146:


Author: ASF GitHub Bot
Created on: 27/Sep/19 22:22
Start Date: 27/Sep/19 22:22
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #1541: HDDS-1146. 
Adding container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1541#discussion_r329270479
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/IncrementalContainerReportHandler.java
 ##
 @@ -66,16 +67,25 @@ public void onMessage(final IncrementalContainerReportFromDatanode report,
           nodeManager.addContainer(dd, id);
           processContainerReplica(dd, replicaProto);
         } catch (ContainerNotFoundException e) {
+          success = false;
           LOG.warn("Container {} not found!", replicaProto.getContainerID());
         } catch (NodeNotFoundException ex) {
+          success = false;
           LOG.error("Received ICR from unknown datanode {} {}",
               report.getDatanodeDetails(), ex);
         } catch (IOException e) {
+          success = false;
           LOG.error("Exception while processing ICR for container {}",
               replicaProto.getContainerID());
         }
       }
 
+      if (success) {
+        getContainerManager().notifyContainerReportProcessing(false, true);
+      } else {
 
 Review comment:
   IMO, we should change it to 1 line instead of if-else. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319858)
Time Spent: 2h 20m  (was: 2h 10m)

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> Above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?focusedWorklogId=319851=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319851
 ]

ASF GitHub Bot logged work on HDDS-1146:


Author: ASF GitHub Bot
Created on: 27/Sep/19 21:50
Start Date: 27/Sep/19 21:50
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1541: HDDS-1146. 
Adding container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1541#issuecomment-536106044
 
 
   > @bharatviswa504 If you don't mind, can you tell me where a metric like 
numContainerReportsSuccess would be useful? Failures I guess may be useful.
   
   If the user wants to know how many container reports were processed 
successfully in the system and compare that with the failure count, then this 
metric is useful.
   
   I also learned yesterday that it is good to have metrics in the system for 
the actions it has performed. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319851)
Time Spent: 2h 10m  (was: 2h)

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> Above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2199) In SCMNodeManager dnsToUuidMap cannot track multiple DNs on the same host

2019-09-27 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell reassigned HDDS-2199:
---

Assignee: Stephen O'Donnell

> In SCMNodeManager dnsToUuidMap cannot track multiple DNs on the same host
> -
>
> Key: HDDS-2199
> URL: https://issues.apache.org/jira/browse/HDDS-2199
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>
> Often in test clusters and tests, we start multiple datanodes on the same 
> host.
> In SCMNodeManager.register() there is a map of hostname -> datanode UUID 
> called dnsToUuidMap.
> If several DNs register from the same host, the entry in the map will be 
> overwritten and the last DN to register will 'win'.
> This means that the method getNodeByAddress() does not return the correct 
> DatanodeDetails object when many DNs are registered from the same address.
> This method is only used in SCMBlockProtocolServer.sortDatanodes() to allow 
> it to see if one of the nodes matches the client, but it needs to be used by 
> the Decommission code.
> Perhaps we could change the getNodeByAddress() method to return a list of 
> DNs? In normal production clusters, there should only be one returned, but in 
> test clusters, there may be many. Any code looking for a specific DN entry 
> would need to iterate the list and match on the port number too, as host:port 
> would be the unique definition of a datanode.
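
A minimal sketch of the map change being proposed (the field and method names 
here are illustrative assumptions, not the actual SCMNodeManager API):

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Tracks every datanode UUID registered from a host, instead of letting
// the last registration overwrite earlier ones.
class HostToNodesMap {
  private final ConcurrentMap<String, Set<UUID>> dnsToUuidMap =
      new ConcurrentHashMap<>();

  void register(String hostname, UUID dnUuid) {
    // Append to the host's set rather than replacing the single entry.
    dnsToUuidMap.computeIfAbsent(hostname,
        k -> ConcurrentHashMap.newKeySet()).add(dnUuid);
  }

  // Returns all DNs on the address; callers needing a specific DN (e.g.
  // the decommission code) would additionally match on port.
  List<UUID> getNodesByAddress(String hostname) {
    return new ArrayList<>(
        dnsToUuidMap.getOrDefault(hostname, Collections.emptySet()));
  }
}
{code}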



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?focusedWorklogId=319847=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319847
 ]

ASF GitHub Bot logged work on HDDS-1146:


Author: ASF GitHub Bot
Created on: 27/Sep/19 21:43
Start Date: 27/Sep/19 21:43
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1541: HDDS-1146. 
Adding container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1541#issuecomment-536106044
 
 
   > @bharatviswa504 If you don't mind, can you tell me where a metric like 
numContainerReportsSuccess would be useful? Failures I guess may be useful.
   
   If the user wants to know how many container reports were processed 
successfully in the system and compare that with the failure count, then this 
metric is useful.
   
   I also learned yesterday that it is good to have metrics in the system for 
the actions it has performed. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319847)
Time Spent: 2h  (was: 1h 50m)

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> Above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?focusedWorklogId=319846=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319846
 ]

ASF GitHub Bot logged work on HDDS-1146:


Author: ASF GitHub Bot
Created on: 27/Sep/19 21:43
Start Date: 27/Sep/19 21:43
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #1541: HDDS-1146. Adding 
container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1541#issuecomment-536101491
 
 
   @bharatviswa504 If you don't mind, can you tell me where a metric like 
numContainerReportsSuccess would be useful? Failures I guess may be useful.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319846)
Time Spent: 1h 50m  (was: 1h 40m)

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> Above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?focusedWorklogId=319845=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319845
 ]

ASF GitHub Bot logged work on HDDS-2181:


Author: ASF GitHub Bot
Created on: 27/Sep/19 21:39
Start Date: 27/Sep/19 21:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1528: HDDS-2181. Ozone 
Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#issuecomment-536105096
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 77 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 30 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 23 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-hdds in trunk failed. |
   | -1 | compile | 13 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 46 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 938 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1026 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 30 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 16 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 33 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 25 | hadoop-ozone in the patch failed. |
   | -1 | compile | 22 | hadoop-hdds in the patch failed. |
   | -1 | compile | 15 | hadoop-ozone in the patch failed. |
   | -1 | javac | 22 | hadoop-hdds in the patch failed. |
   | -1 | javac | 15 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 53 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 810 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 16 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 29 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 16 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 24 | hadoop-hdds in the patch failed. |
   | -1 | unit | 19 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 2471 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1528 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 20a513024337 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 13b427f |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/4/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/4/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/4/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/4/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/4/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/4/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/4/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/4/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1528/4/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 

[jira] [Work logged] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?focusedWorklogId=319844=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319844
 ]

ASF GitHub Bot logged work on HDDS-1146:


Author: ASF GitHub Bot
Created on: 27/Sep/19 21:33
Start Date: 27/Sep/19 21:33
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #1541: HDDS-1146. Adding 
container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1541#issuecomment-536101491
 
 
   @bharatviswa504 If you don't mind, can you tell me where a metric like 
numContainerReportsSuccess would be useful? Failures I guess may be useful.
   
   If the user wants to know how many container reports were processed 
successfully in the system and compare that with the failure count, then this 
metric is useful.
   
   I also learned yesterday that it is good to have metrics in the system for 
the actions it has performed. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319844)
Time Spent: 1h 40m  (was: 1.5h)

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> Above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?focusedWorklogId=319843=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319843
 ]

ASF GitHub Bot logged work on HDDS-1146:


Author: ASF GitHub Bot
Created on: 27/Sep/19 21:32
Start Date: 27/Sep/19 21:32
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #1541: HDDS-1146. Adding 
container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1541#issuecomment-536101491
 
 
   @bharatviswa504 If you don't mind, can you tell me where a metric like 
numContainerReportsSuccess would be useful? Failures I guess may be useful.
   
   If the user wants to know how many container reports were processed 
successfully in the system and compare that with the failure count, then this 
metric is useful.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319843)
Time Spent: 1.5h  (was: 1h 20m)

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> Above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?focusedWorklogId=319841=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319841
 ]

ASF GitHub Bot logged work on HDDS-1146:


Author: ASF GitHub Bot
Created on: 27/Sep/19 21:30
Start Date: 27/Sep/19 21:30
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1541: 
HDDS-1146. Adding container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1541#discussion_r329258003
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/IncrementalContainerReportHandler.java
 ##
 @@ -66,16 +67,25 @@ public void onMessage(final IncrementalContainerReportFromDatanode report,
           nodeManager.addContainer(dd, id);
           processContainerReplica(dd, replicaProto);
         } catch (ContainerNotFoundException e) {
+          success = false;
           LOG.warn("Container {} not found!", replicaProto.getContainerID());
         } catch (NodeNotFoundException ex) {
+          success = false;
           LOG.error("Received ICR from unknown datanode {} {}",
               report.getDatanodeDetails(), ex);
         } catch (IOException e) {
+          success = false;
           LOG.error("Exception while processing ICR for container {}",
               replicaProto.getContainerID());
         }
       }
 
+      if (success) {
+        getContainerManager().notifyContainerReportProcessing(false, true);
+      } else {
 
 Review comment:
   We can do that; it is written this way only because it was copied from the 
ICR code (the end result is the same).
   Let me know if you want me to change it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319841)
Time Spent: 1h 20m  (was: 1h 10m)

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> Above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?focusedWorklogId=319840=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319840
 ]

ASF GitHub Bot logged work on HDDS-1146:


Author: ASF GitHub Bot
Created on: 27/Sep/19 21:27
Start Date: 27/Sep/19 21:27
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on pull request #1541: HDDS-1146. 
Adding container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1541#discussion_r329256932
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/IncrementalContainerReportHandler.java
 ##
 @@ -66,16 +67,25 @@ public void onMessage(final IncrementalContainerReportFromDatanode report,
           nodeManager.addContainer(dd, id);
           processContainerReplica(dd, replicaProto);
         } catch (ContainerNotFoundException e) {
+          success = false;
           LOG.warn("Container {} not found!", replicaProto.getContainerID());
         } catch (NodeNotFoundException ex) {
+          success = false;
           LOG.error("Received ICR from unknown datanode {} {}",
               report.getDatanodeDetails(), ex);
         } catch (IOException e) {
+          success = false;
           LOG.error("Exception while processing ICR for container {}",
               replicaProto.getContainerID());
         }
       }
 
+      if (success) {
+        getContainerManager().notifyContainerReportProcessing(false, true);
+      } else {
 
 Review comment:
   Any reason we are not doing 
_getContainerManager().notifyContainerReportProcessing(false, success)_ ?
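
   For reference, a sketch of the suggested one-line form (ContainerManager 
here is a stand-in interface based on the diff quoted above, not the 
committed code):

{code:java}
// Stand-in for the real ContainerManager; only the relevant method is shown.
interface ContainerManager {
  void notifyContainerReportProcessing(boolean isFullReport, boolean success);
}

class IcrNotifyDemo {
  static void notifyResult(ContainerManager cm, boolean success) {
    // Equivalent to the if-else in the diff:
    //   if (success) { cm.notifyContainerReportProcessing(false, true); }
    //   else         { cm.notifyContainerReportProcessing(false, false); }
    cm.notifyContainerReportProcessing(false, success);
  }
}
{code}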
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319840)
Time Spent: 1h 10m  (was: 1h)

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> Above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?focusedWorklogId=319838=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319838
 ]

ASF GitHub Bot logged work on HDDS-1146:


Author: ASF GitHub Bot
Created on: 27/Sep/19 21:25
Start Date: 27/Sep/19 21:25
Worklog Time Spent: 10m 
  Work Description: avijayanhwx commented on issue #1541: HDDS-1146. Adding 
container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1541#issuecomment-536101491
 
 
   @bharatviswa504 If you don't mind, can you tell me where a metric like 
numContainerReportsSuccess would be useful? Failures I guess may be useful.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319838)
Time Spent: 1h  (was: 50m)

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This jira aims to add more container related metrics to SCM.
>  Following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> Above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?focusedWorklogId=319833=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319833
 ]

ASF GitHub Bot logged work on HDDS-1146:


Author: ASF GitHub Bot
Created on: 27/Sep/19 21:06
Start Date: 27/Sep/19 21:06
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1541: HDDS-1146. 
Adding container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1541#issuecomment-536096677
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1737 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 77 | Maven dependency ordering for branch |
   | -1 | mvninstall | 48 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 48 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-hdds in trunk failed. |
   | -1 | compile | 14 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 68 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 947 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 18 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1050 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 38 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 19 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | -1 | mvninstall | 35 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 39 | hadoop-ozone in the patch failed. |
   | -1 | compile | 23 | hadoop-hdds in the patch failed. |
   | -1 | compile | 18 | hadoop-ozone in the patch failed. |
   | -1 | javac | 23 | hadoop-hdds in the patch failed. |
   | -1 | javac | 18 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 62 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 798 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 18 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 31 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 19 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 24 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 4336 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1541/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1541 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 8278c62831ff 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 13b427f |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1541/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1541/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1541/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1541/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1541/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1541/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1541/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1541/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1541/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 

[jira] [Work logged] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?focusedWorklogId=319832&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319832
 ]

ASF GitHub Bot logged work on HDDS-1146:


Author: ASF GitHub Bot
Created on: 27/Sep/19 21:06
Start Date: 27/Sep/19 21:06
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1541: HDDS-1146. 
Adding container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1541#issuecomment-536096562
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 1407 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 40 | Maven dependency ordering for branch |
   | -1 | mvninstall | 48 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 47 | hadoop-ozone in trunk failed. |
   | -1 | compile | 22 | hadoop-hdds in trunk failed. |
   | -1 | compile | 14 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 68 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 949 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 21 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1049 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 36 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 19 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | -1 | mvninstall | 36 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 38 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-hdds in the patch failed. |
   | -1 | compile | 16 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-hdds in the patch failed. |
   | -1 | javac | 16 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 61 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 773 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 17 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 32 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 18 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 27 | hadoop-hdds in the patch failed. |
   | -1 | unit | 26 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 3947 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1541/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1541 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 13e1622b27d9 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 13b427f |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1541/2/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1541/2/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1541/2/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1541/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1541/2/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1541/2/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1541/2/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1541/2/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1541/2/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 

[jira] [Work logged] (HDDS-2019) Handle Set DtService of token in S3Gateway for OM HA

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2019?focusedWorklogId=319834&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319834
 ]

ASF GitHub Bot logged work on HDDS-2019:


Author: ASF GitHub Bot
Created on: 27/Sep/19 21:06
Start Date: 27/Sep/19 21:06
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1489: HDDS-2019. 
Handle Set DtService of token in S3Gateway for OM HA.
URL: https://github.com/apache/hadoop/pull/1489#issuecomment-536096732
 
 
   Ping @xiaoyuyao for the review.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319834)
Time Spent: 2.5h  (was: 2h 20m)

> Handle Set DtService of token in S3Gateway for OM HA
> 
>
> Key: HDDS-2019
> URL: https://issues.apache.org/jira/browse/HDDS-2019
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> When OM HA is enabled and tokens are generated, the service name should be 
> set with the addresses of all OMs.
>  
> Currently, without HA, it is set with the OM RpcAddress string. This Jira is 
> to handle:
>  # Set dtService with all OM addresses. Right now in OMClientProducer, the 
> UGI is created with the S3 token, and the token's serviceName is set with 
> the OM address; for the HA case, this should be set with all OM RPC 
> addresses.
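
As an illustration, a hedged sketch of setting the token service from all OM
RPC addresses; the comma-separated format and helper names are assumptions
(only Token#setService and Text are taken from the Hadoop API):

{code:java}
import java.util.Collection;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

/** Hypothetical helper; not the actual OMClientProducer change. */
final class OmHaTokenServiceSketch {

  private OmHaTokenServiceSketch() { }

  /** e.g. "om1:9862,om2:9862,om3:9862" for a three-node OM quorum. */
  static String buildServiceName(Collection<String> omRpcAddresses) {
    return String.join(",", omRpcAddresses);
  }

  /** Stamp the token with every OM address instead of a single one. */
  static <T extends TokenIdentifier> void setHaService(Token<T> token,
      Collection<String> omRpcAddresses) {
    token.setService(new Text(buildServiceName(omRpcAddresses)));
  }
}
{code}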



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDDS-1146:
---

Assignee: Bharat Viswanadham  (was: Supratim Deka)

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> This jira aims to add more container-related metrics to SCM.
>  The following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> The above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?focusedWorklogId=319783&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319783
 ]

ASF GitHub Bot logged work on HDDS-1146:


Author: ASF GitHub Bot
Created on: 27/Sep/19 20:04
Start Date: 27/Sep/19 20:04
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1541: 
HDDS-1146. Adding container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1541#discussion_r329231264
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerManager.java
 ##
 @@ -179,4 +179,12 @@ ContainerInfo getMatchingContainer(long size, String 
owner,
*/
   ContainerInfo getMatchingContainer(long size, String owner,
   Pipeline pipeline, List<ContainerID> excludedContainerIDS);
+
+  /**
+   * Once after report processor handler completes, call this to notify
+   * container manager to increment metrics.
+   * @param isFullReport
+   * @param success
+   */
+  void notifyContainerReportProcessing(boolean isFullReport, boolean success);
 
 Review comment:
   From proto, there is no enum. But I see we have TypedEvent:
   
   TypedEvent<IncrementalContainerReportFromDatanode>
       INCREMENTAL_CONTAINER_REPORT = new TypedEvent<>(
           IncrementalContainerReportFromDatanode.class,
           "Incremental_Container_Report");
   TypedEvent<ContainerReportFromDatanode> CONTAINER_REPORT =
       new TypedEvent<>(ContainerReportFromDatanode.class,
           "Container_Report");
   
   Do you mean we want to use them instead of boolean?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319783)
Time Spent: 0.5h  (was: 20m)

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> This jira aims to add more container-related metrics to SCM.
>  The following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> The above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?focusedWorklogId=319782&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319782
 ]

ASF GitHub Bot logged work on HDDS-1146:


Author: ASF GitHub Bot
Created on: 27/Sep/19 19:59
Start Date: 27/Sep/19 19:59
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #1541: HDDS-1146. 
Adding container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1541#discussion_r329229253
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ContainerManager.java
 ##
 @@ -179,4 +179,12 @@ ContainerInfo getMatchingContainer(long size, String 
owner,
*/
   ContainerInfo getMatchingContainer(long size, String owner,
   Pipeline pipeline, List<ContainerID> excludedContainerIDS);
+
+  /**
+   * Once after report processor handler completes, call this to notify
+   * container manager to increment metrics.
+   * @param isFullReport
+   * @param success
+   */
+  void notifyContainerReportProcessing(boolean isFullReport, boolean success);
 
 Review comment:
   I am not sure, but I think we have an enum that indicates if the report is 
full or partial. If there is already one, then we might want to use it. 
Otherwise, I am okay with the boolean.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319782)
Time Spent: 20m  (was: 10m)

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This jira aims to add more container-related metrics to SCM.
>  The following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> The above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?focusedWorklogId=319775&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319775
 ]

ASF GitHub Bot logged work on HDDS-1146:


Author: ASF GitHub Bot
Created on: 27/Sep/19 19:53
Start Date: 27/Sep/19 19:53
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1541: 
HDDS-1146. Adding container related metrics in SCM.
URL: https://github.com/apache/hadoop/pull/1541
 
 
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319775)
Remaining Estimate: 0h
Time Spent: 10m

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This jira aims to add more container-related metrics to SCM.
>  The following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> The above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1146:
-
Status: Patch Available  (was: Open)

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This jira aims to add more container-related metrics to SCM.
>  The following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> The above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1146) Adding container related metrics in SCM

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1146:
-
Labels: pull-request-available  (was: )

> Adding container related metrics in SCM
> ---
>
> Key: HDDS-1146
> URL: https://issues.apache.org/jira/browse/HDDS-1146
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Nanda kumar
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1146.000.patch, HDDS-1146.001.patch, 
> HDDS-1146.002.patch
>
>
> This jira aims to add more container-related metrics to SCM.
>  The following metrics will be added as part of this jira:
>  * Number of containers
>  * Number of open containers
>  * Number of closed containers
>  * Number of quasi closed containers
>  * Number of closing containers
> The above are already handled in HDDS-918.
>  * Number of successful create container calls
>  * Number of failed create container calls
>  * Number of successful delete container calls
>  * Number of failed delete container calls
> Handled in HDDS-2193.
>  * Number of successful container report processing
>  * Number of failed container report processing
>  * Number of successful incremental container report processing
>  * Number of failed incremental container report processing
> These will be handled in this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14564) Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable

2019-09-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939718#comment-16939718
 ] 

Hudson commented on HDFS-14564:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17408 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17408/])
HDFS-14564: Add libhdfs APIs for readFully; add readFully to 
ByteBufferPositionedReadable (weichiu: rev 
13b427fc05da7352fadd7214adfa09c326bba238)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataInputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/libhdfs_wrapper_defines.h
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ByteBufferPositionedReadable.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreams.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/CryptoStreamsTestBase.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/libhdfspp_wrapper_defines.h
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsNormal.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs-tests/test_libhdfs_ops.c
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestByteBufferPread.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoStreamsForLocalFS.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/hdfs_shim.c
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/tests/libhdfs_wrapper_undefs.h
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/include/hdfs/hdfs.h


> Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable
> -
>
> Key: HDFS-14564
> URL: https://issues.apache.org/jira/browse/HDFS-14564
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Fix For: 3.3.0
>
>
> Splitting this out from HDFS-14478
> The {{PositionedReadable#readFully}} APIs have existed for a while, but have 
> never been exposed via libhdfs.
> HDFS-3246 added a new interface called {{ByteBufferPositionedReadable}} that 
> provides a {{ByteBuffer}} version of {{PositionedReadable}}, but it does not 
> contain a {{readFully}} method.
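
A readFully on top of the positional ByteBuffer read is essentially a retry
loop that advances the position until the buffer is drained. A minimal
sketch, assuming only the existing ByteBufferPositionedReadable#read(long,
ByteBuffer) contract; the helper class is illustrative, while the committed
change adds readFully to the interface itself:

{code:java}
import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.fs.ByteBufferPositionedReadable;

final class ReadFullySketch {

  private ReadFullySketch() { }

  /** Fill buf completely or throw EOFException if the stream ends first. */
  static void readFully(ByteBufferPositionedReadable in, long position,
      ByteBuffer buf) throws IOException {
    while (buf.hasRemaining()) {
      int bytesRead = in.read(position, buf);
      if (bytesRead < 0) {
        throw new EOFException("End of stream reached with "
            + buf.remaining() + " bytes left to read");
      }
      // read() advances the buffer but not the position argument.
      position += bytesRead;
    }
  }
}
{code}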



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2198) SCM should not consider containers in CLOSING state to come out of safemode

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2198?focusedWorklogId=319768&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319768
 ]

ASF GitHub Bot logged work on HDDS-2198:


Author: ASF GitHub Bot
Created on: 27/Sep/19 19:31
Start Date: 27/Sep/19 19:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1540: HDDS-2198. SCM 
should not consider containers in CLOSING state to come out of safemode.
URL: https://github.com/apache/hadoop/pull/1540#issuecomment-536069314
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 50 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | -1 | mvninstall | 34 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 35 | hadoop-ozone in trunk failed. |
   | -1 | compile | 24 | hadoop-hdds in trunk failed. |
   | -1 | compile | 15 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 66 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1062 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 24 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 20 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1178 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 46 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 38 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 31 | hadoop-ozone in the patch failed. |
   | -1 | compile | 26 | hadoop-hdds in the patch failed. |
   | -1 | compile | 18 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-hdds in the patch failed. |
   | -1 | javac | 18 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 29 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 877 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 19 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 33 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 19 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 26 | hadoop-hdds in the patch failed. |
   | -1 | unit | 23 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 2753 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1540 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 4cf6e1cb6b3a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ce58c05 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/1/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/1/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/1/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/1/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/1/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1540/1/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | 

[jira] [Resolved] (HDFS-14564) Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable

2019-09-27 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-14564.

Fix Version/s: 3.3.0
   Resolution: Fixed

Thanks [~stakiar] for the patch and [~smeng] for review!

> Add libhdfs APIs for readFully; add readFully to ByteBufferPositionedReadable
> -
>
> Key: HDFS-14564
> URL: https://issues.apache.org/jira/browse/HDFS-14564
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs, native
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Fix For: 3.3.0
>
>
> Splitting this out from HDFS-14478
> The {{PositionedReadable#readFully}} APIs have existed for a while, but have 
> never been exposed via libhdfs.
> HDFS-3246 added a new interface called {{ByteBufferPositionedReadable}} that 
> provides a {{ByteBuffer}} version of {{PositionedReadable}}, but it does not 
> contain a {{readFully}} method.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2200) Recon does not handle the NULL snapshot from OM DB cleanly.

2019-09-27 Thread Aravindan Vijayan (Jira)
Aravindan Vijayan created HDDS-2200:
---

 Summary: Recon does not handle the NULL snapshot from OM DB 
cleanly.
 Key: HDDS-2200
 URL: https://issues.apache.org/jira/browse/HDDS-2200
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Recon
Reporter: Aravindan Vijayan
Assignee: Aravindan Vijayan


{code}
2019-09-27 11:35:19,835 [pool-9-thread-1] ERROR  - Null snapshot location got from OM.
2019-09-27 11:35:19,839 [pool-9-thread-1] INFO   - Calling reprocess on Recon tasks.
2019-09-27 11:35:19,840 [pool-7-thread-1] INFO   - Starting a 'reprocess' run of ContainerKeyMapperTask.
2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Creating new Recon Container DB at /tmp/recon/db/recon-container.db_1569609319840
2019-09-27 11:35:20,069 [pool-7-thread-1] INFO   - Cleaning up old Recon Container DB at /tmp/recon/db/recon-container.db_1569609258721.
2019-09-27 11:35:20,144 [pool-9-thread-1] ERROR  - Unexpected error :
java.util.concurrent.ExecutionException: java.lang.NullPointerException
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.reInitializeTasks(ReconTaskControllerImpl.java:181)
    at org.apache.hadoop.ozone.recon.spi.impl.OzoneManagerServiceProviderImpl.syncDataFromOM(OzoneManagerServiceProviderImpl.java:333)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.ozone.recon.tasks.ContainerKeyMapperTask.reprocess(ContainerKeyMapperTask.java:81)
    at org.apache.hadoop.ozone.recon.tasks.ReconTaskControllerImpl.lambda$reInitializeTasks$3(ReconTaskControllerImpl.java:176)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
{code}
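
The trace shows ContainerKeyMapperTask.reprocess dereferencing OM metadata
that was never populated because the snapshot was null. A hedged sketch of
the kind of guard that avoids this; the interfaces are simplified stand-ins
for the Recon classes (OzoneManagerServiceProviderImpl,
ReconTaskControllerImpl), not the actual fix:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class OmSnapshotSyncSketch {

  private static final Logger LOG =
      LoggerFactory.getLogger(OmSnapshotSyncSketch.class);

  /** Stand-in for the OM snapshot source. */
  interface SnapshotProvider {
    Object getOzoneManagerDBSnapshot();
  }

  /** Stand-in for the Recon task controller. */
  interface TaskController {
    void reInitializeTasks(Object omSnapshot);
  }

  private final SnapshotProvider snapshotProvider;
  private final TaskController taskController;

  OmSnapshotSyncSketch(SnapshotProvider provider, TaskController controller) {
    this.snapshotProvider = provider;
    this.taskController = controller;
  }

  void syncDataFromOM() {
    Object snapshot = snapshotProvider.getOzoneManagerDBSnapshot();
    if (snapshot == null) {
      // Guard against the NPE above: skip the reprocess run until OM
      // returns a usable snapshot.
      LOG.warn("Null snapshot from OM; skipping Recon task reprocess.");
      return;
    }
    taskController.reInitializeTasks(snapshot);
  }
}
{code}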



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2198) SCM should not consider containers in CLOSING state to come out of safemode

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2198?focusedWorklogId=319744&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319744
 ]

ASF GitHub Bot logged work on HDDS-2198:


Author: ASF GitHub Bot
Created on: 27/Sep/19 18:44
Start Date: 27/Sep/19 18:44
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on pull request #1540: 
HDDS-2198. SCM should not consider containers in CLOSING state to come out of 
safemode.
URL: https://github.com/apache/hadoop/pull/1540
 
 
   
   There are cases where SCM can be stuck in safemode forever if it considers 
containers in the CLOSING state when deciding to come out of safemode:
   
   * If there are 5 containers in OPEN state inside SCM.
   * Out of 5, 3 containers are created in datanodes by the client.
   * 2 containers are yet to be created in datanodes.
   * Due to some pipeline issue, a pipeline close action is sent.
   * All 5 containers' states are changed from OPEN to CLOSING in SCM.
   * Eventually, 3 containers' states move from CLOSING to CLOSED in SCM as 
the datanodes close those containers.
   * 2 of the containers are still in CLOSING state.
   * SCM is restarted.
   * SCM never gets container reports for the containers which were in 
CLOSING state, as those containers were never created in datanodes.
   * SCM will remain in safemode.
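
A hedged sketch of the corresponding safemode filter: exclude CLOSING
containers so they cannot block safemode exit. ContainerInfo and
LifeCycleState are SCM's real types, but the helper itself is an
illustrative assumption, not the committed change:

{code:java}
import java.util.List;
import java.util.stream.Collectors;

import org.apache.hadoop.hdds.protocol.proto.HddsProtos.LifeCycleState;
import org.apache.hadoop.hdds.scm.container.ContainerInfo;

final class SafeModeContainerFilterSketch {

  private SafeModeContainerFilterSketch() { }

  /**
   * Keep only containers whose replicas SCM can actually expect to hear
   * about after a restart; CLOSING containers may never have been created
   * on any datanode.
   */
  static List<ContainerInfo> containersForSafeMode(
      List<ContainerInfo> allContainers) {
    return allContainers.stream()
        .filter(c -> c.getState() != LifeCycleState.CLOSING)
        .collect(Collectors.toList());
  }
}
{code}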
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319744)
Remaining Estimate: 0h
Time Spent: 10m

> SCM should not consider containers in CLOSING state to come out of safemode
> ---
>
> Key: HDDS-2198
> URL: https://issues.apache.org/jira/browse/HDDS-2198
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Nilotpal Nandi
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are cases where SCM can be stuck in safemode forever if it considers 
> containers in the CLOSING state when deciding to come out of safemode:
> * If there are 5 containers in OPEN state inside SCM.
> * Out of 5, 3 containers are created in datanodes by the client.
> * 2 containers are yet to be created in datanodes.
> * Due to some pipeline issue, a pipeline close action is sent.
> * All 5 containers' states are changed from OPEN to CLOSING in SCM.
> * Eventually, 3 containers' states move from CLOSING to CLOSED in SCM as the 
> datanodes close those containers.
> * 2 of the containers are still in CLOSING state.
> * SCM is restarted.
> * SCM never gets container reports for the containers which were in 
> CLOSING state, as those containers were never created in datanodes.
> * SCM will remain in safemode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2198) SCM should not consider containers in CLOSING state to come out of safemode

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2198:
-
Labels: pull-request-available  (was: )

> SCM should not consider containers in CLOSING state to come out of safemode
> ---
>
> Key: HDDS-2198
> URL: https://issues.apache.org/jira/browse/HDDS-2198
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Nilotpal Nandi
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>
> There are cases where SCM can be stuck in safemode forever if it considers 
> containers in the CLOSING state when deciding to come out of safemode:
> * If there are 5 containers in OPEN state inside SCM.
> * Out of 5, 3 containers are created in datanodes by the client.
> * 2 containers are yet to be created in datanodes.
> * Due to some pipeline issue, a pipeline close action is sent.
> * All 5 containers' states are changed from OPEN to CLOSING in SCM.
> * Eventually, 3 containers' states move from CLOSING to CLOSED in SCM as the 
> datanodes close those containers.
> * 2 of the containers are still in CLOSING state.
> * SCM is restarted.
> * SCM never gets container reports for the containers which were in 
> CLOSING state, as those containers were never created in datanodes.
> * SCM will remain in safemode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2198) SCM should not consider containers in CLOSING state to come out of safemode

2019-09-27 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-2198:
--
Status: Patch Available  (was: Open)

> SCM should not consider containers in CLOSING state to come out of safemode
> ---
>
> Key: HDDS-2198
> URL: https://issues.apache.org/jira/browse/HDDS-2198
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Nilotpal Nandi
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are cases where SCM can be stuck in safemode forever if it considers 
> containers in the CLOSING state when deciding to come out of safemode:
> * If there are 5 containers in OPEN state inside SCM.
> * Out of 5, 3 containers are created in datanodes by the client.
> * 2 containers are yet to be created in datanodes.
> * Due to some pipeline issue, a pipeline close action is sent.
> * All 5 containers' states are changed from OPEN to CLOSING in SCM.
> * Eventually, 3 containers' states move from CLOSING to CLOSED in SCM as the 
> datanodes close those containers.
> * 2 of the containers are still in CLOSING state.
> * SCM is restarted.
> * SCM never gets container reports for the containers which were in 
> CLOSING state, as those containers were never created in datanodes.
> * SCM will remain in safemode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2034?focusedWorklogId=319725&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319725
 ]

ASF GitHub Bot logged work on HDDS-2034:


Author: ASF GitHub Bot
Created on: 27/Sep/19 18:28
Start Date: 27/Sep/19 18:28
Worklog Time Spent: 10m 
  Work Description: lokeshj1703 commented on pull request #1469: HDDS-2034. 
Async RATIS pipeline creation and destroy through heartbea…
URL: https://github.com/apache/hadoop/pull/1469#discussion_r329055495
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/BlockManagerImpl.java
 ##
 @@ -188,6 +208,15 @@ public AllocatedBlock allocateBlock(final long size, 
ReplicationType type,
   // TODO: #CLUTIL Remove creation logic when all replication types and
   // factors are handled by pipeline creator
   pipeline = pipelineManager.createPipeline(type, factor);
+  // wait until pipeline is ready
+  long current = System.currentTimeMillis();
+  while (!pipeline.isOpen() && System.currentTimeMillis() <
+  (current + pipelineCreateWaitTimeout)) {
+try {
+  Thread.sleep(1000);
+} catch (InterruptedException e) {
+}
+  }
 
 Review comment:
   I think we also need to handle the case where no OPEN pipelines are there 
but pipelines are present in ALLOCATED state.
   I think it would be a better idea to make the call return 
CompletableFuture<Pipeline> and complete this future or throw an 
exception on timeout; it would be better than using a while loop. The 
futures can be completed on the OPEN_PIPELINE event.
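
A hedged sketch of the future-based alternative suggested above; the map,
method names, and generic parameters are assumptions for illustration:

{code:java}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

final class PipelineWaitSketch<ID, PIPELINE> {

  // One future per pipeline, completed by the OPEN_PIPELINE event handler.
  private final Map<ID, CompletableFuture<PIPELINE>> futures =
      new ConcurrentHashMap<>();

  /** Called from allocateBlock instead of the sleep loop. */
  PIPELINE waitForOpen(ID pipelineId, long timeoutMs) throws IOException {
    CompletableFuture<PIPELINE> future =
        futures.computeIfAbsent(pipelineId, id -> new CompletableFuture<>());
    try {
      return future.get(timeoutMs, TimeUnit.MILLISECONDS);
    } catch (TimeoutException e) {
      throw new IOException("Pipeline " + pipelineId
          + " did not open within " + timeoutMs + " ms", e);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw new IOException("Interrupted waiting for pipeline", e);
    } catch (ExecutionException e) {
      throw new IOException("Pipeline open failed", e);
    }
  }

  /** Invoked from the OPEN_PIPELINE event handler. */
  void onPipelineOpened(ID pipelineId, PIPELINE pipeline) {
    CompletableFuture<PIPELINE> future = futures.remove(pipelineId);
    if (future != null) {
      future.complete(pipeline);
    }
  }
}
{code}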
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319725)
Time Spent: 7h 40m  (was: 7.5h)

> Async RATIS pipeline creation and destroy through heartbeat commands
> 
>
> Key: HDDS-2034
> URL: https://issues.apache.org/jira/browse/HDDS-2034
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> Currently, pipeline creation and destruction are synchronous operations: SCM 
> directly connects to each datanode of the pipeline through a gRPC channel to 
> create or destroy the pipeline.
> This task is to remove the gRPC channel and send pipeline create and destroy 
> actions through heartbeat commands to each datanode.
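
In outline, the asynchronous model queues a command per datanode on the SCM
side and drains the queue into the next heartbeat response, where a datanode
command handler executes it. A skeletal sketch under that assumption; the
types are simplified stand-ins, not the classes introduced by this patch:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

final class HeartbeatCommandQueueSketch {

  /** A command such as "create pipeline X" or "close pipeline X". */
  interface DatanodeCommand {
  }

  // SCM side: one queue per datanode, filled when a pipeline action is
  // decided, instead of opening a direct gRPC channel to the datanode.
  private final Map<UUID, Queue<DatanodeCommand>> pendingCommands =
      new ConcurrentHashMap<>();

  void addCommand(UUID datanodeId, DatanodeCommand command) {
    pendingCommands
        .computeIfAbsent(datanodeId, id -> new ConcurrentLinkedQueue<>())
        .add(command);
  }

  // Drained into the heartbeat response; the datanode-side handlers
  // (e.g. create/close pipeline command handlers) then act on them.
  List<DatanodeCommand> drainForHeartbeat(UUID datanodeId) {
    List<DatanodeCommand> drained = new ArrayList<>();
    Queue<DatanodeCommand> queue = pendingCommands.get(datanodeId);
    if (queue != null) {
      DatanodeCommand command;
      while ((command = queue.poll()) != null) {
        drained.add(command);
      }
    }
    return drained;
  }
}
{code}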



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2034?focusedWorklogId=319726&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319726
 ]

ASF GitHub Bot logged work on HDDS-2034:


Author: ASF GitHub Bot
Created on: 27/Sep/19 18:28
Start Date: 27/Sep/19 18:28
Worklog Time Spent: 10m 
  Work Description: lokeshj1703 commented on pull request #1469: HDDS-2034. 
Async RATIS pipeline creation and destroy through heartbea…
URL: https://github.com/apache/hadoop/pull/1469#discussion_r329154805
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/ClosePipelineCommandHandler.java
 ##
 @@ -0,0 +1,166 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.container.common.statemachine.commandhandler;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.
+StorageContainerDatanodeProtocolProtos.ClosePipelineCommandProto;
+import org.apache.hadoop.hdds.protocol.proto.
+StorageContainerDatanodeProtocolProtos.SCMCommandProto;
+import org.apache.hadoop.hdds.ratis.RatisHelper;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.client.HddsClientUtils;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.hadoop.hdds.security.x509.certificate.client
+.CertificateClient;
+import org.apache.hadoop.ozone.container.common.statemachine
+.SCMConnectionManager;
+import org.apache.hadoop.ozone.container.common.statemachine.StateContext;
+import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
+import org.apache.hadoop.ozone.protocol.commands.ClosePipelineCommand;
+import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
+import org.apache.hadoop.util.Time;
+import org.apache.ratis.client.RaftClient;
+import org.apache.ratis.grpc.GrpcTlsConfig;
+import org.apache.ratis.protocol.RaftGroupId;
+import org.apache.ratis.protocol.RaftPeer;
+import org.apache.ratis.retry.RetryPolicy;
+import org.apache.ratis.rpc.SupportedRpcType;
+import org.apache.ratis.util.TimeDuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.concurrent.atomic.AtomicLong;
+
+/**
+ * Handler for close pipeline command received from SCM.
+ */
+public class ClosePipelineCommandHandler implements CommandHandler {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ClosePipelineCommandHandler.class);
+
+  private AtomicLong invocationCount = new AtomicLong(0);
+  private long totalTime;
+
+  /**
+   * Constructs a closePipelineCommand handler.
+   */
+  public ClosePipelineCommandHandler() {
+  }
+
+  /**
+   * Handles a given SCM command.
+   *
+   * @param command   - SCM Command
+   * @param ozoneContainer- Ozone Container.
+   * @param context   - Current Context.
+   * @param connectionManager - The SCMs that we are talking to.
+   */
+  @Override
+  public void handle(SCMCommand command, OzoneContainer ozoneContainer,
+  StateContext context, SCMConnectionManager connectionManager) {
+invocationCount.incrementAndGet();
+final long startTime = Time.monotonicNow();
+final DatanodeDetails datanode = context.getParent()
+.getDatanodeDetails();
+final ClosePipelineCommandProto closeCommand =
+((ClosePipelineCommand)command).getProto();
+final PipelineID pipelineID = PipelineID.getFromProtobuf(
+closeCommand.getPipelineID());
+
+try {
+  destroyPipeline(datanode, pipelineID, context);
+  LOG.info("Close Pipeline #{} command on datanode #{}.", pipelineID,
+  datanode.getUuidString());
+} catch (IOException e) {
+  LOG.error("Can't close pipeline #{}", pipelineID, e);
+} finally {
+  long endTime = Time.monotonicNow();
+  totalTime += endTime - startTime;
+}
+
+  }
+
+  /**
+   * Returns the command type that this command handler handles.
+   *
+   * 

[jira] [Work logged] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2034?focusedWorklogId=319732&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319732
 ]

ASF GitHub Bot logged work on HDDS-2034:


Author: ASF GitHub Bot
Created on: 27/Sep/19 18:28
Start Date: 27/Sep/19 18:28
Worklog Time Spent: 10m 
  Work Description: lokeshj1703 commented on pull request #1469: HDDS-2034. 
Async RATIS pipeline creation and destroy through heartbea…
URL: https://github.com/apache/hadoop/pull/1469#discussion_r329179806
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/OneReplicaPipelineSafeModeRule.java
 ##
 @@ -75,69 +66,59 @@ public OneReplicaPipelineSafeModeRule(String ruleName, 
EventQueue eventQueue,
 HDDS_SCM_SAFEMODE_ONE_NODE_REPORTED_PIPELINE_PCT  +
 " value should be >= 0.0 and <= 1.0");
 
+// Exclude CLOSED pipeline
 int totalPipelineCount =
 pipelineManager.getPipelines(HddsProtos.ReplicationType.RATIS,
-HddsProtos.ReplicationFactor.THREE).size();
+HddsProtos.ReplicationFactor.THREE, Pipeline.PipelineState.OPEN)
+.size() +
+pipelineManager.getPipelines(HddsProtos.ReplicationType.RATIS,
 
 Review comment:
   Should we also include factor ONE pipelines here, like 
HealthyPipelineSafeModeRule does?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319732)

> Async RATIS pipeline creation and destroy through heartbeat commands
> 
>
> Key: HDDS-2034
> URL: https://issues.apache.org/jira/browse/HDDS-2034
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h 10m
>  Remaining Estimate: 0h
>
> Currently, pipeline creation and destruction are synchronous operations: SCM 
> directly connects to each datanode of the pipeline through a gRPC channel to 
> create or destroy the pipeline.
> This task is to remove the gRPC channel and send pipeline create and destroy 
> actions through heartbeat commands to each datanode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2034?focusedWorklogId=319728&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319728
 ]

ASF GitHub Bot logged work on HDDS-2034:


Author: ASF GitHub Bot
Created on: 27/Sep/19 18:28
Start Date: 27/Sep/19 18:28
Worklog Time Spent: 10m 
  Work Description: lokeshj1703 commented on pull request #1469: HDDS-2034. 
Async RATIS pipeline creation and destroy through heartbea…
URL: https://github.com/apache/hadoop/pull/1469#discussion_r329195075
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
 ##
 @@ -148,17 +166,20 @@ private void initializePipelineState() throws 
IOException {
   }
 
   @Override
-  public synchronized Pipeline createPipeline(
-  ReplicationType type, ReplicationFactor factor) throws IOException {
+  public synchronized Pipeline createPipeline(ReplicationType type,
+  ReplicationFactor factor) throws IOException {
 lock.writeLock().lock();
 try {
   Pipeline pipeline = pipelineFactory.create(type, factor);
   pipelineStore.put(pipeline.getId().getProtobuf().toByteArray(),
   pipeline.getProtobufMessage().toByteArray());
   stateManager.addPipeline(pipeline);
   nodeManager.addPipeline(pipeline);
-  metrics.incNumPipelineCreated();
-  metrics.createPerPipelineMetrics(pipeline);
+  metrics.incNumPipelineAllocated();
+  if (pipeline.isOpen()) {
+metrics.incNumPipelineCreated();
+metrics.createPerPipelineMetrics(pipeline);
+  }
 
 Review comment:
   `metrics.incNumPipelineCreated();`
   `metrics.createPerPipelineMetrics(pipeline);`
   
   I guess we should move these inside the openPipeline function.
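
A hedged sketch of what moving those two calls into openPipeline could look
like; the lock and field names mirror the surrounding diff but are
assumptions, not the actual commit:

{code:java}
// Sketch only: record the "created" metrics in exactly one place, when the
// pipeline actually transitions to OPEN.
@Override
public void openPipeline(PipelineID pipelineId) throws IOException {
  lock.writeLock().lock();
  try {
    Pipeline pipeline = stateManager.openPipeline(pipelineId);
    metrics.incNumPipelineCreated();
    metrics.createPerPipelineMetrics(pipeline);
  } finally {
    lock.writeLock().unlock();
  }
}
{code}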
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319728)
Time Spent: 8h  (was: 7h 50m)

> Async RATIS pipeline creation and destroy through heartbeat commands
> 
>
> Key: HDDS-2034
> URL: https://issues.apache.org/jira/browse/HDDS-2034
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h
>  Remaining Estimate: 0h
>
> Currently, pipeline creation and destruction are synchronous operations: SCM 
> directly connects to each datanode of the pipeline through a gRPC channel to 
> create or destroy the pipeline.
> This task is to remove the gRPC channel and send pipeline create and destroy 
> actions through heartbeat commands to each datanode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2034?focusedWorklogId=319730&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319730
 ]

ASF GitHub Bot logged work on HDDS-2034:


Author: ASF GitHub Bot
Created on: 27/Sep/19 18:28
Start Date: 27/Sep/19 18:28
Worklog Time Spent: 10m 
  Work Description: lokeshj1703 commented on pull request #1469: HDDS-2034. 
Async RATIS pipeline creation and destroy through heartbea…
URL: https://github.com/apache/hadoop/pull/1469#discussion_r329155082
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/commandhandler/CreatePipelineCommandHandler.java
 ##
 @@ -0,0 +1,228 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.container.common.statemachine.commandhandler;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.
+StorageContainerDatanodeProtocolProtos.CreatePipelineCommandProto;
+import org.apache.hadoop.hdds.protocol.proto.
+StorageContainerDatanodeProtocolProtos.SCMCommandProto;
+import org.apache.hadoop.hdds.protocol.proto.
+StorageContainerDatanodeProtocolProtos.CreatePipelineACKProto;
+import org.apache.hadoop.hdds.ratis.RatisHelper;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.client.HddsClientUtils;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.hadoop.hdds.security.x509.certificate.client
+.CertificateClient;
+import org.apache.hadoop.io.MultipleIOException;
+import org.apache.hadoop.ozone.container.common.statemachine
+.SCMConnectionManager;
+import org.apache.hadoop.ozone.container.common.statemachine.StateContext;
+import org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer;
+import org.apache.hadoop.ozone.protocol.commands.CommandStatus;
+import org.apache.hadoop.ozone.protocol.commands.CreatePipelineCommand;
+import org.apache.hadoop.ozone.protocol.commands.CreatePipelineCommandStatus;
+import org.apache.hadoop.ozone.protocol.commands.SCMCommand;
+import org.apache.hadoop.util.Time;
+import org.apache.ratis.client.RaftClient;
+import org.apache.ratis.grpc.GrpcTlsConfig;
+import org.apache.ratis.protocol.NotLeaderException;
+import org.apache.ratis.protocol.RaftClientReply;
+import org.apache.ratis.protocol.RaftGroup;
+import org.apache.ratis.protocol.RaftGroupId;
+import org.apache.ratis.protocol.RaftPeer;
+import org.apache.ratis.retry.RetryPolicy;
+import org.apache.ratis.rpc.SupportedRpcType;
+import org.apache.ratis.util.TimeDuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+import java.util.concurrent.RejectedExecutionException;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.function.Consumer;
+import java.util.stream.Collectors;
+
+/**
+ * Handler for create pipeline command received from SCM.
+ */
+public class CreatePipelineCommandHandler implements CommandHandler {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(CreatePipelineCommandHandler.class);
+
+  private AtomicLong invocationCount = new AtomicLong(0);
+  private long totalTime;
+
+  /**
+   * Constructs a createPipelineCommand handler.
+   */
+  public CreatePipelineCommandHandler() {
+  }
+
+  /**
+   * Handles a given SCM command.
+   *
+   * @param command   - SCM Command
+   * @param ozoneContainer- Ozone Container.
+   * @param context   - Current Context.
+   * @param connectionManager - The SCMs that we are talking to.
+   */
+  @Override
+  public void handle(SCMCommand command, OzoneContainer ozoneContainer,
+  StateContext context, SCMConnectionManager connectionManager) {
+invocationCount.incrementAndGet();
+final long startTime = Time.monotonicNow();
+final DatanodeDetails dn = context.getParent()
+

[jira] [Work logged] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2034?focusedWorklogId=319727&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319727
 ]

ASF GitHub Bot logged work on HDDS-2034:


Author: ASF GitHub Bot
Created on: 27/Sep/19 18:28
Start Date: 27/Sep/19 18:28
Worklog Time Spent: 10m 
  Work Description: lokeshj1703 commented on pull request #1469: HDDS-2034. 
Async RATIS pipeline creation and destroy through heartbea…
URL: https://github.com/apache/hadoop/pull/1469#discussion_r329190146
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/SafeModeHandler.java
 ##
 @@ -129,7 +129,8 @@ private void cleanupPipelines() {
 List pipelineList = scmPipelineManager.getPipelines();
 pipelineList.forEach((pipeline) -> {
   try {
-if (pipeline.getPipelineState() == Pipeline.PipelineState.ALLOCATED) {
+if (pipeline.getPipelineState() == Pipeline.PipelineState.ALLOCATED &&
+pipeline.isAllocationTimeout()) {
   scmPipelineManager.finalizeAndDestroyPipeline(pipeline, false);
 
 Review comment:
   This condition is currently always false because 
pipeline.isAllocationTimeout() always returns false.
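
   For illustration, one plausible implementation of the missing timeout check, 
using the creationTime field added elsewhere in this PR and a hypothetical 
timeout constant (the real patch may wire this to a configurable interval):

{code:java}
// Hypothetical sketch of Pipeline#isAllocationTimeout().
private static final long ALLOCATION_TIMEOUT_MS = 5 * 60 * 1000L; // assumed

public boolean isAllocationTimeout() {
  // Only ALLOCATED pipelines can time out; creationTime is set when the
  // Pipeline object is built.
  return state == PipelineState.ALLOCATED
      && System.currentTimeMillis() - creationTime > ALLOCATION_TIMEOUT_MS;
}
{code}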
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319727)
Time Spent: 8h  (was: 7h 50m)

> Async RATIS pipeline creation and destroy through heartbeat commands
> 
>
> Key: HDDS-2034
> URL: https://issues.apache.org/jira/browse/HDDS-2034
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h
>  Remaining Estimate: 0h
>
> Currently, pipeline creation and destroy are synchronous operations. SCM 
> directly connects to each datanode of the pipeline through a gRPC channel to 
> create or destroy the pipeline.  
> This task is to remove the gRPC channel and send pipeline create and destroy 
> actions through heartbeat commands to each datanode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2034?focusedWorklogId=319729&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319729
 ]

ASF GitHub Bot logged work on HDDS-2034:


Author: ASF GitHub Bot
Created on: 27/Sep/19 18:28
Start Date: 27/Sep/19 18:28
Worklog Time Spent: 10m 
  Work Description: lokeshj1703 commented on pull request #1469: HDDS-2034. 
Async RATIS pipeline creation and destroy through heartbea…
URL: https://github.com/apache/hadoop/pull/1469#discussion_r329158010
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/safemode/HealthyPipelineSafeModeRule.java
 ##
 @@ -49,43 +39,52 @@
  * through in a cluster.
  */
 public class HealthyPipelineSafeModeRule
-extends SafeModeExitRule{
+extends SafeModeExitRule{
 
   public static final Logger LOG =
   LoggerFactory.getLogger(HealthyPipelineSafeModeRule.class);
-  private final PipelineManager pipelineManager;
   private final int healthyPipelineThresholdCount;
   private int currentHealthyPipelineCount = 0;
-  private final Set processedDatanodeDetails =
-  new HashSet<>();
 
   HealthyPipelineSafeModeRule(String ruleName, EventQueue eventQueue,
   PipelineManager pipelineManager,
   SCMSafeModeManager manager, Configuration configuration) {
 super(manager, ruleName, eventQueue);
-this.pipelineManager = pipelineManager;
 double healthyPipelinesPercent =
 configuration.getDouble(HddsConfigKeys.
 HDDS_SCM_SAFEMODE_HEALTHY_PIPELINE_THRESHOLD_PCT,
 HddsConfigKeys.
 HDDS_SCM_SAFEMODE_HEALTHY_PIPELINE_THRESHOLD_PCT_DEFAULT);
 
+int minHealthyPipelines = 0;
+
+boolean createPipelineInSafemode = configuration.getBoolean(
+HddsConfigKeys.HDDS_SCM_SAFEMODE_PIPELINE_CREATION,
+HddsConfigKeys.HDDS_SCM_SAFEMODE_PIPELINE_CREATION_DEFAULT);
+
+if (createPipelineInSafemode) {
+  minHealthyPipelines =
+  configuration.getInt(HddsConfigKeys.HDDS_SCM_SAFEMODE_MIN_PIPELINE,
+  HddsConfigKeys.HDDS_SCM_SAFEMODE_MIN_PIPELINE_DEFAULT);
+}
+
 Preconditions.checkArgument(
 (healthyPipelinesPercent >= 0.0 && healthyPipelinesPercent <= 1.0),
 HddsConfigKeys.
 HDDS_SCM_SAFEMODE_HEALTHY_PIPELINE_THRESHOLD_PCT
 + " value should be >= 0.0 and <= 1.0");
 
-// As we want to wait for 3 node pipelines
-int pipelineCount =
+// As we want to wait for RATIS write pipelines, no matter ONE or THREE
+int pipelineCount = pipelineManager.getPipelines(
+HddsProtos.ReplicationType.RATIS, Pipeline.PipelineState.OPEN).size() +
 pipelineManager.getPipelines(HddsProtos.ReplicationType.RATIS,
-HddsProtos.ReplicationFactor.THREE).size();
+Pipeline.PipelineState.ALLOCATED).size();
 
 
 Review comment:
   We could instead use the API getPipelines(ReplicationType type).
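
   That is, the OPEN + ALLOCATED sum above could collapse into a single call; a 
sketch of the suggestion, assuming the overload exists as described:

{code:java}
// Count all RATIS pipelines regardless of state, per the review comment.
int pipelineCount =
    pipelineManager.getPipelines(HddsProtos.ReplicationType.RATIS).size();
{code}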
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319729)

> Async RATIS pipeline creation and destroy through heartbeat commands
> 
>
> Key: HDDS-2034
> URL: https://issues.apache.org/jira/browse/HDDS-2034
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h
>  Remaining Estimate: 0h
>
> Currently, pipeline creation and destroy are synchronous operations. SCM 
> directly connect to each datanode of the pipeline through gRPC channel to 
> create the pipeline to destroy the pipeline.  
> This task is to remove the gRPC channel, send pipeline creation and destroy 
> action through heartbeat command to each datanode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2034?focusedWorklogId=319731&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319731
 ]

ASF GitHub Bot logged work on HDDS-2034:


Author: ASF GitHub Bot
Created on: 27/Sep/19 18:28
Start Date: 27/Sep/19 18:28
Worklog Time Spent: 10m 
  Work Description: lokeshj1703 commented on pull request #1469: HDDS-2034. 
Async RATIS pipeline creation and destroy through heartbea…
URL: https://github.com/apache/hadoop/pull/1469#discussion_r329183465
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/Pipeline.java
 ##
 @@ -65,6 +66,7 @@ private Pipeline(PipelineID id, ReplicationType type,
 this.factor = factor;
 this.state = state;
 this.nodeStatus = nodeStatus;
+this.creationTime = System.currentTimeMillis();
   }
 
 Review comment:
   We need to set this field in the Builder(Pipeline pipeline) constructor as well.
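
   A sketch of the point being made, with field names taken from the constructor 
shown above (the copy-builder's exact shape is assumed):

{code:java}
// Hypothetical copy-builder: carry creationTime over from the source
// pipeline instead of implicitly resetting it to "now".
public Builder(Pipeline pipeline) {
  this.id = pipeline.id;
  this.type = pipeline.type;
  this.factor = pipeline.factor;
  this.state = pipeline.state;
  this.nodeStatus = pipeline.nodeStatus;
  this.creationTime = pipeline.creationTime; // preserve the original time
}
{code}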
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319731)
Time Spent: 8h 10m  (was: 8h)

> Async RATIS pipeline creation and destroy through heartbeat commands
> 
>
> Key: HDDS-2034
> URL: https://issues.apache.org/jira/browse/HDDS-2034
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 8h 10m
>  Remaining Estimate: 0h
>
> Currently, pipeline creation and destroy are synchronous operations. SCM 
> directly connects to each datanode of the pipeline through a gRPC channel to 
> create or destroy the pipeline.  
> This task is to remove the gRPC channel and send pipeline create and destroy 
> actions through heartbeat commands to each datanode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14509) DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 3.x

2019-09-27 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939658#comment-16939658
 ] 

Brahma Reddy Battula commented on HDFS-14509:
-

[~shv] thanks for the explanation.
{quote}NN 3.x does not include storage types into block token until the upgrade 
is finalized.
 This will require changes on branch-3.x only.
{quote}
Yes, I was targeting existing 2.7 (2.8 or 2.6) versions so that they can 
upgrade smoothly (they can't port any fix before they plan for the upgrade). 

 
{quote}As I said #2 seems more general, so let's just go with it. If nobody 
objects.
{quote}
This approach should also be fine; existing clusters might need to apply 
this patch before upgrading.

 

> DN throws InvalidToken due to inequality of password when upgrade NN 2.x to 
> 3.x
> ---
>
> Key: HDFS-14509
> URL: https://issues.apache.org/jira/browse/HDFS-14509
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yuxuan Wang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-14509-001.patch
>
>
> According to the doc, if we want to upgrade a cluster from 2.x to 3.x, we need 
> to upgrade the NN first. So there will be an intermediate state in which the NN 
> is 3.x and the DNs are 2.x. At that moment, if a client reads (or writes) a 
> block, it will get a block token from the NN and then deliver the token to a 
> DN, which verifies the token. But the verification in the code now is:
> {code:title=BlockTokenSecretManager.java|borderStyle=solid}
> public void checkAccess(...)
> {
> ...
> id.readFields(new DataInputStream(new 
> ByteArrayInputStream(token.getIdentifier())));
> ...
> if (!Arrays.equals(retrievePassword(id), token.getPassword())) {
>   throw new InvalidToken("Block token with " + id.toString()
>   + " doesn't have the correct token password");
> }
> }
> {code} 
> And {{retrievePassword(id)}} is:
> {code} 
> public byte[] retrievePassword(BlockTokenIdentifier identifier)
> {
> ...
> return createPassword(identifier.getBytes(), key.getKey());
> }
> {code} 
> So, if the NN's identifier adds new fields, the DN will drop those fields and 
> compute the wrong password.
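
A minimal, self-contained illustration of the mismatch described above (HmacSHA1 
and the string identifiers are simplified stand-ins for the real block token 
serialization, not the HDFS code itself):

{code:java}
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class BlockTokenMismatchDemo {
  // Stand-in for createPassword(): HMAC over the serialized identifier.
  static byte[] password(byte[] identifierBytes, byte[] key) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA1");
    mac.init(new SecretKeySpec(key, "HmacSHA1"));
    return mac.doFinal(identifierBytes);
  }

  public static void main(String[] args) throws Exception {
    byte[] key = "shared-secret".getBytes(StandardCharsets.UTF_8);
    // NN 3.x computes the password over an identifier carrying a new
    // field (e.g. storage types)...
    byte[] nnBytes = "user,block,STORAGE_TYPES".getBytes(StandardCharsets.UTF_8);
    // ...but DN 2.x re-serializes only the fields it knows about.
    byte[] dnBytes = "user,block".getBytes(StandardCharsets.UTF_8);
    // Same key, different bytes -> different passwords -> InvalidToken.
    System.out.println(Arrays.equals(
        password(nnBytes, key), password(dnBytes, key))); // prints false
  }
}
{code}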



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14858) [SBN read] Allow configurably enable/disable AlignmentContext on NameNode

2019-09-27 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939633#comment-16939633
 ] 

Hadoop QA commented on HDFS-14858:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 505 unchanged - 0 fixed = 506 total (was 505) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 59s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}156m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
|   | hadoop.hdfs.tools.TestDFSZKFailoverController |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:efed4450bf1 |
| JIRA Issue | HDFS-14858 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12981235/HDFS-14858.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 7ef4fe6ca7cc 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ce58c05 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27977/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 

[jira] [Created] (HDDS-2199) In SCMNodeManager dnsToUuidMap cannot track multiple DNs on the same host

2019-09-27 Thread Stephen O'Donnell (Jira)
Stephen O'Donnell created HDDS-2199:
---

 Summary: In SCMNodeManager dnsToUuidMap cannot track multiple DNs 
on the same host
 Key: HDDS-2199
 URL: https://issues.apache.org/jira/browse/HDDS-2199
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Affects Versions: 0.5.0
Reporter: Stephen O'Donnell


Often in test clusters and tests, we start multiple datanodes on the same host.

In SCMNodeManager.register() there is a map of hostname -> datanode UUID called 
dnsToUuidMap.

If several DNs register from the same host, the entry in the map will be 
overwritten and the last DN to register will 'win'.

This means that the method getNodeByAddress() does not return the correct 
DatanodeDetails object when many DNs are registered from the same address.

This method is only used in SCMBlockProtocolServer.sortDatanodes() to allow it 
to see if one of the nodes matches the client, but it will also need to be used 
by the decommission code.

Perhaps we could change the getNodeByAddress() method to return a list of DNs? 
In normal production clusters, there should only be one returned, but in test 
clusters there may be many. Any code looking for a specific DN entry would 
need to iterate the list and match on the port number too, as host:port would 
be the unique definition of a datanode.
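
A minimal sketch of that proposal, with assumed names (the real SCMNodeManager 
integration would differ):

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Track all datanode UUIDs registered from the same host instead of
// keeping only the last one to register.
public class DnsToUuidMapSketch {
  private final Map<String, Set<UUID>> dnsToUuidMap = new ConcurrentHashMap<>();

  public void register(String hostname, UUID dnUuid) {
    dnsToUuidMap
        .computeIfAbsent(hostname, k -> ConcurrentHashMap.newKeySet())
        .add(dnUuid); // no longer overwrites earlier registrations
  }

  public Set<UUID> getNodesByAddress(String hostname) {
    return dnsToUuidMap.getOrDefault(hostname, Collections.emptySet());
  }
}
{code}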



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2164) om.db.checkpoints is getting filling up fast

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2164?focusedWorklogId=319675&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319675
 ]

ASF GitHub Bot logged work on HDDS-2164:


Author: ASF GitHub Bot
Created on: 27/Sep/19 16:54
Start Date: 27/Sep/19 16:54
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1536: HDDS-2164 : 
om.db.checkpoints is getting filling up fast.
URL: https://github.com/apache/hadoop/pull/1536#issuecomment-536017770
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319675)
Time Spent: 50m  (was: 40m)

> om.db.checkpoints is getting filling up fast
> 
>
> Key: HDDS-2164
> URL: https://issues.apache.org/jira/browse/HDDS-2164
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Nanda kumar
>Assignee: Aravindan Vijayan
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {{om.db.checkpoints}} is filling up fast; we should also clean this up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2034) Async RATIS pipeline creation and destroy through heartbeat commands

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2034?focusedWorklogId=319653&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319653
 ]

ASF GitHub Bot logged work on HDDS-2034:


Author: ASF GitHub Bot
Created on: 27/Sep/19 16:28
Start Date: 27/Sep/19 16:28
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #1469: HDDS-2034. Async 
RATIS pipeline creation and destroy through heartbea…
URL: https://github.com/apache/hadoop/pull/1469#issuecomment-536009169
 
 
   Once all the reviewers have looked at this change, perhaps we should commit 
it to this branch, test it, and then cherry-pick it into trunk. The 
decommission path etc. will interact with this code path.
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319653)
Time Spent: 7.5h  (was: 7h 20m)

> Async RATIS pipeline creation and destroy through heartbeat commands
> 
>
> Key: HDDS-2034
> URL: https://issues.apache.org/jira/browse/HDDS-2034
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> Currently, pipeline creation and destroy are synchronous operations. SCM 
> directly connects to each datanode of the pipeline through a gRPC channel to 
> create or destroy the pipeline.  
> This task is to remove the gRPC channel and send pipeline create and destroy 
> actions through heartbeat commands to each datanode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2181) Ozone Manager should send correct ACL type in ACL requests to Authorizer

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2181?focusedWorklogId=319652&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319652
 ]

ASF GitHub Bot logged work on HDDS-2181:


Author: ASF GitHub Bot
Created on: 27/Sep/19 16:25
Start Date: 27/Sep/19 16:25
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on issue #1528: HDDS-2181. 
Ozone Manager should send correct ACL type in ACL requests…
URL: https://github.com/apache/hadoop/pull/1528#issuecomment-536008173
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319652)
Time Spent: 4h  (was: 3h 50m)

> Ozone Manager should send correct ACL type in ACL requests to Authorizer
> 
>
> Key: HDDS-2181
> URL: https://issues.apache.org/jira/browse/HDDS-2181
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Currently, Ozone Manager sends "WRITE" as the ACLType for key create, key 
> delete and bucket create operations. Fix the ACL type in all requests to the 
> authorizer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2169) Avoid buffer copies while submitting client requests in Ratis

2019-09-27 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939548#comment-16939548
 ] 

Hadoop QA commented on HDDS-2169:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
41s{color} | {color:red} hadoop-hdds in trunk failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-hdds in trunk failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-hdds in trunk failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 17m 
16s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-hdds in trunk failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-ozone in trunk failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
15s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 20s{color} 
| {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 15s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 58s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
16s{color} | {color:red} hadoop-ozone in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-hdds in the patch failed. {color} |
| 

[jira] [Work logged] (HDDS-2169) Avoid buffer copies while submitting client requests in Ratis

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2169?focusedWorklogId=319619&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319619
 ]

ASF GitHub Bot logged work on HDDS-2169:


Author: ASF GitHub Bot
Created on: 27/Sep/19 15:40
Start Date: 27/Sep/19 15:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1517: HDDS-2169
URL: https://github.com/apache/hadoop/pull/1517#issuecomment-535992089
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 78 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | -1 | mvninstall | 41 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 26 | hadoop-ozone in trunk failed. |
   | -1 | compile | 19 | hadoop-hdds in trunk failed. |
   | -1 | compile | 12 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 61 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 947 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 20 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 16 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 1036 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 32 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 16 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for patch |
   | -1 | mvninstall | 33 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 26 | hadoop-ozone in the patch failed. |
   | -1 | compile | 20 | hadoop-hdds in the patch failed. |
   | -1 | compile | 15 | hadoop-ozone in the patch failed. |
   | -1 | javac | 20 | hadoop-hdds in the patch failed. |
   | -1 | javac | 15 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 50 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 778 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 19 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 16 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 28 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 16 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 24 | hadoop-hdds in the patch failed. |
   | -1 | unit | 20 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 30 | The patch does not generate ASF License warnings. |
   | | | 2492 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1517 |
   | JIRA Issue | HDDS-2169 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 62979f013a8d 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ce58c05 |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/5/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/5/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/5/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/5/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/5/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/5/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/5/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/5/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/5/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1517/5/artifact/out/patch-mvninstall-hadoop-ozone.txt

[jira] [Commented] (HDFS-14745) Backport HDFS persistent memory read cache support to branch-3.1

2019-09-27 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939518#comment-16939518
 ] 

Hadoop QA commented on HDFS-14745:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m  
7s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} branch-3.1 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
23s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
43s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
20s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m 
19s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} branch-3.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
17s{color} | {color:green} branch-3.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 12m 15s{color} 
| {color:red} root generated 142 new + 1134 unchanged - 142 fixed = 1276 total 
(was 1276) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 21s{color} | {color:orange} root: The patch generated 1 new + 771 unchanged 
- 11 fixed = 772 total (was 782) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
10s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 6 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}137m 44s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
47s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | 

[jira] [Created] (HDDS-2198) SCM should not consider containers in CLOSING state to come out of safemode

2019-09-27 Thread Nanda kumar (Jira)
Nanda kumar created HDDS-2198:
-

 Summary: SCM should not consider containers in CLOSING state to 
come out of safemode
 Key: HDDS-2198
 URL: https://issues.apache.org/jira/browse/HDDS-2198
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Reporter: Nanda kumar
Assignee: Nanda kumar


There are cases where SCM can be stuck in safemode forever if it considers 
containers in CLOSING state for coming out of safemode:

* There are 5 containers in OPEN state inside SCM.
* Out of the 5, 3 containers are created on datanodes by the client.
* 2 containers are yet to be created on datanodes.
* Due to some pipeline issue, a pipeline close action is sent.
* All 5 containers' states are changed from OPEN to CLOSING in SCM.
* Eventually, 3 containers' states move from CLOSING to CLOSED in SCM as the 
datanodes close those containers.
* 2 of the containers are still in CLOSING state.
* SCM is restarted.
* SCM will never get container reports for the containers that were in 
CLOSING state, as those containers were never created on datanodes.
* SCM will remain in safemode.
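
A hedged sketch of the proposed filtering, with assumed names and a simplified 
container model (the real fix would live in SCM's safemode rules):

{code:java}
import java.util.List;
import java.util.stream.Collectors;

class SafeModeContainerFilterSketch {
  enum LifeCycleState { OPEN, CLOSING, CLOSED }

  static class ContainerInfo {
    final LifeCycleState state;
    ContainerInfo(LifeCycleState state) {
      this.state = state;
    }
  }

  // Containers counted toward the safemode exit threshold.
  static List<ContainerInfo> containersForSafemode(List<ContainerInfo> all) {
    return all.stream()
        // CLOSING containers may exist only in SCM metadata; waiting for
        // their reports can leave SCM in safemode forever.
        .filter(c -> c.state != LifeCycleState.CLOSING)
        .collect(Collectors.toList());
  }
}
{code}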



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2198) SCM should not consider containers in CLOSING state to come out of safemode

2019-09-27 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-2198:
--
Reporter: Nilotpal Nandi  (was: Nanda kumar)

> SCM should not consider containers in CLOSING state to come out of safemode
> ---
>
> Key: HDDS-2198
> URL: https://issues.apache.org/jira/browse/HDDS-2198
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Nilotpal Nandi
>Assignee: Nanda kumar
>Priority: Major
>
> There are cases where SCM can be stuck in safemode forever if it considers 
> containers in CLOSING state for coming out of safemode:
> * There are 5 containers in OPEN state inside SCM.
> * Out of the 5, 3 containers are created on datanodes by the client.
> * 2 containers are yet to be created on datanodes.
> * Due to some pipeline issue, a pipeline close action is sent.
> * All 5 containers' states are changed from OPEN to CLOSING in SCM.
> * Eventually, 3 containers' states move from CLOSING to CLOSED in SCM as the 
> datanodes close those containers.
> * 2 of the containers are still in CLOSING state.
> * SCM is restarted.
> * SCM will never get container reports for the containers that were in 
> CLOSING state, as those containers were never created on datanodes.
> * SCM will remain in safemode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-6524) Choosing datanode retries times considering with block replica number

2019-09-27 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939482#comment-16939482
 ] 

Lisheng Sun commented on HDFS-6524:
---

All the failed unit tests are unrelated to this patch.

> Choosing datanode  retries times considering with block replica number
> --
>
> Key: HDFS-6524
> URL: https://issues.apache.org/jira/browse/HDFS-6524
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0-alpha1
>Reporter: Liang Xie
>Assignee: Lisheng Sun
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6524.001.patch, HDFS-6524.002.patch, 
> HDFS-6524.003.patch, HDFS-6524.004.patch, HDFS-6524.005(2).patch, 
> HDFS-6524.005.patch, HDFS-6524.006.patch, HDFS-6524.txt
>
>
> Currently chooseDataNode() retries according to the setting 
> dfsClientConf.maxBlockAcquireFailures, which by default is 3 
> (DFS_CLIENT_MAX_BLOCK_ACQUIRE_FAILURES_DEFAULT = 3). It would be better to 
> also consider the block replication factor: a cluster may keep only two 
> replicas of a block, or use a Reed-Solomon encoding solution with a single 
> replica. Bounding retries accordingly helps to reduce the long tail latency.
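
A minimal sketch of the idea, under the stated assumption that retries should 
be bounded by the replica count (class and method names are hypothetical):

{code:java}
public class ChooseDatanodeRetries {
  // Bound the datanode-choose retry count by the block's replica count,
  // instead of always using dfs.client.max.block.acquire.failures (3).
  static int maxRetriesFor(int maxBlockAcquireFailures, short replication) {
    // A block with fewer replicas cannot benefit from more retries than
    // it has replicas; keep at least one attempt.
    return Math.min(maxBlockAcquireFailures, Math.max(1, (int) replication));
  }

  public static void main(String[] args) {
    System.out.println(maxRetriesFor(3, (short) 2)); // 2
    System.out.println(maxRetriesFor(3, (short) 1)); // 1, e.g. one-replica EC
  }
}
{code}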



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14574) [distcp] Add ability to increase the replication factor for fileList.seq

2019-09-27 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina reassigned HDFS-14574:


Assignee: (was: hemanthboyina)

> [distcp] Add ability to increase the replication factor for fileList.seq
> 
>
> Key: HDFS-14574
> URL: https://issues.apache.org/jira/browse/HDFS-14574
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: distcp
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> distcp creates fileList.seq with the default replication factor of 3.
> For large clusters running distcp jobs with thousands of mappers, 3 replicas 
> of the file listing file are not enough, because DataNodes easily run out of 
> the max number of xceivers.
>  
> It looks like we can pass in a distcp option and update the replication 
> factor when creating the sequence file writer: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/SimpleCopyListing.java#L517-L521]
>  
> Like this:
> {code:java}
> return SequenceFile.createWriter(getConf(),
> SequenceFile.Writer.file(pathToListFile),
> SequenceFile.Writer.keyClass(Text.class),
> SequenceFile.Writer.valueClass(CopyListingFileStatus.class),
> SequenceFile.Writer.compression(SequenceFile.CompressionType.NONE),
> SequenceFile.Writer.replication((short)100)); <-- this line
> {code}
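
Building on that snippet, a hedged sketch of making the value configurable 
instead of hardcoding 100; the config key "distcp.filelist.replication" is 
hypothetical, not an existing distcp option, and the surrounding 
SimpleCopyListing context (getConf(), pathToListFile) is assumed from the 
snippet above:

{code:java}
// Hypothetical: read the listing-file replication from configuration.
short listReplication = (short) getConf().getInt(
    "distcp.filelist.replication", 10); // assumed key and default
return SequenceFile.createWriter(getConf(),
    SequenceFile.Writer.file(pathToListFile),
    SequenceFile.Writer.keyClass(Text.class),
    SequenceFile.Writer.valueClass(CopyListingFileStatus.class),
    SequenceFile.Writer.compression(SequenceFile.CompressionType.NONE),
    SequenceFile.Writer.replication(listReplication));
{code}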



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14876) Remove unused imports from TestBlockMissingException.java and TestClose.java

2019-09-27 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939476#comment-16939476
 ] 

Lisheng Sun commented on HDFS-14876:


Thanks [~ayushtkn] for the good suggestions. I updated the patch and uploaded 
v001. Thank you.

> Remove unused imports from TestBlockMissingException.java and TestClose.java
> 
>
> Key: HDFS-14876
> URL: https://issues.apache.org/jira/browse/HDFS-14876
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14876.000.patch, HDFS-14876.001.patch
>
>
> There are 3 unused imports in TestBlockMissingException.java and TestClose.java. 
> Let's clean them up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14876) Remove unused imports from TestBlockMissingException.java and TestClose.java

2019-09-27 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14876:
---
Description: There are 3 unused imports in TestBlockMissingException.java and 
TestClose.java. Let's clean them up.  (was: There are 2 unused imports in 
TestBlockMissingException.java. Let's clean them up.)

> Remove unused imports from TestBlockMissingException.java and TestClose.java
> 
>
> Key: HDFS-14876
> URL: https://issues.apache.org/jira/browse/HDFS-14876
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14876.000.patch, HDFS-14876.001.patch
>
>
> There are 3 unused imports in TestBlockMissingException.java and TestClose.java. 
> Let's clean them up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14876) Remove unused imports from TestBlockMissingException.java and TestClose.java

2019-09-27 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14876:
---
Summary: Remove unused imports from TestBlockMissingException.java and 
TestClose.java  (was: Remove unused imports from TestBlockMissingException.java)

> Remove unused imports from TestBlockMissingException.java and TestClose.java
> 
>
> Key: HDFS-14876
> URL: https://issues.apache.org/jira/browse/HDFS-14876
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14876.000.patch, HDFS-14876.001.patch
>
>
> There are 2 unused imports in TestBlockMissingException.java. Let's clean them up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14876) Remove unused imports from TestBlockMissingException.java

2019-09-27 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14876:
---
Attachment: HDFS-14876.001.patch

> Remove unused imports from TestBlockMissingException.java
> -
>
> Key: HDFS-14876
> URL: https://issues.apache.org/jira/browse/HDFS-14876
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HDFS-14876.000.patch, HDFS-14876.001.patch
>
>
> There are 2 unused imports in TestBlockMissingException.java. Let's clean them up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2001) Update Ratis version to 0.4.0

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2001?focusedWorklogId=319557&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319557
 ]

ASF GitHub Bot logged work on HDDS-2001:


Author: ASF GitHub Bot
Created on: 27/Sep/19 13:46
Start Date: 27/Sep/19 13:46
Worklog Time Spent: 10m 
  Work Description: nandakumar131 commented on issue #1497: HDDS-2001. 
Update Ratis version to 0.4.0.
URL: https://github.com/apache/hadoop/pull/1497#issuecomment-535946335
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 319557)
Time Spent: 2h  (was: 1h 50m)

> Update Ratis version to 0.4.0
> -
>
> Key: HDDS-2001
> URL: https://issues.apache.org/jira/browse/HDDS-2001
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Update Ratis version to 0.4.0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14880) Balancer sequence of statistics & exit message is not correct

2019-09-27 Thread Renukaprasad C (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Renukaprasad C updated HDFS-14880:
--
Attachment: HDFS-14880.0001.patch
Status: Patch Available  (was: Open)

> Balancer sequence of statistics & exit message is not correct
> -
>
> Key: HDFS-14880
> URL: https://issues.apache.org/jira/browse/HDFS-14880
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 3.2.1, 3.1.1
> Environment: Run the balancer tool in cluster.
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
> Attachments: HDFS-14880.0001.patch
>
>
> Actual:
> Time Stamp Iteration# Bytes Already Moved Bytes Left To Move Bytes Being Moved
> The cluster is balanced. Exiting...
> Sep 27, 2019 5:13:15 PM   0   0 B  0 B
>   0 B
> Sep 27, 2019 5:13:15 PM Balancing took 1.726 seconds
> Done!
> Expected: The exit message should come after logging all the balancer 
> movement statistics.
> Time Stamp Iteration# Bytes Already Moved Bytes Left To Move Bytes Being Moved
> Sep 27, 2019 5:13:15 PM   0   0 B  0 B
>   0 B
> The cluster is balanced. Exiting...
> Sep 27, 2019 5:13:15 PM Balancing took 1.726 seconds
> Done!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14849) Erasure Coding: the internal block is replicated many times when datanode is decommissioning

2019-09-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939469#comment-16939469
 ] 

Hudson commented on HDFS-14849:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17407 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17407/])
HDFS-14849. Erasure Coding: the internal block is replicated many times 
(ayushsaxena: rev ce58c05f1d89a72c787f3571f78a9464d0ab3933)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestBlockMissingException.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockManager.java


> Erasure Coding: the internal block is replicated many times when datanode is 
> decommissioning
> 
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, erasure-coding
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Fix For: 3.3.0
>
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> fsck-file.png, liveBlockIndices.png, scheduleReconstruction.png
>
>
> When the datanode stays in DECOMMISSION_INPROGRESS status, the EC internal
> blocks on that datanode will be replicated many times.
> // added 2019/09/19
> I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes
> simultaneously.
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 
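
The gist of the fix is to stop treating a replica on a decommissioning node as
perpetually missing. One plausible shape of such a check, counting distinct
live internal-block indices, is sketched below; it is illustrative only, not
the actual BlockManager change:

{code:java}
import java.util.BitSet;

// Hypothetical sketch: an EC block group needs reconstruction only if some
// internal-block index has no live copy at all. Duplicate reports for the
// same index (e.g. from a DECOMMISSION_INPROGRESS node) collapse into one
// bit, so reconstruction is not re-queued on every scan.
final class EcReconstructionCheckSketch {
  static boolean needsReconstruction(int dataPlusParity, int[] liveIndices) {
    BitSet live = new BitSet(dataPlusParity);
    for (int idx : liveIndices) {
      live.set(idx);
    }
    return live.cardinality() < dataPlusParity;
  }
}
{code}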



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14880) Balancer sequence of statistics & exit message is not correct

2019-09-27 Thread Renukaprasad C (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Renukaprasad C updated HDFS-14880:
--
Attachment: (was: HDFS-14880.0001.patch)

> Balancer sequence of statistics & exit message is not correct
> -
>
> Key: HDFS-14880
> URL: https://issues.apache.org/jira/browse/HDFS-14880
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 3.1.1, 3.2.1
> Environment: Run the balancer tool in cluster.
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
>
> Actual:
> Time Stamp Iteration# Bytes Already Moved Bytes Left To Move Bytes Being Moved
> The cluster is balanced. Exiting...
> Sep 27, 2019 5:13:15 PM   0   0 B   0 B   0 B
> Sep 27, 2019 5:13:15 PM Balancing took 1.726 seconds
> Done!
> Expected: The exit message should be logged after all the balancer movement
> statistics data.
> Time Stamp Iteration# Bytes Already Moved Bytes Left To Move Bytes Being Moved
> Sep 27, 2019 5:13:15 PM   0   0 B   0 B   0 B
> The cluster is balanced. Exiting...
> Sep 27, 2019 5:13:15 PM Balancing took 1.726 seconds
> Done!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14880) Balancer sequence of statistics & exit message is not correct

2019-09-27 Thread Renukaprasad C (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Renukaprasad C updated HDFS-14880:
--
Attachment: HDFS-14880.0001.patch

> Balancer sequence of statistics & exit message is not correct
> -
>
> Key: HDFS-14880
> URL: https://issues.apache.org/jira/browse/HDFS-14880
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 3.1.1, 3.2.1
> Environment: Run the balancer tool in cluster.
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
> Attachments: HDFS-14880.0001.patch
>
>
> Actual:
> Time Stamp Iteration# Bytes Already Moved Bytes Left To Move Bytes Being Moved
> The cluster is balanced. Exiting...
> Sep 27, 2019 5:13:15 PM   0   0 B   0 B   0 B
> Sep 27, 2019 5:13:15 PM Balancing took 1.726 seconds
> Done!
> Expected: The exit message should be logged after all the balancer movement
> statistics data.
> Time Stamp Iteration# Bytes Already Moved Bytes Left To Move Bytes Being Moved
> Sep 27, 2019 5:13:15 PM   0   0 B   0 B   0 B
> The cluster is balanced. Exiting...
> Sep 27, 2019 5:13:15 PM Balancing took 1.726 seconds
> Done!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14849) Erasure Coding: the internal block is replicated many times when datanode is decommissioning

2019-09-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939456#comment-16939456
 ] 

Ayush Saxena commented on HDFS-14849:
-

Committed to trunk.
Thanx [~marvelrock] for the contribution and [~ferhui] for the review!!!

> Erasure Coding: the internal block is replicated many times when datanode is 
> decommissioning
> 
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, erasure-coding
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> fsck-file.png, liveBlockIndices.png, scheduleReconstruction.png
>
>
> When the datanode stays in DECOMMISSION_INPROGRESS status, the EC internal
> blocks on that datanode will be replicated many times.
> // added 2019/09/19
> I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes
> simultaneously.
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14849) Erasure Coding: the internal block is replicated many times when datanode is decommissioning

2019-09-27 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14849:

Fix Version/s: 3.3.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Erasure Coding: the internal block is replicated many times when datanode is 
> decommissioning
> 
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, erasure-coding
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Fix For: 3.3.0
>
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> fsck-file.png, liveBlockIndices.png, scheduleReconstruction.png
>
>
> When the datanode stays in DECOMMISSION_INPROGRESS status, the EC internal
> blocks on that datanode will be replicated many times.
> // added 2019/09/19
> I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes
> simultaneously.
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14849) Erasure Coding: the internal block is replicated many times when datanode is decommissioning

2019-09-27 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16939452#comment-16939452
 ] 

Ayush Saxena commented on HDFS-14849:
-

Thanx [~marvelrock] and [~ferhui].
v002 LGTM +1

> Erasure Coding: the internal block is replicated many times when datanode is 
> decommissioning
> 
>
> Key: HDFS-14849
> URL: https://issues.apache.org/jira/browse/HDFS-14849
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, erasure-coding
>Affects Versions: 3.3.0
>Reporter: HuangTao
>Assignee: HuangTao
>Priority: Major
>  Labels: EC, HDFS, NameNode
> Attachments: HDFS-14849.001.patch, HDFS-14849.002.patch, 
> fsck-file.png, liveBlockIndices.png, scheduleReconstruction.png
>
>
> When the datanode stays in DECOMMISSION_INPROGRESS status, the EC internal
> blocks on that datanode will be replicated many times.
> // added 2019/09/19
> I reproduced this scenario in a 163-node cluster by decommissioning 100 nodes
> simultaneously.
>  !scheduleReconstruction.png! 
>  !fsck-file.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1569) Add ability to SCM for creating multiple pipelines with same datanode

2019-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1569?focusedWorklogId=319539=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-319539
 ]

ASF GitHub Bot logged work on HDDS-1569:


Author: ASF GitHub Bot
Created on: 27/Sep/19 13:16
Start Date: 27/Sep/19 13:16
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1431: HDDS-1569 
Support creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop/pull/1431#issuecomment-535934495
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 46 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 13 new or modified test 
files. |
   ||| _ HDDS-1564 Compile Tests _ |
   | 0 | mvndep | 66 | Maven dependency ordering for branch |
   | -1 | mvninstall | 32 | hadoop-hdds in HDDS-1564 failed. |
   | -1 | mvninstall | 29 | hadoop-ozone in HDDS-1564 failed. |
   | -1 | compile | 20 | hadoop-hdds in HDDS-1564 failed. |
   | -1 | compile | 16 | hadoop-ozone in HDDS-1564 failed. |
   | +1 | checkstyle | 61 | HDDS-1564 passed |
   | +1 | mvnsite | 0 | HDDS-1564 passed |
   | +1 | shadedclient | 854 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in HDDS-1564 failed. |
   | -1 | javadoc | 20 | hadoop-ozone in HDDS-1564 failed. |
   | 0 | spotbugs | 951 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 31 | hadoop-hdds in HDDS-1564 failed. |
   | -1 | findbugs | 20 | hadoop-ozone in HDDS-1564 failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | -1 | mvninstall | 33 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 29 | hadoop-ozone in the patch failed. |
   | -1 | compile | 26 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 26 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 28 | hadoop-hdds: The patch generated 9 new + 0 
unchanged - 3 fixed = 9 total (was 3) |
   | -0 | checkstyle | 30 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 697 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 22 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 19 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 31 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 21 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 26 | hadoop-hdds in the patch failed. |
   | -1 | unit | 21 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 35 | The patch does not generate ASF License warnings. |
   | | | 2404 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1431 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux c646b56e640e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | HDDS-1564 / 7b5a5fe |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/14/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/14/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/14/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/14/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/14/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/14/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1431/14/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 

[jira] [Updated] (HDFS-14880) Balancer sequence of statistics & exit message is not correct

2019-09-27 Thread Renukaprasad C (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Renukaprasad C updated HDFS-14880:
--
Description: 
Actual:
Time Stamp Iteration# Bytes Already Moved Bytes Left To Move Bytes Being Moved
The cluster is balanced. Exiting...
Sep 27, 2019 5:13:15 PM   0   0 B   0 B   0 B
Sep 27, 2019 5:13:15 PM Balancing took 1.726 seconds
Done!


Expected: The exit message should be logged after all the balancer movement
statistics data.
Time Stamp Iteration# Bytes Already Moved Bytes Left To Move Bytes Being Moved
Sep 27, 2019 5:13:15 PM   0   0 B   0 B   0 B
The cluster is balanced. Exiting...
Sep 27, 2019 5:13:15 PM Balancing took 1.726 seconds
Done!

  was:
Time Stamp Iteration# Bytes Already Moved Bytes Left To Move Bytes Being Moved
The cluster is balanced. Exiting...
Sep 27, 2019 5:13:15 PM   0   0 B   0 B   0 B
Sep 27, 2019 5:13:15 PM Balancing took 1.726 seconds
Done!


> Balancer sequence of statistics & exit message is not correct
> -
>
> Key: HDFS-14880
> URL: https://issues.apache.org/jira/browse/HDFS-14880
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 3.1.1, 3.2.1
> Environment: Run the balancer tool in cluster.
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
>
> Actual:
> Time Stamp Iteration# Bytes Already Moved Bytes Left To Move Bytes Being Moved
> The cluster is balanced. Exiting...
> Sep 27, 2019 5:13:15 PM   0   0 B   0 B   0 B
> Sep 27, 2019 5:13:15 PM Balancing took 1.726 seconds
> Done!
> Expected: The exit message should be logged after all the balancer movement
> statistics data.
> Time Stamp Iteration# Bytes Already Moved Bytes Left To Move Bytes Being Moved
> Sep 27, 2019 5:13:15 PM   0   0 B   0 B   0 B
> The cluster is balanced. Exiting...
> Sep 27, 2019 5:13:15 PM Balancing took 1.726 seconds
> Done!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


