[jira] [Commented] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it

2018-06-21 Thread Ajay Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520028#comment-16520028
 ] 

Ajay Kumar commented on HDDS-175:
-

[~shashikant] thanks for having a look. Removed {{WritingYarnApplications.md}} 
in patch v4. The failed test passed locally.
[~anu] thanks for the offline discussion on the patch. Overloaded getContainer as 
suggested.

> Refactor ContainerInfo to remove Pipeline object from it 
> -
>
> Key: HDDS-175
> URL: https://issues.apache.org/jira/browse/HDDS-175
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-175.00.patch, HDDS-175.01.patch, HDDS-175.02.patch, 
> HDDS-175.03.patch, HDDS-175.04.patch
>
>
> Refactor ContainerInfo to remove the Pipeline object from it. We can add the 
> following 4 fields to ContainerInfo to recreate the pipeline if required:
> # pipelineId
> # replication type
> # expected replication count
> # DataNodes where its replicas exist
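The four fields above can be sketched as a plain value class. This is a hypothetical illustration, not the actual HDDS-175 patch: the class, enum, and method names below are invented for the example, and the "pipeline" is rendered as a string purely to show that the stored fields suffice to rebuild it.

```java
import java.util.List;

// Illustrative sketch of a ContainerInfo that no longer holds a Pipeline
// object, only the four fields needed to recreate one on demand.
public class ContainerInfoSketch {
    enum ReplicationType { RATIS, STAND_ALONE }

    private final long containerId;
    private final String pipelineId;               // 1. pipeline id
    private final ReplicationType replicationType; // 2. replication type
    private final int replicationFactor;           // 3. expected replica count
    private final List<String> dataNodes;          // 4. nodes holding replicas

    public ContainerInfoSketch(long containerId, String pipelineId,
            ReplicationType type, int factor, List<String> dataNodes) {
        this.containerId = containerId;
        this.pipelineId = pipelineId;
        this.replicationType = type;
        this.replicationFactor = factor;
        this.dataNodes = dataNodes;
    }

    /** Rebuild a pipeline description on demand from the stored fields. */
    public String recreatePipeline() {
        return pipelineId + ":" + replicationType + ":" + replicationFactor
                + ":" + String.join(",", dataNodes);
    }

    public static void main(String[] args) {
        ContainerInfoSketch info = new ContainerInfoSketch(
                1L, "p-1", ReplicationType.RATIS, 3,
                List.of("dn1", "dn2", "dn3"));
        System.out.println(info.recreatePipeline());
    }
}
```

The point of the refactor is that SCM no longer serializes a heavyweight Pipeline inside every ContainerInfo; callers that need a pipeline reconstruct it from these fields.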



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-175) Refactor ContainerInfo to remove Pipeline object from it

2018-06-21 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-175:

Attachment: HDDS-175.04.patch

> Refactor ContainerInfo to remove Pipeline object from it 
> -
>
> Key: HDDS-175
> URL: https://issues.apache.org/jira/browse/HDDS-175
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.2.1
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-175.00.patch, HDDS-175.01.patch, HDDS-175.02.patch, 
> HDDS-175.03.patch, HDDS-175.04.patch
>
>
> Refactor ContainerInfo to remove the Pipeline object from it. We can add the 
> following 4 fields to ContainerInfo to recreate the pipeline if required:
> # pipelineId
> # replication type
> # expected replication count
> # DataNodes where its replicas exist






[jira] [Commented] (HDDS-184) Upgrade common-langs version to 3.7 in hadoop-tools/hadoop-ozone

2018-06-21 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520009#comment-16520009
 ] 

genericqa commented on HDDS-184:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m 21s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 45s{color} 
| {color:red} hadoop-ozone in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}158m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
|   | hadoop.ozone.TestStorageContainerManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDDS-184 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928699/HDDS-184.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle 

[jira] [Comment Edited] (HDFS-13658) fsck, dfsadmin -report, and NN WebUI should report number of blocks that have 1 replica

2018-06-21 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519944#comment-16519944
 ] 

Xiao Chen edited comment on HDFS-13658 at 6/22/18 3:22 AM:
---

Thanks for revving, Kitti. As discussed offline and as you hinted, let's not 
change priority queues for this.

bq.  Do you think I should keep or remove the integration test?
Given that the unit test already covers the scenarios, we can remove the 
{{TestOneReplicaBlocksAlert}} test to reduce test execution time. :)

bq. number of the current replicas for decrement
I think we need to modify the call path from {{BlockManager}} to get the 
current replicas during removal (and handle similarly for decrements coming 
from {{update}}, which already has {{curReplicas}}). 
{{BlockManager#countNodes}} can do that, and it looks to be O(1).

Also, for added coverage, I would like to see a test that tries to confuse the 
decrements.
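The decrement idea above can be sketched as follows. This is a minimal, hypothetical illustration, not the actual HDFS patch: the class and method names are invented, and {{BlockManager#countNodes}} is stood in for by a local map lookup so the example is self-contained.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch: when a replica is removed, query the live replica count first,
// then decide whether the block crosses the "only one replica" threshold.
public class OneReplicaCounterSketch {
    private final Map<Long, Set<String>> replicasByBlock = new HashMap<>();
    private long blocksWithOneReplica = 0;

    // Stand-in for BlockManager#countNodes: O(1) lookup of current replicas.
    private int countNodes(long blockId) {
        Set<String> nodes = replicasByBlock.get(blockId);
        return nodes == null ? 0 : nodes.size();
    }

    public void addReplica(long blockId, String node) {
        int before = countNodes(blockId);
        replicasByBlock.computeIfAbsent(blockId, k -> new HashSet<>()).add(node);
        adjustCounter(before, countNodes(blockId));
    }

    public void removeReplica(long blockId, String node) {
        int before = countNodes(blockId);   // current replicas at removal time
        Set<String> nodes = replicasByBlock.get(blockId);
        if (nodes == null || !nodes.remove(node)) {
            return;                          // nothing removed, nothing to adjust
        }
        adjustCounter(before, countNodes(blockId));
    }

    private void adjustCounter(int before, int after) {
        if (before != 1 && after == 1) blocksWithOneReplica++;
        if (before == 1 && after != 1) blocksWithOneReplica--;
    }

    public long getBlocksWithOneReplica() { return blocksWithOneReplica; }

    public static void main(String[] args) {
        OneReplicaCounterSketch c = new OneReplicaCounterSketch();
        c.addReplica(1L, "dn1");
        System.out.println(c.getBlocksWithOneReplica());
    }
}
```

Querying the count at removal time, rather than trusting a value cached earlier, is what keeps the counter consistent when decrements arrive from both removal and {{update}} paths.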


was (Author: xiaochen):
Thanks for revving, Kitti. As discussed offline and as you hinted, let's not 
change priority queues for this.

bq.  Do you think I should keep or remove the integration test?
Given that the unit test already covers the scenarios, we can remove the 
{{TestOneReplicaBlocksAlert}} test to reduce test execution time. :)
bq. number of the current replicas for decrement
I think we need to modify the call path from {{BlockManager}} to get the 
current replicas during removal (and handle similarly for decrements coming 
from {{update}}, which already has {{curReplicas}}). 
{{BlockManager#countNodes}} can do that, and it looks to be O(1).


> fsck, dfsadmin -report, and NN WebUI should report number of blocks that have 
> 1 replica
> ---
>
> Key: HDFS-13658
> URL: https://issues.apache.org/jira/browse/HDFS-13658
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-13658.001.patch, HDFS-13658.002.patch, 
> HDFS-13658.003.patch, HDFS-13658.004.patch, HDFS-13658.005.patch
>
>
> fsck, dfsadmin -report, and the NN WebUI should report the number of blocks that 
> have only 1 replica. We have had many cases opened in which a customer lost a 
> disk or a DN and consequently lost files/blocks because those blocks had only 1 
> replica. We need to make customers better aware of this situation so that they 
> can take action.






[jira] [Commented] (HDFS-13658) fsck, dfsadmin -report, and NN WebUI should report number of blocks that have 1 replica

2018-06-21 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519944#comment-16519944
 ] 

Xiao Chen commented on HDFS-13658:
--

Thanks for revving, Kitti. As discussed offline and as you hinted, let's not 
change priority queues for this.

bq.  Do you think I should keep or remove the integration test?
Given that the unit test already covers the scenarios, we can remove the 
{{TestOneReplicaBlocksAlert}} test to reduce test execution time. :)
bq. number of the current replicas for decrement
I think we need to modify the call path from {{BlockManager}} to get the 
current replicas during removal (and handle similarly for decrements coming 
from {{update}}, which already has {{curReplicas}}). 
{{BlockManager#countNodes}} can do that, and it looks to be O(1).


> fsck, dfsadmin -report, and NN WebUI should report number of blocks that have 
> 1 replica
> ---
>
> Key: HDFS-13658
> URL: https://issues.apache.org/jira/browse/HDFS-13658
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HDFS-13658.001.patch, HDFS-13658.002.patch, 
> HDFS-13658.003.patch, HDFS-13658.004.patch, HDFS-13658.005.patch
>
>
> fsck, dfsadmin -report, and the NN WebUI should report the number of blocks that 
> have only 1 replica. We have had many cases opened in which a customer lost a 
> disk or a DN and consequently lost files/blocks because those blocks had only 1 
> replica. We need to make customers better aware of this situation so that they 
> can take action.






[jira] [Commented] (HDFS-13692) StorageInfoDefragmenter floods log when compacting StorageInfo TreeSet

2018-06-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519940#comment-16519940
 ] 

Hudson commented on HDFS-13692:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14462 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14462/])
HDFS-13692. StorageInfoDefragmenter floods log when compacting (yqlin: rev 
30728aced4a6b05394b3fc8c613f39fade9cf3c2)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


> StorageInfoDefragmenter floods log when compacting StorageInfo TreeSet
> --
>
> Key: HDFS-13692
> URL: https://issues.apache.org/jira/browse/HDFS-13692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Bharat Viswanadham
>Priority: Minor
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-13692.00.patch
>
>
> StorageInfoDefragmenter floods the log when compacting the StorageInfo TreeSet. 
> In {{StorageInfoDefragmenter#scanAndCompactStorages}}, it prints a line for 
> every StorageInfo under each DN. If there are 1k nodes in the cluster and each 
> node has 10 data dirs configured, it will print 10k lines every compaction 
> interval (10 mins). The log grows large; we could switch the log level from 
> INFO to DEBUG in {{StorageInfoDefragmenter#scanAndCompactStorages}}.
> {noformat}
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-329bd988-a558-43a6-b31c-9142548b0179 : 
> 0.876264591439
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-b5505847-1389-4a80-b9d8-876172a83897 : 0.933351976137211
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-ca164b2f-0a2c-4b26-8e99-f0ece0909997 : 
> 0.9330040998881849
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-89b912ba-339b-45e3-b981-541b22690ccb : 
> 0.9314626719970249
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-89c0377b-a49c-4288-9304-e104d98de5bd : 
> 0.9309580852251582
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-5ffad4d2-168a-446d-a92e-ef46a82f26f8 : 
> 0.8938870614035088
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-eecbbd34-10f4-4647-8710-0f5963da3aaa : 
> 0.8963103205353998
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-7aafa122-433f-49c8-bf00-11bcdd8ce048 : 
> 0.8950508004926109
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-eb9ba675-9c23-40a1-9241-c314dc0e2867 : 
> 0.8947356866877415
> {noformat}
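The proposed fix is a one-line log-level change. A minimal sketch of the idea follows; it uses {{java.util.logging}} so the example is self-contained (HDFS itself logs via slf4j, where the equivalent guard is {{LOG.isDebugEnabled()}}), and the class and method names are illustrative, not the actual BlockManager code.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch: emit the per-storage fill-ratio line at DEBUG (FINE here) instead
// of INFO, and guard it so the message isn't built when DEBUG is disabled.
public class FillRatioLogSketch {
    private static final Logger LOG = Logger.getLogger("BlockManager");

    static String format(String storageId, double fillRatio) {
        return "StorageInfo TreeSet fill ratio " + storageId + " : " + fillRatio;
    }

    static void logFillRatio(String storageId, double fillRatio) {
        if (LOG.isLoggable(Level.FINE)) {   // FINE ~ slf4j DEBUG
            LOG.fine(format(storageId, fillRatio));
        }
    }

    public static void main(String[] args) {
        // At the default INFO level this prints nothing, which is the point:
        // 10k lines per compaction interval disappear from the log.
        logFillRatio("DS-329bd988-a558-43a6-b31c-9142548b0179", 0.876264591439);
    }
}
```

With the level demoted, operators can still opt back into the per-storage detail by enabling DEBUG logging for BlockManager.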






[jira] [Updated] (HDFS-13692) StorageInfoDefragmenter floods log when compacting StorageInfo TreeSet

2018-06-21 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13692:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.4
   3.1.1
   3.2.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-3.0 and branch-3.1. Thanks [~bharatviswa] for 
fixing this.

> StorageInfoDefragmenter floods log when compacting StorageInfo TreeSet
> --
>
> Key: HDFS-13692
> URL: https://issues.apache.org/jira/browse/HDFS-13692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Bharat Viswanadham
>Priority: Minor
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-13692.00.patch
>
>
> StorageInfoDefragmenter floods the log when compacting the StorageInfo TreeSet. 
> In {{StorageInfoDefragmenter#scanAndCompactStorages}}, it prints a line for 
> every StorageInfo under each DN. If there are 1k nodes in the cluster and each 
> node has 10 data dirs configured, it will print 10k lines every compaction 
> interval (10 mins). The log grows large; we could switch the log level from 
> INFO to DEBUG in {{StorageInfoDefragmenter#scanAndCompactStorages}}.






[jira] [Commented] (HDFS-13692) StorageInfoDefragmenter floods log when compacting StorageInfo TreeSet

2018-06-21 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519928#comment-16519928
 ] 

Yiqun Lin commented on HDFS-13692:
--

LGTM, +1. Committing this.

> StorageInfoDefragmenter floods log when compacting StorageInfo TreeSet
> --
>
> Key: HDFS-13692
> URL: https://issues.apache.org/jira/browse/HDFS-13692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Assignee: Bharat Viswanadham
>Priority: Minor
> Attachments: HDFS-13692.00.patch
>
>
> StorageInfoDefragmenter floods the log when compacting the StorageInfo TreeSet. 
> In {{StorageInfoDefragmenter#scanAndCompactStorages}}, it prints a line for 
> every StorageInfo under each DN. If there are 1k nodes in the cluster and each 
> node has 10 data dirs configured, it will print 10k lines every compaction 
> interval (10 mins). The log grows large; we could switch the log level from 
> INFO to DEBUG in {{StorageInfoDefragmenter#scanAndCompactStorages}}.






[jira] [Commented] (HDFS-13609) [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via RPC

2018-06-21 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519923#comment-16519923
 ] 

Konstantin Shvachko commented on HDFS-13609:


Very comprehensive analysis [~xkrogen].
I was looking at {{BackupImage.tryConvergeJournalSpool()}} and remembered some 
details. The spool, in the BackupNode context, is an edits file into which the 
BackupNode writes transactions received from the NN while checkpointing the 
image, because it cannot apply them to the in-memory state while writing the 
image. After completing the image write, the BN reads back the edits it saved; 
this is called {{convergeJournalSpool()}}. Since the BN uses only 
EditLogFileStreams, the optimization parameter should be completely ignored 
there.
Anyway, I checked and ran {{TestBackupNode}} with both {{optimizeLatency = 
true}} and {{false}}, and it passed.
I think it is safe to use the single parameter in this case.

> [Edit Tail Fast Path Pt 3] NameNode-side changes to support tailing edits via 
> RPC
> -
>
> Key: HDFS-13609
> URL: https://issues.apache.org/jira/browse/HDFS-13609
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Major
> Attachments: HDFS-13609-HDFS-12943.000.patch, 
> HDFS-13609-HDFS-12943.001.patch, HDFS-13609-HDFS-12943.002.patch
>
>
> See HDFS-13150 for the full design.
> This JIRA is targeted at the NameNode-side changes to enable tailing 
> in-progress edits via the RPC mechanism added in HDFS-13608. Most changes are 
> in the QuorumJournalManager.






[jira] [Commented] (HDDS-184) Upgrade common-langs version to 3.7 in hadoop-tools/hadoop-ozone

2018-06-21 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519905#comment-16519905
 ] 

Takanobu Asanuma commented on HDDS-184:
---

Uploaded the 1st patch.

> Upgrade common-langs version to 3.7 in hadoop-tools/hadoop-ozone
> 
>
> Key: HDDS-184
> URL: https://issues.apache.org/jira/browse/HDDS-184
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDDS-184.1.patch
>
>
> This is a separated task from HADOOP-15495 for simplicity.






[jira] [Updated] (HDDS-184) Upgrade common-langs version to 3.7 in hadoop-tools/hadoop-ozone

2018-06-21 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDDS-184:
--
Status: Patch Available  (was: Open)

> Upgrade common-langs version to 3.7 in hadoop-tools/hadoop-ozone
> 
>
> Key: HDDS-184
> URL: https://issues.apache.org/jira/browse/HDDS-184
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDDS-184.1.patch
>
>
> This is a separated task from HADOOP-15495 for simplicity.






[jira] [Updated] (HDDS-184) Upgrade common-langs version to 3.7 in hadoop-tools/hadoop-ozone

2018-06-21 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDDS-184:
--
Attachment: HDDS-184.1.patch

> Upgrade common-langs version to 3.7 in hadoop-tools/hadoop-ozone
> 
>
> Key: HDDS-184
> URL: https://issues.apache.org/jira/browse/HDDS-184
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDDS-184.1.patch
>
>
> This is a separated task from HADOOP-15495 for simplicity.






[jira] [Commented] (HDFS-13693) Remove unnecessary search in INodeDirectory.addChild during image loading

2018-06-21 Thread zhouyingchao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519897#comment-16519897
 ] 

zhouyingchao commented on HDFS-13693:
-

Tested the patch against an fsimage of a 70PB 2.4 cluster (200 million files and 
300 million blocks); the image loading time was reduced from 1210 seconds to 
1138 seconds.

> Remove unnecessary search in INodeDirectory.addChild during image loading
> -
>
> Key: HDFS-13693
> URL: https://issues.apache.org/jira/browse/HDFS-13693
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: zhouyingchao
>Priority: Major
> Attachments: HDFS-13693-001.patch
>
>
> In FSImageFormatPBINode.loadINodeDirectorySection, all child INodes are added 
> to their parent INode's map one by one. The adding procedure searches for a 
> position in the parent's map and then inserts the child at that position. 
> However, during image loading, the search is unnecessary, since the insert 
> position should always be at the end of the map given the order in which the 
> children are serialized on disk.
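The optimization can be sketched as follows. This is a hypothetical illustration, not the actual INodeDirectory code: the class and method names are invented, and a sorted list of strings stands in for the parent's child map.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the HDFS-13693 idea: when children arrive already in sorted
// (serialized) order, the insert can skip the binary search and just append.
public class SortedChildrenSketch {
    private final List<String> children = new ArrayList<>();

    /** General path: binary-search for the insertion point. */
    public void addChild(String name) {
        int pos = Collections.binarySearch(children, name);
        if (pos < 0) {
            children.add(-pos - 1, name);   // insert at the search position
        }
    }

    /** Image-loading fast path: input is pre-sorted, so append at the end. */
    public void addChildAtEnd(String name) {
        children.add(name);                 // O(1) amortized, no search
    }

    public List<String> getChildren() { return children; }

    public static void main(String[] args) {
        SortedChildrenSketch dir = new SortedChildrenSketch();
        dir.addChildAtEnd("a");
        dir.addChildAtEnd("b");
        System.out.println(dir.getChildren());
    }
}
```

Skipping the O(log n) search per child is where the reported loading-time reduction comes from; the fast path is only safe because the serialized order guarantees each child sorts after the previous one.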






[jira] [Created] (HDDS-184) Upgrade common-langs version to 3.7 in hadoop-tools/hadoop-ozone

2018-06-21 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDDS-184:
-

 Summary: Upgrade common-langs version to 3.7 in 
hadoop-tools/hadoop-ozone
 Key: HDDS-184
 URL: https://issues.apache.org/jira/browse/HDDS-184
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma


This is a separated task from HADOOP-15495 for simplicity.






[jira] [Commented] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design

2018-06-21 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519870#comment-16519870
 ] 

genericqa commented on HDDS-173:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} HDDS-48 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  6m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
17s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m  
4s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
51s{color} | {color:green} HDDS-48 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  5m 
42s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
8s{color} | {color:green} HDDS-48 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
58s{color} | {color:green} HDDS-48 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 26m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 45 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  2m 
26s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
9s{color} | {color:red} hadoop-hdds/common generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
18s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} tools in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 22m 31s{color} 
| {color:red} integration-test in the patch failed. {color} |

[jira] [Commented] (HDFS-12976) Introduce ObserverReadProxyProvider

2018-06-21 Thread Chao Sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519852#comment-16519852
 ] 

Chao Sun commented on HDFS-12976:
-

Good point [~vagarychen]! Will add that check.

[~shv] - back to this JIRA, I'm thinking for now just to keep the 
{{ConfiguredFailoverProxyProvider}} intact and fix the issue in HDFS-13687. Is 
that OK? 

> Introduce ObserverReadProxyProvider
> ---
>
> Key: HDFS-12976
> URL: https://issues.apache.org/jira/browse/HDFS-12976
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Konstantin Shvachko
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-12976-HDFS-12943.000.patch, 
> HDFS-12976-HDFS-12943.001.patch, HDFS-12976-HDFS-12943.002.patch, 
> HDFS-12976-HDFS-12943.003.patch, HDFS-12976.WIP.patch
>
>
> {{StandbyReadProxyProvider}} should implement {{FailoverProxyProvider}} 
> interface and be able to submit read requests to ANN and SBN(s).
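A toy sketch of the routing idea above: reads prefer an Observer/Standby NameNode, while writes always go to the Active. The `NamenodeProxy` type and its `state` field are illustrative stand-ins, not the real Hadoop `FailoverProxyProvider` interfaces.

```java
import java.util.Arrays;
import java.util.List;

/** Illustrative stand-in for a NameNode proxy; not the real Hadoop type. */
class NamenodeProxy {
    final String host;
    final String state; // "active", "standby", or "observer" (simplified)
    NamenodeProxy(String host, String state) { this.host = host; this.state = state; }
}

class ObserverReadProxySketch {
    private final List<NamenodeProxy> proxies;
    ObserverReadProxySketch(List<NamenodeProxy> proxies) { this.proxies = proxies; }

    /** Reads go to an Observer if one exists, else fall back to the Active. */
    NamenodeProxy chooseForRead() {
        for (NamenodeProxy p : proxies) {
            if ("observer".equals(p.state)) return p;
        }
        return chooseForWrite();
    }

    /** Writes must always be served by the Active NameNode. */
    NamenodeProxy chooseForWrite() {
        for (NamenodeProxy p : proxies) {
            if ("active".equals(p.state)) return p;
        }
        throw new IllegalStateException("no active NameNode configured");
    }
}
```

The real provider also has to handle failover and retries; this only shows the read/write split.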



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10285) Storage Policy Satisfier in Namenode

2018-06-21 Thread Chris Douglas (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519841#comment-16519841
 ] 

Chris Douglas commented on HDFS-10285:
--

+1 on [~umamaheswararao]'s proposal. We can refine the code in trunk.

> Storage Policy Satisfier in Namenode
> 
>
> Key: HDFS-10285
> URL: https://issues.apache.org/jira/browse/HDFS-10285
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode
>Affects Versions: HDFS-10285
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
> Attachments: HDFS-10285-consolidated-merge-patch-00.patch, 
> HDFS-10285-consolidated-merge-patch-01.patch, 
> HDFS-10285-consolidated-merge-patch-02.patch, 
> HDFS-10285-consolidated-merge-patch-03.patch, 
> HDFS-10285-consolidated-merge-patch-04.patch, 
> HDFS-10285-consolidated-merge-patch-05.patch, 
> HDFS-SPS-TestReport-20170708.pdf, SPS Modularization.pdf, 
> Storage-Policy-Satisfier-in-HDFS-June-20-2017.pdf, 
> Storage-Policy-Satisfier-in-HDFS-May10.pdf, 
> Storage-Policy-Satisfier-in-HDFS-Oct-26-2017.pdf
>
>
> Heterogeneous storage in HDFS introduced the concept of storage policies. A 
> policy can be set on a directory/file to specify the user's preference for 
> where the physical blocks should be stored. When the user sets the storage 
> policy before writing data, the blocks can take advantage of the policy 
> preferences and are stored accordingly.
> If the user sets the storage policy after the file has been written and 
> completed, the blocks will already have been written with the default 
> storage policy (DISK). The user then has to run the 'Mover tool' explicitly, 
> specifying all such file names as a list. In some distributed-system 
> scenarios (e.g. HBase) it is difficult to collect all the files and run the 
> tool, since different nodes can write files separately and the files can 
> have different paths.
> Another scenario: when the user renames a file from a directory with one 
> effective storage policy (inherited from its parent) into a directory with 
> another, the inherited policy is not copied from the source; the file picks 
> up the destination parent's policy instead. The rename is just a metadata 
> change in the Namenode, and the physical blocks still remain on the source 
> policy's storage.
> Tracking all such business-logic-driven file names across distributed 
> nodes (e.g. region servers) and running the Mover tool is difficult for 
> admins. The proposal here is to provide an API in the Namenode itself to 
> trigger storage policy satisfaction. A daemon thread inside the Namenode 
> will track such calls and send movement commands to the DNs.
> A detailed design document will be posted soon.
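The rename scenario in the description can be shown with a toy model (all names here are illustrative, not HDFS APIs): rename changes only metadata, so blocks stay on the source policy's storage until something like the proposed satisfier runs.

```java
import java.util.HashMap;
import java.util.Map;

/** Toy model: rename is metadata-only; a satisfier re-homes the blocks. */
class SpsModel {
    final Map<String, String> policyOf = new HashMap<>();   // path -> desired policy
    final Map<String, String> storageOf = new HashMap<>();  // path -> where blocks sit

    void create(String path, String policy) {
        policyOf.put(path, policy);
        storageOf.put(path, policy); // blocks written per policy at create time
    }

    /** Rename is a pure metadata change: the destination parent's policy
     *  applies to the file, but the physical blocks stay where they were. */
    void rename(String src, String dst, String dstParentPolicy) {
        policyOf.remove(src);
        String blocks = storageOf.remove(src);
        policyOf.put(dst, dstParentPolicy);
        storageOf.put(dst, blocks);
    }

    boolean satisfied(String path) {
        return policyOf.get(path).equals(storageOf.get(path));
    }

    /** What the proposed Namenode API would trigger for one file:
     *  move blocks until they match the file's effective policy. */
    void satisfyStoragePolicy(String path) {
        storageOf.put(path, policyOf.get(path));
    }
}
```

In real HDFS the "move" step is a DataNode block transfer; here it is collapsed to a map update just to show the metadata/physical split.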






[jira] [Updated] (HDDS-183) Create KeyValueContainerManager class

2018-06-21 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-183:

Description: 
This class is used to handle keyValueContainer operations.

This Jira is to build container map from .container files during startup.

  was:
This class is used to handle keyValueContainer operations.

In this jira, adding to build container map when datanode starts up.


> Create KeyValueContainerManager class
> -
>
> Key: HDDS-183
> URL: https://issues.apache.org/jira/browse/HDDS-183
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This class is used to handle keyValueContainer operations.
> This Jira is to build container map from .container files during startup.
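Building the container map at startup can be sketched as a directory scan. This assumes id-named files (e.g. `12.container`); the real on-disk layout and metadata parsing in Ozone differ.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Stream;

/** Sketch: rebuild an in-memory container map by scanning .container files. */
class ContainerMapBuilder {
    /** Maps containerId (taken from the file name) to the file's path. */
    static Map<Long, Path> build(Path containerDir) throws IOException {
        Map<Long, Path> map = new HashMap<>();
        try (Stream<Path> files = Files.walk(containerDir)) {
            files.filter(p -> p.toString().endsWith(".container"))
                 .forEach(p -> {
                     String name = p.getFileName().toString();
                     // "12.container" -> 12; real code would parse the file body.
                     long id = Long.parseLong(
                         name.substring(0, name.length() - ".container".length()));
                     map.put(id, p);
                 });
        }
        return map;
    }
}
```

The real implementation would also validate each file's contents and skip or quarantine corrupt entries rather than fail the whole scan.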






[jira] [Created] (HDDS-183) Create KeyValueContainerManager class

2018-06-21 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-183:
---

 Summary: Create KeyValueContainerManager class
 Key: HDDS-183
 URL: https://issues.apache.org/jira/browse/HDDS-183
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Bharat Viswanadham


This class is used to handle keyValueContainer operations.

In this Jira, we add logic to build the container map when the DataNode starts up.






[jira] [Assigned] (HDDS-183) Create KeyValueContainerManager class

2018-06-21 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-183:
---

Assignee: Bharat Viswanadham







[jira] [Commented] (HDFS-12977) Add stateId to RPC headers.

2018-06-21 Thread Chen Liang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519803#comment-16519803
 ] 

Chen Liang commented on HDFS-12977:
---

It seems the name space ID is picked up here by reading 
{{namesystem.getLastWrittenTransactionId()}}. I am thinking would it be better 
to pick up {{namesystem.getFSImage().getLastAppliedOrWrittenTxId()}} instead?

My understanding is that (please correct me if I'm wrong) 
{{getLastWrittenTransactionId()}} returns the last id that has been written to 
persistent storage, while {{getLastAppliedOrWrittenTxId()}} returns the larger 
of that id and the last id that has been applied to the in-memory namespace but 
not yet persisted. I think as long as a change has been applied to the 
Standby's in-memory namespace, its id can safely be made visible for clients to 
read; there seems to be no need to wait for it to be persisted here. What do 
you think [~shv], [~zero45]?
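The distinction discussed above can be sketched like this (the method names mirror the comment, but the tracker class and its fields are illustrative, not the real FSImage/FSNamesystem API):

```java
/** Toy tracker for the two transaction-id views discussed above. */
class TxIdTracker {
    private long lastWrittenTxId;  // persisted to the edit log
    private long lastAppliedTxId;  // applied to the in-memory namespace

    void write(long txId) { lastWrittenTxId = Math.max(lastWrittenTxId, txId); }
    void apply(long txId) { lastAppliedTxId = Math.max(lastAppliedTxId, txId); }

    /** Only what has reached persistent storage. */
    long getLastWrittenTransactionId() { return lastWrittenTxId; }

    /** Whichever of applied/written is larger: a change applied in memory on
     *  the Standby becomes readable before it is persisted. */
    long getLastAppliedOrWrittenTxId() {
        return Math.max(lastAppliedTxId, lastWrittenTxId);
    }
}
```

On a Standby tailing edits, applied typically runs ahead of written, which is exactly why the comment argues for exposing the larger value to readers.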

> Add stateId to RPC headers.
> ---
>
> Key: HDFS-12977
> URL: https://issues.apache.org/jira/browse/HDFS-12977
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ipc, namenode
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
>Priority: Major
> Fix For: HDFS-12943
>
> Attachments: HDFS_12977.trunk.001.patch, HDFS_12977.trunk.002.patch, 
> HDFS_12977.trunk.003.patch, HDFS_12977.trunk.004.patch, 
> HDFS_12977.trunk.005.patch, HDFS_12977.trunk.006.patch, 
> HDFS_12977.trunk.007.patch, HDFS_12977.trunk.008.patch
>
>
> stateId is a new field in the RPC headers of NameNode proto calls.
> stateId is the journal transaction Id, which represents LastSeenId for the 
> clients and LastWrittenId for NameNodes. See more in [reads from Standby 
> design 
> doc|https://issues.apache.org/jira/secure/attachment/12902925/ConsistentReadsFromStandbyNode.pdf].






[jira] [Updated] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design

2018-06-21 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-173:

Fix Version/s: 0.2.1

> Refactor Dispatcher and implement Handler for new ContainerIO design
> 
>
> Key: HDDS-173
> URL: https://issues.apache.org/jira/browse/HDDS-173
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-173-HDDS-48.001.patch
>
>
> Dispatcher will pass the ContainerCommandRequests to the corresponding 
> Handler based on the ContainerType. Each ContainerType will have its own 
> Handler. The Handler class will process the message.
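The per-type dispatch described above can be sketched as a type-to-handler registry (the enum value, `Handler` shape, and string-based request are illustrative; the real code dispatches protobuf `ContainerCommandRequest` messages):

```java
import java.util.EnumMap;
import java.util.Map;

/** Illustrative container type; the real enum lives in the HDDS protos. */
enum ContainerType { KEY_VALUE }

/** Illustrative handler contract: one handler per container type. */
interface Handler {
    String handle(String request);
}

class DispatcherSketch {
    private final Map<ContainerType, Handler> handlers =
        new EnumMap<>(ContainerType.class);

    void register(ContainerType type, Handler handler) {
        handlers.put(type, handler);
    }

    /** Route a command to the handler registered for its container type. */
    String dispatch(ContainerType type, String request) {
        Handler h = handlers.get(type);
        if (h == null) {
            throw new IllegalArgumentException("No handler for " + type);
        }
        return h.handle(request);
    }
}
```

Adding a new container type then means registering one more handler, with no change to the dispatcher itself.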






[jira] [Commented] (HDFS-13692) StorageInfoDefragmenter floods log when compacting StorageInfo TreeSet

2018-06-21 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519798#comment-16519798
 ] 

genericqa commented on HDFS-13692:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 16s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 24s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
|   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13692 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928670/HDFS-13692.00.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 02abbccc72e0 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 59de967 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24483/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24483/testReport/ |
| Max. process+thread count | 2868 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Updated] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design

2018-06-21 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-173:

Status: Patch Available  (was: Open)







[jira] [Created] (HDDS-182) Integrate HddsDispatcher

2018-06-21 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-182:
---

 Summary: Integrate HddsDispatcher
 Key: HDDS-182
 URL: https://issues.apache.org/jira/browse/HDDS-182
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


1. Commands from SCM to Datanode should go through the new HddsDispatcher.
2. Cleanup container-service's ozone.container.common package.






[jira] [Commented] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design

2018-06-21 Thread Hanisha Koneru (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519791#comment-16519791
 ] 

Hanisha Koneru commented on HDDS-173:
-

In this Jira, we add the KeyValueHandler, the new HddsDispatcher (which will 
replace the current Dispatcher) and refactor other classes. 
Some changes have been made to the DatanodeContainerProtocol.

Integration of HddsDispatcher into the code will be done in subsequent Jiras. 







[jira] [Updated] (HDDS-173) Refactor Dispatcher and implement Handler for new ContainerIO design

2018-06-21 Thread Hanisha Koneru (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-173:

Attachment: HDDS-173-HDDS-48.001.patch







[jira] [Commented] (HDFS-13692) StorageInfoDefragmenter floods log when compacting StorageInfo TreeSet

2018-06-21 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519678#comment-16519678
 ] 

Bharat Viswanadham commented on HDFS-13692:
---

Hi [~linyiqun]

Thanks for reporting it. Yes, this can be changed to debug, as it unnecessarily 
fills up the log. Posted the patch for the same.

> StorageInfoDefragmenter floods log when compacting StorageInfo TreeSet
> --
>
> Key: HDFS-13692
> URL: https://issues.apache.org/jira/browse/HDFS-13692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-13692.00.patch
>
>
> StorageInfoDefragmenter floods the log when compacting the StorageInfo 
> TreeSet. In {{StorageInfoDefragmenter#scanAndCompactStorages}}, it prints a 
> line for every StorageInfo under each DN. With 1k nodes in the cluster and 
> 10 data dirs configured per node, it prints 10k lines every compaction 
> interval (10 mins). Since this bloats the log, we could switch the log 
> level from INFO to DEBUG in {{StorageInfoDefragmenter#scanAndCompactStorages}}.
> {noformat}
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-329bd988-a558-43a6-b31c-9142548b0179 : 
> 0.876264591439
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-b5505847-1389-4a80-b9d8-876172a83897 : 0.933351976137211
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-ca164b2f-0a2c-4b26-8e99-f0ece0909997 : 
> 0.9330040998881849
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-89b912ba-339b-45e3-b981-541b22690ccb : 
> 0.9314626719970249
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-89c0377b-a49c-4288-9304-e104d98de5bd : 
> 0.9309580852251582
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-5ffad4d2-168a-446d-a92e-ef46a82f26f8 : 
> 0.8938870614035088
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-eecbbd34-10f4-4647-8710-0f5963da3aaa : 
> 0.8963103205353998
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-7aafa122-433f-49c8-bf00-11bcdd8ce048 : 
> 0.8950508004926109
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-eb9ba675-9c23-40a1-9241-c314dc0e2867 : 
> 0.8947356866877415
> {noformat}
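The proposed fix amounts to guarding the per-storage lines behind a DEBUG check. A minimal sketch (the `Log` interface here stands in for the real logger used by BlockManager; the message format follows the log excerpt above):

```java
import java.util.Map;

/** Sketch: emit per-storage fill-ratio lines only at DEBUG level. */
class FillRatioLogger {
    /** Illustrative stand-in for the real logging API. */
    interface Log {
        boolean isDebugEnabled();
        void debug(String msg);
    }

    /** Returns how many lines were emitted, for demonstration. */
    static int logFillRatios(Log log, Map<String, Double> ratios) {
        int emitted = 0;
        // Guard so that at INFO level we skip the loop and string building.
        if (log.isDebugEnabled()) {
            for (Map.Entry<String, Double> e : ratios.entrySet()) {
                log.debug("StorageInfo TreeSet fill ratio "
                    + e.getKey() + " : " + e.getValue());
                emitted++;
            }
        }
        return emitted;
    }
}
```

At the default INFO level the 10k lines per interval simply disappear, while operators can still turn on DEBUG to inspect fill ratios.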






[jira] [Updated] (HDFS-13692) StorageInfoDefragmenter floods log when compacting StorageInfo TreeSet

2018-06-21 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13692:
--
Assignee: Bharat Viswanadham
  Status: Patch Available  (was: Open)







[jira] [Updated] (HDFS-13692) StorageInfoDefragmenter floods log when compacting StorageInfo TreeSet

2018-06-21 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13692:
--
Target Version/s: 3.2.0, 3.1.1  (was: 3.2.0)




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13692) StorageInfoDefragmenter floods log when compacting StorageInfo TreeSet

2018-06-21 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDFS-13692:
--
Attachment: HDFS-13692.00.patch

> StorageInfoDefragmenter floods log when compacting StorageInfo TreeSet
> --
>
> Key: HDFS-13692
> URL: https://issues.apache.org/jira/browse/HDFS-13692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-13692.00.patch
>
>
> StorageInfoDefragmenter floods the log when compacting the StorageInfo TreeSet. 
> In {{StorageInfoDefragmenter#scanAndCompactStorages}}, it prints a line for 
> every StorageInfo under each DN. If there are 1k nodes in the cluster, and each 
> node has 10 data dirs configured, it will print 10k lines every compaction 
> interval (10 minutes). The log grows large, so we could switch the log level 
> from INFO to DEBUG in {{StorageInfoDefragmenter#scanAndCompactStorages}}.
> {noformat}
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-329bd988-a558-43a6-b31c-9142548b0179 : 
> 0.876264591439
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-b5505847-1389-4a80-b9d8-876172a83897 : 0.933351976137211
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-ca164b2f-0a2c-4b26-8e99-f0ece0909997 : 
> 0.9330040998881849
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-89b912ba-339b-45e3-b981-541b22690ccb : 
> 0.9314626719970249
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-89c0377b-a49c-4288-9304-e104d98de5bd : 
> 0.9309580852251582
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-5ffad4d2-168a-446d-a92e-ef46a82f26f8 : 
> 0.8938870614035088
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-eecbbd34-10f4-4647-8710-0f5963da3aaa : 
> 0.8963103205353998
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-7aafa122-433f-49c8-bf00-11bcdd8ce048 : 
> 0.8950508004926109
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-eb9ba675-9c23-40a1-9241-c314dc0e2867 : 
> 0.8947356866877415
> {noformat}
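The proposed fix (demoting the per-storage line from INFO to DEBUG) can be sketched as follows. This is an illustrative example using java.util.logging rather than Hadoop's actual logging setup; the class and method names are invented for the sketch, not taken from the patch.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class ScanAndCompactSketch {
  private static final Logger LOG = Logger.getLogger("BlockManager");

  // Builds the same kind of line the defragmenter prints for each storage.
  static String fillRatioLine(String storageId, double ratio) {
    return "StorageInfo TreeSet fill ratio " + storageId + " : " + ratio;
  }

  public static void main(String[] args) {
    // Guarded DEBUG (FINE in j.u.l terms) logging: with the default INFO
    // level the message is never built or emitted, so a 1k-node, 10-dir
    // cluster no longer produces 10k INFO lines per compaction interval.
    if (LOG.isLoggable(Level.FINE)) {
      LOG.fine(fillRatioLine("DS-329bd988-a558-43a6-b31c-9142548b0179",
          0.876264591439));
    }
    System.out.println(fillRatioLine("DS-demo", 0.9));
  }
}
```

The guard matters because the fill-ratio string is built per storage; skipping it entirely at INFO level avoids both the log volume and the string construction.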






[jira] [Commented] (HDDS-176) Add keyCount and container maximum size to ContainerData

2018-06-21 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519661#comment-16519661
 ] 

Bharat Viswanadham commented on HDDS-176:
-

This patch is dependent on HDDS-169.

> Add keyCount and container maximum size to ContainerData
> 
>
> Key: HDDS-176
> URL: https://issues.apache.org/jira/browse/HDDS-176
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-176-HDDS-48.00.patch
>
>
> # ContainerData should hold the container's maximum size, and this should be 
> serialized into the .container file. This is needed because the configured 
> container size can change over time, so old containers will have a different 
> max size than newly created containers.
>  # Also add keyCount, which records the number of keys in the container.
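A hypothetical sketch of the two proposed fields (field and method names assumed for illustration, not taken from the attached patch): each ContainerData instance carries its own maximum size, so containers created before a configuration change keep their original limit, and a keyCount tracks how many keys the container holds.

```java
public class ContainerDataSketch {
  // Fixed at creation time and serialized into the .container file, so a
  // later change to the configured container size does not affect it.
  private final long maxSizeBytes;
  // Number of keys currently stored in this container.
  private long keyCount;

  public ContainerDataSketch(long maxSizeBytes) {
    this.maxSizeBytes = maxSizeBytes;
    this.keyCount = 0;
  }

  public long getMaxSizeBytes() { return maxSizeBytes; }
  public long getKeyCount() { return keyCount; }
  public void incrKeyCount() { keyCount++; }

  public static void main(String[] args) {
    // A container created under an old 5 GB config and one created after
    // the config moved to 10 GB each keep their own maximum.
    ContainerDataSketch old = new ContainerDataSketch(5L * 1024 * 1024 * 1024);
    ContainerDataSketch fresh = new ContainerDataSketch(10L * 1024 * 1024 * 1024);
    fresh.incrKeyCount();
    System.out.println(old.getMaxSizeBytes() != fresh.getMaxSizeBytes()); // prints "true"
  }
}
```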






[jira] [Work started] (HDDS-176) Add keyCount and container maximum size to ContainerData

2018-06-21 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-176 started by Bharat Viswanadham.
---
> Add keyCount and container maximum size to ContainerData
> 
>
> Key: HDDS-176
> URL: https://issues.apache.org/jira/browse/HDDS-176
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-176-HDDS-48.00.patch
>
>
> # ContainerData should hold the container's maximum size, and this should be 
> serialized into the .container file. This is needed because the configured 
> container size can change over time, so old containers will have a different 
> max size than newly created containers.
>  # Also add keyCount, which records the number of keys in the container.






[jira] [Updated] (HDDS-176) Add keyCount and container maximum size to ContainerData

2018-06-21 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-176:

Attachment: HDDS-176-HDDS-48.00.patch

> Add keyCount and container maximum size to ContainerData
> 
>
> Key: HDDS-176
> URL: https://issues.apache.org/jira/browse/HDDS-176
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-176-HDDS-48.00.patch
>
>
> # ContainerData should hold the container's maximum size, and this should be 
> serialized into the .container file. This is needed because the configured 
> container size can change over time, so old containers will have a different 
> max size than newly created containers.
>  # Also add keyCount, which records the number of keys in the container.






[jira] [Commented] (HDFS-13693) Remove unnecessary search in INodeDirectory.addChild during image loading

2018-06-21 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519340#comment-16519340
 ] 

genericqa commented on HDFS-13693:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  2s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m  4s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HDFS-13693 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928608/HDFS-13693-001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c0432cf3baa5 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 43541a1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24482/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24482/testReport/ |
| Max. process+thread count | 3150 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24482/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   

[jira] [Updated] (HDFS-13693) Remove unnecessary search in INodeDirectory.addChild during image loading

2018-06-21 Thread zhouyingchao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouyingchao updated HDFS-13693:

Attachment: HDFS-13693-001.patch
Status: Patch Available  (was: Open)

Ran all HDFS-related unit tests; the patch does not introduce new failures.

> Remove unnecessary search in INodeDirectory.addChild during image loading
> -
>
> Key: HDFS-13693
> URL: https://issues.apache.org/jira/browse/HDFS-13693
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: zhouyingchao
>Priority: Major
> Attachments: HDFS-13693-001.patch
>
>
> In FSImageFormatPBINode.loadINodeDirectorySection, all child INodes are added 
> to their parent INode's map one by one. The adding procedure searches for a 
> position in the parent's map and then inserts the child at that position. 
> However, during image loading, the search is unnecessary, since the insert 
> position is always at the end of the map, given the order in which the INodes 
> are serialized on disk.
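The optimization can be illustrated with a self-contained sketch (plain lists stand in for the INode map here; nothing below is from the actual patch): when children arrive already sorted, as they do during image loading, appending at the tail produces the same ordered collection as search-then-insert, without the per-child binary search.

```java
import java.util.ArrayList;
import java.util.List;

public class AppendVsSearch {
  // General-purpose path: binary search for the insert position, then insert.
  static void searchInsert(List<String> list, String name) {
    int lo = 0, hi = list.size() - 1;
    while (lo <= hi) {
      int mid = (lo + hi) >>> 1;
      if (list.get(mid).compareTo(name) < 0) {
        lo = mid + 1;
      } else {
        hi = mid - 1;
      }
    }
    list.add(lo, name); // O(log n) search plus the insert
  }

  // Returns true when appending sorted input matches search-then-insert.
  static boolean sameResult(String[] sortedNames) {
    List<String> searched = new ArrayList<>();
    List<String> appended = new ArrayList<>();
    for (String n : sortedNames) {
      searchInsert(searched, n);
      appended.add(n); // image loading: the position is always the tail
    }
    return searched.equals(appended);
  }

  public static void main(String[] args) {
    System.out.println(sameResult(new String[] {"a", "b", "c"})); // prints "true"
  }
}
```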






[jira] [Created] (HDFS-13693) Remove unnecessary search in INodeDirectory.addChild during image loading

2018-06-21 Thread zhouyingchao (JIRA)
zhouyingchao created HDFS-13693:
---

 Summary: Remove unnecessary search in INodeDirectory.addChild 
during image loading
 Key: HDFS-13693
 URL: https://issues.apache.org/jira/browse/HDFS-13693
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: zhouyingchao


In FSImageFormatPBINode.loadINodeDirectorySection, all child INodes are added 
to their parent INode's map one by one. The adding procedure searches for a 
position in the parent's map and then inserts the child at that position. 
However, during image loading, the search is unnecessary, since the insert 
position is always at the end of the map, given the order in which the INodes 
are serialized on disk.






[jira] [Assigned] (HDDS-168) Add ScmGroupID to Datanode Version File

2018-06-21 Thread Sandeep Nemuri (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Nemuri reassigned HDDS-168:
---

Assignee: Sandeep Nemuri

> Add ScmGroupID to Datanode Version File
> ---
>
> Key: HDDS-168
> URL: https://issues.apache.org/jira/browse/HDDS-168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hanisha Koneru
>Assignee: Sandeep Nemuri
>Priority: Major
>
> Add the field {{ScmGroupID}} to the Datanode Version file. This field 
> identifies the set of SCMs that this datanode talks to, or takes commands from.
> This value is not the same as the Cluster ID, since a cluster can technically 
> have more than one SCM group.
> Refer to [~anu]'s 
> [comment|https://issues.apache.org/jira/browse/HDDS-156?focusedCommentId=16511903=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16511903]
>  in HDDS-156.
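As a rough illustration of what the change might look like (the key name {{scmGroupID}} and the use of the Java properties format are assumptions for this sketch, not taken from an actual patch), the Version file is a simple key/value file, so the new field would be one extra entry alongside the cluster ID:

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.StringWriter;
import java.util.Properties;

public class VersionFileSketch {
  // Renders a minimal Version-file-style properties text with the new field.
  public static String write(String clusterId, String scmGroupId) {
    Properties p = new Properties();
    p.setProperty("clusterID", clusterId);
    // Distinct from clusterID: a single cluster may host several SCM groups.
    p.setProperty("scmGroupID", scmGroupId);
    StringWriter w = new StringWriter();
    try {
      p.store(w, null);
    } catch (IOException e) {
      throw new RuntimeException(e); // StringWriter never actually throws
    }
    return w.toString();
  }

  public static void main(String[] args) throws IOException {
    String text = write("CID-1234", "SCMG-5678");
    // Reading it back recovers the SCM group this datanode reports to.
    Properties back = new Properties();
    back.load(new StringReader(text));
    System.out.println(back.getProperty("scmGroupID")); // prints "SCMG-5678"
  }
}
```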






[jira] [Updated] (HDFS-13692) StorageInfoDefragmenter floods log when compacting StorageInfo TreeSet

2018-06-21 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13692:
-
Description: 
StorageInfoDefragmenter floods the log when compacting the StorageInfo TreeSet. In 
{{StorageInfoDefragmenter#scanAndCompactStorages}}, it prints a line for every 
StorageInfo under each DN. If there are 1k nodes in the cluster, and each node has 
10 data dirs configured, it will print 10k lines every compaction interval (10 
minutes). The log grows large, so we could switch the log level from INFO to DEBUG 
in {{StorageInfoDefragmenter#scanAndCompactStorages}}.
{noformat}
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-329bd988-a558-43a6-b31c-9142548b0179 : 0.876264591439
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-b5505847-1389-4a80-b9d8-876172a83897 : 0.933351976137211
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-ca164b2f-0a2c-4b26-8e99-f0ece0909997 : 0.9330040998881849
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-89b912ba-339b-45e3-b981-541b22690ccb : 0.9314626719970249
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-89c0377b-a49c-4288-9304-e104d98de5bd : 0.9309580852251582
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-5ffad4d2-168a-446d-a92e-ef46a82f26f8 : 0.8938870614035088
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-eecbbd34-10f4-4647-8710-0f5963da3aaa : 0.8963103205353998
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-7aafa122-433f-49c8-bf00-11bcdd8ce048 : 0.8950508004926109
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-eb9ba675-9c23-40a1-9241-c314dc0e2867 : 0.8947356866877415
{noformat}

  was:
StorageInfoDefragmenter floods the log when compacting StorageInfoTreeset. In 
{{StorageInfoDefragmenter#scanAndCompactStorages}}, it prints a line for every 
StorageInfo under each DN. If there are 1k nodes in the cluster, and each node has 
10 data dirs configured, it will print 10k lines every compaction interval (10 
minutes). The log grows large, so we could switch the log level from INFO to DEBUG 
in {{StorageInfoDefragmenter#scanAndCompactStorages}}.
{noformat}
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-329bd988-a558-43a6-b31c-9142548b0179 : 0.876264591439
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-b5505847-1389-4a80-b9d8-876172a83897 : 0.933351976137211
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-ca164b2f-0a2c-4b26-8e99-f0ece0909997 : 0.9330040998881849
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-89b912ba-339b-45e3-b981-541b22690ccb : 0.9314626719970249
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-89c0377b-a49c-4288-9304-e104d98de5bd : 0.9309580852251582
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-5ffad4d2-168a-446d-a92e-ef46a82f26f8 : 0.8938870614035088
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-eecbbd34-10f4-4647-8710-0f5963da3aaa : 0.8963103205353998
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-7aafa122-433f-49c8-bf00-11bcdd8ce048 : 0.8950508004926109
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-eb9ba675-9c23-40a1-9241-c314dc0e2867 : 0.8947356866877415
{noformat}


> StorageInfoDefragmenter floods log when compacting StorageInfo TreeSet
> --
>
> Key: HDFS-13692
>  

[jira] [Updated] (HDFS-13692) StorageInfoDefragmenter floods log when compacting StorageInfo TreeSet

2018-06-21 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13692:
-
Summary: StorageInfoDefragmenter floods log when compacting StorageInfo 
TreeSet  (was: StorageInfoDefragmenter floods log when compacting StorageInfo 
Treeset)

> StorageInfoDefragmenter floods log when compacting StorageInfo TreeSet
> --
>
> Key: HDFS-13692
> URL: https://issues.apache.org/jira/browse/HDFS-13692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Priority: Minor
>
> StorageInfoDefragmenter floods the log when compacting StorageInfoTreeset. In 
> {{StorageInfoDefragmenter#scanAndCompactStorages}}, it prints a line for every 
> StorageInfo under each DN. If there are 1k nodes in the cluster, and each node 
> has 10 data dirs configured, it will print 10k lines every compaction interval 
> (10 minutes). The log grows large, so we could switch the log level from INFO 
> to DEBUG in {{StorageInfoDefragmenter#scanAndCompactStorages}}.
> {noformat}
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-329bd988-a558-43a6-b31c-9142548b0179 : 
> 0.876264591439
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-b5505847-1389-4a80-b9d8-876172a83897 : 0.933351976137211
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-ca164b2f-0a2c-4b26-8e99-f0ece0909997 : 
> 0.9330040998881849
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-89b912ba-339b-45e3-b981-541b22690ccb : 
> 0.9314626719970249
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-89c0377b-a49c-4288-9304-e104d98de5bd : 
> 0.9309580852251582
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-5ffad4d2-168a-446d-a92e-ef46a82f26f8 : 
> 0.8938870614035088
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-eecbbd34-10f4-4647-8710-0f5963da3aaa : 
> 0.8963103205353998
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-7aafa122-433f-49c8-bf00-11bcdd8ce048 : 
> 0.8950508004926109
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-eb9ba675-9c23-40a1-9241-c314dc0e2867 : 
> 0.8947356866877415
> {noformat}






[jira] [Updated] (HDFS-13692) StorageInfoDefragmenter floods log when compacting StorageInfo Treeset

2018-06-21 Thread Yiqun Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-13692:
-
Summary: StorageInfoDefragmenter floods log when compacting StorageInfo 
Treeset  (was: StorageInfoDefragmenter floods log when compacting 
StorageInfoTreeset)

> StorageInfoDefragmenter floods log when compacting StorageInfo Treeset
> --
>
> Key: HDFS-13692
> URL: https://issues.apache.org/jira/browse/HDFS-13692
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Yiqun Lin
>Priority: Minor
>
> StorageInfoDefragmenter floods the log when compacting StorageInfoTreeset. In 
> {{StorageInfoDefragmenter#scanAndCompactStorages}}, it prints a line for every 
> StorageInfo under each DN. If there are 1k nodes in the cluster, and each node 
> has 10 data dirs configured, it will print 10k lines every compaction interval 
> (10 minutes). The log grows large, so we could switch the log level from INFO 
> to DEBUG in {{StorageInfoDefragmenter#scanAndCompactStorages}}.
> {noformat}
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-329bd988-a558-43a6-b31c-9142548b0179 : 
> 0.876264591439
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-b5505847-1389-4a80-b9d8-876172a83897 : 0.933351976137211
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-ca164b2f-0a2c-4b26-8e99-f0ece0909997 : 
> 0.9330040998881849
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-89b912ba-339b-45e3-b981-541b22690ccb : 
> 0.9314626719970249
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-89c0377b-a49c-4288-9304-e104d98de5bd : 
> 0.9309580852251582
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-5ffad4d2-168a-446d-a92e-ef46a82f26f8 : 
> 0.8938870614035088
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-eecbbd34-10f4-4647-8710-0f5963da3aaa : 
> 0.8963103205353998
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-7aafa122-433f-49c8-bf00-11bcdd8ce048 : 
> 0.8950508004926109
> 2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo 
> TreeSet fill ratio DS-eb9ba675-9c23-40a1-9241-c314dc0e2867 : 
> 0.8947356866877415
> {noformat}






[jira] [Created] (HDFS-13692) StorageInfoDefragmenter floods log when compacting StorageInfoTreeset

2018-06-21 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-13692:


 Summary: StorageInfoDefragmenter floods log when compacting 
StorageInfoTreeset
 Key: HDFS-13692
 URL: https://issues.apache.org/jira/browse/HDFS-13692
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.1.0
Reporter: Yiqun Lin


StorageInfoDefragmenter floods the log when compacting StorageInfoTreeset. In 
{{StorageInfoDefragmenter#scanAndCompactStorages}}, it prints a line for every 
StorageInfo under each DN. If there are 1k nodes in the cluster, and each node has 
10 data dirs configured, it will print 10k lines every compaction interval (10 
minutes). The log grows large, so we could switch the log level from INFO to DEBUG 
in {{StorageInfoDefragmenter#scanAndCompactStorages}}.
{noformat}
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-329bd988-a558-43a6-b31c-9142548b0179 : 0.876264591439
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-b5505847-1389-4a80-b9d8-876172a83897 : 0.933351976137211
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-ca164b2f-0a2c-4b26-8e99-f0ece0909997 : 0.9330040998881849
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-89b912ba-339b-45e3-b981-541b22690ccb : 0.9314626719970249
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-89c0377b-a49c-4288-9304-e104d98de5bd : 0.9309580852251582
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-5ffad4d2-168a-446d-a92e-ef46a82f26f8 : 0.8938870614035088
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-eecbbd34-10f4-4647-8710-0f5963da3aaa : 0.8963103205353998
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-7aafa122-433f-49c8-bf00-11bcdd8ce048 : 0.8950508004926109
2018-06-19 10:18:48,849 INFO [StorageInfoMonitor] 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: StorageInfo TreeSet 
fill ratio DS-eb9ba675-9c23-40a1-9241-c314dc0e2867 : 0.8947356866877415
{noformat}


