[jira] [Work logged] (HDFS-15834) Remove the usage of org.apache.log4j.Level

2021-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15834?focusedWorklogId=551704&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-551704
 ]

ASF GitHub Bot logged work on HDFS-15834:
-

Author: ASF GitHub Bot
Created on: 12/Feb/21 07:35
Start Date: 12/Feb/21 07:35
Worklog Time Spent: 10m 
  Work Description: aajisaka opened a new pull request #2696:
URL: https://github.com/apache/hadoop/pull/2696


   JIRA: https://issues.apache.org/jira/browse/HDFS-15834
   
   There are still some usages of org.apache.log4j.Level. They cannot simply be 
removed because they are used with the Log4j 1 Appender API and the like. I 
think they can be removed once we upgrade to Log4j 2.
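
   For illustration, a minimal sketch of the kind of replacement this sub-task 
targets, assuming the org.slf4j.event.Level overload of 
GenericTestUtils.setLogLevel from hadoop-common's test utilities:

{code:java}
import org.apache.hadoop.test.GenericTestUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.event.Level;

public class Slf4jLevelExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(Slf4jLevelExample.class);

  public static void main(String[] args) {
    // Switch the logger to DEBUG through the SLF4J level enum rather than
    // org.apache.log4j.Level, so the caller no longer depends on Log4j 1.
    GenericTestUtils.setLogLevel(LOG, Level.DEBUG);
    LOG.debug("debug logging is now enabled");
  }
}
{code}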



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 551704)
Remaining Estimate: 0h
Time Spent: 10m

> Remove the usage of org.apache.log4j.Level
> --
>
> Key: HDFS-15834
> URL: https://issues.apache.org/jira/browse/HDFS-15834
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Replace org.apache.log4j.Level with org.slf4j.event.Level in hadoop-hdfs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15834) Remove the usage of org.apache.log4j.Level

2021-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15834:
--
Labels: pull-request-available  (was: )

> Remove the usage of org.apache.log4j.Level
> --
>
> Key: HDFS-15834
> URL: https://issues.apache.org/jira/browse/HDFS-15834
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Replace org.apache.log4j.Level with org.slf4j.event.Level in hadoop-hdfs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15834) Remove the usage of org.apache.log4j.Level

2021-02-11 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HDFS-15834:


 Summary: Remove the usage of org.apache.log4j.Level
 Key: HDFS-15834
 URL: https://issues.apache.org/jira/browse/HDFS-15834
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Akira Ajisaka
Assignee: Akira Ajisaka


Replace org.apache.log4j.Level with org.slf4j.event.Level in hadoop-hdfs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15808) Add metrics for FSNamesystem read/write lock hold long time

2021-02-11 Thread tomscut (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tomscut updated HDFS-15808:
---
Summary: Add metrics for FSNamesystem read/write lock hold long time  (was: 
Add metrics for FSNamesystem read/write lock Add metrics for FSNamesystem 
read/write lock hold long time)

> Add metrics for FSNamesystem read/write lock hold long time
> ---
>
> Key: HDFS-15808
> URL: https://issues.apache.org/jira/browse/HDFS-15808
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: hdfs
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: hdfs, lock, metrics, pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> To monitor how often read/write locks exceed thresholds, we can add two 
> metrics (ReadLockWarning/WriteLockWarning), which are exposed in JMX.
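
For illustration, a minimal sketch of the proposed idea, with a hypothetical 
threshold and a plain AtomicLong standing in for the real metrics plumbing:

{code:java}
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockWarningSketch {
  // Hypothetical threshold; the real value would come from configuration.
  private static final long WRITE_LOCK_WARN_MS = 5000;

  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  // In the real patch this counter would be exposed through JMX.
  private final AtomicLong writeLockWarning = new AtomicLong();

  public void doWithWriteLock(Runnable op) {
    lock.writeLock().lock();
    long start = System.currentTimeMillis();
    try {
      op.run();
    } finally {
      long heldMs = System.currentTimeMillis() - start;
      lock.writeLock().unlock();
      // Count holds that exceeded the threshold.
      if (heldMs > WRITE_LOCK_WARN_MS) {
        writeLockWarning.incrementAndGet();
      }
    }
  }
}
{code}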



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15808) Add metrics for FSNamesystem read/write lock Add metrics for FSNamesystem read/write lock hold long time

2021-02-11 Thread tomscut (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tomscut updated HDFS-15808:
---
Summary: Add metrics for FSNamesystem read/write lock Add metrics for 
FSNamesystem read/write lock hold long time  (was: Add metrics for FSNamesystem 
read/write lock warnings)

> Add metrics for FSNamesystem read/write lock Add metrics for FSNamesystem 
> read/write lock hold long time
> 
>
> Key: HDFS-15808
> URL: https://issues.apache.org/jira/browse/HDFS-15808
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: hdfs
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: hdfs, lock, metrics, pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> To monitor how often read/write locks exceed thresholds, we can add two 
> metrics (ReadLockWarning/WriteLockWarning), which are exposed in JMX.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15814) Make some parameters configurable for DataNodeDiskMetrics

2021-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15814?focusedWorklogId=551678&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-551678
 ]

ASF GitHub Bot logged work on HDFS-15814:
-

Author: ASF GitHub Bot
Created on: 12/Feb/21 05:19
Start Date: 12/Feb/21 05:19
Worklog Time Spent: 10m 
  Work Description: tomscut commented on a change in pull request #2676:
URL: https://github.com/apache/hadoop/pull/2676#discussion_r574257636



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeDiskMetrics.java
##
@@ -61,11 +61,26 @@
   // code, status should not be overridden by daemon thread.
   private boolean overrideStatus = true;
 
-  public DataNodeDiskMetrics(DataNode dn, long diskOutlierDetectionIntervalMs) 
{
+  /**
+   * Minimum number of disks to run outlier detection.
+   */
+  private final long minOutlierDetectionDisks;
+  /**
+   * Threshold in milliseconds below which a disk is definitely not slow.
+   */
+  private final long lowThresholdMs;
+
+  public DataNodeDiskMetrics(DataNode dn, long diskOutlierDetectionIntervalMs, 
Configuration conf) {
 this.dn = dn;
 this.detectionInterval = diskOutlierDetectionIntervalMs;
-slowDiskDetector = new OutlierDetector(MIN_OUTLIER_DETECTION_DISKS,
-SLOW_DISK_LOW_THRESHOLD_MS);

Review comment:
   Yeah, do we need to change anything else?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 551678)
Time Spent: 50m  (was: 40m)

> Make some parameters configurable for DataNodeDiskMetrics
> -
>
> Key: HDFS-15814
> URL: https://issues.apache.org/jira/browse/HDFS-15814
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: hdfs
>Reporter: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> For ease of use, especially for small clusters, we can make some 
> parameters (MIN_OUTLIER_DETECTION_DISKS, SLOW_DISK_LOW_THRESHOLD_MS) 
> configurable.
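
For illustration, a minimal sketch of reading such values from Configuration; 
the key names and defaults below are hypothetical, the real ones are defined 
by the patch under review:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class DiskMetricsConfigSketch {
  // Hypothetical keys and defaults, for illustration only.
  static final String MIN_DISKS_KEY =
      "dfs.datanode.min.outlier.detection.disks";
  static final long MIN_DISKS_DEFAULT = 5;
  static final String LOW_THRESHOLD_KEY =
      "dfs.datanode.slowdisk.low.threshold.ms";
  static final long LOW_THRESHOLD_DEFAULT = 20;

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Fall back to the old hard-coded constants when the keys are unset.
    long minOutlierDetectionDisks =
        conf.getLong(MIN_DISKS_KEY, MIN_DISKS_DEFAULT);
    long lowThresholdMs =
        conf.getLong(LOW_THRESHOLD_KEY, LOW_THRESHOLD_DEFAULT);
    System.out.println(minOutlierDetectionDisks + " disks, "
        + lowThresholdMs + " ms");
  }
}
{code}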



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15814) Make some parameters configurable for DataNodeDiskMetrics

2021-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15814?focusedWorklogId=551677&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-551677
 ]

ASF GitHub Bot logged work on HDFS-15814:
-

Author: ASF GitHub Bot
Created on: 12/Feb/21 05:18
Start Date: 12/Feb/21 05:18
Worklog Time Spent: 10m 
  Work Description: tomscut commented on a change in pull request #2676:
URL: https://github.com/apache/hadoop/pull/2676#discussion_r574994811



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeDiskMetrics.java
##
@@ -61,11 +61,26 @@
   // code, status should not be overridden by daemon thread.
   private boolean overrideStatus = true;
 
-  public DataNodeDiskMetrics(DataNode dn, long diskOutlierDetectionIntervalMs) 
{
+  /**
+   * Minimum number of disks to run outlier detection.
+   */
+  private final long minOutlierDetectionDisks;
+  /**
+   * Threshold in milliseconds below which a disk is definitely not slow.
+   */
+  private final long lowThresholdMs;
+
+  public DataNodeDiskMetrics(DataNode dn, long diskOutlierDetectionIntervalMs, 
Configuration conf) {
 this.dn = dn;
 this.detectionInterval = diskOutlierDetectionIntervalMs;
-slowDiskDetector = new OutlierDetector(MIN_OUTLIER_DETECTION_DISKS,
-SLOW_DISK_LOW_THRESHOLD_MS);

Review comment:
   Hi Arpit, I've removed these old constants. Is there anything else I 
need to change?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 551677)
Time Spent: 40m  (was: 0.5h)

> Make some parameters configurable for DataNodeDiskMetrics
> -
>
> Key: HDFS-15814
> URL: https://issues.apache.org/jira/browse/HDFS-15814
> Project: Hadoop HDFS
>  Issue Type: Wish
>  Components: hdfs
>Reporter: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> For ease of use, especially for small clusters, we can make some 
> parameters (MIN_OUTLIER_DETECTION_DISKS, SLOW_DISK_LOW_THRESHOLD_MS) 
> configurable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15821) Add metrics for in-service datanodes

2021-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15821?focusedWorklogId=551482&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-551482
 ]

ASF GitHub Bot logged work on HDFS-15821:
-

Author: ASF GitHub Bot
Created on: 11/Feb/21 19:49
Start Date: 11/Feb/21 19:49
Worklog Time Spent: 10m 
  Work Description: jbrennan333 commented on pull request #2690:
URL: https://github.com/apache/hadoop/pull/2690#issuecomment-46769


   @zehaoc2 the error in the unit test build does look like it is related.
   {noformat}
   [ERROR] 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2690/src/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java:[742,2]
 error: method does not override or implement a method from a supertype
   {noformat}
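   
   For illustration, a common cause of this javac message, sketched with 
hypothetical names: an @Override method exists in the implementation, but the 
corresponding declaration is missing from the MXBean interface. Declaring the 
new getter on the interface as well makes the annotation valid:

{code:java}
// Simplified, hypothetical mirror of the failing pattern.
interface NamenodeMXBeanSketch {
  int getNumLiveDataNodes();
  // Without this declaration, the @Override below fails to compile with
  // "method does not override or implement a method from a supertype".
  int getNumInServiceDataNodes();
}

class NamenodeBeanMetricsSketch implements NamenodeMXBeanSketch {
  @Override
  public int getNumLiveDataNodes() { return 42; }

  @Override // valid only because the interface declares the method
  public int getNumInServiceDataNodes() { return 40; }
}
{code}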
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 551482)
Time Spent: 50m  (was: 40m)

> Add metrics for in-service datanodes
> 
>
> Key: HDFS-15821
> URL: https://issues.apache.org/jira/browse/HDFS-15821
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Zehao Chen
>Assignee: Zehao Chen
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> We currently have metrics for live datanodes, but some of the datanodes may be 
> in a decommissioning or maintenance state. Adding this metric allows us 
> to know how many nodes are currently in service, where NumInServiceDatanodes 
> = NumLiveDataNodes - NumDecomLiveDataNodes - NumInMaintenanceLiveDataNodes.
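
For illustration, the derivation as a tiny self-contained sketch (the class is 
hypothetical; the metric names come from the description above):

{code:java}
final class InServiceDatanodesSketch {
  // NumInServiceDatanodes = NumLiveDataNodes
  //     - NumDecomLiveDataNodes - NumInMaintenanceLiveDataNodes
  static long numInServiceDatanodes(long live, long decomLive,
      long inMaintenanceLive) {
    return live - decomLive - inMaintenanceLive;
  }

  public static void main(String[] args) {
    // 100 live nodes, 3 decommissioning, 2 in maintenance => 95 in service.
    System.out.println(numInServiceDatanodes(100, 3, 2));
  }
}
{code}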



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15821) Add metrics for in-service datanodes

2021-02-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15821?focusedWorklogId=551483&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-551483
 ]

ASF GitHub Bot logged work on HDFS-15821:
-

Author: ASF GitHub Bot
Created on: 11/Feb/21 19:49
Start Date: 11/Feb/21 19:49
Worklog Time Spent: 10m 
  Work Description: jbrennan333 edited a comment on pull request #2690:
URL: https://github.com/apache/hadoop/pull/2690#issuecomment-46769


   @zehaoc2 the error in the unit test build does look like it is related.
   
   `[ERROR] 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-2690/src/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/NamenodeBeanMetrics.java:[742,2]
 error: method does not override or implement a method from a supertype
   `
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 551483)
Time Spent: 1h  (was: 50m)

> Add metrics for in-service datanodes
> 
>
> Key: HDFS-15821
> URL: https://issues.apache.org/jira/browse/HDFS-15821
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Zehao Chen
>Assignee: Zehao Chen
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We currently have metrics for live datanodes, but some of the datanodes may be 
> in a decommissioning or maintenance state. Adding this metric allows us 
> to know how many nodes are currently in service, where NumInServiceDatanodes 
> = NumLiveDataNodes - NumDecomLiveDataNodes - NumInMaintenanceLiveDataNodes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15222) HDFS: Output message of "hdfs fsck -list-corruptfileblocks" command is not correct

2021-02-11 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283251#comment-17283251
 ] 

Brahma Reddy Battula edited comment on HDFS-15222 at 2/11/21, 6:00 PM:
---

[~Sushma_28] thanks for reporting. It makes sense to me. The patch LGTM; let 
jenkins run on the latest code. This might go only into trunk, as it changes 
the output of the command and some test scripts might fail if they validate 
the output.


was (Author: brahmareddy):
[~Sushma_28] thanks for reporting.. It's make sense me.. The patch LGTM, let 
jenkins run on latest code. this might go only in trunk, as this change the 
output of the command some test scripts might file if they validate the output.

> HDFS: Output message of "hdfs fsck -list-corruptfileblocks" command is not 
> correct
> ---
>
> Key: HDFS-15222
> URL: https://issues.apache.org/jira/browse/HDFS-15222
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, tools
>Affects Versions: 3.1.1
> Environment: 3 node HA cluster
>Reporter: Souryakanta Dwivedy
>Assignee: Ravuri Sushma sree
>Priority: Minor
> Attachments: HDFS-15222.001.patch, HDFS-15222.002.patch, output1.PNG, 
> output2.PNG
>
>
> Output message of the "hdfs fsck -list-corruptfileblocks" command is not correct
>  
> Steps:
>  * Create a directory and put files
>  * Corrupt the file blocks
>  * Check the corrupted file blocks with the "hdfs fsck 
> -list-corruptfileblocks" command
> The output begins with the message "The list of corrupt files under path 
> '/path' are:", which is wrong.
> At the end of the output, the wrong message "The filesystem under path 
> '/path' has  CORRUPT files" is also displayed.
>  
> Actual output: "The list of corrupt files under path '/path' are:"
>  "The filesystem under path '/path' has  CORRUPT files"
> Expected output: "The list of corrupted file blocks under path '/path' are:"
>  "The filesystem under path '/path' has  CORRUPT file blocks"
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15222) HDFS: Output message of "hdfs fsck -list-corruptfileblocks" command is not correct

2021-02-11 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283251#comment-17283251
 ] 

Brahma Reddy Battula commented on HDFS-15222:
-

[~Sushma_28] thanks for reporting. It makes sense to me. The patch LGTM; let 
jenkins run on the latest code. This might go only into trunk, as it changes 
the output of the command and some test scripts might fail if they validate 
the output.

> HDFS: Output message of "hdfs fsck -list-corruptfileblocks" command is not 
> correct
> ---
>
> Key: HDFS-15222
> URL: https://issues.apache.org/jira/browse/HDFS-15222
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, tools
>Affects Versions: 3.1.1
> Environment: 3 node HA cluster
>Reporter: Souryakanta Dwivedy
>Assignee: Ravuri Sushma sree
>Priority: Minor
> Attachments: HDFS-15222.001.patch, HDFS-15222.002.patch, output1.PNG, 
> output2.PNG
>
>
> Output message of the "hdfs fsck -list-corruptfileblocks" command is not correct
>  
> Steps:
>  * Create a directory and put files
>  * Corrupt the file blocks
>  * Check the corrupted file blocks with the "hdfs fsck 
> -list-corruptfileblocks" command
> The output begins with the message "The list of corrupt files under path 
> '/path' are:", which is wrong.
> At the end of the output, the wrong message "The filesystem under path 
> '/path' has  CORRUPT files" is also displayed.
>  
> Actual output: "The list of corrupt files under path '/path' are:"
>  "The filesystem under path '/path' has  CORRUPT files"
> Expected output: "The list of corrupted file blocks under path '/path' are:"
>  "The filesystem under path '/path' has  CORRUPT file blocks"
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15494) TestReplicaCachingGetSpaceUsed #testReplicaCachingGetSpaceUsedByRBWReplica Fails on Windows

2021-02-11 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283240#comment-17283240
 ] 

Brahma Reddy Battula commented on HDFS-15494:
-

LGTM. Let's run Jenkins on the latest code.

> TestReplicaCachingGetSpaceUsed #testReplicaCachingGetSpaceUsedByRBWReplica 
> Fails on Windows
> ---
>
> Key: HDFS-15494
> URL: https://issues.apache.org/jira/browse/HDFS-15494
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ravuri Sushma sree
>Assignee: Ravuri Sushma sree
>Priority: Major
> Attachments: HDFS-15494.001.patch
>
>
> TestReplicaCachingGetSpaceUsed#testReplicaCachingGetSpaceUsedByRBWReplica 
> fails on Windows because renaming an RBW replica to Finalized is not 
> supported on Windows.
> This test should be skipped on Windows.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15735) NameNode memory Leak on frequent execution of fsck

2021-02-11 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283235#comment-17283235
 ] 

Brahma Reddy Battula commented on HDFS-15735:
-

{quote} Tracer is a {{private}} variable,

Not used anywhere,

Tracer is subject to removal due to CVE(IIRC), guess HADOOP-17387 and others, 
one recently mentioned too.
{quote}
 Looks like you did not get what we mean; we were talking about this config 
*"namenode.fsck.htrace."*.
{quote}Harmless things are not always correct,

closing tracer in fsck() may impact if someone is using tracer post it(if so).

Closing in the last line of fsck may not be this issue what you are fixing. the 
moment you come out from the method control, the tracer would be subject to GC? 
closing it won't help, it will also make it subject to GC only.
{quote}
How would it impact this?
{quote}Would request consider the other options as well.
{quote}
Let's see if anybody else has an objection to going with this.
{quote}On this note, I take my vote back. 
{quote}
thanks.

 

> NameNode memory Leak on frequent execution of fsck  
> 
>
> Key: HDFS-15735
> URL: https://issues.apache.org/jira/browse/HDFS-15735
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ravuri Sushma sree
>Assignee: Ravuri Sushma sree
>Priority: Major
> Attachments: HDFS-15735.001.patch
>
>
> The memory of the cluster NameNode continues to grow, and the full GC 
> eventually leads to failure of the active and standby NameNodes.
> Htrace is used to track the processing time of fsck.
> Checking the code, we found that the tracer object in NamenodeFsck.java is 
> only created and never closed; because of this, the memory footprint 
> continues to grow.
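
For illustration, a minimal sketch of the fix being discussed, assuming the 
HTrace 4 API Hadoop used at the time (the method shape is hypothetical; only 
the create/close pairing matters):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.tracing.TraceUtils;
import org.apache.htrace.core.Tracer;

public class FsckTracerSketch {
  void fsck(Configuration conf) {
    // Build the per-invocation tracer, as NamenodeFsck does.
    Tracer tracer = new Tracer.Builder("NamenodeFsck").
        conf(TraceUtils.wrapHadoopConf("namenode.fsck.htrace.", conf)).
        build();
    try {
      // ... run the actual file system check ...
    } finally {
      // Close the tracer so repeated fsck calls do not accumulate
      // unreleased tracer state on the NameNode.
      tracer.close();
    }
  }
}
{code}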



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15812) after deleting data of hbase table hdfs size is not decreasing

2021-02-11 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283216#comment-17283216
 ] 

Brahma Reddy Battula commented on HDFS-15812:
-

Have a look at the *namenode audit logs* after you delete the table; they can 
tell whether the requests reached HDFS or not. 

 It looks like you are using *"hdp 3.1.4.0-315"*, which might not be completely 
*Apache Hadoop*. So IMO, as it's vendor-specific, you can also ask on the 
vendor's forum.

> after deleting data of hbase table hdfs size is not decreasing
> --
>
> Key: HDFS-15812
> URL: https://issues.apache.org/jira/browse/HDFS-15812
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.0.2-alpha
> Environment: HDP 3.1.4.0-315
> Hbase 2.0.2.3.1.4.0-315
>Reporter: Satya Gaurav
>Priority: Major
>
> I am deleting data from an hbase table; it is deleted from the hbase table, but 
> the size of the hdfs directory is not reducing. Even after I ran a major 
> compaction, the hdfs size didn't reduce. Any solution for this 
> issue?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15787) Remove unnecessary Lease Renew in FSNamesystem#internalReleaseLease

2021-02-11 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283186#comment-17283186
 ] 

Ayush Saxena commented on HDFS-15787:
-

Thanx [~leosun08] for the patch, the changes LGTM, but I am not sure why this 
was done like this.
[~shv] [~lukmajercak] you folks were involved in HDFS-11576; it would be great 
if you could spare some time to confirm this was a miss there and not 
intentional, which is all we can think of as of now.
Thanx

> Remove unnecessary Lease Renew  in FSNamesystem#internalReleaseLease
> 
>
> Key: HDFS-15787
> URL: https://issues.apache.org/jira/browse/HDFS-15787
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-15787.001.patch, HDFS-15787.002.patch
>
>
> The method FSNamesystem#internalReleaseLease() is as follows:
>  
> {code:java}
> boolean internalReleaseLease(Lease lease, String src, INodesInPath iip,
> String recoveryLeaseHolder) throws IOException {
>   ...
> // Start recovery of the last block for this file
> // Only do so if there is no ongoing recovery for this block,
> // or the previous recovery for this block timed out.
> if (blockManager.addBlockRecoveryAttempt(lastBlock)) {
>   long blockRecoveryId = nextGenerationStamp(
>   blockManager.isLegacyBlock(lastBlock));
>   if(copyOnTruncate) {
> lastBlock.setGenerationStamp(blockRecoveryId);
>   } else if(truncateRecovery) {
> recoveryBlock.setGenerationStamp(blockRecoveryId);
>   }
>   uc.initializeBlockRecovery(lastBlock, blockRecoveryId, true);
>   // Cannot close file right now, since the last block requires recovery.
>   // This may potentially cause infinite loop in lease recovery
>   // if there are no valid replicas on data-nodes.
>   NameNode.stateChangeLog.warn(
>   "DIR* NameSystem.internalReleaseLease: " +
>   "File " + src + " has not been closed." +
>   " Lease recovery is in progress. " +
>   "RecoveryId = " + blockRecoveryId + " for block " + lastBlock);
> }
> lease = reassignLease(lease, src, recoveryLeaseHolder, pendingFile);
> leaseManager.renewLease(lease);
> break;
>   }
>   return false;
> }
> {code}
> LeaseManager#renewLease is already called via 
> FSNamesystem#reassignLease => FSNamesystem#reassignLeaseInternal,
> so there is no need to call LeaseManager#renewLease again after 
> FSNamesystem#reassignLease.
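
For illustration, a hedged sketch of the call chain the description points at 
(method bodies reduced to their shape; the names come from the description):

{code:java}
class LeaseRenewChainSketch {
  boolean internalReleaseLease() {
    reassignLease();  // already renews the lease internally
    renewLease();     // the redundant second renewal the patch removes
    return false;
  }

  void reassignLease() {
    reassignLeaseInternal();
  }

  void reassignLeaseInternal() {
    renewLease();     // renewal happens here
  }

  void renewLease() {
    // updates the lease's last-renewal timestamp
  }
}
{code}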



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15812) after deleting data of hbase table hdfs size is not decreasing

2021-02-11 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17283149#comment-17283149
 ] 

Surendra Singh Lilhore commented on HDFS-15812:
---

[~satycse06], can you please check the namenode log to see what happened to the 
hbase-related files after the table was deleted?

> after deleting data of hbase table hdfs size is not decreasing
> --
>
> Key: HDFS-15812
> URL: https://issues.apache.org/jira/browse/HDFS-15812
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.0.2-alpha
> Environment: HDP 3.1.4.0-315
> Hbase 2.0.2.3.1.4.0-315
>Reporter: Satya Gaurav
>Priority: Major
>
> I am deleting data from an hbase table; it is deleted from the hbase table, but 
> the size of the hdfs directory is not reducing. Even after I ran a major 
> compaction, the hdfs size didn't reduce. Any solution for this 
> issue?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15812) after deleting data of hbase table hdfs size is not decreasing

2021-02-11 Thread Satya Gaurav (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17282908#comment-17282908
 ] 

Satya Gaurav commented on HDFS-15812:
-

[~anoop.hbase]

No, I didn't take any snapshots or backup of this deleted table.

> after deleting data of hbase table hdfs size is not decreasing
> --
>
> Key: HDFS-15812
> URL: https://issues.apache.org/jira/browse/HDFS-15812
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.0.2-alpha
> Environment: HDP 3.1.4.0-315
> Hbase 2.0.2.3.1.4.0-315
>Reporter: Satya Gaurav
>Priority: Major
>
> I am deleting data from an hbase table; it is deleted from the hbase table, but 
> the size of the hdfs directory is not reducing. Even after I ran a major 
> compaction, the hdfs size didn't reduce. Any solution for this 
> issue?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org