[jira] [Commented] (HDFS-17008) Fix RBF JDK 11 javadoc warnings

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17722006#comment-17722006
 ] 

ASF GitHub Bot commented on HDFS-17008:
---

hadoop-yetus commented on PR #5648:
URL: https://github.com/apache/hadoop/pull/5648#issuecomment-1545090106

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  
hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 generated 0 new + 0 
unchanged - 73 fixed = 0 total (was 73)  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  22m 12s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 119m  7s |  |  |
   
   
   | Subsystem | Report/Notes |
   |------------:|:-------------|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5648/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5648 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux f569c25e8c6a 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 77fae62ae1d7855581fbd6291931bb9cf8772a3f |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5648/2/testReport/ |
   | Max. process+thread count | 2453 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 

[jira] [Commented] (HDFS-17007) TestPendingReconstruction.testProcessPendingReconstructions verify HDFS-11960 test case is wrong

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721997#comment-17721997
 ] 

ASF GitHub Bot commented on HDFS-17007:
---

LiuGuH commented on PR #5642:
URL: https://github.com/apache/hadoop/pull/5642#issuecomment-1545058013

   > [HDFS-15086](https://issues.apache.org/jira/browse/HDFS-15086) changed 
storedBlock to blockInfo without any specific reason.
   > 
   > > (1) It does not stop PendingReconstructionMonitor. The block id will go 
into the timeouts queue because the timeout duration is 3s.
   > 
   > I don't catch this, can you elaborate: what will fail? Does putting a 
sleep of more than 3 seconds anywhere in the test lead to failures if this 
isn't there? If yes, can you tell me where to put that, so that I can try it 
locally.
   
      GenericTestUtils.waitFor(() -> pendingReconstruction.size() == 0, 500,
          10000);
      // The pending queue should be empty.
      assertEquals("Size of pendingReconstructions ", 0,
          pendingReconstruction.size());
    } finally {
      if (cluster != null) {
        cluster.shutdown();
      }
    }
   
   GenericTestUtils.waitFor() will wait for at most 10s. The pending 
reconstruction entries will time out after 3s and then be moved into the 
timeout list. 
   PendingReconstructionMonitor is still not stopped even if I change the 
GenericTestUtils.waitFor timeout to 1s.
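   For reference, GenericTestUtils.waitFor(check, intervalMillis, 
timeoutMillis) polls `check` every `intervalMillis` for up to `timeoutMillis`. 
A minimal hedged sketch of the 1s variant mentioned above (the surrounding 
test setup is assumed):
   
   ```java
   // Sketch only: poll every 100 ms for at most 1000 ms. This finishes before
   // the 3s pending-reconstruction timeout can move blocks into the timed-out
   // list, unlike the original 10s wait.
   GenericTestUtils.waitFor(() -> pendingReconstruction.size() == 0, 100, 1000);
   ```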
   




> TestPendingReconstruction.testProcessPendingReconstructions verify HDFS-11960 
> test case is wrong
> 
>
> Key: HDFS-17007
> URL: https://issues.apache.org/jira/browse/HDFS-17007
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: liuguanghua
>Priority: Minor
>  Labels: pull-request-available
>
> The TestPendingReconstruction.testProcessPendingReconstructions() 
> verification of HDFS-11960 is wrong.
> (1) It does not stop PendingReconstructionMonitor. The block id will go into 
> the timeouts queue because the timeout duration is 3s.
> (2) The test block id should be blk_1_1 with a different genstamp.
> (3) blk_1_1 should be tested with the same DatanodeDescriptor.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17007) TestPendingReconstruction.testProcessPendingReconstructions verify HDFS-11960 test case is wrong

2023-05-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17007:
--
Labels: pull-request-available  (was: )

> TestPendingReconstruction.testProcessPendingReconstructions verify HDFS-11960 
> test case is wrong
> 
>
> Key: HDFS-17007
> URL: https://issues.apache.org/jira/browse/HDFS-17007
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: liuguanghua
>Priority: Minor
>  Labels: pull-request-available
>
> The TestPendingReconstruction.testProcessPendingReconstructions() 
> verification of HDFS-11960 is wrong.
> (1) It does not stop PendingReconstructionMonitor. The block id will go into 
> the timeouts queue because the timeout duration is 3s.
> (2) The test block id should be blk_1_1 with a different genstamp.
> (3) blk_1_1 should be tested with the same DatanodeDescriptor.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17007) TestPendingReconstruction.testProcessPendingReconstructions verify HDFS-11960 test case is wrong

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721996#comment-17721996
 ] 

ASF GitHub Bot commented on HDFS-17007:
---

LiuGuH commented on PR #5642:
URL: https://github.com/apache/hadoop/pull/5642#issuecomment-1545057769

      GenericTestUtils.waitFor(() -> pendingReconstruction.size() == 0, 500,
          10000);
      // The pending queue should be empty.
      assertEquals("Size of pendingReconstructions ", 0,
          pendingReconstruction.size());
    } finally {
      if (cluster != null) {
        cluster.shutdown();
      }
    }
   
   GenericTestUtils.waitFor() will wait for at most 10s. The pending 
reconstruction entries will time out after 3s and then be moved into the 
timeout list. 
   PendingReconstructionMonitor is still not stopped even if I change the 
GenericTestUtils.waitFor timeout to 1s.
   




> TestPendingReconstruction.testProcessPendingReconstructions verify HDFS-11960 
> test case is wrong
> 
>
> Key: HDFS-17007
> URL: https://issues.apache.org/jira/browse/HDFS-17007
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: liuguanghua
>Priority: Minor
>
> The TestPendingReconstruction.testProcessPendingReconstructions() 
> verification of HDFS-11960 is wrong.
> (1) It does not stop PendingReconstructionMonitor. The block id will go into 
> the timeouts queue because the timeout duration is 3s.
> (2) The test block id should be blk_1_1 with a different genstamp.
> (3) blk_1_1 should be tested with the same DatanodeDescriptor.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17001) Support getStatus API in WebHDFS

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721993#comment-17721993
 ] 

ASF GitHub Bot commented on HDFS-17001:
---

zhtttylz commented on PR #5628:
URL: https://github.com/apache/hadoop/pull/5628#issuecomment-1545039162

   @ayushtkn Thank you very much for your valuable suggestion. We will create a 
ticket to add this feature to HttpFS!




> Support getStatus API in WebHDFS
> 
>
> Key: HDFS-17001
> URL: https://issues.apache.org/jira/browse/HDFS-17001
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 3.4.0
>Reporter: Hualong Zhang
>Assignee: Hualong Zhang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2023-05-08-14-34-51-873.png
>
>
> WebHDFS should support getStatus:
> !image-2023-05-08-14-34-51-873.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16990) HttpFS Add Support getFileLinkStatus API

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721989#comment-17721989
 ] 

ASF GitHub Bot commented on HDFS-16990:
---

zhtttylz commented on PR #5602:
URL: https://github.com/apache/hadoop/pull/5602#issuecomment-1545026505

   @ayushtkn @slfan1989  Thank you for your assistance in reviewing the code!




> HttpFS Add Support getFileLinkStatus API
> 
>
> Key: HDFS-16990
> URL: https://issues.apache.org/jira/browse/HDFS-16990
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: httpfs
>Affects Versions: 3.4.0
>Reporter: Hualong Zhang
>Assignee: Hualong Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> HttpFS should implement the *getFileLinkStatus* API already implemented in 
> WebHDFS.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17001) Support getStatus API in WebHDFS

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721986#comment-17721986
 ] 

ASF GitHub Bot commented on HDFS-17001:
---

zhtttylz commented on code in PR #5628:
URL: https://github.com/apache/hadoop/pull/5628#discussion_r1191851916


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java:
##
@@ -2255,6 +2256,40 @@ public void testFileLinkStatus() throws Exception {
 }
   }
 
+  @Test
+  public void testFsStatus() throws Exception {
+final Configuration conf = WebHdfsTestUtil.createConf();
+try {
+  cluster = new MiniDFSCluster.Builder(conf).build();
+  cluster.waitActive();
+
+  final WebHdfsFileSystem webHdfs =
+  WebHdfsTestUtil.getWebHdfsFileSystem(conf,
+  WebHdfsConstants.WEBHDFS_SCHEME);
+
+  final DistributedFileSystem dfs = cluster.getFileSystem();
+
+  final String path = "/foo";
+  OutputStream os = webHdfs.create(new Path(path));
+  os.write(new byte[1024]);
+
+  FsStatus webHdfsFsStatus = webHdfs.getStatus(new Path("/"));
+  Assert.assertNotNull(webHdfsFsStatus);
+
+  FsStatus dfsFsStatus = dfs.getStatus(new Path("/"));
+  Assert.assertNotNull(dfsFsStatus);
+
+  //Validate used free and capacity are the same as DistributedFileSystem
+  Assert.assertEquals(webHdfsFsStatus.getUsed(), dfsFsStatus.getUsed());
+  Assert.assertEquals(webHdfsFsStatus.getRemaining(),
+  dfsFsStatus.getRemaining());
+  Assert.assertEquals(webHdfsFsStatus.getCapacity(),
+  dfsFsStatus.getCapacity());
+} finally {
+  cluster.shutdown();
+}

Review Comment:
   Thank you for your valuable suggestion. I sincerely appreciate it and will 
promptly implement the required adjustments to the code!





> Support getStatus API in WebHDFS
> 
>
> Key: HDFS-17001
> URL: https://issues.apache.org/jira/browse/HDFS-17001
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 3.4.0
>Reporter: Hualong Zhang
>Assignee: Hualong Zhang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2023-05-08-14-34-51-873.png
>
>
> WebHDFS should support getStatus:
> !image-2023-05-08-14-34-51-873.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16965) Add switch to decide whether to enable native codec.

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721983#comment-17721983
 ] 

ASF GitHub Bot commented on HDFS-16965:
---

YuanbenWang commented on PR #5520:
URL: https://github.com/apache/hadoop/pull/5520#issuecomment-1545017444

   Thank you for the ticket and merging. Looking forward to meeting you in the 
next PR. @ayushtkn 
   
   




> Add switch to decide whether to enable native codec.
> 
>
> Key: HDFS-16965
> URL: https://issues.apache.org/jira/browse/HDFS-16965
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: erasure-coding
>Affects Versions: 3.3.4
>Reporter: WangYuanben
>Assignee: WangYuanben
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Sometimes we need to create a codec without ISA-L, while priority is given to 
> the native codec by default. So it is necessary to add a switch to decide 
> whether to enable the native codec.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16979) RBF: Add dfsrouter port in hdfsauditlog

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721982#comment-17721982
 ] 

ASF GitHub Bot commented on HDFS-16979:
---

LiuGuH commented on PR #5552:
URL: https://github.com/apache/hadoop/pull/5552#issuecomment-1545015145

   > The new code looks good to me.
   > 
   > @LiuGuH for future changes, please keep the commit history so people can 
see the changes between reviews.
   
   OK, I'll pay attention to that. Thank you.




> RBF: Add dfsrouter port in hdfsauditlog
> ---
>
> Key: HDFS-16979
> URL: https://issues.apache.org/jira/browse/HDFS-16979
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: liuguanghua
>Priority: Major
>  Labels: pull-request-available
>
>  
> When a client is using proxyuser via a real user, the HDFS audit log lacks 
> the dfsrouter port information.
> client (using proxyuser) -> dfsrouter -> namenode
>   (client port)             (dfsrouter port)
> The HDFS audit log should record the dfsrouter port.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17008) Fix RBF JDK 11 javadoc warnings

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721981#comment-17721981
 ] 

ASF GitHub Bot commented on HDFS-17008:
---

virajjasani commented on code in PR #5648:
URL: https://github.com/apache/hadoop/pull/5648#discussion_r1191846344


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java:
##
@@ -670,6 +670,9 @@ public RouterServiceState getRouterState() {
 
   /**
* Compare router state.
+   *
+   * @param routerState the router service state.
+   * @return true if the given router state is same as the state maintainer by 
the router object.

Review Comment:
   done



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java:
##
@@ -484,14 +486,14 @@ private RetryDecision shouldRetry(final IOException ioe, 
final int retryCount,
* Invokes a method against the ClientProtocol proxy server. If a standby
* exception is generated by the call to the client, retries using the
* alternate server.
-   *

Review Comment:
   done, added breakline





> Fix RBF JDK 11 javadoc warnings
> ---
>
> Key: HDFS-17008
> URL: https://issues.apache.org/jira/browse/HDFS-17008
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> HDFS-16978 excluded proto packages from maven-javadoc-plugin for rbf, hence 
> now we have JDK 11 javadoc warnings (e.g. 
> [here|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5554/14/artifact/out/results-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt]).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17001) Support getStatus API in WebHDFS

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721980#comment-17721980
 ] 

ASF GitHub Bot commented on HDFS-17001:
---

slfan1989 commented on code in PR #5628:
URL: https://github.com/apache/hadoop/pull/5628#discussion_r1191845826


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java:
##
@@ -2255,6 +2256,40 @@ public void testFileLinkStatus() throws Exception {
 }
   }
 
+  @Test
+  public void testFsStatus() throws Exception {
+final Configuration conf = WebHdfsTestUtil.createConf();
+try {
+  cluster = new MiniDFSCluster.Builder(conf).build();
+  cluster.waitActive();
+
+  final WebHdfsFileSystem webHdfs =
+  WebHdfsTestUtil.getWebHdfsFileSystem(conf,
+  WebHdfsConstants.WEBHDFS_SCHEME);
+
+  final DistributedFileSystem dfs = cluster.getFileSystem();
+
+  final String path = "/foo";
+  OutputStream os = webHdfs.create(new Path(path));
+  os.write(new byte[1024]);
+
+  FsStatus webHdfsFsStatus = webHdfs.getStatus(new Path("/"));
+  Assert.assertNotNull(webHdfsFsStatus);
+
+  FsStatus dfsFsStatus = dfs.getStatus(new Path("/"));
+  Assert.assertNotNull(dfsFsStatus);
+
+  //Validate used free and capacity are the same as DistributedFileSystem
+  Assert.assertEquals(webHdfsFsStatus.getUsed(), dfsFsStatus.getUsed());
+  Assert.assertEquals(webHdfsFsStatus.getRemaining(),
+  dfsFsStatus.getRemaining());
+  Assert.assertEquals(webHdfsFsStatus.getCapacity(),
+  dfsFsStatus.getCapacity());
+} finally {
+  cluster.shutdown();
+}

Review Comment:
   @zhtttylz Thanks for the contribution! should we close the os?
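   A minimal sketch of one way to do that (try-with-resources, reusing the 
names from the test above; a suggestion, not the final patch):
   
   ```java
   // Hedged sketch: closing the stream releases the client connection and
   // flushes the written bytes even if an assertion fails later.
   try (OutputStream os = webHdfs.create(new Path(path))) {
     os.write(new byte[1024]);
   }
   ```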





> Support getStatus API in WebHDFS
> 
>
> Key: HDFS-17001
> URL: https://issues.apache.org/jira/browse/HDFS-17001
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 3.4.0
>Reporter: Hualong Zhang
>Assignee: Hualong Zhang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2023-05-08-14-34-51-873.png
>
>
> WebHDFS should support getStatus:
> !image-2023-05-08-14-34-51-873.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12737) Thousands of sockets lingering in TIME_WAIT state due to frequent file open operations

2023-05-11 Thread Dheeren Beborrtha (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721964#comment-17721964
 ] 

Dheeren Beborrtha commented on HDFS-12737:
--

We are observing this issue in an HBase cluster of around 75 RegionServers, 
where each RegionServer is littered with the following logs:
{noformat}
2023-05-09 18:47:46,092 WARN  
[RpcServer.default.FPBQ.Fifo.handler=27,queue=3,port=16020] hdfs.DFSClient: 
Connection failure: Failed to connect to 
hbase1wn41-0.subnetpoc1.vcn12231050.oraclevcn.com/10.1.64.234:1019 for file 
/apps/hbase/data/data/default/usertable2/fe172ff893d8afcf20c008e3765077da/cf/921cfad177b0434a957079cd4506c834
 for block 
BP-1395570538-10.1.21.157-1682117242080:blk_1093623349_19885353:org.apache.hadoop.net.ConnectTimeoutException:
 60000 millis timeout while waiting for channel to be ready for connect. ch : 
java.nio.channels.SocketChannel[connection-pending 
remote=hbase1wn41-0.subnetpoc1.vcn12231050.oraclevcn.com/10.1.64.234:1019]
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while 
waiting for channel to be ready for connect. ch : 
java.nio.channels.SocketChannel[connection-pending 
remote=hbase1wn41-0.subnetpoc1.vcn12231050.oraclevcn.com/10.1.64.234:1019]
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:589)
        at 
org.apache.hadoop.hdfs.DFSClient.newConnectedPeer(DFSClient.java:3033)
        at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextTcpPeer(BlockReaderFactory.java:829)
        at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:754)
        at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:381)
        at 
org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:755)
        at 
org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1199)
        at 
org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:1151)
        at org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1511)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1475)
        at 
org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:98)
        at 
org.apache.hadoop.hbase.io.util.BlockIOUtils.preadWithExtra(BlockIOUtils.java:233)
        at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readAtOffset(HFileBlock.java:1456)
        at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1679)
        at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1490)
        at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1308)
        at 
org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:318)
        at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:659)
        at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:612)
        at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:306)
        at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:214)
        at 
org.apache.hadoop.hbase.regionserver.StoreScanner.seekScanners(StoreScanner.java:408)
        at 
org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:253)
        at 
org.apache.hadoop.hbase.regionserver.HStore.createScanner(HStore.java:2100)
        at 
org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2091)
        at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:7049)
        at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:7029)
        at 
org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:3043)
        at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:3023)
        at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:3005)
        at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2999)
        at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2614)
        at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2538)
        at 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45945)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:384)
        at 
org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131){noformat}
[root@hbase1wn61-0 ~]# netstat -nat | awk '{print $6}' | sort | uniq -c | sort -n
      1 established)
      1 Foreign
      1 SYN_RECV
      2 FIN_WAIT1
      2 

[jira] [Commented] (HDFS-17008) Fix RBF JDK 11 javadoc warnings

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721949#comment-17721949
 ] 

ASF GitHub Bot commented on HDFS-17008:
---

goiri commented on code in PR #5648:
URL: https://github.com/apache/hadoop/pull/5648#discussion_r1191783244


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java:
##
@@ -484,14 +486,14 @@ private RetryDecision shouldRetry(final IOException ioe, 
final int retryCount,
* Invokes a method against the ClientProtocol proxy server. If a standby
* exception is generated by the call to the client, retries using the
* alternate server.
-   *

Review Comment:
   Does this give warnings? If no, it is easier to read with the break line.
   The other option is to explicitly add the breakline.
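   A hedged sketch of the explicit break-line option (the first three lines 
are from the existing javadoc; using `<p>` for the break is an assumption, 
not the committed fix):
   
   ```java
   /**
    * Invokes a method against the ClientProtocol proxy server. If a standby
    * exception is generated by the call to the client, retries using the
    * alternate server.
    * <p>
    * (rest of the description unchanged)
    */
   ```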





> Fix RBF JDK 11 javadoc warnings
> ---
>
> Key: HDFS-17008
> URL: https://issues.apache.org/jira/browse/HDFS-17008
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> HDFS-16978 excluded proto packages from maven-javadoc-plugin for rbf, hence 
> now we have JDK 11 javadoc warnings (e.g. 
> [here|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5554/14/artifact/out/results-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt]).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16965) Add switch to decide whether to enable native codec.

2023-05-11 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721944#comment-17721944
 ] 

Ayush Saxena commented on HDFS-16965:
-

Committed to trunk.

Thanx [~wangyuanben] for the contribution. Welcome to Hadoop!!!

> Add switch to decide whether to enable native codec.
> 
>
> Key: HDFS-16965
> URL: https://issues.apache.org/jira/browse/HDFS-16965
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: erasure-coding
>Affects Versions: 3.3.4
>Reporter: WangYuanben
>Assignee: WangYuanben
>Priority: Minor
>  Labels: pull-request-available
>
> Sometimes we need to create a codec without ISA-L, while priority is given to 
> the native codec by default. So it is necessary to add a switch to decide 
> whether to enable the native codec.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16965) Add switch to decide whether to enable native codec.

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721943#comment-17721943
 ] 

ASF GitHub Bot commented on HDFS-16965:
---

ayushtkn merged PR #5520:
URL: https://github.com/apache/hadoop/pull/5520




> Add switch to decide whether to enable native codec.
> 
>
> Key: HDFS-16965
> URL: https://issues.apache.org/jira/browse/HDFS-16965
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: erasure-coding
>Affects Versions: 3.3.4
>Reporter: WangYuanben
>Assignee: WangYuanben
>Priority: Minor
>  Labels: pull-request-available
>
> Sometimes we need to create a codec without ISA-L, while priority is given to 
> the native codec by default. So it is necessary to add a switch to decide 
> whether to enable the native codec.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-16965) Add switch to decide whether to enable native codec.

2023-05-11 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HDFS-16965.
-
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Add switch to decide whether to enable native codec.
> 
>
> Key: HDFS-16965
> URL: https://issues.apache.org/jira/browse/HDFS-16965
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: erasure-coding
>Affects Versions: 3.3.4
>Reporter: WangYuanben
>Assignee: WangYuanben
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Sometimes we need to create a codec without ISA-L, while priority is given to 
> the native codec by default. So it is necessary to add a switch to decide 
> whether to enable the native codec.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17008) Fix RBF JDK 11 javadoc warnings

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721938#comment-17721938
 ] 

ASF GitHub Bot commented on HDFS-17008:
---

simbadzina commented on code in PR #5648:
URL: https://github.com/apache/hadoop/pull/5648#discussion_r1191736478


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/Router.java:
##
@@ -670,6 +670,9 @@ public RouterServiceState getRouterState() {
 
   /**
* Compare router state.
+   *
+   * @param routerState the router service state.
+   * @return true if the given router state is same as the state maintainer by 
the router object.

Review Comment:
   Quick nit: Typo `maintained`





> Fix RBF JDK 11 javadoc warnings
> ---
>
> Key: HDFS-17008
> URL: https://issues.apache.org/jira/browse/HDFS-17008
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> HDFS-16978 excluded proto packages from maven-javadoc-plugin for rbf, hence 
> now we have JDK 11 javadoc warnings (e.g. 
> [here|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5554/14/artifact/out/results-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt]).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17009) RBF: state store putAll should also return failed records

2023-05-11 Thread Viraj Jasani (Jira)
Viraj Jasani created HDFS-17009:
---

 Summary: RBF: state store putAll should also return failed records
 Key: HDFS-17009
 URL: https://issues.apache.org/jira/browse/HDFS-17009
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Viraj Jasani
Assignee: Viraj Jasani


State store implementations allow adding/updating multiple records using 
putAll. The implementation returns whether all records were successfully added 
or updated. We should also allow the implementation to return which records 
failed to get updated.
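
As a rough illustration, a hedged sketch of what such a result type could look 
like (the class and method names here are hypothetical, not the actual patch):
{code:java}
import java.util.List;

// Hypothetical sketch: putAll returns the failed records instead of a boolean.
public class StateStorePutResult<R extends BaseRecord> {
  private final boolean allSucceeded;   // the previous boolean result
  private final List<R> failedRecords;  // records that could not be stored

  public StateStorePutResult(boolean allSucceeded, List<R> failedRecords) {
    this.allSucceeded = allSucceeded;
    this.failedRecords = failedRecords;
  }

  public boolean isAllSucceeded() {
    return allSucceeded;
  }

  public List<R> getFailedRecords() {
    return failedRecords;
  }
}
{code}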



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17008) Fix RBF JDK 11 javadoc warnings

2023-05-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17008:
--
Labels: pull-request-available  (was: )

> Fix RBF JDK 11 javadoc warnings
> ---
>
> Key: HDFS-17008
> URL: https://issues.apache.org/jira/browse/HDFS-17008
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> HDFS-16978 excluded proto packages from maven-javadoc-plugin for rbf, hence 
> now we have JDK 11 javadoc warnings (e.g. 
> [here|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5554/14/artifact/out/results-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt]).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17008) Fix RBF JDK 11 javadoc warnings

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721923#comment-17721923
 ] 

ASF GitHub Bot commented on HDFS-17008:
---

virajjasani opened a new pull request, #5648:
URL: https://github.com/apache/hadoop/pull/5648

   (no comment)




> Fix RBF JDK 11 javadoc warnings
> ---
>
> Key: HDFS-17008
> URL: https://issues.apache.org/jira/browse/HDFS-17008
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>
> HDFS-16978 excluded proto packages from maven-javadoc-plugin for rbf, hence 
> now we have JDK 11 javadoc warnings (e.g. 
> [here|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5554/14/artifact/out/results-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt]).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17003) Erasure coding: invalidate wrong block after reporting bad blocks from datanode

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721921#comment-17721921
 ] 

ASF GitHub Bot commented on HDFS-17003:
---

sodonnel commented on PR #5643:
URL: https://github.com/apache/hadoop/pull/5643#issuecomment-1544680459

   If I understand correctly, for a replicated block, if there are two corrupt 
replicas the code in invalidateCorruptReplicas will be called once the block 
has been replicated correctly. At that point there will be 3 good replicas and 
2 corrupt ones stored in the corruptReplicas map. The code in the above method 
will then iterate over those two and "invalidate" them on the datanodes they 
are stored on.
   
   For EC, the same applies; however, we are sending the blockID + index of the 
last reported replica to both DNs. All we store in the corruptReplicas map is 
the block group ID (i.e. the block ID with the replica index stripped out) and 
then the list of nodes hosting it. At this point in the code we don't know 
which index is on each of the nodes hosting a corrupt replica. Is this correct?
   
   It's not clear to me how the fix in this PR fixes the problem:
   
   ```
   if (blk.isStriped()) {
     DatanodeStorageInfo[] storages = getStorages(blk);
     for (DatanodeStorageInfo storage : storages) {
       final Block b = getBlockOnStorage(blk, storage);
       if (b != null) {
         reported = b;
       }
     }
   }
   ```
   For each node stored, we get the storages for the block, which will be the 
nodes hosting it. Then we call getBlockOnStorage, and it is sure to return 
non-null for each of the storages, as they all host a block in the group, 
right?
   
   Do we not need to somehow find the replica index of the block for each of 
the nodes listed, and then set up the "reported block" with the correct blockID 
+ index for that node, passing that to invalidate?
   
   Would something like this work - NOTE - I have not tested this at all:
   
   ```
   -
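   The snippet above is truncated in this archive. A hedged sketch of the 
approach described in the preceding paragraphs (method names follow the PR's 
quoted snippet; `isReplicaCorrupt` and the `addToInvalidates` helper are 
assumptions, and this is not the author's actual code):
   
   ```java
   // Sketch only: for each node holding a corrupt replica, recover the
   // internal block (block group id + replica index) stored on that node and
   // invalidate that specific block there, instead of sending the last
   // reported block id to every node.
   if (blk.isStriped()) {
     for (DatanodeStorageInfo storage : getStorages(blk)) {
       DatanodeDescriptor node = storage.getDatanodeDescriptor();
       if (!corruptReplicas.isReplicaCorrupt(blk, node)) {
         continue;  // only invalidate replicas actually reported corrupt
       }
       final Block b = getBlockOnStorage(blk, storage);  // carries the index
       if (b != null) {
         addToInvalidates(b, node);  // assumed helper for this sketch
       }
     }
   }
   ```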

> Erasure coding: invalidate wrong block after reporting bad blocks from 
> datanode
> ---
>
> Key: HDFS-17003
> URL: https://issues.apache.org/jira/browse/HDFS-17003
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: farmmamba
>Priority: Critical
>  Labels: pull-request-available
>
> After receiving a reportBadBlocks RPC from a datanode, the NameNode computes 
> the wrong block to invalidate. This is dangerous behaviour and may cause data 
> loss. Some logs from our production cluster are below:
>  
> NameNode log:
> {code:java}
> 2023-05-08 21:23:49,112 INFO org.apache.hadoop.hdfs.StateChange: *DIR* 
> reportBadBlocks for block: 
> BP-932824627--1680179358678:blk_-9223372036848404320_1471186 on datanode: 
> datanode1:50010
> 2023-05-08 21:23:49,183 INFO org.apache.hadoop.hdfs.StateChange: *DIR* 
> reportBadBlocks for block: 
> BP-932824627--1680179358678:blk_-9223372036848404319_1471186 on datanode: 
> datanode2:50010{code}
> datanode1 log:
> {code:java}
> 2023-05-08 21:23:49,088 WARN 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Reporting bad 
> BP-932824627--1680179358678:blk_-9223372036848404320_1471186 on 
> /data7/hadoop/hdfs/datanode
> 2023-05-08 21:24:00,509 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Failed 
> to delete replica blk_-9223372036848404319_1471186: ReplicaInfo not 
> found.{code}
>  
> This phenomenon can be reproduced.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17008) Fix RBF JDK 11 javadoc warnings

2023-05-11 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-17008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-17008:
---
Summary: Fix RBF JDK 11 javadoc warnings  (was: Fix rbf jdk 11 javadoc 
warnings)

> Fix RBF JDK 11 javadoc warnings
> ---
>
> Key: HDFS-17008
> URL: https://issues.apache.org/jira/browse/HDFS-17008
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>
> HDFS-16978 excluded proto packages from maven-javadoc-plugin for rbf, hence 
> now we have JDK 11 javadoc warnings (e.g. 
> [here|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5554/14/artifact/out/results-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt]).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17008) Fix rbf jdk 11 javadoc warnings

2023-05-11 Thread Viraj Jasani (Jira)
Viraj Jasani created HDFS-17008:
---

 Summary: Fix rbf jdk 11 javadoc warnings
 Key: HDFS-17008
 URL: https://issues.apache.org/jira/browse/HDFS-17008
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


HDFS-16978 excluded proto packages from maven-javadoc-plugin for rbf, hence now 
we have JDK 11 javadoc warnings (e.g. 
[here|https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5554/14/artifact/out/results-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1.txt]).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16965) Add switch to decide whether to enable native codec.

2023-05-11 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721854#comment-17721854
 ] 

Ayush Saxena commented on HDFS-16965:
-

Added [~wangyuanben] as HDFS Contributor to assign the ticket

> Add switch to decide whether to enable native codec.
> 
>
> Key: HDFS-16965
> URL: https://issues.apache.org/jira/browse/HDFS-16965
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: erasure-coding
>Affects Versions: 3.3.4
>Reporter: WangYuanben
>Assignee: WangYuanben
>Priority: Minor
>  Labels: pull-request-available
>
> Sometimes we need to create a codec without ISA-L, while priority is given to 
> the native codec by default. So it is necessary to add a switch to decide 
> whether to enable the native codec.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-16965) Add switch to decide whether to enable native codec.

2023-05-11 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena reassigned HDFS-16965:
---

Assignee: WangYuanben

> Add switch to decide whether to enable native codec.
> 
>
> Key: HDFS-16965
> URL: https://issues.apache.org/jira/browse/HDFS-16965
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: erasure-coding
>Affects Versions: 3.3.4
>Reporter: WangYuanben
>Assignee: WangYuanben
>Priority: Minor
>  Labels: pull-request-available
>
> Sometimes we need to create a codec without ISA-L, while priority is given to 
> the native codec by default. So it is necessary to add a switch to decide 
> whether to enable the native codec.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17003) Erasure coding: invalidate wrong block after reporting bad blocks from datanode

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721808#comment-17721808
 ] 

ASF GitHub Bot commented on HDFS-17003:
---

hadoop-yetus commented on PR #5643:
URL: https://github.com/apache/hadoop/pull/5643#issuecomment-1544110986

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 38s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   1m  7s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   3m  9s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 212m  0s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5643/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 317m 30s |  |  |
   
   
   | Reason | Tests |
   |-------:|:---------|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |------------:|:-------------|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5643/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5643 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 47fe60f099f4 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 11ac8ffa350cf7b60c5465f4464b0a921d78d7e2 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5643/2/testReport/ |
   | Max. process+thread count | 

[jira] [Commented] (HDFS-17003) Erasure coding: invalidate wrong block after reporting bad blocks from datanode

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721796#comment-17721796
 ] 

ASF GitHub Bot commented on HDFS-17003:
---

hadoop-yetus commented on PR #5643:
URL: https://github.com/apache/hadoop/pull/5643#issuecomment-1544059352

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 53s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   3m 15s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   3m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 206m 45s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5643/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 311m 57s |  |  |
   
   
   | Reason | Tests |
   |-------:|:---------|
   | Failed junit tests | 
hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   
   
   | Subsystem | Report/Notes |
   |------------:|:-------------|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5643/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5643 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 0beff46017b7 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f38a250b1c02fbe7d684855facbdf55c87b97eaf |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5643/1/testReport/ |
   | Max. process+thread 

[jira] [Commented] (HDFS-16697) Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always prevent safe mode from being turned off

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721773#comment-17721773
 ] 

ASF GitHub Bot commented on HDFS-16697:
---

Likkey opened a new pull request, #5569:
URL: https://github.com/apache/hadoop/pull/5569

   ### Description of PR
   
   It was found that “dfs.namenode.resource.checked.volumes.minimum” lacks a 
condition check and an associated exception handling mechanism, which makes it 
impossible to find the root cause when a misconfiguration occurs.
   This patch adds a check on the value of minimumRedundantVolumes to ensure 
that it is not greater than the number of NameNode storage volumes, to avoid 
never being able to turn off safe mode afterwards.
   
   JIRA: https://issues.apache.org/jira/browse/HDFS-16697
   
   ### How was this patch tested?
   
   This patch adds a check of the configuration item: it throws an 
IllegalArgumentException with a detailed error message when the value is 
greater than the number of NameNode storage volumes, and it prints a warning 
message in the log, so that the problem can be fixed in time and the 
misconfiguration does not affect subsequent operations of the program.
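   
   A hedged sketch of the described check (the surrounding method and the 
`conf`/`volumes` variables are assumed; the configuration key is the one 
quoted in the issue):
   
   ```java
   // Sketch only, not the actual patch: reject values that can never be
   // satisfied, instead of silently staying in safe mode forever.
   int minimumRedundantVolumes = conf.getInt(
       "dfs.namenode.resource.checked.volumes.minimum", 1);
   int volumeCount = volumes.size();  // NameNode storage volumes being checked
   if (minimumRedundantVolumes > volumeCount) {
     throw new IllegalArgumentException(
         "dfs.namenode.resource.checked.volumes.minimum ("
             + minimumRedundantVolumes + ") is greater than the number of "
             + "NameNode storage volumes (" + volumeCount + "); safe mode "
             + "could never be turned off.");
   }
   ```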




> Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always 
> prevent safe mode from being turned off
> 
>
> Key: HDFS-16697
> URL: https://issues.apache.org/jira/browse/HDFS-16697
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.3
> Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>Reporter: ECFuzz
>Assignee: ECFuzz
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> {code:java}
> <property>
>   <name>dfs.namenode.resource.checked.volumes.minimum</name>
>   <value>1</value>
>   <description>
>     The minimum number of redundant NameNode storage volumes required.
>   </description>
> </property>
> {code}
> I found that when the value of 
> “dfs.namenode.resource.checked.volumes.minimum” is set greater than the total 
> number of storage volumes in the NameNode, it becomes impossible to ever turn 
> off safe mode. While in safe mode, the file system only accepts read 
> requests and rejects delete, modify, and other write requests, so 
> functionality is severely limited.
> The default value of this configuration item is 1; we set it to 2 here as an 
> illustration. After HDFS starts, the logs and the client show the following 
> messages.
> {code:java}
> 2022-07-27 17:37:31,772 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: NameNode low on 
> available disk space. Already in safe mode.
> 2022-07-27 17:37:31,772 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe 
> mode is ON.
> Resources are low on NN. Please add or free up more resources then turn off 
> safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
> the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
> leave" to turn safe mode off.
> {code}
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create 
> directory /hdfsapi/test. Name node is in safe mode.
> Resources are low on NN. Please add or free up more resources then turn off 
> safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
> the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
> leave" to turn safe mode off. NamenodeHostName:192.168.1.167
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1468)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1455)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3174)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1145)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:714)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
>         at 

[jira] [Commented] (HDFS-16697) Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always prevent safe mode from being turned off

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721772#comment-17721772
 ] 

ASF GitHub Bot commented on HDFS-16697:
---

Likkey closed pull request #5569: HDFS-16697.Add code to check for 
minimumRedundantVolumes.
URL: https://github.com/apache/hadoop/pull/5569




> Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always 
> prevent safe mode from being turned off
> 
>
> Key: HDFS-16697
> URL: https://issues.apache.org/jira/browse/HDFS-16697
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.3
> Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>Reporter: ECFuzz
>Assignee: ECFuzz
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> {code:java}
> <property>
>   <name>dfs.namenode.resource.checked.volumes.minimum</name>
>   <value>1</value>
>   <description>
>     The minimum number of redundant NameNode storage volumes required.
>   </description>
> </property>
> {code}
> I found that when the value of 
> “dfs.namenode.resource.checked.volumes.minimum” is set greater than the total 
> number of storage volumes in the NameNode, it becomes impossible to ever turn 
> off safe mode. While in safe mode, the file system only accepts read 
> requests and rejects delete, modify, and other write requests, so 
> functionality is severely limited.
> The default value of this configuration item is 1; we set it to 2 here as an 
> illustration. After HDFS starts, the logs and the client show the following 
> messages.
> {code:java}
> 2022-07-27 17:37:31,772 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: NameNode low on 
> available disk space. Already in safe mode.
> 2022-07-27 17:37:31,772 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe 
> mode is ON.
> Resources are low on NN. Please add or free up more resources then turn off 
> safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
> the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
> leave" to turn safe mode off.
> {code}
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create 
> directory /hdfsapi/test. Name node is in safe mode.
> Resources are low on NN. Please add or free up more resources then turn off 
> safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
> the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
> leave" to turn safe mode off. NamenodeHostName:192.168.1.167
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1468)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1455)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3174)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1145)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:714)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
>         at java.base/java.security.AccessController.doPrivileged(Native 
> Method)
>         at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
>         at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916){code}
> According to the prompt, one would believe there is not enough resource space 
> to meet the conditions for leaving safe mode, but even after adding or freeing 
> up more resources and lowering the resource threshold 
> "dfs.namenode.resource.du.reserved", it still fails to leave safe mode and 
> throws the same prompt.
> According to the source code, we know that if the NameNode has fewer redundant 
> storage volumes than the minimum number set by 
> "dfs.namenode.resource.checked.volumes.minimum" 

[jira] [Commented] (HDFS-16965) Add switch to decide whether to enable native codec.

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721741#comment-17721741
 ] 

ASF GitHub Bot commented on HDFS-16965:
---

YuanbenWang commented on PR #5520:
URL: https://github.com/apache/hadoop/pull/5520#issuecomment-1543813297

   @ayushtkn Hello, would you please assign the 
Jira ([HDFS-16965](https://issues.apache.org/jira/browse/HDFS-16965)) ticket to 
me? And could you please help review this PR?




> Add switch to decide whether to enable native codec.
> 
>
> Key: HDFS-16965
> URL: https://issues.apache.org/jira/browse/HDFS-16965
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: erasure-coding
>Affects Versions: 3.3.4
>Reporter: WangYuanben
>Priority: Minor
>  Labels: pull-request-available
>
> Sometimes we need to create a codec without ISA-L, but priority is given to 
> the native codec by default. So it is necessary to add a switch that decides 
> whether to enable the native codec.
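
A hedged sketch of what such a switch could look like. The configuration key 
"io.erasurecode.native.enabled" below is hypothetical, not an actual Hadoop 
key; the coder factory classes and the ErasureCodeNative check are real 
hadoop-common APIs, but the selection logic is illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.erasurecode.ErasureCodeNative;
import org.apache.hadoop.io.erasurecode.ErasureCoderOptions;
import org.apache.hadoop.io.erasurecode.rawcoder.NativeRSRawErasureCoderFactory;
import org.apache.hadoop.io.erasurecode.rawcoder.RSRawErasureCoderFactory;
import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder;

public class CodecSwitchSketch {
  // Hypothetical switch, named here for illustration only.
  static final String NATIVE_CODEC_ENABLED_KEY = "io.erasurecode.native.enabled";

  static RawErasureEncoder createRSEncoder(Configuration conf,
                                           int dataUnits, int parityUnits) {
    ErasureCoderOptions opts = new ErasureCoderOptions(dataUnits, parityUnits);
    // Fall back to the pure-Java coder when the switch is off or when the
    // native (ISA-L) library is not loaded.
    boolean useNative = conf.getBoolean(NATIVE_CODEC_ENABLED_KEY, true)
        && ErasureCodeNative.isNativeCodeLoaded();
    return useNative
        ? new NativeRSRawErasureCoderFactory().createEncoder(opts)
        : new RSRawErasureCoderFactory().createEncoder(opts);
  }
}
{code}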



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17001) Support getStatus API in WebHDFS

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721737#comment-17721737
 ] 

ASF GitHub Bot commented on HDFS-17001:
---

zhtttylz commented on PR #5628:
URL: https://github.com/apache/hadoop/pull/5628#issuecomment-1543800176

   @ayushtkn @slfan1989 Could you please help review this PR again? The `JUnit 
Test` failure is not caused by our PR.
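   
   For context, a small usage sketch of the client-side API this maps to 
(`FileSystem#getStatus` returning `FsStatus` is existing Hadoop API; the 
NameNode address is a placeholder, and serving this over WebHDFS is exactly 
what the PR is assumed to add):
   
   ```java
   import java.net.URI;
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.FsStatus;
   
   public class WebHdfsGetStatusExample {
     public static void main(String[] args) throws Exception {
       // Placeholder NameNode HTTP address.
       FileSystem fs = FileSystem.get(
           URI.create("webhdfs://namenode.example.com:9870/"),
           new Configuration());
       FsStatus status = fs.getStatus();
       System.out.println("capacity=" + status.getCapacity()
           + " used=" + status.getUsed()
           + " remaining=" + status.getRemaining());
       fs.close();
     }
   }
   ```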




> Support getStatus API in WebHDFS
> 
>
> Key: HDFS-17001
> URL: https://issues.apache.org/jira/browse/HDFS-17001
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 3.4.0
>Reporter: Hualong Zhang
>Assignee: Hualong Zhang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2023-05-08-14-34-51-873.png
>
>
> WebHDFS should support getStatus:
> !image-2023-05-08-14-34-51-873.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17003) Erasure coding: invalidate wrong block after reporting bad blocks from datanode

2023-05-11 Thread farmmamba (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721707#comment-17721707
 ] 

farmmamba commented on HDFS-17003:
--

1. Destroy d1 and d2 manually.

2. Read this EC file to trigger reconstruction promptly. After reconstruction, 
the new data blocks are d1' and d2'.

3. d1' and d2' send IBRs to the namenode. When the namenode receives the last 
IBR, it executes the invalidateCorruptReplicas method in addStoredBlock.

4. In invalidateCorruptReplicas, the block id of the last IBR is used to 
invalidate blocks. For example, if the block id of d1' is used, invalidate 
commands are sent to d1 and d2 to invalidate d1. Because d2 does not match the 
block id of d1's block, the corrupt d2 is not deleted.

5. On the next FBR from the datanode holding d2, both d2 and d2' exist, and 
d2' may be mistakenly deleted.
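
A much-simplified, self-contained sketch of the failure mode in steps 4 and 5 
(all types below are stand-ins, not the real BlockManager classes):

{code:java}
import java.util.Arrays;
import java.util.List;

public class InvalidateCorruptReplicasSketch {
  static class Datanode {
    final String name;
    final long storedBlockId;
    Datanode(String name, long storedBlockId) {
      this.name = name;
      this.storedBlockId = storedBlockId;
    }
  }

  // The described bug: the single block id from the last IBR is sent to every
  // corrupt node, even though the EC internal blocks have different ids.
  static void invalidateCorruptReplicas(long reportedId,
                                        List<Datanode> corruptNodes) {
    for (Datanode dn : corruptNodes) {
      if (dn.storedBlockId != reportedId) {
        // Mirrors the datanode2 log line: the delete request misses, so the
        // truly corrupt replica survives until the next FBR.
        System.out.println(dn.name + ": ReplicaInfo not found for blk_"
            + reportedId);
      } else {
        System.out.println(dn.name + ": deleted blk_" + reportedId);
      }
    }
  }

  public static void main(String[] args) {
    long lastIbrId = -9223372036848404320L; // id taken from d1's report
    invalidateCorruptReplicas(lastIbrId, Arrays.asList(
        new Datanode("datanode1", -9223372036848404320L),   // corrupt d1: deleted
        new Datanode("datanode2", -9223372036848404319L))); // corrupt d2: survives
  }
}
{code}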

> Erasure coding: invalidate wrong block after reporting bad blocks from 
> datanode
> ---
>
> Key: HDFS-17003
> URL: https://issues.apache.org/jira/browse/HDFS-17003
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: farmmamba
>Priority: Critical
>  Labels: pull-request-available
>
> After receiving a reportBadBlocks RPC from a datanode, the NameNode computes 
> the wrong block to invalidate. This is dangerous behaviour and may cause data 
> loss. Some logs from our production cluster are below:
>  
> NameNode log:
> {code:java}
> 2023-05-08 21:23:49,112 INFO org.apache.hadoop.hdfs.StateChange: *DIR* 
> reportBadBlocks for block: 
> BP-932824627--1680179358678:blk_-9223372036848404320_1471186 on datanode: 
> datanode1:50010
> 2023-05-08 21:23:49,183 INFO org.apache.hadoop.hdfs.StateChange: *DIR* 
> reportBadBlocks for block: 
> BP-932824627--1680179358678:blk_-9223372036848404319_1471186 on datanode: 
> datanode2:50010{code}
> datanode1 log:
> {code:java}
> 2023-05-08 21:23:49,088 WARN 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Reporting bad 
> BP-932824627--1680179358678:blk_-9223372036848404320_1471186 on 
> /data7/hadoop/hdfs/datanode
> 2023-05-08 21:24:00,509 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Failed 
> to delete replica blk_-9223372036848404319_1471186: ReplicaInfo not 
> found.{code}
>  
> This phenomenon can be reproduced.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17003) Erasure coding: invalidate wrong block after reporting bad blocks from datanode

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721695#comment-17721695
 ] 

ASF GitHub Bot commented on HDFS-17003:
---

hfutatzhanghb opened a new pull request, #5643:
URL: https://github.com/apache/hadoop/pull/5643

   The description is in HDFS-17003.




> Erasure coding: invalidate wrong block after reporting bad blocks from 
> datanode
> ---
>
> Key: HDFS-17003
> URL: https://issues.apache.org/jira/browse/HDFS-17003
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: farmmamba
>Priority: Critical
>
> After receiving a reportBadBlocks RPC from a datanode, the NameNode computes 
> the wrong block to invalidate. This is dangerous behaviour and may cause data 
> loss. Some logs from our production cluster are below:
>  
> NameNode log:
> {code:java}
> 2023-05-08 21:23:49,112 INFO org.apache.hadoop.hdfs.StateChange: *DIR* 
> reportBadBlocks for block: 
> BP-932824627--1680179358678:blk_-9223372036848404320_1471186 on datanode: 
> datanode1:50010
> 2023-05-08 21:23:49,183 INFO org.apache.hadoop.hdfs.StateChange: *DIR* 
> reportBadBlocks for block: 
> BP-932824627--1680179358678:blk_-9223372036848404319_1471186 on datanode: 
> datanode2:50010{code}
> datanode1 log:
> {code:java}
> 2023-05-08 21:23:49,088 WARN 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Reporting bad 
> BP-932824627--1680179358678:blk_-9223372036848404320_1471186 on 
> /data7/hadoop/hdfs/datanode
> 2023-05-08 21:24:00,509 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Failed 
> to delete replica blk_-9223372036848404319_1471186: ReplicaInfo not 
> found.{code}
>  
> This phenomenon can be reproduced.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17003) Erasure coding: invalidate wrong block after reporting bad blocks from datanode

2023-05-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-17003:
--
Labels: pull-request-available  (was: )

> Erasure coding: invalidate wrong block after reporting bad blocks from 
> datanode
> ---
>
> Key: HDFS-17003
> URL: https://issues.apache.org/jira/browse/HDFS-17003
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: farmmamba
>Priority: Critical
>  Labels: pull-request-available
>
> After receiving a reportBadBlocks RPC from a datanode, the NameNode computes 
> the wrong block to invalidate. This is dangerous behaviour and may cause data 
> loss. Some logs from our production cluster are below:
>  
> NameNode log:
> {code:java}
> 2023-05-08 21:23:49,112 INFO org.apache.hadoop.hdfs.StateChange: *DIR* 
> reportBadBlocks for block: 
> BP-932824627--1680179358678:blk_-9223372036848404320_1471186 on datanode: 
> datanode1:50010
> 2023-05-08 21:23:49,183 INFO org.apache.hadoop.hdfs.StateChange: *DIR* 
> reportBadBlocks for block: 
> BP-932824627--1680179358678:blk_-9223372036848404319_1471186 on datanode: 
> datanode2:50010{code}
> datanode1 log:
> {code:java}
> 2023-05-08 21:23:49,088 WARN 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Reporting bad 
> BP-932824627--1680179358678:blk_-9223372036848404320_1471186 on 
> /data7/hadoop/hdfs/datanode
> 2023-05-08 21:24:00,509 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Failed 
> to delete replica blk_-9223372036848404319_1471186: ReplicaInfo not 
> found.{code}
>  
> This phenomenon can be reproduced.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17001) Support getStatus API in WebHDFS

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721692#comment-17721692
 ] 

ASF GitHub Bot commented on HDFS-17001:
---

hadoop-yetus commented on PR #5628:
URL: https://github.com/apache/hadoop/pull/5628#issuecomment-1543595966

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 24s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 14s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   5m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 55s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   6m 50s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m  9s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   5m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   4m 55s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 51s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   6m 49s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 38s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 21s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 206m 41s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5628/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |  20m 44s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 367m 22s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5628/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5628 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 8bda73a2640d 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / aa5576dd7661d4f2a582419990d564f83a2a5fe4 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 

[jira] [Updated] (HDFS-11960) Successfully closed files can stay under-replicated.

2023-05-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-11960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-11960:
--
Labels: pull-request-available  (was: )

> Successfully closed files can stay under-replicated.
> 
>
> Key: HDFS-11960
> URL: https://issues.apache.org/jira/browse/HDFS-11960
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-11960-v2.branch-2.txt, HDFS-11960-v2.trunk.txt, 
> HDFS-11960.patch
>
>
> If a certain set of conditions hold at the time of a file creation, a block 
> of the file can stay under-replicated.  This is because the block is 
> mistakenly taken out of the under-replicated block queue and never gets 
> reevaluated.
> Re-evaluation can be triggered if
> - a replica containing node dies.
> - setrep is called
> - NN repl queues are reinitialized (NN failover or restart)
> If none of these happens, the block stays under-replicated. 
> Here is how it happens.
> 1) A replica is finalized, but the ACK does not reach the upstream in time. 
> IBR is also delayed.
> 2) A close recovery happens, which updates the gen stamp of "healthy" 
> replicas.
> 3) The file is closed with the healthy replicas. It is added to the 
> replication queue.
> 4) A replication is scheduled, so it is added to the pending replication 
> list. The replication target is picked as the failed node in 1).
> 5) The old IBR is finally received for the failed/excluded node. In the 
> meantime, the replication fails, because there is already a finalized replica 
> (with older gen stamp) on the node.
> 6) The IBR processing removes the block from the pending list, adds it to 
> corrupt replicas list, and then issues invalidation. Since the block is in 
> neither replication queue nor pending list, it stays under-replicated.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11960) Successfully closed files can stay under-replicated.

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-11960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721678#comment-17721678
 ] 

ASF GitHub Bot commented on HDFS-11960:
---

LiuGuH opened a new pull request, #5642:
URL: https://github.com/apache/hadoop/pull/5642

   
   
   ### Description of PR
   The verification of 
[HDFS-11960](https://issues.apache.org/jira/browse/HDFS-11960) in 
TestPendingReconstruction.testProcessPendingReconstructions() is wrong. A 
short illustration of points (2) and (3) follows this list.
   
   (1) It does not stop the PendingReconstructionMonitor, so the block id goes 
into the timeouts queue because the timeout duration is 3s.
   
   (2) The test block id should be blk_1_1 with a different genstamp.
   
   (3) blk_1_1 should be tested with the same DatanodeDescriptor.
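   
   A hedged illustration of points (2) and (3): Block equality in HDFS is by 
block id only, so blk_1_1 reported with a newer genstamp must still match the 
pending entry (`org.apache.hadoop.hdfs.protocol.Block` is real API; the 
scenario itself is a stand-alone toy, not the actual test code):
   
   ```java
   import org.apache.hadoop.hdfs.protocol.Block;
   
   public class GenstampPointSketch {
     public static void main(String[] args) {
       // id=1, numBytes=0, genstamp=1 vs. the same id reported with genstamp=2.
       Block pending  = new Block(1L, 0L, 1L);
       Block reported = new Block(1L, 0L, 2L);
   
       // Block.equals compares block ids only, so the reported replica matches
       // the pending reconstruction entry despite the different genstamp.
       System.out.println("same entry? " + pending.equals(reported)); // true
     }
   }
   ```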
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Successfully closed files can stay under-replicated.
> 
>
> Key: HDFS-11960
> URL: https://issues.apache.org/jira/browse/HDFS-11960
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: HDFS-11960-v2.branch-2.txt, HDFS-11960-v2.trunk.txt, 
> HDFS-11960.patch
>
>
> If a certain set of conditions hold at the time of a file creation, a block 
> of the file can stay under-replicated.  This is because the block is 
> mistakenly taken out of the under-replicated block queue and never gets 
> reevaluated.
> Re-evaluation can be triggered if
> - a replica containing node dies.
> - setrep is called
> - NN repl queues are reinitialized (NN failover or restart)
> If none of these happens, the block stays under-replicated. 
> Here is how it happens.
> 1) A replica is finalized, but the ACK does not reach the upstream in time. 
> IBR is also delayed.
> 2) A close recovery happens, which updates the gen stamp of "healthy" 
> replicas.
> 3) The file is closed with the healthy replicas. It is added to the 
> replication queue.
> 4) A replication is scheduled, so it is added to the pending replication 
> list. The replication target is picked as the failed node in 1).
> 5) The old IBR is finally received for the failed/excluded node. In the 
> meantime, the replication fails, because there is already a finalized replica 
> (with older gen stamp) on the node.
> 6) The IBR processing removes the block from the pending list, adds it to 
> corrupt replicas list, and then issues invalidation. Since the block is in 
> neither replication queue nor pending list, it stays under-replicated.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13507) RBF: Remove update functionality from routeradmin's add cmd

2023-05-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17721650#comment-17721650
 ] 

ASF GitHub Bot commented on HDFS-13507:
---

ayushtkn commented on PR #4990:
URL: https://github.com/apache/hadoop/pull/4990#issuecomment-1543449462

   @ZanderXu this can be updated now that the other PR is merged




> RBF: Remove update functionality from routeradmin's add cmd
> ---
>
> Key: HDFS-13507
> URL: https://issues.apache.org/jira/browse/HDFS-13507
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Wei Yan
>Assignee: Gang Li
>Priority: Minor
>  Labels: incompatible, pull-request-available
> Attachments: HDFS-13507-HDFS-13891.003.patch, 
> HDFS-13507-HDFS-13891.004.patch, HDFS-13507.000.patch, HDFS-13507.001.patch, 
> HDFS-13507.002.patch, HDFS-13507.003.patch
>
>
> Follow up the discussion in HDFS-13326. We should remove the "update" 
> functionality from routeradmin's add cmd, to make it consistent with RPC 
> calls.
> Note that: this is an incompatible change.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17007) TestPendingReconstruction.testProcessPendingReconstructions verify HDFS-11960 test case is wrong

2023-05-11 Thread liuguanghua (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuguanghua updated HDFS-17007:
---
Description: 
The verification of HDFS-11960 in 
TestPendingReconstruction.testProcessPendingReconstructions() is wrong.

(1) It does not stop the PendingReconstructionMonitor, so the block id goes 
into the timeouts queue because the timeout duration is 3s.

(2) The test block id should be blk_1_1 with a different genstamp.

(3) blk_1_1 should be tested with the same DatanodeDescriptor.

  was:
 

Verify HDFS-11960


> TestPendingReconstruction.testProcessPendingReconstructions verify HDFS-11960 
> test case is wrong
> 
>
> Key: HDFS-17007
> URL: https://issues.apache.org/jira/browse/HDFS-17007
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: liuguanghua
>Priority: Minor
>
> The verification of HDFS-11960 in 
> TestPendingReconstruction.testProcessPendingReconstructions() is wrong.
> (1) It does not stop the PendingReconstructionMonitor, so the block id goes 
> into the timeouts queue because the timeout duration is 3s.
> (2) The test block id should be blk_1_1 with a different genstamp.
> (3) blk_1_1 should be tested with the same DatanodeDescriptor.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17007) TestPendingReconstruction.testProcessPendingReconstructions verify HDFS-11960 test case is wrong

2023-05-11 Thread liuguanghua (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuguanghua updated HDFS-17007:
---
Description: 
 

Verify HDFS-11960

> TestPendingReconstruction.testProcessPendingReconstructions verify HDFS-11960 
> test case is wrong
> 
>
> Key: HDFS-17007
> URL: https://issues.apache.org/jira/browse/HDFS-17007
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: liuguanghua
>Priority: Minor
>
>  
> Verify HDFS-11960



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17007) TestPendingReconstruction.testProcessPendingReconstructions verify HDFS-11960 test case is wrong

2023-05-11 Thread liuguanghua (Jira)
liuguanghua created HDFS-17007:
--

 Summary: 
TestPendingReconstruction.testProcessPendingReconstructions verify HDFS-11960 
test case is wrong
 Key: HDFS-17007
 URL: https://issues.apache.org/jira/browse/HDFS-17007
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: liuguanghua






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16979) RBF: Add dfsrouter port in hdfsauditlog

2023-05-11 Thread liuguanghua (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liuguanghua updated HDFS-16979:
---
Description: 
 

When a client is using a proxy user via a real user, the HDFS audit log lacks 
the dfsrouter port information.

client (using proxyuser) -> dfsrouter -> namenode
clientport                  dfsrouterport

The HDFS audit log should record the dfsrouter port.

  was:
When a remote client request goes through the dfsrouter to the namenode, the 
HDFS audit log records the remote client IP and port and the dfsrouter IP, but 
lacks the dfsrouter port.

This patch addresses that scenario.
 


> RBF: Add dfsrouter port in hdfsauditlog
> ---
>
> Key: HDFS-16979
> URL: https://issues.apache.org/jira/browse/HDFS-16979
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: liuguanghua
>Priority: Major
>  Labels: pull-request-available
>
>  
> When a client is using a proxy user via a real user, the HDFS audit log lacks 
> the dfsrouter port information.
> client (using proxyuser) -> dfsrouter -> namenode
> clientport                  dfsrouterport
> The HDFS audit log should record the dfsrouter port (see the sketch below).
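
One plausible shape for propagating the port (a sketch only; whether the 
actual patch uses CallerContext or another channel is not confirmed here. The 
CallerContext API is real hadoop-common API, while the "routerPort" tag name 
is an assumption):

{code:java}
import org.apache.hadoop.ipc.CallerContext;

public class RouterPortContextSketch {
  /**
   * Attach the router's local port to the RPC caller context before the
   * router forwards a client call to the NameNode, so the NameNode audit
   * log can record it alongside the client IP and port.
   */
  static void tagRouterPort(int routerPort) {
    CallerContext current = CallerContext.getCurrent();
    String base = (current != null && current.isContextValid())
        ? current.getContext() + "," : "";
    CallerContext.setCurrent(
        new CallerContext.Builder(base + "routerPort:" + routerPort).build());
  }
}
{code}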



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org