[jira] [Work logged] (HDFS-16521) DFS API to retrieve slow datanodes

2022-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16521?focusedWorklogId=762716&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-762716
 ]

ASF GitHub Bot logged work on HDFS-16521:
-

Author: ASF GitHub Bot
Created on: 27/Apr/22 05:32
Start Date: 27/Apr/22 05:32
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on code in PR #4107:
URL: https://github.com/apache/hadoop/pull/4107#discussion_r859381403


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:
##
@@ -4914,6 +4914,33 @@ int getNumberOfDatanodes(DatanodeReportType type) {
 }
   }
 
+  DatanodeInfo[] slowDataNodesReport() throws IOException {
+String operationName = "slowDataNodesReport";
+DatanodeInfo[] datanodeInfos;
+checkSuperuserPrivilege(operationName);

Review Comment:
   Does it need to require superuser privilege?



##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java:
##
@@ -433,7 +433,7 @@ static int run(DistributedFileSystem dfs, String[] argv, 
int idx) throws IOExcep
*/
   private static final String commonUsageSummary =
 "\t[-report [-live] [-dead] [-decommissioning] " +
-"[-enteringmaintenance] [-inmaintenance]]\n" +
+  "[-enteringmaintenance] [-inmaintenance] [-slownodes]]\n" +

Review Comment:
   The corresponding documentation needs to be updated when CLI commands are 
added/updated.



##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java:
##
@@ -632,6 +638,20 @@ private static void 
printDataNodeReports(DistributedFileSystem dfs,
 }
   }
 
+  private static void printSlowDataNodeReports(DistributedFileSystem dfs, 
boolean listNodes,

Review Comment:
   Can you provide a sample output? I guess it could be confusing; I suspect 
you would need some kind of header to distinguish it from the other datanode 
reports. 
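For illustration only, here is a minimal, self-contained Java sketch of what such a headered report could look like. `SlowNodeRecord` and `formatSlowNodeReport` are hypothetical stand-ins, not the PR's actual types:

```java
import java.util.List;

public class SlowNodeReportSketch {
    // Hypothetical stand-in for DatanodeInfo; not the actual HDFS type.
    record SlowNodeRecord(String hostname, String ipAddr) {}

    // A distinct header keeps the slow-node section from being confused
    // with the live/dead/decommissioning sections of "dfsadmin -report".
    static String formatSlowNodeReport(List<SlowNodeRecord> nodes) {
        StringBuilder sb = new StringBuilder();
        sb.append("Slow peers report (").append(nodes.size()).append(" node(s)):\n");
        for (SlowNodeRecord n : nodes) {
            sb.append("  ").append(n.hostname())
              .append(" [").append(n.ipAddr()).append("]\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(formatSlowNodeReport(
            List.of(new SlowNodeRecord("dn1.example.com", "10.0.0.1"))));
    }
}
```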



##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/ClientProtocol.java:
##
@@ -1868,4 +1868,16 @@ BatchedEntries listOpenFiles(long prevId,
*/
   @AtMostOnce
   void satisfyStoragePolicy(String path) throws IOException;
+
+  /**
+   * Get report on all of the slow Datanodes. Slow running datanodes are 
identified based on
+   * the Outlier detection algorithm, if slow peer tracking is enabled for the 
DFS cluster.
+   *
+   * @return Datanode report for slow running datanodes.
+   * @throws IOException If an I/O error occurs.
+   */
+  @Idempotent
+  @ReadOnly
+  DatanodeInfo[] getSlowDatanodeReport() throws IOException;

Review Comment:
   I just want to check with everyone that it is okay to have an array of 
objects as the return value.
   I think it's fine, but I just want to check with everyone, because once we 
decide on the interface it can't be changed later.



##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSAdmin.java:
##
@@ -433,7 +433,7 @@ static int run(DistributedFileSystem dfs, String[] argv, 
int idx) throws IOExcep
*/
   private static final String commonUsageSummary =
 "\t[-report [-live] [-dead] [-decommissioning] " +
-"[-enteringmaintenance] [-inmaintenance]]\n" +
+  "[-enteringmaintenance] [-inmaintenance] [-slownodes]]\n" +

Review Comment:
   In fact, it could appear confusing to HDFS administrators. These subcommands 
are meant to filter the DNs in these states, and "slownodes" is not a defined 
DataNode state.





Issue Time Tracking
---

Worklog Id: (was: 762716)
Time Spent: 3.5h  (was: 3h 20m)

> DFS API to retrieve slow datanodes
> --
>
> Key: HDFS-16521
> URL: https://issues.apache.org/jira/browse/HDFS-16521
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Providing a DFS API to retrieve slow nodes would help add an additional 
> option to "dfsadmin -report" that lists slow datanode info for operators to 
> review, a specifically useful filter for larger clusters.
> The other purpose of such an API is to let HDFS downstreamers without direct 
> access to the namenode http port (only the rpc port accessible) retrieve 
> slownodes.
> Moreover, 
> [FanOutOneBlockAsyncDFSOutput|https://github.com/apache/hbase/blob/master/hbase-asyncfs/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutput.java]
>  in HBase currently has to rely on its own way of marking and excluding slow 
> nodes while 1) creating pipelines and 2) handling acks, based on factors like 
> the data length of the packet, processing time with last 

[jira] [Work logged] (HDFS-16554) Remove unused configuration dfs.namenode.block.deletion.increment.

2022-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16554?focusedWorklogId=762685&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-762685
 ]

ASF GitHub Bot logged work on HDFS-16554:
-

Author: ASF GitHub Bot
Created on: 27/Apr/22 03:46
Start Date: 27/Apr/22 03:46
Worklog Time Spent: 10m 
  Work Description: Hexiaoqiao commented on PR #4213:
URL: https://github.com/apache/hadoop/pull/4213#issuecomment-1110505405

   Committed to trunk. Thanks @smarthanwang for your contribution!




Issue Time Tracking
---

Worklog Id: (was: 762685)
Time Spent: 1h  (was: 50m)

> Remove unused configuration dfs.namenode.block.deletion.increment. 
> ---
>
> Key: HDFS-16554
> URL: https://issues.apache.org/jira/browse/HDFS-16554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chengwei Wang
>Assignee: Chengwei Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The configuration *_dfs.namenode.block.deletion.increment_* will not be used 
> after HDFS-16043, which performs block deletion asynchronously, so it is 
> better to remove it.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16554) Remove unused configuration dfs.namenode.block.deletion.increment.

2022-04-26 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-16554:
---
Component/s: namenode

> Remove unused configuration dfs.namenode.block.deletion.increment. 
> ---
>
> Key: HDFS-16554
> URL: https://issues.apache.org/jira/browse/HDFS-16554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Chengwei Wang
>Assignee: Chengwei Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The configuration *_dfs.namenode.block.deletion.increment_* will not be used 
> after HDFS-16043, which performs block deletion asynchronously, so it is 
> better to remove it.






[jira] [Work logged] (HDFS-16468) Define ssize_t for Windows

2022-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16468?focusedWorklogId=762684&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-762684
 ]

ASF GitHub Bot logged work on HDFS-16468:
-

Author: ASF GitHub Bot
Created on: 27/Apr/22 03:45
Start Date: 27/Apr/22 03:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4228:
URL: https://github.com/apache/hadoop/pull/4228#issuecomment-1110505106

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 53s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 52s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  68m  5s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  68m 29s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 32s |  |  the patch passed  |
   | +1 :green_heart: |  cc  |   3m 32s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   3m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  30m 55s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 113m 39s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 221m  8s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4228/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4228 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit 
codespell golang |
   | uname | Linux d0aa6b9ffd8b 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 
19:07:44 UTC 2021 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7b26a9a96131cc3112107e471ed0b13e3a29dffd |
   | Default Java | Debian-11.0.14+9-post-Debian-1deb10u1 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4228/5/testReport/ |
   | Max. process+thread count | 585 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4228/5/console |
   | versions | git=2.20.1 maven=3.6.0 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
---

Worklog Id: (was: 762684)
Time Spent: 3h 40m  (was: 3.5h)

> Define ssize_t for Windows
> --
>
> Key: HDFS-16468
> URL: https://issues.apache.org/jira/browse/HDFS-16468
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Some C/C++ files use the *ssize_t* data type. This isn't available on 
> Windows, so we need to define an alias for it and set it to *long long* to 
> make the code cross-platform compatible.




[jira] [Resolved] (HDFS-16554) Remove unused configuration dfs.namenode.block.deletion.increment.

2022-04-26 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He resolved HDFS-16554.

   Fix Version/s: 3.4.0
Hadoop Flags: Reviewed
Target Version/s:   (was: 3.4.0)
  Resolution: Fixed

Committed to trunk. Thanks [~smarthan] for your contribution!

> Remove unused configuration dfs.namenode.block.deletion.increment. 
> ---
>
> Key: HDFS-16554
> URL: https://issues.apache.org/jira/browse/HDFS-16554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chengwei Wang
>Assignee: Chengwei Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The configuration *_dfs.namenode.block.deletion.increment_* will not be used 
> after HDFS-16043, which performs block deletion asynchronously, so it is 
> better to remove it.






[jira] [Work logged] (HDFS-16554) Remove unused configuration dfs.namenode.block.deletion.increment.

2022-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16554?focusedWorklogId=762682&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-762682
 ]

ASF GitHub Bot logged work on HDFS-16554:
-

Author: ASF GitHub Bot
Created on: 27/Apr/22 03:44
Start Date: 27/Apr/22 03:44
Worklog Time Spent: 10m 
  Work Description: Hexiaoqiao merged PR #4213:
URL: https://github.com/apache/hadoop/pull/4213




Issue Time Tracking
---

Worklog Id: (was: 762682)
Time Spent: 50m  (was: 40m)

> Remove unused configuration dfs.namenode.block.deletion.increment. 
> ---
>
> Key: HDFS-16554
> URL: https://issues.apache.org/jira/browse/HDFS-16554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chengwei Wang
>Assignee: Chengwei Wang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The configuration *_dfs.namenode.block.deletion.increment_* will not be used 
> after HDFS-16043, which performs block deletion asynchronously, so it is 
> better to remove it.






[jira] [Work logged] (HDFS-16558) Consider changing the lock of delegation token from write lock to read lock

2022-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16558?focusedWorklogId=762681&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-762681
 ]

ASF GitHub Bot logged work on HDFS-16558:
-

Author: ASF GitHub Bot
Created on: 27/Apr/22 03:36
Start Date: 27/Apr/22 03:36
Worklog Time Spent: 10m 
  Work Description: Hexiaoqiao commented on PR #4230:
URL: https://github.com/apache/hadoop/pull/4230#issuecomment-1110498309

   `In a very busy authed cluster, renewing/canceling/getting delegation tokens 
gets slow and slows down the handling of RPCs from clients. Since 
AbstractDelegationTokenSecretManager is a thread-safe manager, we propose to 
change the fs lock from a write lock to a read lock (protecting editlog rolling)`
   @yuanboliu Thanks for your proposal, it is a great improvement. I think it 
is proper for ADTS, which is thread-safe as you mentioned above. But I am 
concerned whether it is also thread-safe for editlog sync. Considering 
renew/cancel/get for different tokens, is it safe to keep the ordering when 
replaying the editlog? Thanks.
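To make the proposal concrete, here is a hedged, self-contained sketch; the class and method names are illustrative, not the actual FSNamesystem code. Token operations run under a shared read lock (so many clients can proceed concurrently) while editlog rolling still takes the exclusive write lock:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class TokenLockSketch {
    private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock(true);
    private final ConcurrentHashMap<String, Long> tokens = new ConcurrentHashMap<>();

    // Token operations delegate to a thread-safe manager, so a shared read
    // lock suffices; returns the token's new renewal count.
    long renewToken(String id) {
        fsLock.readLock().lock();
        try {
            return tokens.merge(id, 1L, Long::sum);
        } finally {
            fsLock.readLock().unlock();
        }
    }

    // Editlog rolling keeps the exclusive write lock, blocking token
    // operations only for the duration of the roll.
    void rollEditLog() {
        fsLock.writeLock().lock();
        try {
            // ... roll the edit log ...
        } finally {
            fsLock.writeLock().unlock();
        }
    }
}
```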




Issue Time Tracking
---

Worklog Id: (was: 762681)
Time Spent: 1h 40m  (was: 1.5h)

> Consider changing the lock of delegation token from write lock to read lock
> ---
>
> Key: HDFS-16558
> URL: https://issues.apache.org/jira/browse/HDFS-16558
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Yuanbo Liu
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-04-24-14-13-04-695.png, 
> image-2022-04-24-14-13-52-867.png, image-2022-04-24-14-57-18-740.png, 
> image-2022-04-24-14-58-25-294.png
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> In a very busy authed cluster, renewing/canceling/getting delegation tokens 
> gets slow and slows down the handling of RPCs from clients. Since 
> AbstractDelegationTokenSecretManager is a thread-safe manager, we propose to 
> change the fs lock from a write lock to a read lock (protecting editlog 
> rolling).
> !image-2022-04-24-14-58-25-294.png|width=318,height=194!
> !image-2022-04-24-14-13-52-867.png|width=324,height=173!
> !image-2022-04-24-14-57-18-740.png|width=303,height=184!






[jira] [Work logged] (HDFS-16554) Remove unused configuration dfs.namenode.block.deletion.increment.

2022-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16554?focusedWorklogId=762677&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-762677
 ]

ASF GitHub Bot logged work on HDFS-16554:
-

Author: ASF GitHub Bot
Created on: 27/Apr/22 03:28
Start Date: 27/Apr/22 03:28
Worklog Time Spent: 10m 
  Work Description: smarthanwang commented on PR #4213:
URL: https://github.com/apache/hadoop/pull/4213#issuecomment-1110494557

   @Hexiaoqiao thanks for the review. The failed UTs are not related to this 
PR; they seem to have failed due to timeouts.
   




Issue Time Tracking
---

Worklog Id: (was: 762677)
Time Spent: 40m  (was: 0.5h)

> Remove unused configuration dfs.namenode.block.deletion.increment. 
> ---
>
> Key: HDFS-16554
> URL: https://issues.apache.org/jira/browse/HDFS-16554
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chengwei Wang
>Assignee: Chengwei Wang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The configuration *_dfs.namenode.block.deletion.increment_* will not be used 
> after HDFS-16043, which performs block deletion asynchronously, so it is 
> better to remove it.






[jira] [Work started] (HDFS-16528) Reconfigure slow peer enable for Namenode

2022-04-26 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16528 started by Viraj Jasani.
---
> Reconfigure slow peer enable for Namenode
> -
>
> Key: HDFS-16528
> URL: https://issues.apache.org/jira/browse/HDFS-16528
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> HDFS-16396 provides reconfig options for several configs associated with 
> slownodes in the Datanode. Similarly, HDFS-16287 and HDFS-16327 have added 
> some slownode-related configs as reconfig options in the Namenode.
> The purpose of this Jira is to add DFS_DATANODE_PEER_STATS_ENABLED_KEY as a 
> reconfigurable option for the Namenode (similar to how HDFS-16396 included it 
> for the Datanode).






[jira] [Updated] (HDFS-16528) Reconfigure slow peer enable for Namenode

2022-04-26 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HDFS-16528:

Status: Patch Available  (was: In Progress)

> Reconfigure slow peer enable for Namenode
> -
>
> Key: HDFS-16528
> URL: https://issues.apache.org/jira/browse/HDFS-16528
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> HDFS-16396 provides reconfig options for several configs associated with 
> slownodes in the Datanode. Similarly, HDFS-16287 and HDFS-16327 have added 
> some slownode-related configs as reconfig options in the Namenode.
> The purpose of this Jira is to add DFS_DATANODE_PEER_STATS_ENABLED_KEY as a 
> reconfigurable option for the Namenode (similar to how HDFS-16396 included it 
> for the Datanode).






[jira] [Work logged] (HDFS-16550) [SBN read] Improper cache-size for journal node may cause cluster crash

2022-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16550?focusedWorklogId=762648&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-762648
 ]

ASF GitHub Bot logged work on HDFS-16550:
-

Author: ASF GitHub Bot
Created on: 27/Apr/22 01:58
Start Date: 27/Apr/22 01:58
Worklog Time Spent: 10m 
  Work Description: tomscut commented on PR #4209:
URL: https://github.com/apache/hadoop/pull/4209#issuecomment-1110446887

   Hi @tasanuma @ayushtkn @sunchao @xkrogen , could you please take a look. 
Thanks.




Issue Time Tracking
---

Worklog Id: (was: 762648)
Time Spent: 40m  (was: 0.5h)

> [SBN read] Improper cache-size for journal node may cause cluster crash
> ---
>
> Key: HDFS-16550
> URL: https://issues.apache.org/jira/browse/HDFS-16550
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2022-04-21-09-54-29-751.png, 
> image-2022-04-21-09-54-57-111.png, image-2022-04-21-12-32-56-170.png
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When we introduced {*}SBN Read{*}, we encountered a situation while upgrading 
> the JournalNodes.
> Cluster Info: 
> *Active: nn0*
> *Standby: nn1*
> 1. Rolling restart journal node. {color:#ff}(related config: 
> fs.journalnode.edit-cache-size.bytes=1G, -Xms1G, -Xmx=1G){color}
> 2. The cluster runs for a while, edits cache usage is increasing and memory 
> is used up.
> 3. The {color:#ff}Active namenode (nn0){color} shut down because of “{_}Timed 
> out waiting 12ms for a quorum of nodes to respond”{_}.
> 4. nn1 was transitioned to Active state.
> 5. The {color:#ff}new Active namenode (nn1){color} also shut down because of 
> “{_}Timed out waiting 12ms for a quorum of nodes to respond”{_}.
> 6. {color:#ff}The cluster crashed{color}.
>  
> Related code:
> {code:java}
> JournaledEditsCache(Configuration conf) {
>   capacity = conf.getInt(DFSConfigKeys.DFS_JOURNALNODE_EDIT_CACHE_SIZE_KEY,
>   DFSConfigKeys.DFS_JOURNALNODE_EDIT_CACHE_SIZE_DEFAULT);
>   if (capacity > 0.9 * Runtime.getRuntime().maxMemory()) {
> Journal.LOG.warn(String.format("Cache capacity is set at %d bytes but " +
> "maximum JVM memory is only %d bytes. It is recommended that you " +
> "decrease the cache size or increase the heap size.",
> capacity, Runtime.getRuntime().maxMemory()));
>   }
>   Journal.LOG.info("Enabling the journaled edits cache with a capacity " +
>   "of bytes: " + capacity);
>   ReadWriteLock lock = new ReentrantReadWriteLock(true);
>   readLock = new AutoCloseableLock(lock.readLock());
>   writeLock = new AutoCloseableLock(lock.writeLock());
>   initialize(INVALID_TXN_ID);
> } {code}
> Currently, *fs.journalnode.edit-cache-size.bytes* can be set to a larger size 
> than the memory requested by the process. If 
> {*}fs.journalnode.edit-cache-size.bytes > 0.9 * 
> Runtime.getRuntime().maxMemory(){*}, only warn logs are printed during 
> journalnode startup. This can easily be overlooked by users. However, after 
> the cluster has run for a certain period of time, it is likely to cause the 
> cluster to crash.
>  
> NN log:
> !image-2022-04-21-09-54-57-111.png|width=1012,height=47!
> !image-2022-04-21-12-32-56-170.png|width=809,height=218!
> IMO, when {*}fs.journalnode.edit-cache-size.bytes > threshold * 
> Runtime.getRuntime().maxMemory(){*}, we should throw an exception and 
> {color:#ff}fail fast{color}, giving users a clear hint to update the 
> related configurations. Or, if the cache size exceeds 50% (or some other 
> threshold) of maxMemory, force the cache size to be 25% of maxMemory.
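A minimal sketch of the fail-fast validation proposed here, assuming a 50% threshold; the method name and threshold are illustrative, not the actual JournaledEditsCache code (which today only logs a warning):

```java
public class CacheCapacityCheck {
    // Fail fast at startup when the configured edits-cache capacity would
    // consume more than half of the JVM's max heap; otherwise accept it.
    static long validateCapacity(long capacityBytes, long maxMemoryBytes) {
        if (capacityBytes > 0.5 * maxMemoryBytes) {
            throw new IllegalArgumentException(String.format(
                "Cache capacity %d bytes exceeds 50%% of max JVM memory %d bytes; "
                + "decrease the cache size or increase the heap size.",
                capacityBytes, maxMemoryBytes));
        }
        return capacityBytes;
    }
}
```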






[jira] [Work logged] (HDFS-16528) Reconfigure slow peer enable for Namenode

2022-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16528?focusedWorklogId=762647&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-762647
 ]

ASF GitHub Bot logged work on HDFS-16528:
-

Author: ASF GitHub Bot
Created on: 27/Apr/22 01:37
Start Date: 27/Apr/22 01:37
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on PR #4186:
URL: https://github.com/apache/hadoop/pull/4186#issuecomment-1110437828

   Thanks for the review @tomscut !




Issue Time Tracking
---

Worklog Id: (was: 762647)
Time Spent: 3h 40m  (was: 3.5h)

> Reconfigure slow peer enable for Namenode
> -
>
> Key: HDFS-16528
> URL: https://issues.apache.org/jira/browse/HDFS-16528
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> HDFS-16396 provides reconfig options for several configs associated with 
> slownodes in the Datanode. Similarly, HDFS-16287 and HDFS-16327 have added 
> some slownode-related configs as reconfig options in the Namenode.
> The purpose of this Jira is to add DFS_DATANODE_PEER_STATS_ENABLED_KEY as a 
> reconfigurable option for the Namenode (similar to how HDFS-16396 included it 
> for the Datanode).






[jira] [Work logged] (HDFS-16547) [SBN read] Namenode in safe mode should not be transfered to observer state

2022-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16547?focusedWorklogId=762645&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-762645
 ]

ASF GitHub Bot logged work on HDFS-16547:
-

Author: ASF GitHub Bot
Created on: 27/Apr/22 01:29
Start Date: 27/Apr/22 01:29
Worklog Time Spent: 10m 
  Work Description: tomscut commented on PR #4201:
URL: https://github.com/apache/hadoop/pull/4201#issuecomment-1110433872

   Hi @sunchao @xkrogen , could you please take a look? Thanks a lot.




Issue Time Tracking
---

Worklog Id: (was: 762645)
Time Spent: 40m  (was: 0.5h)

> [SBN read] Namenode in safe mode should not be transfered to observer state
> ---
>
> Key: HDFS-16547
> URL: https://issues.apache.org/jira/browse/HDFS-16547
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Tao Li
>Assignee: Tao Li
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently, when a Namenode is in safemode (starting up or placed in safemode 
> manually), we can transfer this Namenode to Observer by command. This 
> Observer node may receive many requests and then throw a SafemodeException, 
> which causes unnecessary failover on the client.
> So a Namenode in safe mode should not be transferred to observer state.






[jira] [Work logged] (HDFS-16468) Define ssize_t for Windows

2022-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16468?focusedWorklogId=762632&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-762632
 ]

ASF GitHub Bot logged work on HDFS-16468:
-

Author: ASF GitHub Bot
Created on: 27/Apr/22 00:04
Start Date: 27/Apr/22 00:04
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4228:
URL: https://github.com/apache/hadoop/pull/4228#issuecomment-1110361697

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  25m 32s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 11s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 51s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  53m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  53m 39s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 48s |  |  the patch passed  |
   | +1 :green_heart: |  cc  |   3m 48s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   3m 48s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 48s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 25s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 111m  6s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 195m 36s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4228/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4228 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit 
codespell golang |
   | uname | Linux 3fb06964645e 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 
19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7b26a9a96131cc3112107e471ed0b13e3a29dffd |
   | Default Java | Red Hat, Inc.-1.8.0_312-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4228/5/testReport/ |
   | Max. process+thread count | 601 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4228/5/console |
   | versions | git=2.27.0 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
---

Worklog Id: (was: 762632)
Time Spent: 3.5h  (was: 3h 20m)

> Define ssize_t for Windows
> --
>
> Key: HDFS-16468
> URL: https://issues.apache.org/jira/browse/HDFS-16468
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Some C/C++ files use the *ssize_t* data type, which isn't available on 
> Windows. We need to define an alias for it and set it to *long long* to make 
> the code cross-platform compatible.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Work logged] (HDFS-16468) Define ssize_t for Windows

2022-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16468?focusedWorklogId=762608&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-762608
 ]

ASF GitHub Bot logged work on HDFS-16468:
-

Author: ASF GitHub Bot
Created on: 26/Apr/22 22:42
Start Date: 26/Apr/22 22:42
Worklog Time Spent: 10m 
  Work Description: goiri commented on code in PR #4228:
URL: https://github.com/apache/hadoop/pull/4228#discussion_r859202738


##
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/lib/x-platform/types.h:
##
@@ -0,0 +1,39 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#ifndef NATIVE_LIBHDFSPP_LIB_CROSS_PLATFORM_TYPES
+#define NATIVE_LIBHDFSPP_LIB_CROSS_PLATFORM_TYPES
+
+#if _WIN32 || _WIN64

Review Comment:
   It might be more readable to do something like:
   ```
   #if _WIN64
   typedef long int ssize_t;
   #elif _WIN32
   typedef int ssize_t;
   #else
   #include <sys/types.h>
   #endif
   ```
   
   With the comments and so on obviously.





Issue Time Tracking
---

Worklog Id: (was: 762608)
Time Spent: 3h 20m  (was: 3h 10m)

> Define ssize_t for Windows
> --
>
> Key: HDFS-16468
> URL: https://issues.apache.org/jira/browse/HDFS-16468
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Some C/C++ files use the *ssize_t* data type, which isn't available on 
> Windows. We need to define an alias for it and set it to *long long* to make 
> the code cross-platform compatible.






[jira] [Work logged] (HDFS-16468) Define ssize_t for Windows

2022-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16468?focusedWorklogId=762603&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-762603
 ]

ASF GitHub Bot logged work on HDFS-16468:
-

Author: ASF GitHub Bot
Created on: 26/Apr/22 22:38
Start Date: 26/Apr/22 22:38
Worklog Time Spent: 10m 
  Work Description: goiri commented on code in PR #4228:
URL: https://github.com/apache/hadoop/pull/4228#discussion_r859200993


##
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/examples/cc/cat/cat.cc:
##
@@ -62,7 +62,6 @@ int main(int argc, char *argv[]) {
   //wrapping file_raw into a unique pointer to guarantee deletion
   std::unique_ptr file(file_raw);
 
-  ssize_t total_bytes_read = 0;

Review Comment:
   I would prefer to do the cleanup of these unused variables (including the one 
in the tools) separately.





Issue Time Tracking
---

Worklog Id: (was: 762603)
Time Spent: 3h 10m  (was: 3h)

> Define ssize_t for Windows
> --
>
> Key: HDFS-16468
> URL: https://issues.apache.org/jira/browse/HDFS-16468
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Some C/C++ files use the *ssize_t* data type, which isn't available on 
> Windows. We need to define an alias for it and set it to *long long* to make 
> the code cross-platform compatible.






[jira] [Work logged] (HDFS-16468) Define ssize_t for Windows

2022-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16468?focusedWorklogId=762536&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-762536
 ]

ASF GitHub Bot logged work on HDFS-16468:
-

Author: ASF GitHub Bot
Created on: 26/Apr/22 20:48
Start Date: 26/Apr/22 20:48
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4228:
URL: https://github.com/apache/hadoop/pull/4228#issuecomment-1110234952

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 38s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  69m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  70m  2s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 53s |  |  the patch passed  |
   | +1 :green_heart: |  cc  |   3m 53s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   3m 53s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 53s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m  4s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 112m 25s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 212m 45s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4228/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4228 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit 
codespell golang |
   | uname | Linux 79f6b6f5312c 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 
19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7b26a9a96131cc3112107e471ed0b13e3a29dffd |
   | Default Java | Red Hat, Inc.-1.8.0_322-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4228/5/testReport/ |
   | Max. process+thread count | 603 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4228/5/console |
   | versions | git=2.9.5 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
---

Worklog Id: (was: 762536)
Time Spent: 3h  (was: 2h 50m)

> Define ssize_t for Windows
> --
>
> Key: HDFS-16468
> URL: https://issues.apache.org/jira/browse/HDFS-16468
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Some C/C++ files use the *ssize_t* data type, which isn't available on 
> Windows. We need to define an alias for it and set it to *long long* to make 
> the code cross-platform compatible.




[jira] [Commented] (HDFS-16094) HDFS balancer process start failed owing to daemon pid file is not cleared in some exception scenario

2022-04-26 Thread Renukaprasad C (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17528382#comment-17528382
 ] 

Renukaprasad C commented on HDFS-16094:
---

A similar issue, HDFS-15932, has already been addressed. [~Daniel Ma], please 
check whether any other information needs to be added.

> HDFS balancer process start failed owing to daemon pid file is not cleared in 
> some exception scenario
> 
>
> Key: HDFS-16094
> URL: https://issues.apache.org/jira/browse/HDFS-16094
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.3.1
>Reporter: Daniel Ma
>Priority: Major
>
> The HDFS balancer process fails to start because the daemon pid file is not 
> cleared in some exception scenarios, and the log gives no useful information 
> for troubleshooting:
> {code:java}
> // code placeholder
> hadoop_error "${daemonname} is running as process $(cat "${daemon_pidfile}")"
> {code}
> In fact the process is not running, contrary to what the error message says.
> Therefore, more explicit information should be printed in the error log to 
> tell users to clear the pid file and where the pid file is located.
> 






[jira] [Work logged] (HDFS-16468) Define ssize_t for Windows

2022-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16468?focusedWorklogId=762445&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-762445
 ]

ASF GitHub Bot logged work on HDFS-16468:
-

Author: ASF GitHub Bot
Created on: 26/Apr/22 17:23
Start Date: 26/Apr/22 17:23
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra commented on code in PR #4228:
URL: https://github.com/apache/hadoop/pull/4228#discussion_r858971962


##
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c:
##
@@ -487,10 +488,10 @@ static ssize_t wildcard_expandPath(const char* path, 
char* expanded)
  * allocated after using this function with expandedClasspath=NULL to get the
  * right size.
  */
-static ssize_t getClassPath_helper(const char *classpath, char* 
expandedClasspath)
+static x_platform_ssize_t getClassPath_helper(const char *classpath, char* 
expandedClasspath)
 {
-ssize_t length;
-ssize_t retval;
+x_platform_ssize_t length;

Review Comment:
   > Is this the best way to do this? Can't we just do the typedef for windows?
   
   I've handled this now.





Issue Time Tracking
---

Worklog Id: (was: 762445)
Time Spent: 2h 50m  (was: 2h 40m)

> Define ssize_t for Windows
> --
>
> Key: HDFS-16468
> URL: https://issues.apache.org/jira/browse/HDFS-16468
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Some C/C++ files use the *ssize_t* data type, which isn't available on 
> Windows. We need to define an alias for it and set it to *long long* to make 
> the code cross-platform compatible.






[jira] [Work logged] (HDFS-16540) Data locality is lost when DataNode pod restarts in kubernetes

2022-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16540?focusedWorklogId=762435&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-762435
 ]

ASF GitHub Bot logged work on HDFS-16540:
-

Author: ASF GitHub Bot
Created on: 26/Apr/22 17:14
Start Date: 26/Apr/22 17:14
Worklog Time Spent: 10m 
  Work Description: huaxiangsun commented on PR #4170:
URL: https://github.com/apache/hadoop/pull/4170#issuecomment-1110051006

   Any more comments? Thanks.




Issue Time Tracking
---

Worklog Id: (was: 762435)
Time Spent: 3h 10m  (was: 3h)

> Data locality is lost when DataNode pod restarts in kubernetes 
> ---
>
> Key: HDFS-16540
> URL: https://issues.apache.org/jira/browse/HDFS-16540
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.3.2
>Reporter: Huaxiang Sun
>Assignee: Huaxiang Sun
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> We have an HBase RegionServer and an HDFS DataNode running in one pod. When 
> the pod restarts, we found that data locality is lost after a major 
> compaction of HBase regions. After some debugging, we found that when the 
> pod restarts, its IP changes. In DatanodeManager, maps such as 
> networktopology are updated with the new info, but host2DatanodeMap is not 
> updated accordingly. When an HDFS client with the new IP tries to find a 
> local DataNode, it fails.
> 






[jira] [Work logged] (HDFS-16528) Reconfigure slow peer enable for Namenode

2022-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16528?focusedWorklogId=762352&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-762352
 ]

ASF GitHub Bot logged work on HDFS-16528:
-

Author: ASF GitHub Bot
Created on: 26/Apr/22 14:35
Start Date: 26/Apr/22 14:35
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4186:
URL: https://github.com/apache/hadoop/pull/4186#issuecomment-1109875631

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  7s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  2s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 50s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 46s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 365m  6s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 485m 45s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4186/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4186 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 1314cff2b79d 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 
19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / cfd7d2df778b8eb0f46d790a6c3856209fb4a3f6 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4186/11/testReport/ |
   | Max. process+thread count | 2012 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4186/11/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Work logged] (HDFS-16543) Keep default value of dfs.datanode.directoryscan.throttle.limit.ms.per.sec consistent

2022-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16543?focusedWorklogId=762227&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-762227
 ]

ASF GitHub Bot logged work on HDFS-16543:
-

Author: ASF GitHub Bot
Created on: 26/Apr/22 10:51
Start Date: 26/Apr/22 10:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4178:
URL: https://github.com/apache/hadoop/pull/4178#issuecomment-1109647004

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m 20s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 22s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 364m 25s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 483m 43s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4178/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4178 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell xml spotbugs checkstyle |
   | uname | Linux 84d7ab623bcd 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d69b76429f17c4bf3f962ca1d2b0ec4b725f66c4 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4178/4/testReport/ |
   | Max. process+thread count | 2208 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4178/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Commented] (HDFS-14750) RBF: Improved isolation for downstream name nodes. {Dynamic}

2022-04-26 Thread Felix N (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17528060#comment-17528060
 ] 

Felix N commented on HDFS-14750:


Tried my hand at it, since there seem to be no updates on this ticket.

The rough idea is to utilize the metrics added by HDFS-16296 and HDFS-16302 and 
spawn a background thread that periodically resizes the semaphores based on the 
traffic to the namespaces (as determined from the metrics).

> RBF: Improved isolation for downstream name nodes. {Dynamic}
> 
>
> Key: HDFS-14750
> URL: https://issues.apache.org/jira/browse/HDFS-14750
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: CR Hota
>Assignee: CR Hota
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira tracks the work around dynamic allocation of resources in routers 
> for downstream hdfs clusters. 






[jira] [Work logged] (HDFS-16520) Improve EC pread: avoid potential reading whole block

2022-04-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16520?focusedWorklogId=762192&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-762192
 ]

ASF GitHub Bot logged work on HDFS-16520:
-

Author: ASF GitHub Bot
Created on: 26/Apr/22 09:10
Start Date: 26/Apr/22 09:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4104:
URL: https://github.com/apache/hadoop/pull/4104#issuecomment-1109548468

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 39s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 23s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   6m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   6m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 36s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  4s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   6m 25s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m  9s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   6m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   5m 46s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   6m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 42s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 245m 41s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4104/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 16s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 394m 14s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4104/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4104 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 0fd20f4378da 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 719ae6e859d2af3d72e0a75c4854b3f6734ec6f8 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 

[jira] [Commented] (HDFS-15775) Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project hadoop-hdfs-native-client: An Ant BuildException has occured: exec returne

2022-04-26 Thread Maria (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17527915#comment-17527915
 ] 

Maria commented on HDFS-15775:
--

[~hitendra] Any resolution for this issue? Please help.

> Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run 
> (make) on project hadoop-hdfs-native-client: An Ant BuildException has 
> occured: exec returned: 1
> 
>
> Key: HDFS-15775
> URL: https://issues.apache.org/jira/browse/HDFS-15775
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.2.1
> Environment:  
> Windows 10
> Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)
> Maven home: C:\Bigdata\apache-maven\bin\..
> Java version: 1.8.0_152, vendor: Oracle Corporation, runtime: 
> C:\Java\jdk1.8.0_152\jre
> Default locale: en_US, platform encoding: Cp1252
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
> git version 2.30.0.windows.1
> Visual Studio 2019 Professional
>Reporter: Hitendra
>Priority: Major
> Fix For: 3.2.1
>
> Attachments: CMakeError.log, CMakeOutput.log, out.txt
>
>
> When I build Hadoop 3.3.0 on Windows 10, it fails. My command is 'mvn clean 
> package -Pdist,native-win -Pdocs -Psrc -Dtar -DskipTests 
> -Dmaven.javadoc.skip=true'. For details of the log messages, refer to log.txt. 
> I thought there was no need to install OpenSSL on Windows 10.
> {code:java}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project 
> hadoop-hdfs-native-client: An Ant BuildException has occured: exec returned: 1
> [ERROR] around Ant part ... dir="C:\Bigdata\hadoop-src\hadoop-hdfs-project\hadoop-hdfs-native-client\target/native"
>  executable="cmake">... @ 5:135 in 
> C:\Bigdata\hadoop-src\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> [ERROR] -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project 
> hadoop-hdfs-native-client: An Ant BuildException has occured: exec returned: 1
> around Ant part ... dir="C:\Bigdata\hadoop-src\hadoop-hdfs-project\hadoop-hdfs-native-client\target/native"
>  executable="cmake">... @ 5:135 in 
> C:\Bigdata\hadoop-src\hadoop-hdfs-project\hadoop-hdfs-native-client\target\antrun\build-main.xml
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16551) Backport HADOOP-17588 to 3.3 and other active old branches.

2022-04-26 Thread Renukaprasad C (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17527906#comment-17527906
 ] 

Renukaprasad C commented on HDFS-16551:
---

Thanks [~ste...@apache.org] & [~weichiu] for review & merge.

> Backport HADOOP-17588 to 3.3 and other active old branches.
> ---
>
> Key: HDFS-16551
> URL: https://issues.apache.org/jira/browse/HDFS-16551
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.10.2, 3.2.4, 3.3.4
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> This intermittent issue has already been fixed in trunk; the same fix needs 
> to be backported to the active branches.
> In org.apache.hadoop.crypto.CryptoInputStream.close(), when two threads try 
> to close the stream concurrently, the second thread fails with an error.
> The close operation should be synchronized so that multiple threads cannot 
> perform it at the same time.
> [~Hemanth Boyina] 
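
The double-close race described above can be avoided by making close() idempotent and thread-safe. Below is a minimal stand-alone sketch using a compare-and-set guard so that only the first caller performs the cleanup and later concurrent callers return quietly; GuardedStream is a hypothetical class for illustration, not the actual HADOOP-17588 patch (which synchronizes CryptoInputStream.close()):

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative stream with an idempotent, thread-safe close().
class GuardedStream extends InputStream {
    private final AtomicBoolean closed = new AtomicBoolean(false);

    @Override
    public int read() throws IOException {
        if (closed.get()) {
            throw new IOException("stream closed");
        }
        return -1; // empty stream, for illustration only
    }

    @Override
    public void close() throws IOException {
        // Only the first caller wins the CAS and performs the cleanup;
        // any concurrent or repeated close() returns immediately.
        if (!closed.compareAndSet(false, true)) {
            return;
        }
        // ... release buffers / decryptor resources here ...
    }
}
```

An alternative with the same effect, closer to the trunk fix, is simply declaring close() synchronized and checking a closed flag inside it; the CAS variant avoids holding a lock on the hot path.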


